2304.05444
Collaborative Machine Learning Model Building with Families Using Co-ML
Existing novice-friendly machine learning (ML) modeling tools center around a solo user experience, where a single user collects only their own data to build a model. However, solo modeling experiences limit valuable opportunities for encountering alternative ideas and approaches that can arise when learners work together; consequently, it often precludes encountering critical issues in ML around data representation and diversity that can surface when different perspectives are manifested in a group-constructed data set. To address this issue, we created Co-ML -- a tablet-based app for learners to collaboratively build ML image classifiers through an end-to-end, iterative model-building process. In this paper, we illustrate the feasibility and potential richness of collaborative modeling by presenting an in-depth case study of a family (two children 11 and 14-years-old working with their parents) using Co-ML in a facilitated introductory ML activity at home. We share the Co-ML system design and contribute a discussion of how using Co-ML in a collaborative activity enabled beginners to collectively engage with dataset design considerations underrepresented in prior work such as data diversity, class imbalance, and data quality. We discuss how a distributed collaborative process, in which individuals can take on different model-building responsibilities, provides a rich context for children and adults to learn ML dataset design.
Tiffany Tseng, Jennifer King Chen, Mona Abdelrahman, Mary Beth Kery, Fred Hohman, Adriana Hilliard, R. Benjamin Shapiro
2023-04-11T18:31:07Z
http://arxiv.org/abs/2304.05444v3
# Collaborative Machine Learning Model Building with Families Using Co-ML

###### Abstract
Existing novice-friendly machine learning (ML) modeling tools center around a solo user experience, where a single user collects only their own data to build a model. However, solo modeling experiences limit valuable opportunities for encountering alternative ideas and approaches that can arise when learners work together; consequently, they often preclude encountering critical issues in ML around data representation and diversity that can surface when different perspectives are manifested in a group-constructed dataset. To address this issue, we created Co-ML -- a tablet-based app for learners to collaboratively build ML image classifiers through an end-to-end, iterative model-building process. In this paper, we illustrate the feasibility and potential richness of collaborative modeling by presenting an in-depth case study of a family (two children, 11 and 14 years old, working with their parents) using Co-ML in a facilitated introductory ML activity at home. We share the Co-ML system design and contribute a discussion of how using Co-ML in a collaborative activity enabled beginners to collectively engage with dataset design considerations underrepresented in prior work, such as data diversity, class imbalance, and data quality. We discuss how a distributed collaborative process, in which individuals can take on different model-building responsibilities, provides a rich context for children and adults to learn ML dataset design.

Keywords: machine learning; children; families; learning; collaboration

ACM Reference Format: Tiffany Tseng, Jennifer King Chen, Mona Abdelrahman, Mary Beth Kery, Fred Hohman, Adriana Hilliard, and R. Benjamin Shapiro. 2023. Collaborative Machine Learning Model Building with Families Using Co-ML. In _Interaction Design and Children (IDC '23), June 19-23, 2023, Chicago, IL, USA_. ACM, New York, NY, USA, 12 pages. [https://doi.org/10.1145/3585088.3589356](https://doi.org/10.1145/3585088.3589356)

## 1. Introduction
A pressing issue in ML today is the design of responsible AI systems that minimize bias and work well in diverse contexts. Responsible ML systems require rich, varied data representative of the range of scenarios in which a model may be applied.
A lack of balanced datasets has enormous public consequences, such as systems that do not account for differences in skin tone and gender (Krishnan et al., 2017) or that are grounded in historical data with embedded inequalities and in turn reproduce those inequalities (Krishnan et al., 2017; Krishnan et al., 2017). Teaching best practices for creating balanced datasets is an essential first step towards promoting critical conversations and education about ML systems. Yet with K-12 ML education being relatively nascent (Krishnan et al., 2017), best practices for teaching dataset design are still unestablished, and research is needed on how these ideas can be embedded into ML learning experiences and tools for beginners.

Existing ML modeling tools for novices like Teachable Machine (Manning et al., 2017) limit opportunities for encountering dataset diversity issues because they center around a solo modeling experience, where a single user typically collects data and tests models using only their own data. Solo experiences can result in datasets that are imbalanced or that otherwise reflect an individual's own perspective or unconscious biases. In contrast, we posit that a collaborative modeling experience, in which multiple people contribute to data collection and model testing, can facilitate the creation of balanced datasets representing multiple points of view.

To understand the role collaboration can play in learning balanced dataset design, we developed Co-ML, a tablet-based app for groups to build image classifiers. With Co-ML, groups decide on items for their classifier to recognize, and individuals collectively contribute to a shared dataset by adding images from their own devices. They then work together to train an ML model, test its performance, and iterate on their model by refining the underlying data. In this paper, we describe how we designed Co-ML to support collaborative model building and promote good dataset design practices via features like a shared dashboard for reviewing data, an interface for adding and evaluating test data, and a game that engages users in playfully and systematically testing model performance. We designed a 2-hour facilitated activity for families to build ML models with Co-ML using everyday items at home. To illustrate the affordances of the collaborative modeling experience and the ML dataset design ideas users encountered, we share an in-depth case study of a single family (a mother and father working alongside their two sons, ages 11 and 14) using Co-ML. Through this case study, we report on how the collaborative modeling experience supported by Co-ML provided a rich context to discuss, debate, and test theories about factors in dataset design critical to model performance.

## 2. Related Work
We begin with a background on ML concepts and practices that underlie model building and are central to how models are created with Co-ML. Next, we summarize related work on AI literacy for children and collaborative social learning, highlighting opportunities for collaborative model building to support education on dataset design best practices.

### ML Model Building
Machine learning is a type of AI in which computers detect patterns in data to make useful predictions. _Image classifiers_ are a type of ML model that classifies unseen data such as images into predetermined classes or labels, based upon those learned patterns. In _supervised_ ML, classifiers are first trained on human-labeled data (_training data_) using one of many possible underlying algorithms.
The result of training is a _model_ that takes input (new data) and returns an output (a classification of what the new data is). When training a model, an algorithm detects a set of _features_, or characteristics of the data, that distinguish between labels; when testing the classifier on new data, the model returns a probability distribution describing its _confidence_ in associating each label with the new data. For example, an ML classifier for identifying fruits would consist of labels for each type of fruit (e.g., banana, orange, and pear). Training data would consist of _samples_: images of each fruit labelled with the type of fruit pictured. Collecting these images is a process called _data collection_. When testing the model (_model testing_), human modeler(s) assess how accurately the model classifies new data.

A substantial part of the ML model-building pipeline focuses on data, including its collection, cleaning, and analysis. It is often easier to improve model performance by iterating on data, such as adding more samples, rather than by tweaking model architecture or algorithms (Krishnan et al., 2017). When constructing quality datasets, ML practitioners need to consider how _representative_ the data is (that it accurately reflects characteristics of the labels), how _diverse_ it is (that it presents a variety of use cases and contexts), and how _balanced_ it is (ideally, the distribution of samples across labels is roughly equal). Model failures often result from data challenges; for example, commercial facial recognition algorithms have worked poorly for people of color (Krishnan et al., 2017), an issue known as _class imbalance_. Having many users collect data often contributes to more diverse datasets (Krishnan et al., 2017) that can match a range of real-world use cases (Bordes et al., 2018).

Taking the same fruit classifier example, model designers might consider distinct features that distinguish labels, such as color and shape. Data diversity might entail capturing fruit in different lighting conditions (indoor or outdoor), states (whole or cut), and camera angles (such as top-down or side views). If any images are found to be unrepresentative (such as being blurry or otherwise low quality, or being mislabeled), they can be removed or edited, a process referred to as _quality control_. The dataset should also have relatively equal numbers of samples of bananas, oranges, and pears to avoid issues caused by class imbalance. Integrating dataset design practices into introductory ML experiences has the potential to prepare learners to understand the pitfalls of biased models and ultimately inform their understanding of responsible ML design.
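To make these dataset-design checks concrete, the following is a minimal sketch of a dataset audit following the fruit example above. The helper function, its imbalance threshold, and the metadata tags are illustrative choices for this sketch, not part of Co-ML or the paper.

```python
from collections import Counter

def audit_dataset(samples, imbalance_ratio=1.5):
    """Flag basic dataset-design issues for a labelled image set.

    `samples` is a list of (label, metadata) pairs, where metadata is a
    dict of contextual tags (e.g., camera angle, blurriness). This is an
    illustrative sketch of balance/diversity/quality checks, not an
    actual Co-ML implementation.
    """
    counts = Counter(label for label, _ in samples)
    report = {"counts": dict(counts), "warnings": []}

    # Class balance: labels should have roughly equal sample counts.
    if counts and max(counts.values()) > imbalance_ratio * min(counts.values()):
        report["warnings"].append("class imbalance: sample counts differ widely")

    # Diversity: each label should appear in varied contexts.
    for label in counts:
        angles = {m.get("angle") for l, m in samples if l == label}
        if len(angles) < 2:
            report["warnings"].append(f"low diversity for '{label}': one camera angle")

    # Quality control: flag low-quality samples for review and removal.
    blurry = [(l, m) for l, m in samples if m.get("blurry")]
    if blurry:
        report["warnings"].append(f"{len(blurry)} low-quality (blurry) samples to remove")
    return report

samples = [
    ("banana", {"angle": "top", "blurry": False}),
    ("banana", {"angle": "side", "blurry": False}),
    ("orange", {"angle": "top", "blurry": True}),
    ("pear",   {"angle": "top", "blurry": False}),
]
print(audit_dataset(samples))
```

Running this on the toy list above flags the class imbalance (two bananas versus one of each other fruit), the single-angle coverage for orange and pear, and the one blurry image, mirroring the representative/diverse/balanced criteria described above.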
### AI Literacy for Youth
AI4K12 (Krishnan et al., 2017) is one of several efforts (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017) to identify K-12 AI learning goals. AI4K12 has outlined a set of big ideas around ML, covering how computers perceive the world, how they represent data, and how they learn from data. While dataset design is one component of their framework (including helping learners examine features of training data, identify potential sources of bias, and understand how to properly balance datasets), more research is needed on how best to support youth in learning these practices.

A growing set of tools enables youth to play with off-the-shelf AI services (Han et al., 2017; Krizhevsky et al., 2014) or even build models from scratch by collecting their own data and training a model (typically using visual programming or no-code options (Krizhevsky et al., 2014)). Existing model-building tools for youth enable capturing physical activity (Bahdan et al., 2015; Krizhevsky et al., 2014; Krizhevsky et al., 2014) and building interactive toys that respond to custom gestures (Krizhevsky et al., 2014). Teachable Machine (Grover et al., 2016) (inspired by Wekinator (2017)) supports beginners more broadly, with model building on the web and real-time testing via a live classification interface. However, a limitation of existing tools is that they center around solo model building, with limited opportunity to encounter dataset design issues such as a lack of diversity. With existing tools, beginners largely build models that only work for themselves, using only their own data; this may lead to biased data and inaccurate model performance metrics, since the ways a model is tested represent a single point of view. The value of different perspectives in model building has been demonstrated in an evaluation of Teachable Machine in which youth exchanged models with their peers, helping them identify limitations in how well their models work for others (Han et al., 2017). With growing public discourse around bias in AI systems, curricula have been developed to expose ethical concerns to young people (Krizhevsky et al., 2014; Krizhevsky et al., 2014). Creating tools that aid beginners in building balanced, diverse datasets is critical to supporting young people in interpreting the limitations of existing ML systems and empowering them to build ethical ML systems in the future; we believe that a collaborative tool in which multiple viewpoints are represented in the data can be especially fruitful towards this end.

### Social & Family Learning
The value of social learning, where individuals have the opportunity to learn with and from peers, is substantiated by decades of work in social psychology and child development. Our work is inspired by the foundational socio-constructivist idea that knowledge stems from socially mediated interactions (Krizhevsky et al., 2014), in which collaboration supports the construction of knowledge through conversation and shared action, with learners interpreting information and using evidence to draw new conclusions collectively (Bahdan et al., 2015) and making arguments and explanations to one another (Krizhevsky et al., 2014; Krizhevsky et al., 2014).

Parents can play critical roles in fostering their children's interests and skill development in technology. Previous work has identified a range of roles parents take on, including project collaborator (working directly with their children) and learning broker (identifying resources for children to use) (Bahdan et al., 2015; Krizhevsky et al., 2014). Activities that support fluent participation and flexible roles adaptive to a family's personal styles and learning agendas are valuable in a variety of contexts, including at-home enrichment (Bahdan et al., 2015), workshops and camps (Krizhevsky et al., 2014; Krizhevsky et al., 2014), and museums (Krizhevsky et al., 2014). Regardless of their technical fluency, parents can support their children's learning through mediation strategies like posing questions and drawing attention (Krizhevsky et al., 2014).
In programming activities, parent-child pairs have been found to produce more "compact and well-structured" code with fewer errors compared to children working alone; additionally, children reflected on their solutions more when working alongside a parent (Krizhevsky et al., 2014). Early work on families learning about AI has examined how families engage with concepts like semantic networks using paper-based games (Krizhevsky et al., 2014; Krizhevsky et al., 2014), studied how families interpret AI technologies like voice assistants (Krizhevsky et al., 2014), and looked at the roles parents can play in facilitating children's learning about AI (Krizhevsky et al., 2014). A distinction between the contributions of our research and prior work is that we center families learning AI through an experience in which novices build functioning ML models together by iterating on the underlying dataset, empowering beginners to wrestle with ML dataset design issues firsthand. Thus, our focus is not on how families interact with existing production ML systems and how they believe they work, but rather on how they go about building ML systems themselves and develop understandings of the roles that dataset diversity, features, and class balance play in how ML systems make decisions.

## 3. Co-ML System
The Co-ML iPad app supports collaborative ML model-building through an end-to-end iterative workflow consisting of **defining an ontology of labels** for a classifier to identify, **collecting and reviewing training data** by taking photos on the tablet, **training a model** on device, **testing the model** on new data, **evaluating the model** by playing an in-app game, and **iterating on the model** by revising the underlying dataset.

We began our design of Co-ML with image classification because images offer interpretability and support visual inspection better than other modalities (Bahdan et al., 2015). Modalities like sound may also be difficult to support in a co-located collaborative experience because of interference when multiple people speak simultaneously. Co-ML is optimized for tablets because the mobility of tablets reduces friction for collecting data in the wild (compared to laptops), while their larger screen size (compared to phones) better supports review and inspection of data. Computationally, tablets are powerful enough to generate and test ML models using small datasets (hundreds to thousands of images).

### Defining an Ontology of Labels
First, users define a set of labels, or classes of objects they want the classifier to tell apart. Labels can be modified over time. In the example project used throughout this section, the labels are Banana, Orange, and Pear.

### Collecting and Reviewing Training Data
Tapping a label name opens a camera interface for collecting labelled photographs (Figure 2). Existing data for a given label are displayed alongside the camera to support users adding new images. The relative sample count for each label is displayed alongside the label name to enable quick inspection of class balance. Users can tap an image and remove it from the dataset if they choose to. Tapping the **View all training data** button displays all training images in a grid of thumbnails grouped by label name, as shown in Figure 2. Collaborative data collection is supported by synchronizing data across all devices and presenting them in a single dashboard, inviting users to review and identify patterns and gaps in the dataset.
Image data are stored and synced using the CloudKit service as the back end (Beng et al., 2017), with images stored in private CloudKit data stores accessible only to the users within a project.

### Training a Model
Tapping the **Train Model** button generates an ML model using transfer learning and the Create ML Image Classifier API, a Swift framework for training Core ML models (Beng et al., 2017; Chen et al., 2017). The model is then consumed using the Core ML API to classify new data. On-device training typically takes about 5 seconds for training datasets of several hundred images on 2019 or newer iPads. At the moment, Co-ML centers the ML model-building process around working with data (its collection, cleaning, and analysis), so we do not expose the underlying model architectures and algorithms of a trained model. As it is often easier to improve model performance by iterating on data rather than by tweaking model architectures or algorithms (Krizhevsky et al., 2014), we believe that focusing user exploration on dataset design offers a powerful entry point for beginners to start to reason about how ML models work.

### Testing the Model
After a model is trained, the app presents the user with a camera interface for classifying new data, as shown in Figure 3. Users can test the model's performance in two ways: photo mode and live classification. In the default photo mode, the user captures an image and is presented with a summary of classification results, including a bar chart of relative confidence levels for each label. If the model is wrong, users can provide the correct label. All images taken in this mode are added as test data to assess model performance across iterations (described in the next section). A secondary Live Classification mode enables users to classify data in real time and test boundary conditions where the model transitions from one classification to another. We chose to default to the photo classification mode rather than live classification (the default mode in existing tools like Teachable Machine (Tie et al., 2016)) because 1) we want to support beginners in actively forming hypotheses before seeing the classifier results, and 2) users can review and discuss results without having to interpret a constantly updating confidence bar chart.

Closing the classification interface presents the testing dashboard, where test images collected during photo classification can be reviewed (Figure 4). As with training data, we support collaboration by synchronizing test data to encourage all users to share and reflect on their collective model testing results. Each test image thumbnail has a badge indicating whether the model classified the image correctly (a green checkmark or red X), with misclassified images grouped first. Tapping a test image presents a bar chart of classification results. Whenever the model is retrained, all test images are automatically re-classified using the latest model, and their corresponding badges are updated; users can thus see whether previously misclassified images are correctly classified after iterating on their model.
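Co-ML itself trains on-device in Swift with Apple's Create ML and Core ML APIs; as a platform-neutral sketch of the same transfer-learning pattern (a lightweight classifier head over frozen features) and the per-label confidence readout of photo classification mode, consider the following Python illustration. The feature-extractor stub, toy images, and scikit-learn head are stand-ins chosen for this sketch, not Co-ML's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for a frozen, pretrained feature extractor: in a real
# transfer-learning setup this would be a CNN backbone; here a fixed
# random projection keeps the sketch runnable without model downloads.
rng = np.random.default_rng(0)
PROJECTION = rng.normal(size=(3072, 64))  # 32x32 RGB image -> 64-dim features

def extract_features(image):
    return image.reshape(-1) @ PROJECTION

LABELS = ["banana", "orange", "pear"]

def toy_photo(class_idx):
    # Toy "photo": one bright color channel per class, plus noise.
    img = rng.normal(0.1, 0.05, size=(32, 32, 3))
    img[..., class_idx % 3] += 1.0
    return img

X = np.array([extract_features(toy_photo(i % 3)) for i in range(60)])
y = np.array([LABELS[i % 3] for i in range(60)])

# "Train Model": fit a lightweight classifier head on the frozen features.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# "Photo Classification mode": classify one new image and report the
# per-label confidence distribution, as in Co-ML's results bar chart.
probs = clf.predict_proba([extract_features(toy_photo(0))])[0]
for label, p in zip(clf.classes_, probs):
    print(f"{label}: {p:.1%}")
```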
### Evaluating the Model: Restaurant Frenzy Game
To support playful model testing, we created an in-app game called _Restaurant Frenzy_, inspired by the popular games Diner Dash (Diner, 2017) and Overcooked (Diner, 2017). In the game, the player is a cook trying to complete as many orders as they can in a fixed time period (90 seconds) by showing the camera the needed ingredient. The 90-second time limit supports testing each label several times. We designed a food-themed game because cooking is often a shared experience for families, it can use inexpensive and easily obtainable ingredients, and physical objects naturally introduce variables important for quality image datasets, like perspective, lighting, and size. Every round presents players with a random target label, and they have 5 seconds to present the object to the camera (Figure 5). During each round, a bar chart of live classification results is displayed alongside the camera, and round scores are calculated using the model's confidence at the end of the round (a confidence level of 78% gets 7.8 points, and a user scores 0 points if the model misclassifies the object). At the end of the game, a Game Over screen shows the final score and the average confidence per label. Tapping the **View Round Details** button lets users review specific rounds of the game to see when items were misclassified. The game is designed to encourage collaborative play (with one person controlling the iPad and others positioning items for the camera to classify) and collective discussion to identify failure cases, which families can then use to decide how to improve their model.

Figure 2. Data collection interface for adding labelled images to the shared dataset (left). All images added across devices are visible in the synchronized Training Data Dashboard, organized by label name (right).

Figure 3. Classifying new data using the Photo Classification Mode (left) and Live Classification Mode (right). In Photo Classification Mode, the user takes a photograph and can review the classification results. In Live Classification, users can see an updating bar chart displaying relative confidence levels for each class.

Figure 4. Testing mode interfaces. Users can review collectively added test data, and classification results are based on the latest trained local model. Tapping on a misclassified sample shows a bar chart of confidence levels to help users debug model performance.

### Iterating on the Model
Users can revise their model by adding or removing training data, retraining their model, and seeing whether the model improved. To foster collective ownership of the dataset, all users have equal access to editing the project, including adding new labels, renaming or deleting existing labels, and adding or removing image data. Improvement can be seen by reviewing whether test data classification results have changed after re-training, or by re-playing the game and seeing if they can beat their high score.
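The round-scoring rule described above is compact enough to restate in code. The following is a minimal, hypothetical re-implementation of that rule (confidence-based points, zero for a misclassification), not Co-ML's actual game logic.

```python
def round_score(target_label, predicted_label, confidence):
    """Score one 5-second Restaurant Frenzy round.

    Per the paper's description: a correct classification earns one
    point per 10 percentage points of confidence (78% -> 7.8 points),
    and a misclassified object earns 0 points.
    """
    if predicted_label != target_label:
        return 0.0
    return confidence * 10.0  # confidence given as a fraction, e.g. 0.78

# Example game: each entry is (target, model prediction, model confidence).
rounds = [
    ("spaghetti", "spoon", 0.62),   # misclassified -> 0 points
    ("sauce", "sauce", 0.95),       # -> 9.5 points
    ("pot", "pot", 0.78),           # -> 7.8 points
]
total = sum(round_score(t, p, c) for t, p, c in rounds)
print(f"Final score: {total:.1f}")  # 17.3
```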
## 4. Methodology
We recruited families with children ages 10-14 to try out Co-ML and our companion activity. Families were recruited via a database of employees at Apple who had previously expressed interest in participating in research studies. We used a recruitment survey to screen for a number of factors, including ethnicity, number of parents/guardians, number of children and their ages and gender identities, and the profession(s) of the parent(s) and their prior exposure to ML. We chose families with two children and at least one parent who could participate. Once families were selected, we provided them with 11" iPads with the Co-ML app pre-installed, giving a tablet to each participating family member. Each family participated in a single 2-hour session facilitated via video conference to maintain safety during the COVID-19 pandemic. Each participant was compensated with a $40 gift card.

### Session Structure and Activity
In advance, we asked each family to decide on a favorite family dish and gather 5-8 items (ingredients, utensils, and spices) to use during the activity. We conducted sessions remotely through video-conferencing, using the same activity script for each 2-hour session. Three researchers were present: one facilitated the session and engaged with the participants, the second took observation notes, and the third provided technical support. All sessions were conducted in English. At the start of each session, we introduced the research team, answered any questions about the consent and assent forms, and ensured the forms were reviewed and signed by each participant before proceeding.

After the family shared the family dish they selected (e.g., pizza) and what ingredients they had (e.g., pita bread, tomato sauce, and mushrooms), we presented a brief demo of the Co-ML app before asking participants to narrow their pool of collected ingredients (and thus, the labels for their model) to 3-4 items (one per family member). Next, each participant received their own iPad and was instructed to take 5 photos of one item using their own device to add to the training dataset. The facilitator then guided a discussion in which participants reviewed their collective dataset. Next, we asked family members to swap items and take additional photos until each label had at least 15 images. Families reviewed the images again as a group and were asked whether they could anticipate any sources of confusion for the model based on the data. We then guided the family to train their model and explained how to take and review testing data. Families tested every item in their classifier and discussed whether the model was correct or incorrect in its predictions. The family then used a single iPad to play the Restaurant Frenzy game to evaluate their model, and after playing the game once, families were given 15 minutes to freely iterate and try to improve their models together. They then played the game again to see if their revised model resulted in a higher game score. At the end of the session, families completed a semi-structured interview about their experience.

Figure 5. Restaurant Frenzy Game. Users complete as many rounds as they can within a 90-second time limit. In each 5-second round, the user presents a target object to the camera and is scored by the confidence level of the classification. After the game is complete, a Game Over screen displays their total score and average confidence levels per label. Tapping the View Round Details button provides information about model performance in each round of the game.

### Data Collection and Analysis
After confirming consent to do so, we video-recorded each session using the video-conferencing platform and collected screen recordings of each individual's iPad to understand how each participant iterated on their models. We supplemented our video data with logs of timestamped records on CloudKit showing when individuals added or removed data. Data families added to Co-ML were viewable only by other family members and our research team. Upon completion of all sessions, we exported image data from CloudKit, encrypted the photos so they were only available to the research team, and deleted all cloud-stored data for participants' privacy.
Four researchers analyzed the data, which involved 1) stitching together the video recording from the video conference with individual iPad screen recordings to analyze each participant's actions during the activity; 2) transcribing the conversations and deductively coding for ML model building concepts that emerged when users verbally discussed strategies for improving their model; and 3) compiling timestamped events from the stitched video along with CloudKit event records to create visualizations of how users navigated through Co-ML in support of their model-building process. Researchers met for periodic analysis discussions to reach consensus around our interpretations, including families' interactions with each other and with Co-ML.

### Case Study Selection
As an exploratory study, our aim is to illustrate the feasibility and potential benefits of collaborative ML modeling with Co-ML through analyzing learners' interactions while building ML models together. Because we are concerned with understanding how and why individuals refined their approaches to dataset design throughout the activity, we chose a case study approach (Kal...).

## 5. Findings

### Mislabeled Data
While reviewing their shared training data, the family noticed images that had been added under the wrong label and discussed who would retake the images. YS offered to retake the spaghetti images, Dad provided advice on how to prevent the mistake from happening again (by pointing out where in the UI the label name is shown), and Mom invited others to proactively delete mislabeled images directly.

### First Misclassification and Resulting Debugging
The facilitator asked the family to train their model and test it in Photo Classification mode. The spoon, sauce, and pot were each correctly classified with high confidence, but spaghetti was misclassified as spoon, as displayed in Figure 7A. This misclassification set off a series of model testing trials in which several theories emerged. Mom shared that when a new item is introduced, the camera refocuses on the newer object before classifying, while YS agreed, adding that the camera may become blurry in the process of refocusing. Dad described how the "shininess of the spoon is grabbing the camera's attention more than spaghetti." OS offered that the similarly long shapes of the spoon and spaghetti may be an issue. Their discussion surfaced competing theories ranging from properties of the objects (their reflective qualities and shape), to properties of the image (whether it is blurry or not), to properties of the sensor (camera) classifying the object (whether classification depends on what the camera focuses on).

### Game Play 1
When the family played the game, they found that spaghetti was misclassified again (all other items were classified correctly with high confidence). After reviewing the game over screen (Figure 7B), they began to discuss how multiple properties, or data _features_, in combination may play a role; when YS brought up that the pot is also shiny like the spoon, Dad said that the spaghetti and spoon were both long, suggesting that since the pot was not similarly long, it would not be confused for spoon. Mom then described a realization: "When we were doing the exercise [data collection], I said I don't think it matters if we take a picture of the top [of the sauce jar], but watching Dad in the game, everything was from the top down. So now I actually think taking a top-down picture of the jar was more important than I thought originally." Through seeing how the model was used, she was able to revise her original ideas about what types of data variation are valuable.
### Live Classification: Spaghetti Zoom Insight
Next, the researcher introduced the family to Live Classification to view classification results in real time. They proceeded to test this feature with the spaghetti and spoon, as shown in Figure 7C, where Dad moved the iPad closer to and farther away from the packaging and witnessed a boundary condition where the classification toggled between spoon and spaghetti. When the researcher asked for ideas about when the classification switches, YS replied, "When it sees the entire thing [spaghetti packaging]," sharing his insight that the misclassification only occurs when more of the spaghetti packaging is visible. Mom then offered her own debugging suggestion: "Does it do the same thing with the spoon, or is it just confused by the spaghetti?" Dad and YS responded that it only misclassifies with the spaghetti, and Dad showed how zooming in and out of the spoon does not change the model's classification of the spoon. Through their discussion, they decided that the spaghetti data needed to be improved more than the spoon data.

### Dataset Gap Realization
Over the next 15 minutes, the family iterated on their model however they chose. Unlike the rest of the activity up to this point, this period was unfacilitated and thus provided a means to observe the choices individual family members made to improve the model, as well as how they worked together to address their spaghetti misclassification. In Figure 8, we represent the family's parallel efforts during this free play period, categorizing three different types of modeling actions: **Data Collecting** -- adding new training data; **Model Testing** -- using classification to identify where the model fails; **Quality Control** -- culling through the data dashboard and selectively removing images to clean the data. Figure 8 also shows which items each family member was attending to over the 15 minutes, as well as how often they switched between training and testing, with all but Mom shifting between the two throughout this period. For context, we provide three markers indicating moments during this period that we describe in the following sections.

Dad added training samples of spaghetti, while YS identified that the model classified an image of the family's vivid tablecloth (on which all other training samples were taken), without any items present, as spoon. After YS reviewed all the training data for spaghetti, he noticed, "We don't really have a lot of photos of the entire thing [spaghetti packaging]," and pointed out that the majority of the existing training images consisted of zoomed-in details of the spaghetti packaging. YS then went on to take 23 images of the spaghetti from above, capturing the entire package in each photo: "I'm going to figure out this mystery. I'm trying to become George...Curious George!" At the same time, Mom continued to test the model's performance on both spoon and spaghetti. In doing so, the two members were each tackling the same issue from multiple angles: data collection and testing.

Figure 6. A. Initial training data before discussion. B. Expanded training data for Sauce added after discussion, where photos of the top and front labels of the jar were added.

### Class Imbalance
At this point, YS had added more images of spaghetti (there were 41 training images for spaghetti but only 23 for spoon). As the family continued debugging, the researcher asked, "So have you added photos of both spaghetti and spoon since you last trained the model?"
In response, Mom said, "Yes, but we've added significantly more pictures of spaghetti than spoon," and YS agreed: "A lot!" This realization about _class imbalance_ (that there were more spaghetti images than spoon) kicked off further data collection from Dad, who captured additional training data for spoon.

Figure 7. A. The first misclassification of spaghetti as spoon. B. The game over screen showing misclassified rounds. C. Using live classification to find where the model switches between spaghetti and spoon classifications.

Figure 8. Family A iteration moments during 15 minutes of free play, displaying how individual family members chose to iterate on the model through data collecting, quality control, and testing. The ingredients the family members chose to work with (spaghetti, sauce, pot, and spoon) are also displayed. Three moments are highlighted, alongside screenshots from the Co-ML app, that show how the family reasoned about their model's performance.

### Data Quality for Spaghetti
YS, who continued reviewing spaghetti data with the training data dashboard, then declared, "I found a [spaghetti] image with a spoon in it!" He showed Mom an image where the spoon was on the edge of a spaghetti sample, and then deleted it. _Quality control_ actions were then taken on by both YS and OS, who actively cleaned the dataset by removing misrepresentative images. While OS was relatively quiet throughout the activity, our screen recordings revealed his active participation: he performed the most Quality Control actions of any family member. In total, OS deleted 28 images, identifying samples where multiple items were visible (for spaghetti as well as for spoon and sauce). As they continued over the next four minutes, YS shared that after retraining, the model had improved.

### Game Play 2 & Activity Reflection
At the end of their iteration period, the family had increased their training dataset from 74 total images to 126 by the time they played Restaurant Frenzy again. In the first round, spaghetti was the target ingredient; YS stated, "The hard one...this is the moment of truth!", and when the classifier returned a confidence of 100% for spaghetti, the family cheered. But in later rounds, the model was not as successful -- spaghetti was misclassified as spoon in another round, and pot was also misclassified as spoon. While the family got a slightly lower score in their second play of the game, the average confidence level for spaghetti improved from 0% in the first gameplay to 68%, suggesting they had improved the model's performance on spaghetti overall.

When the family reflected on their experience at the end of the session, the researcher asked, "Do you think adding more data affects the accuracy of the model?" In response, YS shared, "The extra data could also change other data...because if you fix one thing, the thing that it takes to fix that could cause another problem with another thing." Here, the child revealed an understanding that simply adding more data to a model does not necessarily improve how well the model performs. Improving performance for one label can cause regressions for other labels (Song et al., 2018), a critical insight that the 11-year-old was able to develop in his limited experience using Co-ML.
In their discussion, YS shared a breakthrough in his understanding of the spoon's role in misclassification: "Because the spoon is the thinnest thing of all, the reason it [the model] thought the other things were the spoon is that the more we zoomed out, the more it saw the background. Since the spoon is the smallest thing, it shows more of the background." This related to his earlier insight, when a photo of the tablecloth alone was classified as spoon, identifying a spurious relationship in which the model associated the tablecloth with the spoon label. Mom added that it would be important to teach the computer "how to ignore the busy background."

Mom described how Co-ML supported each son's preferred approach; while OS was less talkative than his brother, he contributed in his own way by "catching people putting pictures in the wrong category and combing through the data and deleting data points that he thought were not quality for the exercise." Through YS's continual discussion with his parents and OS's quieter data cleaning practices, the family collectively debugged their model.

### Comparison to Other Families
While presenting our full analysis of all three families is out of scope for this work, here we summarize some of the main findings for the Salad and Pizza families for context. Similar to the Spaghetti family, the Salad and Pizza families also focused on data representation, noting the unique packaging of their ingredients (e.g., cans, jars, bags, or printed labels). Further, the Salad family engaged with the idea of class imbalance. After noticing that they had significantly more samples for Spoon than for their other labels, the son wondered whether this might hurt the model's performance: "If you took hundreds of pictures of the spoon, that means basically [if] any part of the spoon [is] showing, it [the model] would instantly recognize it. So if you tried adding a different thing that maybe had a similar color and also reflected, it would think that it's a spoon. In some ways, it would hurt it [the model] instead of make it better."

Unlike the Spaghetti family, who mainly took photos of their objects placed on their dining table, the other two families ended up utilizing diverse backgrounds, either unintentionally (due to where people were positioned, such as working on different surfaces or against different backdrops) or through intentional design (at one point the Salad family tested their model by holding objects up against a bright yellow hoodie to test whether the color of the background mattered). Both the Spaghetti and Salad families actively engaged in discussion with one another, with individuals fluidly switching between training and testing and working with multiple ingredients. In contrast, the Pizza family preferred to work silently, with each person focused on debugging one single ingredient for the entirety of the free play period. We wonder whether promoting collaborative dialogue and interactions may help encourage engagement with multiple stages of the model-building process; additional studies would be needed to explore and verify this hypothesis.

## 6. Discussion
The multi-user modeling experience of Co-ML supported the collaborative collection, review, and revision of data, enabling the family to encounter core dataset design considerations of data representation, diversity, and class balance.
Notably, these dataset practices are foundational for addressing existing biases in AI systems, as discussed in Section 2.2, so we are encouraged that this short activity using everyday objects could facilitate learning about these considerations.

**Data Representation and Diversity.** Since the family had one item for each label, their _dataset diversity_ included multiple camera angles (top, side, or bottom views of objects) and different zoom levels (close-up pictures versus showing a whole object). Discussion over the synchronized dataset dashboard in Co-ML helped family members notice and consider types of data representation. Attributes like color, length, and shininess were discussed as _features_ the model might use to tell items apart. These discussions helped surface missing, erroneous, or underrepresented data, such as top-down images of the sauce jar or mislabeled images (5.1). Dataset gaps were further identified through discussion of the Game Over screen to see where the model had the poorest performance (5.3), and by using the Live Classification feature to realize that the dataset was missing images of the entire spaghetti packaging (5.4). These results illustrate how a collaborative experience, specifically a shared dataset and collective discussion, can help surface differences in data collection approaches that can ultimately inform and enhance ML datasets.

**Class Imbalance.** Along with considering diversity within a single label, the family also recognized that balancing datasets _across_ labels was important. Mom and YS recognized the class imbalance of having far more images of spaghetti collected than spoon, which led Dad to capture more images of the spoon (5.6). Yet it was through direct experimentation that YS identified that adding more data can cascade into other issues (5.8), going beyond the common misconception that ML models can be improved just by adding more data (Zhou et al., 2019): "Because the extra data can change other data. If you fix one thing, the thing it takes to fix that could cause another problem with another thing [label]" (YS). This is a common problem that professional ML engineers wrestle with -- adding data alone does not necessarily improve a model because of the introduction of new issues (Zhou et al., 2019).

**Data Quality.** When YS and OS each identified issues with data quality (mislabeled images as in 5.1, or images with multiple objects visible as in 5.7), YS, OS, and Mom contributed to cleaning the dataset, while Dad helpfully pointed out where in the Co-ML interface to confirm images were added to the correct label. Most deleted images had been taken by other family members (rather than by the user taking on the data quality role), supporting our idea that having multiple people contribute to the data helps learners encounter viewpoints different from their own.

**Overfitting.** Overfitting, or situations where the model learns patterns in the training data so closely that it does not generalize to new contexts, did not arise in the family's conversations, but ended up being a challenge with the study activity more broadly. Because the family used the same items for both training and testing their model, and trained and tested their model in the same place (around their kitchen table), the model they developed likely overfit to a narrow set of use cases.
Overfitting could potentially be reduced if 1) families were asked to reserve a similar but distinct version of an item to use for testing (such as having two jars of sauce from two different brands, with one used only for testing), and/or 2) the family was asked to test against a different background (such as a different countertop in their house). Future work could adapt this activity to introduce novel contexts to reduce overfitting and support testing how well a model generalizes.

**Multiple modes of participation.** We observed that Co-ML supported equitable and differentiated approaches to model building. Since no single user has sole control or influence over the developing dataset, each user can decide which ideas and priorities they themselves want to pursue, encouraging different styles of participation. In summarizing the distinct preferences of OS and YS, Mom said, "I think a good distinction is our two kids. We had one YS who was very interactive, and one OS that was not verbally interactive but was really more focused on data quality...focused on catching people putting pictures in the wrong category and combing through the data and deleting data points that he thought were not quality for the exercise." These multiple ways of participating suggest that even for those who are not as involved in the verbal discourse, listening to other people's concerns can still enable individuals to act upon new ideas.

## 7. Conclusion
We presented Co-ML, a novel tablet-based application for beginners to build and test ML image classifiers using a collaboratively built dataset. Through an in-depth case study, we presented how a family using Co-ML encountered critical ideas in ML dataset design, including data representation, diversity, and class imbalance. These design considerations were fundamentally shaped by having multiple points of view from each of the family members represented in the data, enabling different strategies for training and testing models to be debated and explored, ultimately diversifying their dataset. Further, through a distributed modeling experience with Co-ML, family members were able to work in parallel through multiple stages of the model-building pipeline, including data collection, testing, and quality control, showing that these fundamental ML modeling practices can be supported with appropriately designed tools and activities. Through this work, we contribute a hands-on, collaborative approach to introducing strategies for creating balanced datasets, and we ultimately hope to encourage more people to consider collaborative, social learning as a powerful means to broaden ML participation and literacy.

## 8. Selection and participation of children
Families were recruited via a database of employees at Apple who had expressed interest in participating in research studies. We selected families after they completed an initial recruitment survey with questions about their ethnicity, the number of children and their ages and gender identities, and the parents' professions. At the beginning of our virtual sessions, a researcher went over consent forms and answered any questions from the family, and children and their families were each required to sign individual consent forms before we proceeded with the study. The protocol and data collection for this research study were reviewed and approved by a research ethics committee and legal counsel at a large technology company.
All data collected using the Co-ML app were visible only to members of the family and the research team; upon completion of the study, the data were removed from cloud-based data stores, encrypted, and made available only to our research team for analysis.
2305.12211
Coordinate-Update Algorithms can Efficiently Detect Infeasible Optimization Problems
Coordinate update/descent algorithms are widely used in large-scale optimization due to their low per-iteration cost and scalability, but their behavior on infeasible or misspecified problems has not been much studied compared to the algorithms that use full updates. For coordinate-update methods to be as widely adopted to the extent so that they can be used as engines of general-purpose solvers, it is necessary to also understand their behavior under pathological problem instances. In this work, we show that the normalized iterates of randomized coordinate-update fixed-point iterations (RC-FPI) converge to the infimal displacement vector and use this result to design an efficient infeasibility detection method. We then extend the analysis to the setup where the coordinates are defined by non-orthonormal basis using the Friedrichs angle and then apply the machinery to decentralized optimization problems.
Jinhee Paeng, Jisun Park, Ernest K. Ryu
2023-05-20T15:20:48Z
http://arxiv.org/abs/2305.12211v2
# Coordinate-Update Algorithms can Efficiently Detect Infeasible Optimization Problems

###### Abstract
Coordinate update/descent algorithms are widely used in large-scale optimization due to their low per-iteration cost and scalability, but their behavior on infeasible or misspecified problems has been studied much less than that of algorithms that use full updates. For coordinate-update methods to be adopted widely enough to serve as engines of general-purpose solvers, it is necessary to also understand their behavior under pathological problem instances. In this work, we show that the normalized iterates of randomized coordinate-update fixed-point iterations (RC-FPI) converge to the infimal displacement vector and use this result to design an efficient infeasibility detection method. We then extend the analysis to the setup where the coordinates are defined by non-orthonormal bases using the Friedrichs angle, and then apply the machinery to decentralized optimization problems.

## 1 Introduction
Coordinate update/descent algorithms are widely used in large-scale optimization due to their low per-iteration cost and scalability. These algorithms update only a single block of coordinates of an optimization variable per iteration, in contrast to full or stochastic gradient algorithms, which update all variables every iteration. The convergence of coordinate update algorithms has been analyzed extensively, and they have been shown to achieve strong practical and theoretical performance in many large-scale machine learning and optimization problems [1] for non-pathological problem instances. However, the behavior of coordinate update algorithms on infeasible or misspecified problems has not been analyzed, which sharply contrasts with algorithms that use full (deterministic) updates. The recent interest in building general-purpose optimization solvers with first-order algorithms has led to much work analyzing the behavior of full-update first-order algorithms on pathological problem instances, so that solvers can robustly detect such instances. For coordinate-update methods to be adopted widely enough to serve as engines of general-purpose solvers, it is necessary to also understand their behavior under pathological problem instances.

### 1.1 Summary of results, contribution, and organization
In this work, we analyze the behavior of randomized coordinate-update fixed-point iterations (RC-FPI) applied to inconsistent problem instances. Analogous to the classical results for full-update fixed-point iterations, we show that the normalized iterate \(\frac{x^{k}}{k}\) generated by RC-FPI converges toward the infimal displacement vector, which serves as a certificate of infeasibility, in the sense of both \(L^{2}\) and almost sure convergence. We then bound the asymptotic bias and variance of the estimator, thereby establishing an asymptotic convergence rate. Finally, we extend the analysis to the setup where the coordinates are defined by non-orthonormal bases using the Friedrichs angle, and then apply the machinery to decentralized optimization problems.

Section 3 defines the randomized coordinate update. The _uniform expected step-size condition_ is defined, which ensures that each coordinate is updated equally in expectation, and the value \(\alpha\) is defined as the expected scale of the update on each coordinate.
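As a hedged numerical illustration of the claimed convergence of the normalized iterate (and of the Section 6 style infeasibility check built on it), the NumPy sketch below runs a randomized coordinate-update fixed-point iteration on a toy infeasible problem: finding a point in the intersection of two parallel lines. The operator, sets, and threshold are illustrative choices, not from the paper; under uniform sampling over \(m\) coordinates with full coordinate steps, the expected step scale is \(\alpha = 1/m\), and the normalized iterate \(x^{k}/k\) should approach \(-\alpha\mathbf{v}\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Infeasible problem: find a point in the intersection of two parallel
# lines A = {y = 0} and B = {y = 1} in R^2 (the intersection is empty).
proj_A = lambda x: np.array([x[0], 0.0])
proj_B = lambda x: np.array([x[0], 1.0])

def T(x):
    # Douglas-Rachford-style operator; for these sets T(x) = x + (0, 1),
    # so I - T is constant and the infimal displacement vector is v = (0, -1).
    return x + proj_B(2 * proj_A(x) - x) - proj_A(x)

m = 2          # number of (scalar) coordinate blocks
alpha = 1 / m  # expected per-coordinate step scale under uniform sampling

x = np.zeros(2)
K = 20000
for k in range(1, K + 1):
    i = rng.integers(m)          # pick one coordinate uniformly at random
    x[i] += (T(x) - x)[i]        # RC-FPI: update only coordinate i

v = np.array([0.0, -1.0])        # infimal displacement vector for this T
print("x^K / K    :", x / K)     # approximately -alpha * v = (0, 0.5)
print("-alpha * v :", -alpha * v)

# Infeasibility test (in the spirit of Section 6): declare the problem
# infeasible when the normalized iterate's norm exceeds a threshold.
eps = 0.25
print("infeasible?:", np.linalg.norm(x / K) >= eps)
```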
An upper-bound condition on the expected squared norm is introduced through the value \(\beta\). We show that when \(M=\mathds{I}\), this condition is satisfied with \(\beta=\alpha\). Additionally, non-expansive behavior of the expected squared norm is shown, which in turn bounds the expected difference between the iterates of (RC-FPI) and (FPI with \(\bar{\mathbb{T}}\)). Section 4.2 and Section 4.4 present the first two key achievements of this paper, the convergence of the normalized iterate. Section 4.2 handles the \(L^{2}\) convergence when the norm is the \(\|\cdot\|\)-norm. Section 4.4 handles almost sure convergence, with the extra condition \(\theta<1\). The normalized iterate \(\frac{x^{k}}{k}\) converges to \(-\alpha\mathbf{v}\), where \(\mathbf{v}\) is the infimal displacement vector. Section 5 presents the third main achievement, the asymptotic upper bound on the variance of the normalized iterate. We show that as the iteration count \(k\to\infty\), the variance is bounded in \(\mathcal{O}\left(1/k\right)\). We further exhibit an example attaining equality, showing that the bound is tight. Then, we present an experiment on the relation between the range set and the variance. Section 6 presents the infeasibility detection method for (RC-FPI), constructed using the results of the preceding sections. The method rejects the null hypothesis \(\left\|\mathbf{v}\right\|_{M}\leq\delta\) by checking whether \(\left\|\frac{x^{k}}{k}\right\|_{M}\geq\epsilon\) after a certain iteration count, and we provide the iteration count required to use the method. Section 7 presents an extension of our results to non-orthogonal bases. We show that a certain condition on the Friedrichs angle needs to be satisfied to obtain the same results. We then apply this result to a decentralized optimization method, (PG-EXTRA). Furthermore, in an experiment running (PG-EXTRA) on an infeasible problem, the normalized iterate was found to converge faster under (RC-FPI) than under (FPI).

The paper is organized as follows. Section 2 sets up notation and reviews known results and notions. Section 4 presents the \(L^{2}\) and almost sure convergence of the normalized iterate. Section 5 provides the asymptotic upper bound for the normalized iterate. Section 6 then uses these results to build the infeasibility detection in (RC-FPI). Section 7 extends our results to non-orthogonal bases, allowing application to optimization methods such as (PG-EXTRA). Section 8 concludes the paper.

### Prior work

#### 1.2.1 FPI in the inconsistent case.

The behavior of the inconsistent fixed-point iteration was first characterized by Browder and Petryshyn [2], who showed that the iterates are unbounded. Later, Pazy [3] showed that the iterates actually diverge, in the sense that \(\lim_{k\to\infty}\frac{\mathbf{T}^{k}x^{0}}{k}=-\mathbf{v}\), and this work also led to similar results in more general Banach-space settings [4, 5, 6, 7] and geodesic spaces [8, 9]. If the operator is more than just non-expansive, then the difference of iterates is also convergent to \(\mathbf{v}\); see Bruck Jr [10], Baillon et al. [11], Reich and Shafrir [12]. There are also in-depth analyses of the characteristics of the infimal displacement vector, regarding its direction [13, 14, 15] and the composition and convex combinations of non-expansive operators [16, 17].

#### 1.2.2 Infeasibility detection and numerical solvers.
Fixed-point iterations cover a broad range of optimization algorithms, including Douglas-Rachford splitting (DRS) [18] and the alternating direction method of multipliers (ADMM) [19, 20], which are commonly used as first-order methods for solving general convex optimization problems. The infimal displacement vectors of the DRS and ADMM operators have recently been studied [21, 22, 23, 24, 25, 26], and they were proven to carry meaning in terms of the primal and dual problems as well [27, 28]. Related to such behaviors, ADMM-based infeasibility-detecting algorithms have been suggested [29, 30, 31], which led to first-order numerical solvers such as OSQP [32] and COSMO [33]. Apart from the above, SCS [34, 35] uses the homogeneous self-dual embedding [36, 37].

#### 1.2.3 Randomized coordinate update and RC-FPI.

Coordinate descent is a method that updates one coordinate or block of coordinates at each iteration [38, 39, 40, 41, 42, 43, 44]. Such methods are also popular in the proximal setup [45, 46, 47], the prox-linear setup [1, 48, 49, 50, 51, 52, 53, 54, 55], the distributed (or asynchronous) setup [56, 57], and even in discrete optimization [58, 59, 60]. There are in-depth complexity analyses and accelerated variants of the coordinate descent method as well [61, 62, 63, 64, 65, 66, 67, 68, 69]. Furthermore, there are attempts to hybridize coordinate updates with full updates in primal-dual algorithms [70, 71]. Randomized coordinate updates for fixed-point iterations were first proposed by Verkama [72]. A general framework for randomized block-coordinate fixed-point iterations was suggested by Combettes and Pesquet [73, 74], followed by a similar line of work including block-coordinate fixed-point iterations in the asynchronous parallel setup [57], forward-backward splitting [75, 76], Douglas-Rachford splitting [77], and so on. It also led to refined analyses of cyclic fixed-point iterations [78, 79] and of the iteration complexity of coordinate-update fixed-point iterations and their variants [80, 81, 82, 55, 83].

#### 1.2.4 Friedrichs angle and splitting methods.

The Friedrichs angle [84, 85, 86] measures the angle between subspaces and is often used to characterize the convergence rate of projection methods [87, 88, 89, 90, 91, 92, 93, 94, 95, 96]. This kind of approach has been extended to cover splitting methods such as DRS and ADMM as well [97, 98, 99, 100, 101, 102, 103].

## 2 Preliminaries and notations

In this section, we set up notation and review known results. First, let us clarify the underlying space. Throughout this paper, a Hilbert space refers to a real Hilbert space. The underlying space is a real Hilbert space \(\mathcal{H}\), which is the direct sum of \(m\) real Hilbert spaces,

\[\mathcal{H}=\mathcal{H}_{1}\oplus\mathcal{H}_{2}\oplus\cdots\oplus\mathcal{H}_{m}.\]

An element \(u\in\mathcal{H}\) can be decomposed into \(m\) blocks as

\[u=\left(u_{1},u_{2},\ldots,u_{m}\right),\quad u_{i}\in\mathcal{H}_{i},\]

and \(u_{i}\) is called the \(i\)th block coordinate of \(u\). The Hilbert space \(\mathcal{H}\) has its induced norm and inner product,

\[\|x\|^{2}=\sum_{i=1}^{m}\|x_{i}\|_{i}^{2},\qquad\langle x,y\rangle=\sum_{i=1}^{m}\langle x_{i},y_{i}\rangle_{i},\]

for all \(x,y\in\mathcal{H}\), where \(\|\cdot\|_{i}\) and \(\langle\cdot,\cdot\rangle_{i}\) are the norm and inner product of \(\mathcal{H}_{i}\) and \(x_{i},y_{i}\) are the \(i\)th block coordinates of \(x,y\), respectively. Consider a linear, bounded, self-adjoint and positive definite operator \(M:\mathcal{H}\rightarrow\mathcal{H}\).
The \(M\)-norm and \(M\)-inner product of \(\mathcal{H}\) are defined as

\[\|x\|_{M}=\sqrt{\langle x,Mx\rangle},\qquad\langle x,y\rangle_{M}=\langle x,My\rangle,\]

which form another valid pair of norm and inner product on the space \(\mathcal{H}\). \(\|\cdot\|\) and \(\langle\cdot,\cdot\rangle\) are simply the instances of the \(M\)-norm and \(M\)-inner product with \(M\) the identity map. As a remark, the map \(M\) can be represented by a symmetric positive definite matrix if \(\mathcal{H}=\mathbb{R}^{n}\). In this case, the \(M\)-norm and \(M\)-inner product are

\[\|x\|_{M}=\sqrt{x^{T}Mx},\qquad\langle x,y\rangle_{M}=x^{T}My.\]

Define the \(M\)-variance of an \(\mathcal{H}\)-valued random variable \(X\) as

\[\text{Var}_{M}[X]=\mathbb{E}[\|X\|_{M}^{2}]-\|\mathbb{E}[X]\|_{M}^{2}.\]

We develop the theory of Sections 4 and 5 with the general \(M\)-norm so that the theory is applicable to the applications of Section 7.

### Operators

Denote \(\mathds{I}\colon\mathcal{H}\to\mathcal{H}\) as the identity operator. For an operator \(\mathds{T}\colon\mathcal{H}\to\mathcal{H}\), let \(\operatorname{range}\mathds{T}\) be the range of \(\mathds{T}\). If \(x_{\star}\in\mathcal{H}\) is a point such that \(x_{\star}=\mathds{T}x_{\star}\), it is called a fixed point of \(\mathds{T}\). We say an operator \(\mathds{T}\colon\mathcal{H}\to\mathcal{H}\) is non-expansive with respect to \(\left\|\cdot\right\|_{M}\) if

\[\left\|\mathds{T}x-\mathds{T}y\right\|_{M}\leq\left\|x-y\right\|_{M},\qquad\forall x,y\in\mathcal{H},\]

and is \(\theta\)-averaged for \(\theta\in(0,1)\) if \(\mathds{T}=(1-\theta)\mathds{I}+\theta\mathds{C}\) for some non-expansive operator \(\mathds{C}\). For notational convenience, we will refer to non-expansive operators as \(\theta\)-averaged with \(\theta=1\), even though, strictly speaking, \(\theta=1\) means the operator is not averaged. An operator \(\mathds{S}\colon\mathcal{H}\to\mathcal{H}\) is \(\frac{1}{2}\)-cocoercive with respect to \(\left\|\cdot\right\|_{M}\) if

\[\left\langle\mathds{S}x-\mathds{S}y,\,x-y\right\rangle_{M}\geq\frac{1}{2}\left\|\mathds{S}x-\mathds{S}y\right\|_{M}^{2},\qquad\forall x,y\in\mathcal{H}.\]

\(\mathds{T}\) is non-expansive if and only if \(\mathds{S}=\mathds{I}-\mathds{T}\) is \(\frac{1}{2}\)-cocoercive. Also, \(\mathds{T}\) is \(\theta\)-averaged for some \(\theta\in(0,1)\) if and only if \(\mathds{S}=\frac{\mathds{I}-\mathds{T}}{\theta}\) is \(\frac{1}{2}\)-cocoercive.

### Inconsistent operators and infimal displacement vector

We say an operator \(\mathds{T}\colon\mathcal{H}\to\mathcal{H}\) is consistent if it has a fixed point, and inconsistent if it does not. \(\mathds{T}\) is consistent if and only if \(0\in\operatorname{range}\left(\mathds{I}-\mathds{T}\right)\). When \(\mathds{T}\) is non-expansive, the closure \(\overline{\operatorname{range}}\left(\mathds{I}-\mathds{T}\right)\) is a nonempty closed convex set, so it has a unique minimum-norm element, which we denote \(\mathbf{v}\) [3]. We call \(\mathbf{v}\) the _infimal displacement vector_ of \(\mathds{T}\) [104, 21]. Alternatively, \(\mathbf{v}\) is the projection of \(0\) onto \(\overline{\operatorname{range}}\left(\mathds{I}-\mathds{T}\right)\). Equivalently, \(\mathbf{v}\) is the infimal displacement vector of \(\mathds{T}\) if and only if

\[\left\langle y-\mathbf{v},\,\mathbf{v}\right\rangle_{M}\geq 0,\qquad\forall y\in\operatorname{range}\left(\mathds{I}-\mathds{T}\right).
\tag{1}\]

For a convex optimization problem, let \(\mathds{T}\) be an operator corresponding to an iterative first-order method, such as the Douglas-Rachford splitting (DRS) operator [18], and let \(\mathbf{v}\) be its infimal displacement vector. Loosely speaking, if the optimization problem is feasible and the problem is well-behaved, then \(\mathbf{v}=0\). (However, it is possible for "weakly infeasible" problems to have \(\mathbf{v}=0\), so \(\mathbf{v}=0\) does not guarantee feasibility.) On the other hand, \(\mathbf{v}\neq 0\) implies that the problem or its dual problem is infeasible, so \(\mathbf{v}\neq 0\) serves as a certificate of infeasibility [29, 30].

### Fixed point iteration and normalized iterate

The fixed-point iteration (FPI) with respect to an operator \(\mathbb{T}\colon\mathcal{H}\to\mathcal{H}\) is defined as

\[x^{k+1}=\mathbb{T}x^{k},\quad k=0,1,2,\ldots,\]
(FPI)

where \(x^{0}\in\mathcal{H}\) is a starting point. Let \(x^{0},x^{1},x^{2},\ldots\) be the iterates of (FPI). We call \(\frac{x^{k}}{k}\) the \(k\)th _normalized iterate_ of (FPI) for \(k=1,2,\ldots\). The seminal work of Pazy [3] characterizes the dynamics of the normalized iterates of (FPI).

**Theorem 2.1** (Pazy [3]).: _Let \(\mathbb{T}\colon\mathcal{H}\to\mathcal{H}\) be non-expansive. Let \(x^{0},x^{1},x^{2},\ldots\) be the iterates of (FPI). Then, the normalized iterate \(\frac{x^{k}}{k}\) converges strongly,_

\[\frac{x^{k}}{k}\to-\mathbf{v}\]

_as \(k\to\infty\), where \(\mathbf{v}\) is the infimal displacement vector of \(\mathbb{T}\)._

Since the underlying space is a Hilbert space, we clarify that convergence in the space \(\mathcal{H}\) throughout this paper refers to strong convergence in the Hilbert space.

## 3 Randomized-coordinate update setup

In this section, we focus on a variant of (FPI) with randomized coordinate updates. Consider a \(\theta\)-averaged operator \(\mathbb{T}\colon\mathcal{H}\to\mathcal{H}\) with its corresponding \(\frac{1}{2}\)-cocoercive operator \(\mathbf{S}=\frac{1}{\theta}\left(\mathbb{I}-\mathbb{T}\right)\) with \(\theta\in(0,1]\). To clarify, we refer to non-expansive operators as \(\theta\)-averaged with \(\theta=1\). Define \(\mathbf{S}_{i}\colon\mathcal{H}\to\mathcal{H}\) for \(i=1,2,\ldots,m\) as \(\mathbf{S}_{i}x=(0,\ldots,0,(\mathbf{S}x)_{i},0,\ldots,0)\), where \((\mathbf{S}x)_{i}\in\mathcal{H}_{i}\). We call \(\mathcal{I}=(\mathcal{I}_{1},\mathcal{I}_{2},\ldots,\mathcal{I}_{m})\in[0,1]^{m}\subset\mathbb{R}^{m}\) a _selection vector_ and use it as follows. Define \(\mathbf{S}_{\mathcal{I}}\colon\mathcal{H}\to\mathcal{H}\) and \(\mathbb{T}_{\mathcal{I}}\colon\mathcal{H}\to\mathcal{H}\) as

\[\mathbf{S}_{\mathcal{I}}=\sum_{i=1}^{m}\mathcal{I}_{i}\mathbf{S}_{i},\qquad\mathbb{T}_{\mathcal{I}}=\mathbb{I}-\theta\mathbf{S}_{\mathcal{I}}.\]

We can think of \(\mathbf{S}_{\mathcal{I}}\) as the selection of blocks based on \(\mathcal{I}\) and \(\mathbb{T}_{\mathcal{I}}\) as the update based on the selected blocks. Throughout this paper, we assume that \(\mathcal{I}\) is randomly sampled from a distribution on \(\left[0,1\right]^{m}\) that satisfies the _uniform expected step-size condition_

\[\mathbb{E}_{\mathcal{I}}\left[\mathcal{I}\right]=\alpha\mathbf{1} \tag{2}\]

for some \(\alpha\in(0,1]\), where \(\mathbf{1}\in\mathbb{R}^{m}\) is the vector with all entries equal to \(1\). (Note, \(\mathcal{I}\in\left[0,1\right]^{m}\) already implies \(\alpha\in[0,1]\), so we are additionally assuming that \(\alpha>0\).)
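To make the uniform expected step-size condition (2) concrete, here is a minimal numerical sketch of ours (not from the paper), assuming NumPy and scalar blocks \(\mathcal{H}_{i}=\mathbb{R}\); the function names are illustrative. It samples selection vectors from two common distributions and empirically checks \(\mathbb{E}_{\mathcal{I}}\left[\mathcal{I}\right]=\alpha\mathbf{1}\).

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4  # number of blocks

def single_block(rng, m):
    # Pick exactly one block uniformly at random: E[I] = (1/m) * 1, so alpha = 1/m.
    I = np.zeros(m)
    I[rng.integers(m)] = 1.0
    return I

def bernoulli_blocks(rng, m, p=0.3):
    # Include each block independently with probability p: E[I] = p * 1, so alpha = p.
    return (rng.random(m) < p).astype(float)

# Empirical check of the uniform expected step-size condition (2).
print(np.mean([single_block(rng, m) for _ in range(50_000)], axis=0))      # each entry close to 0.25
print(np.mean([bernoulli_blocks(rng, m) for _ in range(50_000)], axis=0))  # each entry close to 0.30
```

Both distributions satisfy (2); the single-block scheme with \(\alpha=1/m\) is the most common choice in practice and is reused in the sketches below.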
The randomized coordinate fixed-point iteration (RC-FPI) is defined as

\[x^{k+1}=\mathbb{T}_{\mathcal{I}^{k}}x^{k},\qquad k=0,1,2,\ldots,\]
(RC-FPI)

where \(\mathcal{I}^{0},\mathcal{I}^{1},\ldots\) are sampled IID and \(x^{0}\in\mathcal{H}\) is a starting point. (RC-FPI) is a randomized variant of (FPI). The uniform expected step-size condition (2) allows us to view one step of (RC-FPI) as corresponding, in expectation, to a step of (FPI) with \(\bar{\mathbb{T}}\colon\mathcal{H}\to\mathcal{H}\) defined as

\[\bar{\mathbb{T}}x=\mathbb{E}_{\mathcal{I}}\left[\mathbb{T}_{\mathcal{I}}x\right],\qquad\forall x\in\mathcal{H}.\]

Equivalently, \(\bar{\mathbb{T}}=\mathbb{I}-\alpha\theta\mathbf{S}\).

## 4 Convergence of normalized iterates

In this section, we show that (RC-FPI) exhibits behavior similar to Theorem 2.1. Let \(x^{0},x^{1},x^{2},\ldots\) be the iterates of (RC-FPI). We likewise call \(\frac{x^{k}}{k}\) the \(k\)th _normalized iterate_ of (RC-FPI) for \(k=1,2,\ldots\). Then, \(\frac{x^{k}}{k}\) converges to \(-\alpha\mathbf{v}\) both in \(L^{2}\) and almost surely:

\[\frac{x^{k}}{k}\xrightarrow{L^{2}}-\alpha\mathbf{v},\quad\frac{x^{k}}{k}\xrightarrow{\text{a.s.}}-\alpha\mathbf{v}.\]

To clarify, \(\xrightarrow{L^{2}}\) and \(\xrightarrow{\text{a.s.}}\) respectively denote \(L^{2}\) and almost sure convergence of random variables. Almost sure convergence means that \(\frac{x^{k}}{k}\) converges strongly to \(-\alpha\mathbf{v}\) with probability \(1\).

### Properties of expectation on RC-FPI

We first characterize certain aspects of (RC-FPI) before establishing our main results. For any \(u\in\mathcal{H}\) and selection vector \(\mathcal{I}\), define

\[u_{\mathcal{I}}=\sum_{i=1}^{m}\underbrace{\mathcal{I}_{i}}_{\in\mathbb{R}}\underbrace{(0,\ldots,0,u_{i},0,\ldots,0)}_{\in\mathcal{H}},\]

where \(u_{i}\in\mathcal{H}_{i}\) for \(i=1,\ldots,m\). If \(\mathcal{I}\) satisfies the uniform expected step-size condition (2) with \(\alpha\in(0,1]\), then clearly \(\mathbb{E}_{\mathcal{I}}\left[u_{\mathcal{I}}\right]=\alpha u\). Let \(\beta>0\) be a coefficient such that

\[\mathbb{E}_{\mathcal{I}}\left[\left\|u_{\mathcal{I}}\right\|_{M}^{2}\right]\leq\beta\left\|u\right\|_{M}^{2},\qquad\forall\,u\in\mathcal{H}. \tag{3}\]

**Lemma 4.1**.: _Equip \(\mathcal{H}\) with the \(\|\cdot\|\)-norm, i.e., \(M=\mathds{I}\). If \(\mathcal{I}\) satisfies the uniform expected step-size condition (2) with \(\alpha\in(0,1]\), then \(\beta=\alpha\) satisfies (3)._

Proof.: Since \(\mathcal{I}_{i}\in[0,1]\),

\[\mathbb{E}_{\mathcal{I}}\left[\left\|\sum_{i=1}^{m}\mathcal{I}_{i}(0,\ldots,0,u_{i},0,\ldots,0)\right\|^{2}\right]=\mathbb{E}_{\mathcal{I}}\left[\sum_{i=1}^{m}\left\|\mathcal{I}_{i}u_{i}\right\|_{i}^{2}\right]\leq\mathbb{E}_{\mathcal{I}}\left[\sum_{i=1}^{m}\mathcal{I}_{i}\left\|u_{i}\right\|_{i}^{2}\right]=\sum_{i=1}^{m}\alpha\left\|u_{i}\right\|^{2}=\alpha\left\|u\right\|^{2}.\]

Thus, \(\beta=\alpha\) satisfies the inequality.
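As a quick sanity check of Lemma 4.1 (ours, not an experiment from the paper), the sketch below, again assuming NumPy and scalar blocks, estimates \(\mathbb{E}_{\mathcal{I}}\left[\left\|u_{\mathcal{I}}\right\|^{2}\right]\) under single-block-uniform selection and compares it with \(\alpha\left\|u\right\|^{2}\); for \(\{0,1\}\)-valued selection vectors the bound (3) with \(\beta=\alpha\) in fact holds with equality.

```python
import numpy as np

rng = np.random.default_rng(1)
m, trials = 5, 200_000
u = rng.standard_normal(m)  # one scalar coordinate per block
alpha = 1.0 / m             # single-block-uniform selection: E[I] = alpha * 1

acc = 0.0
for _ in range(trials):
    I = np.zeros(m)
    I[rng.integers(m)] = 1.0
    acc += np.sum((I * u) ** 2)  # ||u_I||^2 for this draw of I

print(acc / trials)           # Monte Carlo estimate of E[||u_I||^2]
print(alpha * np.sum(u**2))   # Lemma 4.1 bound with beta = alpha
```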
Next, we present a lemma exhibiting non-expansiveness in expectation.

**Lemma 4.2**.: _Let \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with respect to \(\left\|\cdot\right\|_{M}\) with \(\theta\in(0,1]\). Let \(\mathcal{I}\) be a random selection vector with distribution satisfying the uniform expected step-size condition (2) with \(\alpha\in(0,1]\). Assume (3) holds with some \(\beta\) such that \(\beta\leq\alpha/\theta\). Let \(X\) and \(Y\) be random variables on \(\mathcal{H}\) that are independent of \(\mathcal{I}\). (However, \(X\) and \(Y\) need not be independent.) Then,_

\[\mathop{\mathbb{E}}_{\mathcal{I},X,Y}\left[\left\|\mathbf{T}_{\mathcal{I}}X-\mathbf{T}_{\mathcal{I}}Y\right\|_{M}^{2}\right]\leq\mathop{\mathbb{E}}_{X,Y}\left[\left\|X-Y\right\|_{M}^{2}\right].\]

Proof.: Substituting \(\mathbf{T}_{\mathcal{I}}=\mathds{I}-\theta\mathbf{S}_{\mathcal{I}}\) into \(\mathop{\mathbb{E}}_{\mathcal{I},X,Y}\left[\left\|\mathbf{T}_{\mathcal{I}}X-\mathbf{T}_{\mathcal{I}}Y\right\|_{M}^{2}\right]\) and applying (3) with \(u=\mathbf{S}X-\mathbf{S}Y\), we get

\[\mathop{\mathbb{E}}_{\mathcal{I},X,Y}\left[\left\|\mathbf{T}_{\mathcal{I}}X-\mathbf{T}_{\mathcal{I}}Y\right\|_{M}^{2}\right]=\mathop{\mathbb{E}}_{\mathcal{I},X,Y}\left[\left\|X-Y-\theta\left(\mathbf{S}_{\mathcal{I}}X-\mathbf{S}_{\mathcal{I}}Y\right)\right\|_{M}^{2}\right]\]
\[\quad=\mathop{\mathbb{E}}_{X,Y}\left[\left\|X-Y\right\|_{M}^{2}\right]+\theta^{2}\mathop{\mathbb{E}}_{\mathcal{I},X,Y}\left[\left\|\mathbf{S}_{\mathcal{I}}X-\mathbf{S}_{\mathcal{I}}Y\right\|_{M}^{2}\right]-2\theta\mathop{\mathbb{E}}_{\mathcal{I},X,Y}\left[\left\langle X-Y,\mathbf{S}_{\mathcal{I}}X-\mathbf{S}_{\mathcal{I}}Y\right\rangle_{M}\right]\]
\[\quad\leq\mathop{\mathbb{E}}_{X,Y}\left[\left\|X-Y\right\|_{M}^{2}\right]+\beta\theta^{2}\mathop{\mathbb{E}}_{X,Y}\left[\left\|\mathbf{S}X-\mathbf{S}Y\right\|_{M}^{2}\right]-2\alpha\theta\mathop{\mathbb{E}}_{X,Y}\left[\left\langle X-Y,\mathbf{S}X-\mathbf{S}Y\right\rangle_{M}\right].\]

Since \(\beta\theta\leq\alpha\) and \(\mathbf{S}\) is \(\frac{1}{2}\)-cocoercive,

\[\beta\theta^{2}\mathop{\mathbb{E}}_{X,Y}\left[\left\|\mathbf{S}X-\mathbf{S}Y\right\|_{M}^{2}\right]\leq\alpha\theta\mathop{\mathbb{E}}_{X,Y}\left[\left\|\mathbf{S}X-\mathbf{S}Y\right\|_{M}^{2}\right]\leq 2\alpha\theta\mathop{\mathbb{E}}_{X,Y}\left[\left\langle X-Y,\mathbf{S}X-\mathbf{S}Y\right\rangle_{M}\right].\]

Thus, we reach the conclusion

\[\mathop{\mathbb{E}}_{\mathcal{I},X,Y}\left[\left\|\mathbf{T}_{\mathcal{I}}X-\mathbf{T}_{\mathcal{I}}Y\right\|_{M}^{2}\right]\leq\mathop{\mathbb{E}}_{X,Y}\left[\left\|X-Y\right\|_{M}^{2}\right].\]

**Lemma 4.3**.: _Let \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with respect to \(\left\|\cdot\right\|_{M}\) with \(\theta\in(0,1]\). Let \(\mathcal{I}\) be a random selection vector with distribution satisfying the uniform expected step-size condition (2) with \(\alpha\in(0,1]\). Assume (3) holds with some \(\beta\). For any \(x,z\in\mathcal{H}\),_

\[\mathop{\mathbb{E}}_{\mathcal{I}}\left[\left\|\mathbf{T}_{\mathcal{I}}x-\bar{\mathbf{T}}z\right\|_{M}^{2}\right]\leq\left\|x-z\right\|_{M}^{2}+\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x\right\|_{M}^{2}-\alpha\theta\left(1-\alpha\theta\right)\left\|\mathbf{S}x-\mathbf{S}z\right\|_{M}^{2}.\]

Proof.: First, substitute \(\mathbf{T}_{\mathcal{I}}=\mathds{I}-\theta\mathbf{S}_{\mathcal{I}}\) and \(\bar{\mathbf{T}}=\mathds{I}-\alpha\theta\mathbf{S}\) into the expectation \(\mathbb{E}_{\mathcal{I}}\left[\left\|\mathbf{T}_{\mathcal{I}}x-
\bar{\mathbf{T}}z\right\|_{M}^{2}\right]\):

\[\mathbb{E}\left[\left\|\mathbf{T}_{\mathcal{I}}x-\bar{\mathbf{T}}z\right\|_{M}^{2}\right]=\mathbb{E}\left[\left\|x-z-\theta\left(\mathbf{S}_{\mathcal{I}}x-\alpha\mathbf{S}z\right)\right\|_{M}^{2}\right]\]
\[\quad=\left\|x-z\right\|_{M}^{2}+\theta^{2}\mathbb{E}\left[\left\|\mathbf{S}_{\mathcal{I}}x-\alpha\mathbf{S}z\right\|_{M}^{2}\right]-2\theta\mathbb{E}\left[\left\langle x-z,\mathbf{S}_{\mathcal{I}}x-\alpha\mathbf{S}z\right\rangle_{M}\right]\]
\[\quad=\left\|x-z\right\|_{M}^{2}+\theta^{2}\mathbb{E}\left[\left\|\mathbf{S}_{\mathcal{I}}x-\alpha\mathbf{S}z\right\|_{M}^{2}\right]-2\alpha\theta\left\langle x-z,\mathbf{S}x-\mathbf{S}z\right\rangle_{M}.\]

Then, use the \(\frac{1}{2}\)-cocoercivity of the operator \(\mathbf{S}\). Finally, applying the inequality

\[\mathbb{E}\left[\left\|\mathbf{S}_{\mathcal{I}}x-\alpha\mathbf{S}z\right\|_{M}^{2}\right]=\mathbb{E}\left[\left\|\left(\mathbf{S}_{\mathcal{I}}x-\alpha\mathbf{S}x\right)+\alpha\left(\mathbf{S}x-\mathbf{S}z\right)\right\|_{M}^{2}\right]\]
\[\quad=\mathbb{E}\left[\left\|\mathbf{S}_{\mathcal{I}}x-\alpha\mathbf{S}x\right\|_{M}^{2}\right]+2\alpha\left\langle\mathbb{E}\left[\mathbf{S}_{\mathcal{I}}x-\alpha\mathbf{S}x\right],\mathbf{S}x-\mathbf{S}z\right\rangle_{M}+\alpha^{2}\left\|\mathbf{S}x-\mathbf{S}z\right\|_{M}^{2}\]
\[\quad=\mathbb{E}\left[\left\|\mathbf{S}_{\mathcal{I}}x\right\|_{M}^{2}\right]-\left\|\alpha\mathbf{S}x\right\|_{M}^{2}+\alpha^{2}\left\|\mathbf{S}x-\mathbf{S}z\right\|_{M}^{2}\]
\[\quad\leq\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x\right\|_{M}^{2}+\alpha^{2}\left\|\mathbf{S}x-\mathbf{S}z\right\|_{M}^{2},\]

we get the desired inequality

\[\mathbb{E}_{\mathcal{I}}\left[\left\|\mathbf{T}_{\mathcal{I}}x-\bar{\mathbf{T}}z\right\|_{M}^{2}\right]\leq\left\|x-z\right\|_{M}^{2}-\alpha\theta\left(1-\alpha\theta\right)\left\|\mathbf{S}x-\mathbf{S}z\right\|_{M}^{2}+\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x\right\|_{M}^{2}.\]

### \(L^{2}\) convergence of normalized iterate

**Theorem 4.4**.: _Let \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with respect to the \(\|\cdot\|\)-norm with \(\theta\in(0,1]\). Assume \(\mathcal{I}^{0},\mathcal{I}^{1},\ldots\) are sampled IID from a distribution satisfying the uniform expected step-size condition (2) with \(\alpha\in(0,1]\). Let \(x^{0},x^{1},x^{2},\ldots\) be the iterates of (RC-FPI). Then_

\[\frac{x^{k}}{k}\xrightarrow{L^{2}}-\alpha\mathbf{v}\]

_as \(k\to\infty\), where \(\mathbf{v}\) is the infimal displacement vector of \(\mathbf{T}\)._

While the full proof is deferred to Section 4.3, here is the key outline. Define another sequence \(z^{0},z^{1},z^{2},\ldots\) with

\[z^{k+1}=\bar{\mathbf{T}}z^{k},\qquad k=0,1,2,\ldots.\]
(FPI with \(\bar{\mathbf{T}}\))

Apply Lemma 4.3 to the iterates of (RC-FPI) starting from \(x^{0}\) and the iterates of (FPI with \(\bar{\mathbf{T}}\)) starting from \(z^{0}=x^{0}\). In Section 4.3, we obtain a bound on the last two terms of Lemma 4.3 that is independent of \(k\). More specifically, for all \(k=1,2,\dots,\)

\[\mathbb{E}\left[\left\|x^{k}-z^{k}\right\|_{M}^{2}\right]\leq\mathbb{E}\left[\left\|x^{k-1}-z^{k-1}\right\|_{M}^{2}\right]+A,\]

where \(A=(1-\alpha\theta)\left[2\sqrt{\alpha\theta}\left\|\mathbf{S}x^{0}\right\|_{M}^{2}-\frac{\alpha}{\theta}\left\|\mathbf{v}\right\|_{M}^{2}\right]\).
Dividing by \(k^{2}\), we get

\[\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\frac{A}{k},\]

and, appealing to Theorem 2.1, we conclude the \(L^{2}\) convergence. We defer the detailed proof to Section 4.3.

### Proof of Theorem 4.4

In the proof, the assumption that the norm and inner product are \(\left\|\cdot\right\|\) and \(\left\langle\cdot,\cdot\right\rangle\) is not used until the final part. The lemmas preceding the main proof of Theorem 4.4 use the general \(M\)-norm and \(M\)-inner product. We start the proof of Theorem 4.4 by presenting two lemmas establishing upper bounds for the terms \(\left\|\mathbf{S}z^{k}\right\|_{M}\) and \(\left\|\mathbb{E}\left[\mathbf{S}x^{k}\right]\right\|_{M}\).

**Lemma 4.5**.: _Let \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with \(\theta\in(0,1]\) and choose any starting point \(z^{0}\in\mathcal{H}\) for (FPI with \(\bar{\mathbf{T}}\)). With \(\mathbf{S}=\frac{\mathds{I}-\mathbf{T}}{\theta}\),_

\[\left\|\mathbf{S}z^{k}\right\|_{M}\leq\left\|\mathbf{S}z^{k-1}\right\|_{M}\leq\dots\leq\left\|\mathbf{S}z^{0}\right\|_{M}.\]

Proof of Lemma 4.5.: All we need to prove is

\[\left\|\mathbf{S}\mathbf{T}z\right\|_{M}\leq\left\|\mathbf{S}z\right\|_{M},\qquad\forall z\in\mathcal{H};\]

the same computation with \(\theta\) replaced by \(\alpha\theta\) then applies to \(\bar{\mathbf{T}}=\mathds{I}-\alpha\theta\mathbf{S}\). Since \(\mathbf{S}\) is a \(\frac{1}{2}\)-cocoercive operator,

\[2\left\langle\mathbf{S}\mathbf{T}z-\mathbf{S}z,\mathbf{T}z-z\right\rangle_{M}\geq\left\|\mathbf{S}\mathbf{T}z-\mathbf{S}z\right\|_{M}^{2}.\]

With \(\mathbf{T}z-z=-\theta\mathbf{S}z\), we get

\[\theta\left\langle\mathbf{S}\mathbf{T}z-\mathbf{S}z,-\mathbf{S}z-\mathbf{S}\mathbf{T}z\right\rangle_{M}\geq\left(1-\theta\right)\left\|\mathbf{S}\mathbf{T}z-\mathbf{S}z\right\|_{M}^{2}\geq 0,\]

which is equivalent to

\[\left\|\mathbf{S}\mathbf{T}z\right\|_{M}^{2}\leq\left\|\mathbf{S}z\right\|_{M}^{2}.\]

**Lemma 4.6**.: _Let \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with \(\theta\in(0,1]\) and choose any starting point \(x^{0}\in\mathcal{H}\) for (RC-FPI). With \(\mathbf{S}=\frac{\mathds{I}-\mathbf{T}}{\theta}\),_

\[\left\|\mathbb{E}\left[\mathbf{S}\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{0}}x^{0}\right]\right\|_{M}\leq\beta^{1/2}\alpha^{-1}\left\|\mathbf{S}x^{0}\right\|_{M}\]

_holds if \(\mathcal{I}^{0},\mathcal{I}^{1},\dots,\mathcal{I}^{k}\) are IID with a distribution satisfying condition (2) with \(\alpha\in(0,1]\) and (3) holds with some \(\beta\) such that \(\beta\leq\alpha/\theta\)._

Proof of Lemma 4.6.: Applying Lemma 4.2 repeatedly, we get

\[\mathbb{E}\left[\left\|\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}X-\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}Y\right\|_{M}^{2}\right]\leq\mathbb{E}\left[\left\|X-Y\right\|_{M}^{2}\right]\]

for arbitrary random variables \(X,Y\) independent of \(\mathcal{I}^{1},\dots,\mathcal{I}^{k}\).
From Jensen's inequality,

\[\left\|\mathbb{E}\left[\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}X-\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}Y\right]\right\|_{M}^{2}\leq\mathbb{E}\left[\left\|X-Y\right\|_{M}^{2}\right].\]

Now set up \(X,Y\) as

\[X=\mathbf{T}_{\mathcal{I}^{0}}x^{0},\quad Y=x^{0}.\]

Then, as a result, we have the inequality

\[\left\|\mathbb{E}\left[\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}\mathbf{T}_{\mathcal{I}^{0}}x^{0}-\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}x^{0}\right]\right\|_{M}^{2}\leq\mathbb{E}\left[\left\|\theta\mathbf{S}_{\mathcal{I}^{0}}x^{0}\right\|_{M}^{2}\right]\leq\beta\left\|\theta\mathbf{S}x^{0}\right\|_{M}^{2}.\]

Since \(\mathcal{I}^{0},\mathcal{I}^{1},\dots,\mathcal{I}^{k}\) are independent and identically distributed, the following equality holds:

\[\mathbb{E}\left[\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}x^{0}\right]=\mathbb{E}\left[\mathbf{T}_{\mathcal{I}^{k-1}}\dots\mathbf{T}_{\mathcal{I}^{0}}x^{0}\right].\]

This equality gives the conclusion

\[\left\|\alpha\mathbb{E}\left[\theta\mathbf{S}\mathbf{T}_{\mathcal{I}^{k-1}}\dots\mathbf{T}_{\mathcal{I}^{1}}\mathbf{T}_{\mathcal{I}^{0}}x^{0}\right]\right\|_{M}\]
\[\quad=\left\|\mathbb{E}\left[\left(\mathbf{I}-\bar{\mathbf{T}}\right)\mathbf{T}_{\mathcal{I}^{k-1}}\dots\mathbf{T}_{\mathcal{I}^{1}}\mathbf{T}_{\mathcal{I}^{0}}x^{0}\right]\right\|_{M}\]
\[\quad=\left\|\mathbb{E}\left[\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}\mathbf{T}_{\mathcal{I}^{0}}x^{0}-\mathbf{T}_{\mathcal{I}^{k-1}}\dots\mathbf{T}_{\mathcal{I}^{1}}x^{0}\right]\right\|_{M}\]
\[\quad=\left\|\mathbb{E}\left[\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}\mathbf{T}_{\mathcal{I}^{0}}x^{0}-\mathbf{T}_{\mathcal{I}^{k}}\dots\mathbf{T}_{\mathcal{I}^{1}}x^{0}\right]\right\|_{M}\leq\beta^{1/2}\left\|\theta\mathbf{S}x^{0}\right\|_{M}.\]

**Lemma 4.7**.: _Let \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with respect to \(\left\|\cdot\right\|_{M}\) with \(\theta\in(0,1]\), let \(x^{0},x^{1},x^{2},\dots\) be the iterates of (RC-FPI), and let \(z^{0},z^{1},z^{2},\dots\) be the iterates of (FPI with \(\bar{\mathbf{T}}\)). Assume that the distribution of \(\mathcal{I}\) satisfies the uniform expected step-size condition (2) with \(\alpha\in(0,1]\) and (3) holds with some \(\beta\) such that \(\beta\leq\alpha/\theta\). Then_

\[\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\frac{1}{k^{2}}\left\|x^{0}-z^{0}\right\|_{M}^{2}+\frac{1}{k}\left(1-\alpha\theta\right)\left[2\sqrt{\alpha\theta}\left\|\mathbf{S}x^{0}\right\|_{M}\left\|\mathbf{S}z^{0}\right\|_{M}-\frac{\alpha}{\theta}\left\|\mathbf{v}\right\|_{M}^{2}\right],\]

_where \(\mathbf{v}\) is the infimal displacement vector of \(\mathbf{T}\)._

Proof.: The key step in proving Lemma 4.7 is to bound the last two terms in Lemma 4.3.
To achieve this, rewrite the terms as

\[-\alpha\theta\left(1-\alpha\theta\right)\left\|\mathbf{S}x-\mathbf{S}z\right\|_{M}^{2}+\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x\right\|_{M}^{2}\]
\[\qquad=-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x\right\|_{M}^{2}+2\alpha\theta\left(1-\alpha\theta\right)\left\langle\mathbf{S}x,\mathbf{S}z\right\rangle_{M}-\alpha\theta\left(1-\alpha\theta\right)\left\|\mathbf{S}z\right\|_{M}^{2}\]
\[\qquad\leq-\theta^{-1}\alpha\left(1-\alpha\theta\right)\left\|\mathbf{v}\right\|_{M}^{2}+2\alpha\theta\left(1-\alpha\theta\right)\left\langle\mathbf{S}x,\mathbf{S}z\right\rangle_{M},\]

where the last inequality follows from \(\mathbf{v}\) being the infimal displacement vector, i.e., \(\left\|\mathbf{v}\right\|_{M}\leq\left\|\theta\mathbf{S}x\right\|_{M},\left\|\theta\mathbf{S}z\right\|_{M}\). Now use Lemma 4.3 with \(x\) as \(x^{k-1}\) and \(z\) as \(z^{k-1}\). Taking the full expectation over \(\mathcal{I}^{0},\mathcal{I}^{1},\ldots,\mathcal{I}^{k-1}\), we get

\[\mathbb{E}\left[\left\|x^{k}-z^{k}\right\|_{M}^{2}\right]\]
\[\quad\leq\mathbb{E}\left[\left\|x^{k-1}-z^{k-1}\right\|_{M}^{2}\right]-\theta^{-1}\alpha\left(1-\alpha\theta\right)\left\|\mathbf{v}\right\|_{M}^{2}+2\alpha\theta\left(1-\alpha\theta\right)\left\|\mathbf{S}x^{0}\right\|_{M}\left\|\mathbf{S}z^{0}\right\|_{M}\]
\[\quad\leq\mathbb{E}\left[\left\|x^{k-1}-z^{k-1}\right\|_{M}^{2}\right]-\theta^{-1}\alpha\left(1-\alpha\theta\right)\left\|\mathbf{v}\right\|_{M}^{2}+2\sqrt{\alpha\theta}\left(1-\alpha\theta\right)\left\|\mathbf{S}x^{0}\right\|_{M}\left\|\mathbf{S}z^{0}\right\|_{M},\]

where the last inequalities use Lemma 4.5 and Lemma 4.6. Note that the term

\[-\theta^{-1}\alpha\left(1-\alpha\theta\right)\left\|\mathbf{v}\right\|_{M}^{2}+2\sqrt{\alpha\theta}\left(1-\alpha\theta\right)\left\|\mathbf{S}x^{0}\right\|_{M}\left\|\mathbf{S}z^{0}\right\|_{M}\]

is independent of the random process and of the iteration count. Thus, the above inequality can be applied recursively through the iterations; dividing both sides by \(k^{2}\) then yields the desired result

\[\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\frac{1}{k}\left(2\sqrt{\alpha\theta}\left(1-\alpha\theta\right)\left\|\mathbf{S}x^{0}\right\|_{M}\left\|\mathbf{S}z^{0}\right\|_{M}-\frac{\alpha}{\theta}\left(1-\alpha\theta\right)\left\|\mathbf{v}\right\|_{M}^{2}\right)+\frac{1}{k^{2}}\left\|x^{0}-z^{0}\right\|_{M}^{2}.\]

Proof of Theorem 4.4.: Since \(M=\mathds{I}\), we have \(\beta=\alpha\leq\alpha/\theta\) by Lemma 4.1. Thus, we may apply Lemma 4.7 with \(z^{0}=x^{0}\):

\[\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|^{2}\right]\leq\frac{1}{k}\left(2\sqrt{\alpha\theta}\left(1-\alpha\theta\right)\left\|\mathbf{S}x^{0}\right\|^{2}-\frac{\alpha}{\theta}\left(1-\alpha\theta\right)\left\|\mathbf{v}\right\|^{2}\right).\]

When the limit \(k\to\infty\) is taken,

\[\lim_{k\to\infty}\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|^{2}\right]=0,\quad\lim_{k\to\infty}\left\|\frac{z^{k}}{k}+\alpha\mathbf{v}\right\|=0,\]

where the second equation is from Theorem 2.1. These two limits provide the \(L^{2}\) convergence of the normalized iterate, namely

\[\frac{x^{k}}{k}\overset{L^{2}}{\to}-\alpha\mathbf{v},\]

as \(k\to\infty\).
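To see the convergence of the normalized iterate on a toy instance (an illustration of ours, not an experiment from the paper), consider the inconsistent translation operator \(\mathbf{T}x=x-\theta c\) on \(\mathbb{R}^{m}\) with scalar blocks, for which \(\mathbf{S}x\equiv c\) and \(\mathbf{v}=\theta c\). The sketch below, assuming NumPy, runs (RC-FPI) with single-block-uniform selection (\(\alpha=1/m\)) and checks that \(x^{k}/k\approx-\alpha\mathbf{v}\).

```python
import numpy as np

rng = np.random.default_rng(2)
m, theta, K = 4, 0.5, 100_000
c = rng.standard_normal(m)
v = theta * c        # infimal displacement vector of T x = x - theta * c
alpha = 1.0 / m      # single-block-uniform selection

x = np.zeros(m)
for _ in range(K):
    i = rng.integers(m)      # selection vector with a single active block
    x[i] -= theta * c[i]     # x <- T_I x = x - theta * S_I x, where S x = c

print(x / K)         # normalized iterate x^K / K
print(-alpha * v)    # its limit, per Theorem 4.4 and Theorem 4.8 below
```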
### Almost sure convergence of normalized iterate

**Theorem 4.8**.: _Under the conditions of Theorem 4.4 with \(\theta\in(0,1)\), \(\frac{x^{k}}{k}\) converges strongly to \(-\alpha\mathbf{v}\) with probability \(1\). In other words,_

\[\frac{x^{k}}{k}\overset{\text{\rm a.s.}}{\to}-\alpha\mathbf{v}\]

_as \(k\to\infty\)._

While the full proof is presented in the next subsection, here is the outline of the proof of the theorem. Let \(z^{0},z^{1},z^{2},\dots\) be the iterates of (FPI with \(\bar{\mathbb{T}}\)). Assume \(\beta<\alpha/\theta\). From Lemma 4.3 and further analysis in Section 4.5, we obtain

\[\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\middle|\mathcal{F}_{k-1}\right]\leq\left\|\frac{x^{k-1}}{k-1}-\frac{z^{k-1}}{k-1}\right\|_{M}^{2}+\frac{B}{k^{2}}\left\|\mathbf{S}z^{0}\right\|_{M}^{2}\]

for \(k=2,3,\dots\), where \(B=\frac{\alpha\theta^{2}(1-\alpha\theta)(\beta-\alpha^{2})}{\alpha-\beta\theta}\geq 0\) and \(\mathcal{F}_{k}\) is a filtration consisting of information up to the \(k\)th iterate. We then apply the Robbins-Siegmund quasi-martingale theorem [105], restated as Lemma 4.9, to conclude that the random sequence \(\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\) converges almost surely to some random variable. Then, by Fatou's lemma and the \(L^{2}\) convergence of Theorem 4.4, we have

\[\mathbb{E}\left[\lim_{k\to\infty}\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\lim_{k\to\infty}\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]=0.\]

Thus, as \(k\to\infty\),

\[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\overset{\text{\rm a.s.}}{\to}0,\]

and appealing to Theorem 2.1, we conclude the almost sure convergence. Finally, in the case of the \(\|\cdot\|\)-norm, the assumption \(\beta<\alpha/\theta\) is satisfied by Lemma 4.1, since \(\theta<1\). We defer the detailed proof to Section 4.5.

### Proof of Theorem 4.8

First, let us recall the Robbins-Siegmund quasi-martingale theorem [105].

**Lemma 4.9** (Robbins and Siegmund [105]).: _Let \(\mathcal{F}_{1}\subset\mathcal{F}_{2}\subset\dots\) be a sequence of sub-\(\sigma\)-algebras of \(\mathcal{F}\), where \(\left(\Omega,\mathcal{F},P\right)\) is a probability space, and let \(X_{k},b_{k},\tau_{k},\zeta_{k}\) be non-negative \(\mathcal{F}_{k}\)-measurable random variables such that_

\[\mathbb{E}\left[X_{k+1}\mid\mathcal{F}_{k}\right]\leq\left(1+b_{k}\right)X_{k}+\tau_{k}-\zeta_{k}.\]

_If \(\sum_{k=1}^{\infty}b_{k}<\infty\) and \(\sum_{k=1}^{\infty}\tau_{k}<\infty\) almost surely, then \(\lim_{k\to\infty}X_{k}\) exists and is finite and \(\sum_{k=1}^{\infty}\zeta_{k}<\infty\) almost surely._

Now, we present a proof of Theorem 4.8 in the general \(\left\|\cdot\right\|_{M}\)-norm.

**Lemma 4.10**.: _Let \(\mathbb{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with respect to \(\left\|\cdot\right\|_{M}\) with \(\theta\in(0,1]\), let \(x^{0},x^{1},x^{2},\dots\) be the iterates of (RC-FPI), and let \(z^{0},z^{1},z^{2},\dots\) be the iterates of (FPI with \(\bar{\mathbb{T}}\)). Assume that the distribution of \(\mathcal{I}\) satisfies the uniform expected step-size condition (2) with \(\alpha\in(0,1]\) and (3) holds with some \(\beta\) such that \(\beta<\alpha/\theta\). Then \(\frac{x^{k}}{k}\) converges strongly to \(-\alpha\mathbf{v}\) with probability \(1\), i.e.,_

\[\frac{x^{k}}{k}\overset{a.s.}{\to}-\alpha\mathbf{v}\]

_as \(k\to\infty\), where \(\mathbf{v}\) is the infimal displacement vector of \(\mathbb{T}\)._

Proof.: To use the quasi-martingale result of Lemma 4.9, we cannot take the full expectation to bound the extra terms in Lemma 4.3. In this proof, we provide another way to bound the last two terms of Lemma 4.3.
\[-\alpha\theta\left(1-\alpha\theta\right)\left\|\mathbf{S}x-\mathbf{S}z\right\|_{M}^{2}+\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x\right\|_{M}^{2}\]
\[\quad=-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x\right\|_{M}^{2}+2\alpha\theta\left(1-\alpha\theta\right)\left\langle\mathbf{S}x,\mathbf{S}z\right\rangle_{M}-\alpha\theta\left(1-\alpha\theta\right)\left\|\mathbf{S}z\right\|_{M}^{2}\]
\[\quad=-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x-\frac{\alpha-\alpha^{2}\theta}{\alpha-\beta\theta}\mathbf{S}z\right\|_{M}^{2}+\underbrace{\frac{\alpha\theta^{2}(1-\alpha\theta)(\beta-\alpha^{2})}{\alpha-\beta\theta}}_{=:B\geq 0}\left\|\mathbf{S}z\right\|_{M}^{2}\]
\[\quad\leq B\left\|\mathbf{S}z\right\|_{M}^{2}.\]

Note that this inequality only holds when \(\beta<\alpha/\theta\). Also, \(\beta-\alpha^{2}\geq 0\) comes from \(\left\|\mathbb{E}\left[u_{\mathcal{I}}\right]\right\|_{M}^{2}\leq\mathbb{E}\left[\left\|u_{\mathcal{I}}\right\|_{M}^{2}\right]\). From Lemma 4.3,

\[\mathbb{E}_{\mathcal{I}^{k}}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\mid\mathcal{F}_{k-1}\right]\leq\left\|\frac{x^{k-1}}{k-1}-\frac{z^{k-1}}{k-1}\right\|_{M}^{2}+\frac{B}{k^{2}}\left\|\mathbf{S}z^{k-1}\right\|_{M}^{2},\]

where \(x^{0},x^{1},x^{2},\dots\) is a random sequence generated by (RC-FPI) with \(\mathbf{T}\), \(z^{0},z^{1},z^{2},\dots\) is the sequence generated by (FPI with \(\bar{\mathbf{T}}\)) and starting point \(z^{0}=x^{0}\), and \(\mathcal{F}_{k}\) is a filtration consisting of information up to the \(k\)th iteration. With \(\left\|\mathbf{S}z^{k-1}\right\|_{M}\leq\left\|\mathbf{S}z^{0}\right\|_{M}\) from Lemma 4.5,

\[\mathbb{E}_{\mathcal{I}^{k}}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\mid\mathcal{F}_{k-1}\right]\leq\left\|\frac{x^{k-1}}{k-1}-\frac{z^{k-1}}{k-1}\right\|_{M}^{2}+\frac{B}{k^{2}}\left\|\mathbf{S}z^{0}\right\|_{M}^{2}.\]

We may apply the Robbins-Siegmund quasi-martingale theorem, Lemma 4.9, and conclude that the random sequence \(\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\) converges almost surely to some random variable, since \(\sum_{k=1}^{\infty}\frac{1}{k^{2}}B\left\|\mathbf{S}z^{0}\right\|_{M}^{2}<\infty\). Then,

\[\mathbb{E}\left[\lim_{k\to\infty}\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\lim_{k\to\infty}\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]=0\]

holds, where the inequality comes from Fatou's lemma and the equality comes from the \(L^{2}\) convergence obtained by taking \(k\to\infty\) in Lemma 4.7. Thus,

\[\lim_{k\to\infty}\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}=0,\qquad a.s.\]

holds, which, combined with the strong convergence of \(\frac{z^{k}}{k}\) to \(-\alpha\mathbf{v}\) by Theorem 2.1, gives the almost sure convergence of Lemma 4.10.

Proof of Theorem 4.8.: In the case of \(M=\mathds{I}\), with \(\theta\in(0,1)\), we have \(\beta=\alpha<\alpha/\theta\) from Lemma 4.1. Thus, from Lemma 4.10, we can conclude

\[\frac{x^{k}}{k}\overset{a.s.}{\to}-\alpha\mathbf{v}\]

as \(k\to\infty\).

### Infeasibility detection.

Since \(\frac{x^{k}}{k}\to-\alpha\mathbf{v}\) and since \(\mathbf{v}\neq 0\) implies the problem is inconsistent, we propose

\[\frac{1}{k}\|x^{k}\|>\varepsilon \tag{4}\]

as a test of inconsistency with sufficiently large \(k\in\mathbb{N}\) and sufficiently small \(\varepsilon>0\). (This test is not able to detect inconsistency in the pathological case where the problem is inconsistent despite \(\mathbf{v}=0\).) The remaining question of how to choose the iteration count \(k\) and threshold \(\varepsilon\) will be considered in Section 6.
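As an illustrative sketch of test (4) (ours, with placeholder constants; the principled choice of \(k\) and \(\varepsilon\) is exactly the subject of Section 6), the following function, assuming NumPy, runs (RC-FPI) and flags inconsistency when the norm of the normalized iterate exceeds the threshold. It reuses the translation-operator toy example from above.

```python
import numpy as np

def rcfpi_normalized_iterate(S, x0, theta, K, m, rng):
    # Run (RC-FPI) with single-block-uniform selection for K steps and
    # return x^K / K, an estimator of -alpha * v with alpha = 1/m.
    x = x0.copy()
    for _ in range(K):
        i = rng.integers(m)
        x[i] -= theta * S(x)[i]   # x <- x - theta * S_I x
    return x / K

rng = np.random.default_rng(3)
m, theta = 4, 0.5
c = np.array([1.0, -2.0, 0.5, 3.0])

# S x = c corresponds to the inconsistent operator T x = x - theta * c, with v = theta * c.
est = rcfpi_normalized_iterate(lambda x: c, np.zeros(m), theta, 50_000, m, rng)

eps = 0.05                        # placeholder threshold
print(np.linalg.norm(est))        # approx (1/m) * ||v||, which is nonzero here
print(np.linalg.norm(est) > eps)  # test (4): True flags the problem as inconsistent
```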
## 5 Bias and variance of normalized iterates

Previously in Section 4, we showed that the normalized iterate \(x^{k}/k\) of (RC-FPI) converges to the scaled infimal displacement vector \(-\alpha\mathbf{v}\). However, to use \(x^{k}/k\) as an estimator of \(-\alpha\mathbf{v}\) and to use \(x^{k}/k\neq 0\) as a test for inconsistency, we need to characterize the error \(\|x^{k}/k+\alpha\mathbf{v}\|^{2}\). In this section, we provide an asymptotic upper bound on the bias and variance of \(x^{k}/k\) as an estimator of \(-\alpha\mathbf{v}\).

**Theorem 5.1**.: _Let \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with respect to \(\left\|\cdot\right\|_{M}\) with \(\theta\in(0,1]\). Let \(\mathbf{v}\) be the infimal displacement vector of \(\mathbf{T}\). Assume \(\mathcal{I}^{0},\mathcal{I}^{1},\ldots\) are sampled IID from a distribution satisfying the uniform expected step-size condition (2) with \(\alpha\in(0,1]\), and assume (3) holds with some \(\beta>0\) such that \(\beta<\alpha/\theta\). Let \(x^{0},x^{1},x^{2},\ldots\) be the iterates of (RC-FPI)._

1. _If_ \(\mathbf{v}\in\operatorname{range}\left(\mathbf{I}-\mathbf{T}\right)\)_, then as_ \(k\to\infty\)_,_ \[\mathbb{E}\left[\left\|\frac{x^{k}}{k}+\alpha\mathbf{v}\right\|_{M}^{2}\right]\lesssim\frac{(\beta-\alpha^{2})\left\|\mathbf{v}\right\|_{M}^{2}}{k}.\]
2. _In general, regardless of whether_ \(\mathbf{v}\) _is in_ \(\operatorname{range}\left(\mathbf{I}-\mathbf{T}\right)\)_,_ \[\operatorname{Var}_{M}\left(\frac{x^{k}}{k}\right)\lesssim\frac{(\beta-\alpha^{2})\left\|\mathbf{v}\right\|_{M}^{2}}{k}\] _as_ \(k\to\infty\)_._

To clarify, the precise meaning of the asymptotic statement of (a) is

\[\limsup_{k\to\infty}k\,\mathbb{E}\left[\left\|\frac{x^{k}}{k}+\alpha\mathbf{v}\right\|_{M}^{2}\right]\leq(\beta-\alpha^{2})\left\|\mathbf{v}\right\|_{M}^{2}.\]

The precise meaning of the asymptotic statement of (b) is defined similarly. Here, we outline the proof for the special case \(\mathbf{v}\in\operatorname{range}\left(\mathbf{I}-\mathbf{T}\right)\), deferring the full proof, without this restriction, to Section 5.1. Let \(z^{0},z^{1},z^{2},\ldots\) be the iterates of (FPI with \(\bar{\mathbf{T}}\)) with \(z^{0}\) satisfying \(\theta\mathbf{S}z^{0}=\mathbf{v}\). Then, \(\theta\mathbf{S}z^{k}=\mathbf{v}\) for all \(k\in\mathbb{N}\). Apply Lemma 4.3 to \(x^{k}\) and \(z^{k}\) and take the full expectation to get

\[\mathbb{E}\left[k\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\frac{1}{k}\left\|x^{0}-z^{0}\right\|_{M}^{2}+\mathbb{E}\left[\frac{1}{k}\sum_{j=0}^{k-1}U^{j}\right],\]

where \(U^{0},U^{1},U^{2},\ldots\) is the sequence of random variables

\[U^{k}=-\alpha\left(\theta^{-1}-\alpha\right)\left\|\theta\mathbf{S}x^{k}-\mathbf{v}\right\|_{M}^{2}+\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x^{k}\right\|_{M}^{2}.\]

The key idea is to bound the \(U^{k}\) terms. This can be done by showing that \(\mathbf{v}\) and \(\theta\mathbf{S}x^{k}-\mathbf{v}\) are almost orthogonal. This property is exhibited in the next two lemmas.

**Lemma 5.2**.: _Suppose the \(\theta\)-averaged operator \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) has infimal displacement vector \(\mathbf{v}\)._
_Consider the closed cone \(C_{\delta}\) in \(\mathcal{H}\), with \(\delta\in\left(0,\frac{\pi}{2}\right)\), consisting of the vectors whose angle with \(\mathbf{v}\) is at most \(\frac{\pi}{2}-\delta\),_

\[C_{\delta}=\left\{x:\left\langle\mathbf{v},x\right\rangle_{M}\geq\sin\delta\left\|\mathbf{v}\right\|_{M}\left\|x\right\|_{M}\right\}.\]

_If the points \(y,z\in\mathcal{H}\) satisfy \(\mathbf{S}y\in\mathbf{S}z+C_{\delta}\) and \(\mathbf{S}y\neq\mathbf{S}z\), then the following inequality holds:_

\[\left\langle-\mathbf{v},y-z\right\rangle_{M}\leq\cos\delta\left\|\mathbf{v}\right\|_{M}\left\|y-z\right\|_{M}.\]

Proof.: Since the inequality holds if \(\mathbf{v}=0\), let us assume that \(\mathbf{v}\neq 0\). Define \(x\) as \(y-z\) and \(u\) as \(\mathbf{S}y-\mathbf{S}z\). Since \(u\in C_{\delta}\), there exists \(\phi\in\left[\delta,\frac{\pi}{2}\right]\) such that

\[\left\langle\mathbf{v},u\right\rangle_{M}=\sin\phi\left\|\mathbf{v}\right\|_{M}\left\|u\right\|_{M}.\]

Due to the cocoercivity of \(\mathbf{S}\), \(x\) and \(u\) must satisfy

\[\left\langle x,u\right\rangle_{M}=\left\langle y-z,\mathbf{S}y-\mathbf{S}z\right\rangle_{M}\geq 0.\]

Since \(\mathbf{v}\) is nonzero, decompose \(x\) and \(u\) as

\[x=\left\langle x,\mathbf{v}\right\rangle_{M}\frac{\mathbf{v}}{\left\|\mathbf{v}\right\|_{M}^{2}}+\mathbf{v}_{x}^{\perp},\quad u=\left\langle u,\mathbf{v}\right\rangle_{M}\frac{\mathbf{v}}{\left\|\mathbf{v}\right\|_{M}^{2}}+\mathbf{v}_{u}^{\perp}.\]

Both \(\mathbf{v}_{x}^{\perp}\) and \(\mathbf{v}_{u}^{\perp}\) are orthogonal to \(\mathbf{v}\) and

\[\left\|\mathbf{v}_{u}^{\perp}\right\|_{M}=\cos\phi\left\|u\right\|_{M},\quad\left\|\mathbf{v}_{x}^{\perp}\right\|_{M}^{2}=\left\|x\right\|_{M}^{2}-\left(\frac{\left\langle x,\mathbf{v}\right\rangle_{M}}{\left\|\mathbf{v}\right\|_{M}}\right)^{2}.\]

Compute \(\left\langle x,u\right\rangle_{M}\) using the decomposition above:

\[\left\langle x,u\right\rangle_{M}=\frac{1}{\left\|\mathbf{v}\right\|_{M}^{2}}\left\langle x,\mathbf{v}\right\rangle_{M}\left\langle u,\mathbf{v}\right\rangle_{M}+\left\langle\mathbf{v}_{x}^{\perp},u\right\rangle_{M}=\frac{1}{\left\|\mathbf{v}\right\|_{M}}\left\langle x,\mathbf{v}\right\rangle_{M}\sin\phi\left\|u\right\|_{M}+\left\langle\mathbf{v}_{x}^{\perp},\mathbf{v}_{u}^{\perp}\right\rangle_{M}\leq\frac{1}{\left\|\mathbf{v}\right\|_{M}}\left\langle x,\mathbf{v}\right\rangle_{M}\sin\phi\left\|u\right\|_{M}+\cos\phi\left\|u\right\|_{M}\sqrt{\left\|x\right\|_{M}^{2}-\left(\frac{\left\langle x,\mathbf{v}\right\rangle_{M}}{\left\|\mathbf{v}\right\|_{M}}\right)^{2}}.\]

If \(\left\langle x,-\mathbf{v}\right\rangle_{M}\leq 0\), then \(\left\langle-\mathbf{v},y-z\right\rangle_{M}\leq 0\leq\cos\delta\left\|\mathbf{v}\right\|_{M}\left\|y-z\right\|_{M}\) and the conclusion holds. Thus, consider only the case where \(\left\langle x,-\mathbf{v}\right\rangle_{M}>0\).
In this case, \(\left\langle x,u\right\rangle_{M}\geq 0\) together with \(\left\|u\right\|_{M}>0\) gives

\[0<\frac{1}{\left\|\mathbf{v}\right\|_{M}}\left\langle x,-\mathbf{v}\right\rangle_{M}\sin\phi\leq\cos\phi\sqrt{\left\|x\right\|_{M}^{2}-\left(\frac{\left\langle x,-\mathbf{v}\right\rangle_{M}}{\left\|\mathbf{v}\right\|_{M}}\right)^{2}},\]

which leads to the conclusion by squaring both sides:

\[\left\langle x,-\mathbf{v}\right\rangle_{M}\leq\cos\phi\left\|\mathbf{v}\right\|_{M}\left\|x\right\|_{M}\leq\cos\delta\left\|\mathbf{v}\right\|_{M}\left\|x\right\|_{M}.\]

**Lemma 5.3**.: _Let \(\mathbf{T}\colon\mathcal{H}\rightarrow\mathcal{H}\) be a \(\theta\)-averaged operator with respect to \(\left\|\cdot\right\|_{M}\). Let \(\mathbf{v}\) be the infimal displacement vector of \(\mathbf{T}\). Let \(\mathbf{S}=\frac{1}{\theta}\left(\mathbf{I}-\mathbf{T}\right)\). Suppose a sequence \(y^{0},y^{1},y^{2},\dots\) in \(\mathcal{H}\) converges strongly,_

\[\lim_{k\rightarrow\infty}\frac{y^{k}}{k}=-\gamma\mathbf{v}\]

_for some \(\gamma>0\). Then, for any \(\delta\in(0,\pi/2)\) and \(z^{0}\in\mathcal{H}\), there exists \(N_{\delta,z^{0}}\in\mathbb{N}\) such that, for all \(k>N_{\delta,z^{0}}\),_

\[\left\langle\mathbf{v},\mathbf{S}y^{k}-\mathbf{S}z^{0}\right\rangle_{M}\leq\left\|\mathbf{v}\right\|_{M}\left\|\mathbf{S}y^{k}-\mathbf{S}z^{0}\right\|_{M}\sin\delta.\]

Proof.: Choose a point \(z\) in \(\mathcal{H}\). To argue by contradiction, suppose that for any \(l\), there exists \(k_{l}>l\) such that

\[\mathbf{S}y^{k_{l}}\in\mathbf{S}z+C_{\delta},\quad\mathbf{S}y^{k_{l}}\neq\mathbf{S}z.\]

The subsequence \(y^{k_{1}},y^{k_{2}},y^{k_{3}},\dots\) satisfies the inequality below for all \(l\), due to Lemma 5.2:

\[\left\langle-\mathbf{v},y^{k_{l}}-z\right\rangle_{M}\leq\cos\delta\left\|\mathbf{v}\right\|_{M}\left\|y^{k_{l}}-z\right\|_{M}.\]

Divide each side by \(k_{l}\) and take the limit as \(l\rightarrow\infty\). Since \(\lim_{l\rightarrow\infty}\frac{y^{k_{l}}}{k_{l}}=-\gamma\mathbf{v}\) strongly,

\[\gamma\left\|\mathbf{v}\right\|_{M}^{2}=\left\langle-\mathbf{v},-\gamma\mathbf{v}\right\rangle_{M}\leq\cos\delta\left\|\mathbf{v}\right\|_{M}\left\|-\gamma\mathbf{v}\right\|_{M}<\gamma\left\|\mathbf{v}\right\|_{M}^{2},\]

which yields a contradiction. Thus, when \(z\) is given, for any \(\delta\in\left(0,\frac{\pi}{2}\right)\), there exists an \(N_{\delta}\) such that for all \(k>N_{\delta}\), either \(\mathbf{S}y^{k}=\mathbf{S}z\) or \(\mathbf{S}y^{k}\notin\mathbf{S}z+C_{\delta}\). In conclusion, for all \(k>N_{\delta}\),

\[\left\langle\mathbf{v},\mathbf{S}y^{k}-\mathbf{S}z\right\rangle_{M}\leq\sin\delta\left\|\mathbf{v}\right\|_{M}\left\|\mathbf{S}y^{k}-\mathbf{S}z\right\|_{M}.\]

Returning to the proof outline of Theorem 5.1, by Equation (1) and Lemma 5.3,

\[0\leq\left\langle\mathbf{v},\theta\mathbf{S}x^{k}-\mathbf{v}\right\rangle_{M}\leq\left\|\mathbf{v}\right\|_{M}\left\|\theta\mathbf{S}x^{k}-\mathbf{v}\right\|_{M}\sin\delta\approx 0\]

for small \(\delta\).
Therefore, for \(k\) large enough,

\[\left\|\theta\mathbf{S}x^{k}\right\|_{M}^{2}\lesssim\left\|\mathbf{v}\right\|_{M}^{2}+\left\|\theta\mathbf{S}x^{k}-\mathbf{v}\right\|_{M}^{2}.\]

Since \(\theta\mathbf{S}x^{k}\to\mathbf{v}\) as \(k\to\infty\), we have

\[U^{k}\lesssim\left(\beta-\alpha^{2}\right)\left\|\mathbf{v}\right\|_{M}^{2}.\]

Finally, we conclude

\[\limsup_{k\to\infty}\mathbb{E}\left[k\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\lesssim\left(\beta-\alpha^{2}\right)\left\|\mathbf{v}\right\|_{M}^{2}.\]

Note that for (b), \(k\mathrm{Var}_{M}\left(x^{k}/k\right)\) can be bounded by

\[k\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right)\leq\mathbb{E}\left[k\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right],\]

and statement (a) follows similarly, with an extra \(\mathcal{O}\left(1/\sqrt{k}\right)\) term.

### Proof of Theorem 5.1

Proof.: When \(x^{0},x^{1},x^{2},\dots\) is a random sequence generated by (RC-FPI) with \(\mathbb{T}\) and \(z^{0},z^{1},z^{2},\dots\) is the sequence generated by (FPI with \(\bar{\mathbf{T}}\)) and \(z^{0}=z\), Lemma 4.3 gives, for all \(k\),

\[\mathbb{E}_{\mathcal{I}^{k}}\left[\left\|\mathbf{T}_{\mathcal{I}^{k}}x^{k}-\bar{\mathbf{T}}z^{k}\right\|_{M}^{2}\right]\leq\left\|x^{k}-z^{k}\right\|_{M}^{2}-\alpha\theta\left(1-\alpha\theta\right)\left\|\mathbf{S}x^{k}-\mathbf{S}z^{k}\right\|_{M}^{2}+\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x^{k}\right\|_{M}^{2}.\]

For convenience, define

\[U^{k}=-\alpha\theta\left(1-\alpha\theta\right)\left\|\mathbf{S}x^{k}-\mathbf{S}z^{k}\right\|_{M}^{2}+\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x^{k}\right\|_{M}^{2}.\]

Consequently, by taking the full expectation,

\[\mathbb{E}\left[k\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\frac{1}{k}\left\|x^{0}-z\right\|_{M}^{2}+\mathbb{E}\left[\frac{1}{k}\sum_{j=0}^{k-1}U^{j}\right].\]

The key of this proof is to bound the term \(\left\|\mathbf{S}x^{k}\right\|_{M}^{2}\) in \(U^{k}\) using Lemma 5.3, since \(\frac{x^{k}}{k}\) converges strongly with \(\lim_{k\to\infty}\frac{x^{k}}{k}=-\alpha\mathbf{v}\) almost surely. Suppose \(\lim_{k\to\infty}\frac{x^{k}}{k}=-\alpha\mathbf{v}\) holds, which is indeed the case almost surely.
For such \(x^{k}\) and an arbitrary \(\delta\in\left(0,\frac{\pi}{2}\right)\), there exists an \(N_{\delta}\) such that for all \(k>N_{\delta}\),

\[\left\langle\mathbf{v},\mathbf{S}x^{k}-\mathbf{S}z\right\rangle_{M}\leq\sin\delta\left\|\mathbf{v}\right\|_{M}\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}.\]

From the inequality above, for all sufficiently large \(k>N_{\delta}\),

\[\left\|\mathbf{S}x^{k}\right\|_{M}^{2}=\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}+2\left\langle\mathbf{S}x^{k}-\mathbf{S}z,\mathbf{S}z\right\rangle_{M}+\left\|\mathbf{S}z\right\|_{M}^{2}\]
\[\quad=\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}+2\left\langle\mathbf{S}x^{k}-\mathbf{S}z,\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\rangle_{M}+2\left\langle\mathbf{S}x^{k}-\mathbf{S}z,\frac{1}{\theta}\mathbf{v}\right\rangle_{M}+\left\|\mathbf{S}z\right\|_{M}^{2}\]
\[\quad\leq\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}+2\left\{\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}+\sin\delta\left\|\frac{1}{\theta}\mathbf{v}\right\|_{M}\right\}\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}+\left\|\mathbf{S}z\right\|_{M}^{2}.\]

By the inequality above, the term \(U^{k}\) can be bounded as

\[U^{k}=-\left(\alpha\theta-\alpha^{2}\theta^{2}\right)\left\|\mathbf{S}x^{k}-\mathbf{S}z^{k}\right\|_{M}^{2}+\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x^{k}\right\|_{M}^{2}\]
\[\quad\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}\]
\[\quad\quad+2\theta^{2}\left(\beta-\alpha^{2}\right)\left\{\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}+\sin\delta\left\|\frac{1}{\theta}\mathbf{v}\right\|_{M}\right\}\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}\]
\[\quad\quad+\left(\alpha\theta-\alpha^{2}\theta^{2}\right)\left\{\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}-\left\|\mathbf{S}x^{k}-\mathbf{S}z^{k}\right\|_{M}^{2}\right\}.\]

Here, the term \(\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}-\left\|\mathbf{S}x^{k}-\mathbf{S}z^{k}\right\|_{M}^{2}\) is bounded above by

\[\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}-\left\|\mathbf{S}x^{k}-\mathbf{S}z^{k}\right\|_{M}^{2}\leq-\left\|\mathbf{S}z-\mathbf{S}z^{k}\right\|_{M}^{2}+2\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}\left\|\mathbf{S}z-\mathbf{S}z^{k}\right\|_{M},\]

by the triangle inequality.
Substituting this bound,

\[U^{k}\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}\]
\[\quad+2\theta^{2}\left(\beta-\alpha^{2}\right)\left\{\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}+\sin\delta\left\|\frac{1}{\theta}\mathbf{v}\right\|_{M}\right\}\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}\]
\[\quad+\left(\alpha\theta-\alpha^{2}\theta^{2}\right)\left\{-\left\|\mathbf{S}z-\mathbf{S}z^{k}\right\|_{M}^{2}+2\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}\left\|\mathbf{S}z-\mathbf{S}z^{k}\right\|_{M}\right\}\]
\[\quad=\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}-\left(\alpha\theta-\alpha^{2}\theta^{2}\right)\left\|\mathbf{S}z-\mathbf{S}z^{k}\right\|_{M}^{2}-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}+2\theta\tau_{\delta,z,k}\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}\]
\[\quad\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}+2\theta\tau_{\delta,z,k}\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M},\]

where \(\tau_{\delta,z,k}\) is defined as

\[\tau_{\delta,z,k}=\left(\beta-\alpha^{2}\right)\left(\left\|\theta\mathbf{S}z-\mathbf{v}\right\|_{M}+\sin\delta\left\|\mathbf{v}\right\|_{M}\right)+\left(\alpha-\alpha^{2}\theta\right)\left\|\mathbf{S}z-\mathbf{S}z^{k}\right\|_{M}.\]

To obtain an upper bound of \(\tau_{\delta,z,k}\) independent of \(k\), an upper bound of \(\left\|\mathbf{S}z-\mathbf{S}z^{k}\right\|_{M}\) independent of \(k\) is required. Since \(\mathbf{v}\) is the infimal displacement vector,

\[\left\langle\mathbf{S}z^{k}-\frac{1}{\theta}\mathbf{v},\mathbf{v}\right\rangle_{M}\geq 0,\quad\left\|\mathbf{S}z\right\|_{M}^{2}-\left\|\frac{1}{\theta}\mathbf{v}\right\|_{M}^{2}\geq 0\]

hold. With \(\left\|\mathbf{S}z^{k}\right\|_{M}\leq\left\|\mathbf{S}z\right\|_{M}\) from Lemma 4.5, such a uniform upper bound can be built as

\[\left\|\mathbf{S}z-\mathbf{S}z^{k}\right\|_{M}\leq\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}+\left\|\mathbf{S}z^{k}-\frac{1}{\theta}\mathbf{v}\right\|_{M}\]
\[\quad=\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}+\sqrt{\left\|\mathbf{S}z^{k}\right\|_{M}^{2}-2\left\langle\mathbf{S}z^{k},\frac{1}{\theta}\mathbf{v}\right\rangle_{M}+\left\|\frac{1}{\theta}\mathbf{v}\right\|_{M}^{2}}\]
\[\quad=\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}+\sqrt{\left\|\mathbf{S}z^{k}\right\|_{M}^{2}-2\left\langle\mathbf{S}z^{k}-\frac{1}{\theta}\mathbf{v},\frac{1}{\theta}\mathbf{v}\right\rangle_{M}-\left\|\frac{1}{\theta}\mathbf{v}\right\|_{M}^{2}}\]
\[\quad\leq\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}+\sqrt{\left\|\mathbf{S}z\right\|_{M}^{2}-\left\|\frac{1}{\theta}\mathbf{v}\right\|_{M}^{2}}.\]

Now define \(\tilde{\tau}_{\delta,z}\) as

\[\tilde{\tau}_{\delta,z}=\left(\beta-\alpha^{2}\right)\left(\left\|\theta\mathbf{S}z-\mathbf{v}\right\|_{M}+\sin\delta\left\|\mathbf{v}\right\|_{M}\right)+\left(\alpha-\alpha^{2}\theta\right)\left\{\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}+\sqrt{\left\|\mathbf{S}z\right\|_{M}^{2}-\left\|\frac{1}{\theta}\mathbf{v}\right\|_{M}^{2}}\right\};\]

then we have \(\tau_{\delta,z,k}\leq\tilde{\tau}_{\delta,z}\) for any \(k\in\mathbb{N}\).
Thus, we can bound \(U^{k}\) as \[\begin{split}U^{k}&\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}+2\theta\tau_{\delta,z,k}\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}\\&\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}^{2}+2\theta\tilde{\tau}_{\delta,z}\left\|\mathbf{S}x^{k}-\mathbf{S}z\right\|_{M}.\end{split}\] From the fact that \(-at^{2}+2bt\leq\frac{b^{2}}{a}\) for any \(a,b>0\), \(U^{k}\) has an upper bound completely independent of \(x^{k}\) and \(k\): \[U^{k}\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}+\frac{\theta}{\alpha-\beta\theta}\tilde{\tau}_{\delta,z}^{2}.\] However, this upper bound holds only for \(k>N_{\delta}\). Since \(N_{\delta}\) depends on the realization of the sequence \(x^{0},x^{1},x^{2},\dots\), the bound is valid only once that sequence is fixed. To avoid this problem, take the limit supremum of \(U^{k}\) over \(k\): \[\limsup_{k\to\infty}U^{k}\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}+\frac{\theta}{\alpha-\beta\theta}\tilde{\tau}_{\delta,z}^{2}.\] Furthermore, by the Cesàro mean argument, \[\limsup_{k\to\infty}\left\{\frac{1}{k}\left\|x^{0}-z\right\|_{M}^{2}+\frac{1}{k}\sum_{j=0}^{k-1}U^{j}\right\}\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}+\frac{\theta}{\alpha-\beta\theta}\tilde{\tau}_{\delta,z}^{2}.\] This inequality holds under a single condition on the realization of the sequence \(x^{0},x^{1},x^{2},\dots\), namely \[\lim_{k\to\infty}\frac{x^{k}}{k}=-\alpha\mathbf{v},\] regardless of \(z\) and \(\delta\). Since \(\lim_{k\to\infty}\frac{x^{k}}{k}=-\alpha\mathbf{v}\) holds almost surely, the pathwise bound above holds with probability \(1\) for any \(z\) and \(\delta\), and taking expectations yields \[\mathbb{E}\left[\limsup_{k\to\infty}\left\{\frac{1}{k}\left\|x^{0}-z\right\|_{M}^{2}+\frac{1}{k}\sum_{j=0}^{k-1}U^{j}\right\}\right]\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}+\frac{\theta}{\alpha-\beta\theta}\tilde{\tau}_{\delta,z}^{2}.\] By Fatou's lemma, we also have \[\limsup_{k\to\infty}\mathbb{E}\left[\frac{1}{k}\left\|x^{0}-z\right\|_{M}^{2}+\frac{1}{k}\sum_{j=0}^{k-1}U^{j}\right]\leq\mathbb{E}\left[\limsup_{k\to\infty}\left\{\frac{1}{k}\left\|x^{0}-z\right\|_{M}^{2}+\frac{1}{k}\sum_{j=0}^{k-1}U^{j}\right\}\right].\] Thus, \(\limsup_{k\to\infty}\mathbb{E}\left[k\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\) has the upper bound \[\begin{split}\limsup_{k\to\infty}\mathbb{E}\left[k\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]&\leq\limsup_{k\to\infty}\mathbb{E}\left[\frac{1}{k}\left\|x^{0}-z\right\|_{M}^{2}+\frac{1}{k}\sum_{j=0}^{k-1}U^{j}\right]\\&\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}+\frac{\theta}{\alpha-\beta\theta}\tilde{\tau}_{\delta,z}^{2}.\end{split}\tag{5}\] **Proof of statement (b).** We first prove statement (b) of Theorem 5.1.
Since \[\limsup_{k\to\infty}k\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right)\leq\limsup_{k\to\infty}\mathbb{E}\left[k\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right],\] from (5) we have \[\limsup_{k\to\infty}k\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right)\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}+\frac{\theta}{\alpha-\beta\theta}\tilde{\tau}_{\delta,z}^{2}.\] Recall that \(z\) and \(\delta\) were chosen arbitrarily at the start. Since \(\mathbf{v}\) is the infimal displacement vector, there exists a sequence of \(z\)'s along which \(\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}\to 0\). Taking the limits \(\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}\to 0\) and \(\delta\to 0\), \[\lim_{\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}\to 0,\delta\to 0}\tilde{\tau}_{\delta,z}=0.\] Since \(\limsup_{k\to\infty}k\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right)\) is independent of \(\delta\) and \(z\), letting \(\left\|\mathbf{S}z-\frac{1}{\theta}\mathbf{v}\right\|_{M}\to 0\) and \(\delta\to 0\) gives \[\limsup_{k\to\infty}k\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right)\leq\left(\beta-\alpha^{2}\right)\left\|\mathbf{v}\right\|_{M}^{2}.\] **Proof of statement (a).** Next, to prove statement (a) of Theorem 5.1, we start again from inequality (5), \[\limsup_{k\to\infty}\mathbb{E}\left[k\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}z\right\|_{M}^{2}+\frac{\theta}{\alpha-\beta\theta}\tilde{\tau}_{\delta,z}^{2}.\] From Lemma 4.7 we have \[\left\|\mathbb{E}\left[\frac{x^{k}}{k}-\frac{z^{k}}{k}\right]\right\|_{M}^{2}\leq\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\frac{1}{k}\left(1-\alpha\theta\right)\left[2\sqrt{\alpha\theta}\left\|\mathbf{S}x^{0}\right\|_{M}\left\|\mathbf{S}z^{0}\right\|_{M}-\frac{\alpha}{\theta}\left\|\mathbf{v}\right\|_{M}^{2}\right]+\frac{1}{k^{2}}\left\|x^{0}-z^{0}\right\|_{M}^{2}.\] When \(x_{\star}\) is a point such that \(x_{\star}-\mathbf{T}x_{\star}=\mathbf{v}\), set \(z^{0}=x_{\star}\). Then \[z^{k}=-k\alpha\mathbf{v}+x_{\star},\] since \(\left\|\theta\mathbf{S}z^{k}\right\|_{M}\leq\left\|\theta\mathbf{S}z^{0}\right\|_{M}=\left\|\mathbf{v}\right\|_{M}\) forces \(\theta\mathbf{S}z^{k}=\mathbf{v}\) for all \(k\in\mathbb{N}\). Expanding \(\left\|\frac{x^{k}}{k}+\alpha\mathbf{v}\right\|_{M}^{2}=\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}+\frac{x_{\star}}{k}\right\|_{M}^{2}\) and applying the Cauchy–Schwarz inequality together with the bound above, we obtain \[\mathbb{E}\left[\left\|\frac{x^{k}}{k}+\alpha\mathbf{v}\right\|_{M}^{2}\right]\leq\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\\+2\frac{1}{\sqrt{k}}\sqrt{\left(1-\alpha\theta\right)\left[2\sqrt{\alpha\theta}\left\|\mathbf{S}x^{0}\right\|_{M}\left\|\mathbf{S}x_{\star}\right\|_{M}-\frac{\alpha}{\theta}\left\|\mathbf{v}\right\|_{M}^{2}\right]+\frac{1}{k}\left\|x^{0}-z^{0}\right\|_{M}^{2}}\left\|\frac{x_{\star}}{k}\right\|_{M}+\left\|\frac{x_{\star}}{k}\right\|_{M}^{2}.\] Note that the last two terms are \(\mathcal{O}\left(k^{-3/2}\right)\).
By taking the limit supremum as \(k\to\infty\), \[\limsup_{k\to\infty}\mathbb{E}\left[k\left\|\frac{x^{k}}{k}+\alpha\mathbf{v}\right\|_{M}^{2}\right]\leq\limsup_{k\to\infty}\mathbb{E}\left[k\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right].\] Thus, \[\limsup_{k\to\infty}\mathbb{E}\left[k\left\|\frac{x^{k}}{k}+\alpha\mathbf{v}\right\|_{M}^{2}\right]\leq\theta^{2}\left(\beta-\alpha^{2}\right)\left\|\mathbf{S}x_{\star}\right\|_{M}^{2}+\frac{\theta}{\alpha-\beta\theta}\tilde{\tau}_{\delta,x_{\star}}^{2}.\] Since \(\theta\mathbf{S}x_{\star}=\mathbf{v}\), by \(\delta\to 0\), we have \(\tilde{\tau}_{\delta,x_{\star}}\to 0\) and \[\limsup_{k\to\infty}\mathbb{E}\left[k\left\|\frac{x^{k}}{k}+\alpha\mathbf{v}\right\|_{M}^{2}\right]\leq\left(\beta-\alpha^{2}\right)\left\|\mathbf{v}\right\|_{M}^{2}.\]

### Tightness of variance bounds

In this section, we provide examples for which the variance bound of Theorem 5.1 holds with equality and with strict inequality. We then discuss how the geometry of \(\operatorname{range}\left(\mathbf{I}-\mathbf{T}\right)\) influences the tightness of the inequality. Throughout this section, we consider the setting where the norm and inner product are the standard \(\|\cdot\|\)-norm and \(\langle\cdot,\cdot\rangle\), with \(\mathcal{H}=\mathbb{R}^{m}\), \(\mathcal{H}_{i}=\mathbb{R}\), and \(\mathcal{I}\) following the uniform distribution on the set of standard unit vectors of \(\mathcal{H}\). In this case, the smallest \(\beta\) we can choose is \(\beta=\alpha=1/m\).

#### 5.2.1 Example: Theorem 5.1(b) holds with equality.

Consider the translation operator \(\mathbf{T}(x)=x-\mathbf{v}\). When \(x^{0},x^{1},x^{2},\dots\) are the iterates of (RC-FPI) with \(\mathbf{T}\), then \[k\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right)=\alpha\left(1-\alpha\right)\left\|\mathbf{v}\right\|^{2}\] for \(k=1,2,\dots\), and the variance bound of Theorem 5.1 holds with equality.

#### 5.2.2 Example: Theorem 5.1(b) holds with strict inequality.

Define \(\mathbf{T}\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\) as \[\mathbf{T}\colon\,(x,y)\mapsto\left(x-\frac{1+x-y}{2},\,y-\frac{1+y-x}{2}\right),\] which is \(\frac{1}{2}\)-averaged and has the infimal displacement vector \(\left(\frac{1}{2},\frac{1}{2}\right)\). When \((x^{0},y^{0}),(x^{1},y^{1}),(x^{2},y^{2}),\dots\) are the iterates of (RC-FPI) with \(\mathbf{T}\), then \[\limsup_{k\rightarrow\infty}\,k\mathrm{Var}_{M}\left(\frac{\left(x^{k},y^{k}\right)}{k}\right)=\frac{1}{24}<\frac{1}{8},\tag{6}\] where \(\frac{1}{8}=\alpha\left(1-\alpha\right)\left\|\mathbf{v}\right\|^{2}\). _Proof of Equation (6)._ We run (RC-FPI) with the \(\frac{1}{2}\)-averaged operator \(\mathbf{T}\) defined above, whose infimal displacement vector is \(\mathbf{v}=\left(\frac{1}{2},\frac{1}{2}\right)\), where \(\mathcal{I}\) follows the uniform distribution on \(\{(1,0),(0,1)\}\).
The random coordinate operators are, respectively, \[\begin{split}\mathbf{T}_{(1,0)}&\colon(x,y)\mapsto\left(x-\frac{1+x-y}{2},y\right),\\ \mathbf{T}_{(0,1)}&\colon(x,y)\mapsto\left(x,y-\frac{1+y-x}{2}\right).\end{split}\] When we set the initial point \(\left(x^{0},y^{0}\right)\) to the origin, from the relations \[\begin{split}\mathbb{E}\left[x^{k+1}\right]&=\mathbb{E}\left[x^{k}\right]-\frac{1}{4}-\frac{1}{4}\mathbb{E}\left[x^{k}-y^{k}\right],\\ \mathbb{E}\left[y^{k+1}\right]&=\mathbb{E}\left[y^{k}\right]-\frac{1}{4}+\frac{1}{4}\mathbb{E}\left[x^{k}-y^{k}\right],\\ \mathbb{E}\left[x^{k+1}-y^{k+1}\right]&=\frac{1}{2}\mathbb{E}\left[x^{k}-y^{k}\right],\end{split}\] each expectation evaluates to \(\mathbb{E}\left[x^{k}\right]=\mathbb{E}\left[y^{k}\right]=-\frac{1}{4}k\). Next, the expectation \(\mathbb{E}\left[\left\|x^{k}-y^{k}\right\|^{2}\right]\) satisfies the recurrence relation \[\begin{split}\mathbb{E}\left[\left\|x^{k+1}-y^{k+1}\right\|^{2}\right]&=\mathbb{E}_{(x^{k},y^{k})}\mathbb{E}\left[\left\|x^{k+1}-y^{k+1}\right\|^{2}\mid\left(x^{k},y^{k}\right)\right]\\&=\mathbb{E}_{(x^{k},y^{k})}\left[\frac{1}{4}\left(x^{k}-y^{k}\right)^{2}+\frac{1}{4}\right],\end{split}\] whose solution is \(\mathbb{E}\left[\left\|x^{k}-y^{k}\right\|^{2}\right]=\frac{1}{3}\left(1-4^{-k}\right)\). Finally, the expectation \(\mathbb{E}\left[\left\|x^{k}\right\|^{2}+\left\|y^{k}\right\|^{2}\right]\) satisfies the relation \[\mathbb{E}\left[\left\|x^{k+1}\right\|^{2}+\left\|y^{k+1}\right\|^{2}\right]=\mathbb{E}\left[\left\|x^{k}\right\|^{2}+\left\|y^{k}\right\|^{2}\right]+\frac{1}{4}-\frac{1}{2}\mathbb{E}\left[x^{k}+y^{k}\right]-\frac{1}{4}\mathbb{E}\left[\left\|x^{k}-y^{k}\right\|^{2}\right],\] which can be applied inductively to obtain \[\mathbb{E}\left[\left\|x^{k}\right\|^{2}+\left\|y^{k}\right\|^{2}\right]=\frac{1}{8}k^{2}+\frac{1}{24}k+\frac{1}{9}\left(1-4^{-k}\right).\] From the above computations, the variance of \(\left(x^{k},y^{k}\right)\) is explicitly \[\mathrm{Var}_{M}\left(x^{k},y^{k}\right)=\frac{1}{24}k+\frac{1}{9}\left(1-4^{-k}\right).\] Thus, \[\limsup_{k\to\infty}k\mathrm{Var}_{M}\left(\frac{\left(x^{k},y^{k}\right)}{k}\right)=\frac{1}{24},\] while, for comparison, the corresponding value for (RC-FPI) with the translation operator is \(\frac{1}{2}\left(1-\frac{1}{2}\right)\left\|\mathbf{v}\right\|^{2}=\frac{1}{8}\).

### Relationship between the variance and the range set.

Consider the three convex sets \(A\), \(B\), and \(C\) in Figure 1 as subsets of \(\mathcal{H}=\mathbb{R}^{2}\). The explicit definitions are \[\begin{split}A&=\left\{(x,y)\mid x\leq-10,y\leq-5\right\}\\ B&=\left\{(x,y)\mid\mathrm{dist}\left((x,y),3A\right)\leq 2\sqrt{5^{2}+10^{2}}\right\}\\ C&=\left\{(x,y)\mid-2x-y\geq 25\right\},\end{split}\] where \(\mathrm{dist}\left((x,y),3A\right)\) denotes the (Euclidean) distance of \((x,y)\) to the set \(3A=\left\{(3x,3y)\mid(x,y)\in A\right\}\). The minimum norm elements of the three sets are all identically equal to \((-10,-5)\). Let \(\mathbf{T}=\mathbf{I}-\theta\mathrm{Proj}\), where \(\mathrm{Proj}\) denotes the projection onto \(A\), \(B\), and \(C\), respectively. Then \(\mathbf{T}\) is \(\theta\)-averaged and \(\mathrm{range}\left(\theta^{-1}(\mathbf{I}-\mathbf{T})\right)\) is equal to \(A\), \(B\), and \(C\), respectively. The three sets are designed so that \(\mathbf{T}\), in all three cases, has the same infimal displacement vector. Figure 2 (left) shows that the normalized iterates of the three instances have different asymptotic variances despite \(\mathbf{v}\) being identical in all cases.
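As a quick sanity check of the two examples above, the asymptotic variance can be estimated by direct Monte Carlo simulation. The following minimal NumPy sketch is our own illustration, not part of the paper's experiments; it runs (RC-FPI) for the operator of Section 5.2.2 with uniform coordinate selection and estimates \(k\mathrm{Var}\left(\frac{x^{k}}{k}\right)\), which should come out close to \(1/24\approx 0.0417\) rather than the bound \(1/8\):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rcfpi(k, n_samples):
    """Run n_samples independent RC-FPI trajectories for k steps with the
    1/2-averaged operator T(x, y) = (x - (1+x-y)/2, y - (1+y-x)/2),
    updating one coordinate per iteration, chosen uniformly at random."""
    pts = np.zeros((n_samples, 2))
    for _ in range(k):
        pick = rng.integers(0, 2, size=n_samples)  # which coordinate to update
        x, y = pts[:, 0], pts[:, 1]
        new_x = x - (1 + x - y) / 2
        new_y = y - (1 + y - x) / 2
        pts[:, 0] = np.where(pick == 0, new_x, x)
        pts[:, 1] = np.where(pick == 1, new_y, y)
    return pts

k, n = 2000, 20000
normalized = simulate_rcfpi(k, n) / k
# k * Var(x^k / k): the total variance is the sum of per-coordinate variances.
print(k * normalized.var(axis=0).sum())  # ~ 1/24 ~ 0.0417
```

Replacing the coordinate updates with the translation \(x_{i}\mapsto x_{i}-\frac{1}{2}\) reproduces the equality case of Section 5.2.1, with the estimate concentrating near \(\alpha(1-\alpha)\left\|\mathbf{v}\right\|^{2}=1/8\).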
In the experiment, \(\theta\) was set to \(0.2\); as a consequence, \(\mathbf{v}=(-2,-1)\) is the infimal displacement vector in each experiment. (RC-FPI) is performed with \(x^{0}=(0,0)\), \(m=2\), and \(\mathcal{H}_{1}=\mathcal{H}_{2}=\mathbb{R}\). We conjecture that the asymptotic variance is intimately related to the geometry of the set \(\operatorname{range}\left(\theta^{-1}(\mathbf{I}-\mathbf{T})\right)\). Let \(z\in\mathbb{R}^{2}\) and \(\delta>0\), and define \[D_{z,\delta}=\left\{u\in\mathbb{R}^{2}\,\big{|}\,\langle\mathbf{v},u-\mathbf{S}z\rangle\leq\|\mathbf{v}\|\,\|u-\mathbf{S}z\|\sin\delta\right\}.\] Lemma 5.3 states that \(\mathbf{S}x^{k}\in D_{z,\delta}\) eventually, i.e., the inclusion holds for large enough \(k\). Of course, \(\mathbf{S}x^{k}\in\operatorname{range}\left(\theta^{-1}(\mathbf{I}-\mathbf{T})\right)\) for all \(k\), and Figure 2 (right) depicts \(D_{z,\delta}\cap\operatorname{range}\left(\theta^{-1}(\mathbf{I}-\mathbf{T})\right)\). In the proof of Theorem 5.1, loosely speaking, we establish the upper bound using \[-\theta\left(\alpha-\beta\theta\right)\left\|\mathbf{S}x^{k}-\theta^{-1}\mathbf{v}\right\|_{M}^{2}\leq 0.\] Therefore, the variance can be strictly smaller than the upper bound when \(\left\|\mathbf{S}x^{k}-\theta^{-1}\mathbf{v}\right\|_{M}^{2}\) is large, which can happen when the intersection is large. Indeed, Figure 2 shows that the sets with large intersection with \(D_{z,\delta}\) have smaller asymptotic variance.

Figure 1: Visualization of \(A\), \(B\), and \(C\) as defined in Section 5.2. The grey dot is \(\theta^{-1}\mathbf{v}\), where \(\mathbf{v}\) is the infimal displacement vector of \(\mathbf{T}=\mathbf{I}-\theta\text{Proj}\).

Figure 2: (Left) Graph of \(k\widetilde{\operatorname{Var}}\left(\frac{x^{k}}{k}\right)\) against \(k\), where \(\widetilde{\operatorname{Var}}\left(\frac{x^{k}}{k}\right)\) is the variance estimate with 10,000 samples. (Right) Visualization of \(A\), \(B\), and \(C\) as red, yellow, and green regions and \(D_{z,\delta}\) as the hatched area, where the sets are as defined in Section 5.2. We conjecture that a broader intersection with \(D_{z,\delta}\) leads to smaller asymptotic variance.

## 6 Infeasibility detection

In this section, we present an infeasibility detection method for (RC-FPI) based on hypothesis testing. **Theorem 6.1**.: _Let \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with respect to \(\left\|\cdot\right\|_{M}\) with \(\theta\in(0,1]\). Let \(\mathbf{v}\) be the infimal displacement vector of \(\mathbf{T}\). Assume \(\mathcal{I}^{0},\mathcal{I}^{1},\ldots\) is sampled IID from a distribution satisfying the uniform expected step-size condition (2) with \(\alpha\in(0,1]\), and assume (3) holds with some \(\beta>0\) such that \(\beta<\alpha/\theta\). Let \(x^{0},x^{1},x^{2},\ldots\) be the iterates of (RC-FPI). Suppose the null hypothesis \(\left\|\mathbf{v}\right\|_{M}\leq\delta\) holds, and let \(\varepsilon>\alpha\delta\). 
Then_ \[\mathbb{P}\left(\left\|\frac{x^{k}}{k}\right\|_{M}\geq\varepsilon\right)\lesssim\frac{\left(\beta-\alpha^{2}\right)\delta^{2}}{k(\varepsilon-\alpha\delta)^{2}}\] _as \(k\to\infty\)._ Therefore, for any statistical significance level \(p\in(0,1)\), the test \[\left\|\frac{x^{k}}{k}\right\|_{M}\geq\varepsilon\] with \(k\gtrsim\frac{\left(\beta-\alpha^{2}\right)\delta^{2}}{p\left(\varepsilon-\alpha\delta\right)^{2}}\) can reject the null hypothesis and conclude that \(\left\|\mathbf{v}\right\|_{M}>\delta\), which implies that the problem is inconsistent. For the proof of Theorem 6.1, we begin with the simpler case under the assumption \(\mathbf{v}\in\text{range}\left(\mathbf{I}-\mathbf{T}\right)\). Let \(\left\|\mathbf{v}\right\|_{M}\leq\delta\) be the null hypothesis, with \(\delta\) satisfying \(\alpha\delta<\varepsilon\). By the triangle inequality, the Markov inequality, and Theorem 5.1, under the null hypothesis, \[\begin{split}\mathbb{P}\left(\left\|\frac{x^{k}}{k}\right\|_{M}\geq\varepsilon\right)&\leq\mathbb{P}\left(\left\|\frac{x^{k}}{k}+\alpha\mathbf{v}\right\|_{M}\geq\varepsilon-\alpha\delta\right)\\&\leq\frac{1}{\left(\varepsilon-\alpha\delta\right)^{2}}\mathbb{E}\left[\left\|\frac{x^{k}}{k}+\alpha\mathbf{v}\right\|_{M}^{2}\right]\\&\lesssim\frac{\left(\beta-\alpha^{2}\right)\delta^{2}}{k(\varepsilon-\alpha\delta)^{2}}\end{split}\] as \(k\to\infty\). When \(\mathbf{v}\notin\text{range}\left(\mathbf{I}-\mathbf{T}\right)\), we can still obtain the same (asymptotic) statistical significance with the same test and the same iteration count \(k\gtrsim\frac{\left(\beta-\alpha^{2}\right)\delta^{2}}{p\left(\varepsilon-\alpha\delta\right)^{2}}\). Below, we present the full proof of the general case, without the assumption \(\mathbf{v}\in\text{range}\left(\mathbf{I}-\mathbf{T}\right)\).
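Before the proof, we give a schematic illustration of how the test could be used in practice. The sketch below is ours, not the paper's code; the operator, the parameter values, and the helper name are illustrative assumptions, and it uses uniform single-coordinate selection so that \(\alpha=\beta=1/m\) and \(M=\mathbf{I}\):

```python
import numpy as np

def rc_fpi_infeasibility_test(T, m, eps, delta, p, rng=None):
    """Run (RC-FPI) with uniform single-coordinate updates (alpha = beta = 1/m)
    and reject the null hypothesis ||v|| <= delta when ||x^k / k|| >= eps.

    The iteration count follows k ~ (beta - alpha^2) delta^2 / (p (eps - alpha delta)^2),
    targeting asymptotic significance level p; running longer only sharpens the test.
    """
    rng = rng or np.random.default_rng(0)
    alpha = beta = 1.0 / m
    assert alpha * delta < eps
    k = int(np.ceil((beta - alpha**2) * delta**2 / (p * (eps - alpha * delta) ** 2)))
    x = np.zeros(m)
    for _ in range(k):
        i = rng.integers(m)
        x[i] = T(x)[i]                    # update only the sampled coordinate
    return np.linalg.norm(x / k) >= eps, k

# Illustration: a fixed-point-free translation T(x) = x - v with ||v|| = 4.
m = 20
v = np.full(m, 4.0 / np.sqrt(m))
reject, k = rc_fpi_infeasibility_test(lambda x: x - v, m, eps=0.1, delta=0.5, p=0.01)
print(reject, k)   # expected: True (the problem is detected as inconsistent)
```

Here \(\left\|\frac{x^{k}}{k}\right\|\) concentrates near \(\alpha\left\|\mathbf{v}\right\|=0.2\geq\varepsilon\), so the test rejects; rejection power under the alternative naturally requires \(\alpha\left\|\mathbf{v}\right\|>\varepsilon\).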
Proof.: First, by the triangle inequality and the Markov inequality, \[\begin{split}\mathbb{P}\left(\left\|\frac{x^{k}}{k}\right\|_{M}\geq\varepsilon\right)&\leq\mathbb{P}\left(\left\|\frac{x^{k}}{k}-\mathbb{E}\left[\frac{x^{k}}{k}\right]\right\|_{M}\geq\varepsilon-\left\|\mathbb{E}\left[\frac{x^{k}}{k}\right]\right\|_{M}\right)\\&\leq\left(\varepsilon-\left\|\mathbb{E}\left[\frac{x^{k}}{k}\right]\right\|_{M}\right)^{-2}\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right).\end{split}\] To bound the term \(\left\|\mathbb{E}\left[\frac{x^{k}}{k}\right]\right\|_{M}\), use the triangle inequality, Jensen's inequality, and Lemma 4.7 with \(z^{0}=x^{0}\) to obtain \[\left\|\mathbb{E}\left[\frac{x^{k}}{k}\right]\right\|_{M}\leq\left\|\mathbb{E}\left[\frac{x^{k}}{k}\right]-\frac{z^{k}}{k}\right\|_{M}+\left\|\frac{z^{k}}{k}\right\|_{M}\leq\mathcal{O}\left(\frac{1}{\sqrt{k}}\right)+\left\|\frac{z^{k}}{k}\right\|_{M}.\] By [106, Theorem 3], for any \(\omega>0\), there exists an \(\omega\)-dependent constant \(C_{\omega}\) such that \[\left\|\frac{z^{k}}{k}\right\|_{M}\leq\alpha\left\|\mathbf{v}\right\|_{M}+\frac{1}{k}C_{\omega}+\omega.\] Thus, \[\left\|\mathbb{E}\left[\frac{x^{k}}{k}\right]\right\|_{M}\leq\alpha\left\|\mathbf{v}\right\|_{M}+\mathcal{O}\left(\frac{1}{\sqrt{k}}\right)+\frac{1}{k}C_{\omega}+\omega.\] Substituting this inequality for \(\left\|\mathbb{E}\left[\frac{x^{k}}{k}\right]\right\|_{M}\), we obtain \[\begin{split}k\mathbb{P}\left(\left\|\frac{x^{k}}{k}\right\|_{M}\geq\varepsilon\right)&\leq\left(\varepsilon-\left\|\mathbb{E}\left[\frac{x^{k}}{k}\right]\right\|_{M}\right)^{-2}k\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right)\\&\leq\left(\varepsilon-\alpha\left\|\mathbf{v}\right\|_{M}-\mathcal{O}\left(\frac{1}{\sqrt{k}}\right)-\frac{1}{k}C_{\omega}-\omega\right)^{-2}k\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right),\end{split}\] provided that \(\omega\) is chosen small enough and \(k\) is sufficiently large to keep \[\varepsilon-\alpha\left\|\mathbf{v}\right\|_{M}-\mathcal{O}\left(\frac{1}{\sqrt{k}}\right)-\frac{1}{k}C_{\omega}-\omega>0.\] Taking the limit supremum as \(k\to\infty\), by Theorem 5.1, \[\begin{split}\limsup_{k\to\infty}k\mathbb{P}\left(\left\|\frac{x^{k}}{k}\right\|_{M}\geq\varepsilon\right)&\leq\limsup_{k\to\infty}\left(\varepsilon-\alpha\left\|\mathbf{v}\right\|_{M}-\mathcal{O}\left(\frac{1}{\sqrt{k}}\right)-\frac{1}{k}C_{\omega}-\omega\right)^{-2}k\mathrm{Var}_{M}\left(\frac{x^{k}}{k}\right)\\&\leq\frac{\left(\beta-\alpha^{2}\right)\left\|\mathbf{v}\right\|_{M}^{2}}{\left(\varepsilon-\alpha\left\|\mathbf{v}\right\|_{M}-\omega\right)^{2}}\end{split}\] for all sufficiently small \(\omega>0\). Thus, letting \(\omega\to 0\), \[\limsup_{k\to\infty}k\mathbb{P}\left(\left\|\frac{x^{k}}{k}\right\|_{M}\geq\varepsilon\right)\leq\frac{\left(\beta-\alpha^{2}\right)\left\|\mathbf{v}\right\|_{M}^{2}}{\left(\varepsilon-\alpha\left\|\mathbf{v}\right\|_{M}\right)^{2}}.\] Now consider the null hypothesis \(\left\|\mathbf{v}\right\|_{M}\leq\delta\). Under the null hypothesis, \[\limsup_{k\to\infty}k\mathbb{P}\left(\left\|\frac{x^{k}}{k}\right\|_{M}\geq\varepsilon\right)\leq\frac{(\beta-\alpha^{2})\delta^{2}}{\left(\varepsilon-\alpha\delta\right)^{2}},\] or in other words, \[\mathbb{P}\left(\left\|\frac{x^{k}}{k}\right\|_{M}\geq\varepsilon\right)\lesssim\frac{(\beta-\alpha^{2})\delta^{2}}{k\left(\varepsilon-\alpha\delta\right)^{2}},\] as \(k\to\infty\).
## 7 Extension to non-orthogonal basis and applications to decentralized optimization

Operator splitting methods such as ADMM/DRS [18, 19, 20] or PDHG [107] are fixed-point iterations with operators that are non-expansive with respect to \(M\)-norms with \(M\neq\mathbf{I}\), and in such cases, the coordinates form a non-orthogonal basis. Our analyses of Sections 4 and 5 were mostly general, accommodating any \(M\)-norm, with the sole exception of Lemma 4.1, which only applies to the case \(M=\mathbf{I}\). In this section, we use the notion of the Friedrichs angle to extend our analysis to general \(M\)-norms. We then apply our framework to decentralized optimization and present a numerical experiment.

### Convergence condition in non-orthogonal basis

We modify the underlying space \(\mathcal{H}\) by adding an extra \(\mathcal{H}_{0}\) block, so that \[\mathcal{H}=\mathcal{H}_{0}\oplus\mathcal{H}_{1}\oplus\mathcal{H}_{2}\oplus\ldots\oplus\mathcal{H}_{m},\] where each \(\mathcal{H}_{i}\) is a Hilbert space. Consider two subspaces \(U_{1}\) and \(U_{2}\) of \(\mathcal{H}\), \[U_{1}=\left\{(x_{0},0,0,\ldots,0)\,|\,x_{0}\in\mathcal{H}_{0}\right\},\quad U_{2}=\left\{(0,x_{1},x_{2},\ldots,x_{m})\,|\,x_{i}\in\mathcal{H}_{i},1\leq i\leq m\right\},\] so \(U_{1}\cap U_{2}=\{0\}\). We further assume that, with respect to the \(M\)-inner product of \(\mathcal{H}\), the block components of \(U_{2}\) are orthogonal to each other: \[\langle(0,0,\ldots,x_{i},\ldots,0),(0,0,\ldots,x_{j},\ldots,0)\rangle_{M}=0,\quad x_{i}\in\mathcal{H}_{i},x_{j}\in\mathcal{H}_{j},\quad 1\leq i<j\leq m.\] Note that every vector in \(\mathcal{H}\) can be uniquely expressed as a sum of vectors in \(U_{1}\) and \(U_{2}\). Given \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) and \(\mathbf{S}=(1/\theta)(\mathbf{I}-\mathbf{T})\), define \(\mathbf{G}\) and \(\mathbf{H}\) by \[\mathbf{S}x=\mathbf{G}x+\mathbf{H}x,\quad\mathbf{G}x\in U_{1},\quad\mathbf{H}x\in U_{2}\] for all \(x\in\mathcal{H}\). We decompose \(U_{2}\) into \(m\) block coordinates, which form mutually orthogonal subspaces with respect to the \(M\)-norm. With a selection vector \(\mathcal{I}\in\left[0,1\right]^{m}\), define the randomized coordinate operators as \[\begin{split}\mathbf{S}_{\mathcal{I}}&=\alpha\mathbf{G}+\sum_{i=1}^{m}\mathcal{I}_{i}\mathbf{H}_{i}\\ \mathbf{T}_{\mathcal{I}}&=\mathbf{I}-\theta\mathbf{S}_{\mathcal{I}},\end{split}\tag{7}\] where \(\mathbf{H}_{i}\) is defined similarly to how \(\mathbf{S}_{i}\) was defined in Section 3. The cosine of the Friedrichs angle \(c_{F}\) between \(U_{1}\) and \(U_{2}\) [84] is defined as the smallest value \(c\geq 0\) satisfying \[\left|\left\langle u_{1},u_{2}\right\rangle_{M}\right|\leq c\left\|u_{1}\right\|_{M}\left\|u_{2}\right\|_{M}\quad\forall\,u_{1}\in U_{1},\,u_{2}\in U_{2}.\] The RC-FPI with (7) converges, almost surely and in \(L^{2}\), if the cosine of the Friedrichs angle is sufficiently small. **Theorem 7.1**.: _Let \(\mathbf{T}\colon\mathcal{H}\to\mathcal{H}\) be \(\theta\)-averaged with respect to \(\left\|\cdot\right\|_{M}\) with \(\theta\in(0,1]\). Let \(\mathbf{v}\) be the infimal displacement vector of \(\mathbf{T}\). Assume \(\mathcal{I}^{0},\mathcal{I}^{1},\ldots\) is sampled IID from a distribution satisfying the uniform expected step-size condition (2) with \(\alpha\in(0,1]\). 
Let \(x^{0},x^{1},x^{2},\ldots\) be the iterates of (RC-FPI) \(x^{k+1}=\mathbf{T}_{\mathcal{I}^{k}}x^{k}\), where \(\mathbf{T}_{\mathcal{I}^{k}}\) is as defined in equation (7). Let \(c_{F}\) be the cosine of the Friedrichs angle between \(U_{1}\) and \(U_{2}\)._ 1. _If_ \(c_{F}\leq\sqrt{\frac{1-\theta}{1-\alpha\theta}}\)_, then_ \(\frac{x^{k}}{k}\overset{L^{2}}{\to}-\alpha\mathbf{v}\) _as_ \(k\to\infty\)_._ 2. _If_ \(c_{F}<\sqrt{\frac{1-\theta}{1-\alpha\theta}}\)_, then_ \(\frac{x^{k}}{k}\overset{a.s.}{\to}-\alpha\mathbf{v}\) _as_ \(k\to\infty\)_. (_\(\frac{x^{k}}{k}\) _converges strongly to_ \(-\alpha\mathbf{v}\) _with probability_ \(1\)_.) Furthermore, the results of Theorem_ 5.1 _hold._

### Proof of Theorem 7.1

**Lemma 7.2**: _Suppose the subspaces \(U_{1},U_{2}\) of \(\mathcal{H}\) with \(U_{1}\cap U_{2}=\{0\}\) satisfy the condition_ \[\left|\left\langle u_{1},u_{2}\right\rangle_{M}\right|\leq c_{F}\left\|u_{1}\right\|_{M}\left\|u_{2}\right\|_{M},\quad c_{F}\leq\sqrt{\frac{1-\theta}{1-\alpha\theta}}\] _for any \(u_{1}\in U_{1},u_{2}\in U_{2}\)._ _Then, there exists \(\beta\geq 0\) such that \(\beta\theta\leq\alpha\) and_ \[\mathbb{E}_{\mathcal{I}}\left[u_{\mathcal{I}}\right]=\alpha u,\quad\mathbb{E}_{\mathcal{I}}\left[\left\|u_{\mathcal{I}}\right\|_{M}^{2}\right]\leq\beta\left\|u\right\|_{M}^{2},\] _where \(u_{\mathcal{I}}\) is defined as_ \[u_{\mathcal{I}}=\alpha g+\sum_{i=1}^{m}\mathcal{I}_{i}h_{i},\quad u=g+h,\quad g\in U_{1},\quad h\in U_{2}.\] Proof.: The first equation follows from \[\mathbb{E}_{\mathcal{I}}\left[u_{\mathcal{I}}\right]=\alpha g+\mathbb{E}_{\mathcal{I}}\left[\sum_{i=1}^{m}\mathcal{I}_{i}h_{i}\right]=\alpha g+\alpha h=\alpha u.\] The expectation in the second equation is bounded as \[\begin{split}\mathbb{E}_{\mathcal{I}}\left[\left\|u_{\mathcal{I}}\right\|_{M}^{2}\right]&=\alpha^{2}\left\|g\right\|_{M}^{2}+2\mathbb{E}\left[\left\langle\sum_{i=1}^{m}\mathcal{I}_{i}h_{i},\alpha g\right\rangle_{M}\right]+\mathbb{E}\left[\left\|\sum_{i=1}^{m}\mathcal{I}_{i}h_{i}\right\|_{M}^{2}\right]\\&=\alpha^{2}\left\|g\right\|_{M}^{2}+2\left\langle\alpha g,\alpha h\right\rangle_{M}+\sum_{i=1}^{m}\mathbb{E}\left[\mathcal{I}_{i}^{2}\right]\left\|h_{i}\right\|_{M}^{2}\\&\leq\alpha^{2}\left\|g\right\|_{M}^{2}+2\alpha^{2}\left\langle g,h\right\rangle_{M}+\alpha\left\|h\right\|_{M}^{2}\\&=\alpha^{2}\left\|u\right\|_{M}^{2}+\left(\alpha-\alpha^{2}\right)\left\|h\right\|_{M}^{2},\end{split}\] where the inequality uses \(\mathbb{E}\left[\mathcal{I}_{i}^{2}\right]\leq\mathbb{E}\left[\mathcal{I}_{i}\right]=\alpha\) since \(\mathcal{I}_{i}\in[0,1]\). Note that, from \(\left|\left\langle g,h\right\rangle_{M}\right|\leq c_{F}\left\|g\right\|_{M}\left\|h\right\|_{M}\), \[\left\|u\right\|_{M}^{2}\geq\left\|g\right\|_{M}^{2}-2c_{F}\left\|g\right\|_{M}\left\|h\right\|_{M}+\left\|h\right\|_{M}^{2}\geq\left(1-c_{F}^{2}\right)\left\|h\right\|_{M}^{2}.\] Thus, set \(\beta\) as \[\beta=\alpha^{2}+\frac{\alpha-\alpha^{2}}{1-c_{F}^{2}}.\] Then, \[\mathbb{E}_{\mathcal{I}}\left[\left\|u_{\mathcal{I}}\right\|_{M}^{2}\right]\leq\alpha^{2}\left\|u\right\|_{M}^{2}+\left(\alpha-\alpha^{2}\right)\left\|h\right\|_{M}^{2}\leq\left(\alpha^{2}+\frac{\alpha-\alpha^{2}}{1-c_{F}^{2}}\right)\left\|u\right\|_{M}^{2}=\beta\left\|u\right\|_{M}^{2},\] with \[\theta\beta\leq\theta\left(\alpha+\frac{1-\alpha\theta}{\theta}\right)\alpha=\alpha.\] Additionally, if \(c_{F}<\sqrt{\frac{1-\theta}{1-\alpha\theta}}\) in Lemma 7.2, we have \(\theta\beta<\alpha\).
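Before proceeding to the proof of Theorem 7.1, we note that, for finite-dimensional instances, \(c_{F}\) can be evaluated numerically. The following sketch is our own helper, not part of the paper; it computes the largest cosine between two subspaces in a given \(M\)-inner product, which coincides with \(c_{F}\) when \(U_{1}\cap U_{2}=\{0\}\):

```python
import numpy as np

def friedrichs_cosine(B1, B2, M):
    """Largest cosine of the principal angles between span(B1) and span(B2)
    in the M-inner product <u, v>_M = u^T M v.

    B1, B2: matrices whose columns form bases of U1 and U2;
    M: symmetric positive definite. When U1 and U2 intersect only at {0},
    this value equals the cosine of the Friedrichs angle c_F.
    """
    def m_orthonormalize(B):
        # M-orthonormalize the columns of B via Cholesky of the Gram matrix.
        L = np.linalg.cholesky(B.T @ M @ B)
        return B @ np.linalg.inv(L).T
    Q1 = m_orthonormalize(B1)
    Q2 = m_orthonormalize(B2)
    return np.linalg.svd(Q1.T @ M @ Q2, compute_uv=False).max()
```

One can then check the condition \(c_{F}\leq\sqrt{(1-\theta)/(1-\alpha\theta)}\) of Theorem 7.1 directly for a concrete splitting.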
Proof of Theorem 7.1.: By Lemma 7.2, \(\beta\) depends on the cosine of the Friedrichs angle \(c_{F}\) through \[\mathbb{E}_{\mathcal{I}}\left[u_{\mathcal{I}}\right]=\alpha u,\quad\mathbb{E}_{\mathcal{I}}\left[\left\|u_{\mathcal{I}}\right\|_{M}^{2}\right]\leq\beta\left\|u\right\|_{M}^{2},\quad\beta=\alpha^{2}+\frac{\alpha-\alpha^{2}}{1-c_{F}^{2}}.\] Hence, when \(c_{F}\leq\sqrt{\frac{1-\theta}{1-\alpha\theta}}\), we have \(\beta\leq\alpha/\theta\), and when \(c_{F}<\sqrt{\frac{1-\theta}{1-\alpha\theta}}\), we have \(\beta<\alpha/\theta\). **Proof of statement (a).** Since \(c_{F}\leq\sqrt{\frac{1-\theta}{1-\alpha\theta}}\), we have \(\beta\leq\alpha/\theta\). Therefore, we may use the result of Lemma 4.7 with \(z^{0}=x^{0}\): \[\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]\leq\frac{1}{k}\left(2\sqrt{\alpha\theta}\left(1-\alpha\theta\right)\left\|\mathbf{S}x^{0}\right\|_{M}\left\|\mathbf{S}z^{0}\right\|_{M}-\frac{\alpha}{\theta}\left(1-\alpha\theta\right)\left\|\mathbf{v}\right\|_{M}^{2}\right).\] Taking the limit \(k\rightarrow\infty\), \[\lim_{k\rightarrow\infty}\mathbb{E}\left[\left\|\frac{x^{k}}{k}-\frac{z^{k}}{k}\right\|_{M}^{2}\right]=0,\quad\lim_{k\rightarrow\infty}\left\|\frac{z^{k}}{k}+\alpha\mathbf{v}\right\|_{M}=0,\] where the second equation is from Theorem 2.1. These two limits provide \(L^{2}\) convergence of the normalized iterates, namely \[\frac{x^{k}}{k}\overset{L^{2}}{\rightarrow}-\alpha\mathbf{v},\] as \(k\rightarrow\infty\). **Proof of statement (b).** Since \(c_{F}<\sqrt{\frac{1-\theta}{1-\alpha\theta}}\), we have \(\beta<\alpha/\theta\). Thus, from Lemma 4.10, we conclude strong convergence with probability 1, \[\frac{x^{k}}{k}\overset{a.s.}{\rightarrow}-\alpha\mathbf{v},\] as \(k\rightarrow\infty\). Furthermore, since \(\beta<\alpha/\theta\), every condition of Theorem 5.1 is now satisfied; thus, the identical results of Theorem 5.1 hold in this case.

### Application of Theorem 7.1 to PG-EXTRA

Consider the convex optimization problem \[\underset{x\in\mathbb{R}^{d}}{\text{minimize}}\,\sum_{i=1}^{m}f_{i}(x),\tag{8}\] where \(f_{i}\colon\mathbb{R}^{d}\rightarrow\mathbb{R}\) is convex for \(i=1,\ldots,m\). Consider the decentralized algorithm PG-EXTRA [108] \[\begin{split}x_{i}^{k+1}&=\text{Prox}_{\tau f_{i}}\left(\sum_{j=1}^{m}W_{ij}x_{j}^{k}-w_{i}^{k}\right)\\ w_{i}^{k+1}&=w_{i}^{k}+\frac{1}{2}\left(x_{i}^{k}-\sum_{j=1}^{m}W_{ij}x_{j}^{k}\right)\end{split}\] (PG-EXTRA) for \(i=1,2,\ldots,m\). In decentralized optimization, we use a network of agents to run the algorithm. If a pair of agents can communicate, we say that they are connected. For each agent \(i=1,2,\ldots,m\), \(N_{i}\) is the set of agents connected to agent \(i\). The mixing matrix \(W\) is a symmetric \(m\times m\) matrix with \(W_{ij}=0\) whenever \(i\neq j\) and \(j\notin N_{i}\). A randomized coordinate-update version of PG-EXTRA randomly chooses \(i\) among \(1,2,\ldots,m\) to update \(x_{i}^{k}\), while every \(w_{1},w_{2},\ldots,w_{m}\) gets updated at each iteration; see Algorithm 1 below.
```
for \(i\in\{1,2,\ldots,m\}\) do
    Initialize: \(w_{i}=0\), \(x_{i}=0\), \([Wx]_{i}=0\)
end for
while not converged do
    for \(j\in\{1,2,\ldots,m\}\) do
        Update: \(w_{j}=w_{j}+\frac{\alpha}{2}\left(x_{j}-[Wx]_{j}\right)\)
    end for
    Sample: \(\mathcal{I}\)
    for \(i\) such that \(\mathcal{I}_{i}\neq 0\) do
        \(\Delta x_{i}=\text{Prox}_{\tau f_{i}}\left([Wx]_{i}-w_{i}\right)-x_{i}\)
        Update: \(x_{i}=x_{i}+\mathcal{I}_{i}\Delta x_{i}\)
        for \(j\in N_{i}\cup\{i\}\) do
            Send: \(\Delta x_{i}\) from the \(i\)th agent to the \(j\)th agent
            \([Wx]_{j}=[Wx]_{j}+W_{ij}\Delta x_{i}\)
        end for
    end for
end while
```
**Algorithm 1** RC-PG-EXTRA Note that \(\Delta x_{i}\) is the only quantity communicated across agents. \(|N_{i}|\) communications happen at each iteration, while the values \(x_{i},w_{i},[Wx]_{i}\) are stored in the \(i\)th agent. (PG-EXTRA) is a fixed-point iteration with an averaged operator with respect to an \(M\)-norm with \(M\neq\mathbf{I}\). Under the conditions of Corollary 7.3, the condition regarding the Friedrichs angle of Theorem 7.1 holds and Algorithm 1 converges. **Corollary 7.3**.: _Suppose \(\mathcal{I}^{0},\mathcal{I}^{1},\ldots\) is sampled IID from a distribution satisfying the uniform expected step-size condition (2) with \(\alpha\in(0,1]\). Consider Algorithm 1 with \(\mathcal{I}=\mathcal{I}^{0},\mathcal{I}^{1},\ldots\). If the minimum eigenvalue of the symmetric mixing matrix \(W\in\mathbb{R}^{m\times m}\) satisfies_ \[\lambda_{min}(W)>-\frac{\alpha}{2-\alpha},\] _the normalized iterates of Algorithm 1 converge to \(-\alpha\mathbf{v}\), where \(\mathbf{v}\) is the infimal displacement vector of (PG-EXTRA), both in \(L^{2}\) and almost surely._ Proof.: In the proof, we use stack notation for convenience: \(\mathbf{x}\in\mathbb{R}^{m\times d}\) with \[\mathbf{x}=\begin{bmatrix}\text{--- }x_{1}^{\intercal}\text{ ---}\\ \text{--- }x_{2}^{\intercal}\text{ ---}\\ \vdots\\ \text{--- }x_{m}^{\intercal}\text{ ---}\end{bmatrix},\quad\left[W\mathbf{x}\right]_{i}=\sum_{j=1}^{m}W_{ij}x_{j}.\] PG-EXTRA originates from Condat-Vu [109, 110] with \(\mathbf{w}^{k}=\tau U\mathbf{u}^{k}\), where Condat-Vu is \[\begin{split}\mathbf{x}^{k+1}&=\text{Prox}_{\tau f}(W\mathbf{x}^{k}-\tau U\mathbf{u}^{k})\\ \mathbf{u}^{k+1}&=\mathbf{u}^{k}+\frac{1}{\tau}U\mathbf{x}^{k},\end{split}\] which is a fixed-point iteration with an operator that is \(\frac{1}{2}\)-averaged in the \(M\)-norm. Thus, the value of \(\theta\) in (PG-EXTRA) is \[\theta=\frac{1}{2}.\] The matrix \(M\) in (PG-EXTRA) is \[M=\begin{bmatrix}\frac{1}{\tau}I&U\\ U&\tau I\end{bmatrix},\] where \(U\) is a positive semidefinite matrix such that \(U^{2}=\frac{1}{2}\left(I-W\right)\). Note that the inner product in this case is \[\left\langle\begin{bmatrix}\mathbf{x}\\ \mathbf{u}\end{bmatrix},\begin{bmatrix}\mathbf{y}\\ \mathbf{v}\end{bmatrix}\right\rangle_{M}=\text{tr}\left(\begin{bmatrix}\mathbf{x}\\ \mathbf{u}\end{bmatrix}^{T}M\begin{bmatrix}\mathbf{y}\\ \mathbf{v}\end{bmatrix}\right).\] Due to the given inner product, the two subspaces \(V_{1}=\left(\mathbb{R}^{m}\times\left\{0\right\}^{m}\right)^{d}\) and \(V_{2}=\left(\left\{0\right\}^{m}\times\mathbb{R}^{m}\right)^{d}\) are no longer orthogonal to each other. On the other hand, the \(m\) subspaces of \(V_{1}\), \[\left(\left\{0\right\}^{i-1}\times\mathbb{R}\times\left\{0\right\}^{m-i}\times\left\{0\right\}^{m}\right)^{d},\quad i=1,2,\ldots,m,\] are orthogonal to each other.
The inner product between \(V_{1}\) and \(V_{2}\) is constrained as \[\left|\left\langle\begin{bmatrix}\mathbf{x}\\ \mathbf{0}\end{bmatrix},\begin{bmatrix}\mathbf{0}\\ \mathbf{u}\end{bmatrix}\right\rangle_{M}\right|=\left|\text{tr}\left(\mathbf{x}^{T}U\mathbf{u}\right)\right|\leq\lambda_{\max}^{U}\left\|\mathbf{x}\right\|\left\|\mathbf{u}\right\|=\lambda_{\max}^{U}\left\|\begin{bmatrix}\mathbf{x}\\ \mathbf{0}\end{bmatrix}\right\|_{M}\left\|\begin{bmatrix}\mathbf{0}\\ \mathbf{u}\end{bmatrix}\right\|_{M},\] where \(\lambda_{\max}^{U}\) denotes the largest eigenvalue of \(U\); hence \(c_{F}\leq\lambda_{\max}^{U}\). Since \(\lambda_{\min}^{W}>-\frac{\alpha}{2-\alpha}\), \[\lambda_{\max}^{U}=\sqrt{\frac{1-\lambda_{\min}^{W}}{2}}<\sqrt{\frac{1}{2-\alpha}}=\sqrt{\frac{1-\frac{1}{2}}{1-\frac{\alpha}{2}}},\] and we may apply Theorem 7.1 with \[\mathbf{u}\in V_{2}=U_{1},\quad\mathbf{x}\in V_{1}=U_{2},\quad\mathcal{H}_{0}=\mathbb{R}^{m\times d},\quad\mathcal{H}_{1}=\mathcal{H}_{2}=\cdots=\mathcal{H}_{m}=\mathbb{R}^{d}\] and block coordinate updates with the orthogonal blocks \[\left(\left\{0\right\}^{i-1}\times\mathbb{R}\times\left\{0\right\}^{m-i}\times\left\{0\right\}^{m}\right)^{d},\quad i=1,2,\ldots,m,\] to conclude Corollary 7.3. Additionally, we compute the infimal displacement vector of (PG-EXTRA). **Lemma 7.4**.: _The infimal displacement vector \(\mathbf{v}=(\mathbf{v}_{1},\ldots,\mathbf{v}_{m})\) of (PG-EXTRA) is_ \[\mathbf{v}_{i}=\begin{bmatrix}\frac{\tau}{m}\sum_{j=1}^{m}\nabla f_{j}(y_{j})\\ -\frac{1}{2}\left(y_{i}-\sum_{j=1}^{m}W_{ij}y_{j}\right)\end{bmatrix}\] _for \(i=1,\ldots,m\), where \((y_{1},y_{2},\ldots,y_{m})\) is_ \[\operatorname*{argmin}_{y_{1},y_{2},\ldots y_{m}\in\mathbb{R}^{d}}\;\left\|\frac{\tau}{m}\sum_{j=1}^{m}\nabla f_{j}(y_{j})\right\|^{2}+\frac{1}{2}\sum_{i,j=1}^{m}W_{ij}\left\|y_{i}-y_{j}\right\|^{2}.\] Proof.: Recall that PG-EXTRA originated from Condat-Vu, an FPI with \[\mathbf{T}\begin{bmatrix}\mathbf{x}\\ \mathbf{u}\end{bmatrix}=\begin{bmatrix}\operatorname*{Prox}_{\tau f}(W\mathbf{x}-\tau U\mathbf{u})\\ \mathbf{u}+\frac{1}{\tau}U\mathbf{x}\end{bmatrix},\] which is a non-expansive mapping in the \(M\)-norm, where \[M=\begin{bmatrix}\frac{1}{\tau}I&U\\ U&\tau I\end{bmatrix}.\] Finding the infimal displacement vector of \(\mathbf{T}\) is equivalent to \[\operatorname*{argmin}_{\mathbf{x},\mathbf{u}}\left\|\begin{bmatrix}\Delta\mathbf{x}\\ \Delta\mathbf{u}\end{bmatrix}\right\|_{M}^{2},\quad\begin{bmatrix}\Delta\mathbf{x}\\ \Delta\mathbf{u}\end{bmatrix}=(\mathbf{I}-\mathbf{T})\begin{bmatrix}\mathbf{x}\\ \mathbf{u}\end{bmatrix}=\begin{bmatrix}\mathbf{x}-\operatorname*{Prox}_{\tau f}(W\mathbf{x}-\tau U\mathbf{u})\\ -\frac{1}{\tau}U\mathbf{x}\end{bmatrix}.\] From \(\Delta\mathbf{u}=-\frac{1}{\tau}U\mathbf{x}\), \[\begin{split}\left\|\begin{bmatrix}\Delta\mathbf{x}\\ \Delta\mathbf{u}\end{bmatrix}\right\|_{M}^{2}&=\frac{1}{\tau}\left\|\Delta\mathbf{x}\right\|^{2}+\tau\left\|\Delta\mathbf{u}\right\|^{2}+2\mathrm{tr}\left(\Delta\mathbf{x}^{T}U\Delta\mathbf{u}\right)\\&=\frac{1}{\tau}\left\|\Delta\mathbf{x}\right\|^{2}+\frac{1}{\tau}\left\|U\mathbf{x}\right\|^{2}-\frac{2}{\tau}\mathrm{tr}\left(\Delta\mathbf{x}^{T}U^{2}\mathbf{x}\right)\\&=\frac{1}{\tau}\left[\left\|\Delta\mathbf{x}\right\|^{2}+\frac{1}{2}\mathrm{tr}\left(\mathbf{x}^{T}(I-W)\mathbf{x}\right)-\mathrm{tr}\left(\Delta\mathbf{x}^{T}(I-W)\mathbf{x}\right)\right].\end{split}\] When \(\Delta\mathbf{x}=\mathbf{x}_{\mathbf{C}}+\mathbf{x}_{\perp}\), where \(\mathbf{x}_{\mathbf{C}}=\mathbf{1}\tilde{x}^{T}\) for some \(\tilde{x}\in\mathbb{R}^{d}\) and \(\mathbf{1}^{T}\mathbf{x}_{\perp}=\mathbf{0}\), we have \[\Delta\mathbf{x}=\mathbf{x}-\operatorname*{Prox}_{\tau f}(W\mathbf{x}-\tau U\mathbf{u})\;\Leftrightarrow\;\mathbf{x}_{\mathbf{C}}=(\mathbf{x}-\mathbf{x}_{\perp})-\operatorname*{Prox}_{\tau f}(W\left(\mathbf{x}-\mathbf{x}_{\perp}\right)-(\tau U\mathbf{u}-W\mathbf{x}_{\perp})).\] Since \[\left\{\tau U\mathbf{u}:\mathbf{u}\in\mathbb{R}^{m\times d}\right\}=\left\{\mathbf{w}\in\mathbb{R}^{m\times d}:\mathbf{1}^{T}\mathbf{w}=\mathbf{0}\right\},
\quad\mathbf{1}^{T}W\mathbf{x}_{\perp}=\mathbf{1}^{T}\mathbf{x}_{\perp}=\mathbf{0},\] we have \(\tau U\mathbf{u}-W\mathbf{x}_{\perp}=\tau U\tilde{\mathbf{u}}\) for some \(\tilde{\mathbf{u}}\). Thus, \[\begin{bmatrix}\Delta\left(\mathbf{x}-\mathbf{x}_{\perp}\right)\\ \Delta\tilde{\mathbf{u}}\end{bmatrix}=\begin{bmatrix}\mathbf{x}_{\mathbf{C}}\\ -\frac{1}{\tau}U\left(\mathbf{x}-\mathbf{x}_{\perp}\right)\end{bmatrix},\] \[\left\|\begin{bmatrix}\Delta\left(\mathbf{x}-\mathbf{x}_{\perp}\right)\\ \Delta\tilde{\mathbf{u}}\end{bmatrix}\right\|_{M}^{2}=\frac{1}{\tau}\left[\left\|\mathbf{x}_{\mathbf{C}}\right\|^{2}+\frac{1}{2}\mathrm{tr}\left(\left(\mathbf{x}-\mathbf{x}_{\perp}\right)^{T}\left(I-W\right)\left(\mathbf{x}-\mathbf{x}_{\perp}\right)\right)\right].\] Due to the inequality \(\left\|\mathbf{x}_{\perp}\right\|^{2}\geq\frac{1}{2}\mathrm{tr}\left(\mathbf{x}_{\perp}{}^{T}(I-W)\mathbf{x}_{\perp}\right)\), which holds with equality only when \(\mathbf{x}_{\perp}=\mathbf{0}\) since \(\lambda_{\min}(W)>-1\), \[\begin{split}\left\|\begin{bmatrix}\Delta\mathbf{x}\\ \Delta\mathbf{u}\end{bmatrix}\right\|_{M}^{2}&=\frac{1}{\tau}\left[\left\|\Delta\mathbf{x}\right\|^{2}+\frac{1}{2}\mathrm{tr}\left(\mathbf{x}^{T}(I-W)\mathbf{x}\right)-\mathrm{tr}\left(\Delta\mathbf{x}^{T}(I-W)\mathbf{x}\right)\right]\\&=\frac{1}{\tau}\left[\left\|\mathbf{x}_{\mathbf{C}}\right\|^{2}+\left\|\mathbf{x}_{\perp}\right\|^{2}+\frac{1}{2}\mathrm{tr}\left(\mathbf{x}^{T}(I-W)\mathbf{x}\right)-\mathrm{tr}\left(\mathbf{x}_{\perp}{}^{T}(I-W)\mathbf{x}\right)\right]\\&\geq\left\|\begin{bmatrix}\Delta\left(\mathbf{x}-\mathbf{x}_{\perp}\right)\\ \Delta\tilde{\mathbf{u}}\end{bmatrix}\right\|_{M}^{2},\end{split}\] with equality only when \(\mathbf{x}_{\perp}=\mathbf{0}\). Thus, the infimal displacement vector \(\tilde{\mathbf{v}}\) of Condat-Vu is of the form \[\tilde{\mathbf{v}}=\begin{bmatrix}\mathbf{v}_{x}\\ \mathbf{v}_{u}\end{bmatrix},\quad\mathbf{v}_{x}=\mathbf{1}\tilde{x}^{T},\] for some \(\tilde{x}\in\mathbb{R}^{d}\). Now we may consider only the case where \(\Delta\mathbf{x}=\mathbf{1}x^{T}\). In this case, \[\left\|\begin{bmatrix}\Delta\mathbf{x}\\ \Delta\mathbf{u}\end{bmatrix}\right\|_{M}^{2}=\frac{1}{\tau}\left[\left\|\Delta\mathbf{x}\right\|^{2}+\frac{1}{2}\mathrm{tr}\left(\mathbf{x}^{T}(I-W)\mathbf{x}\right)\right],\] and the relation \[\mathbf{1}x^{T}=\mathbf{x}-\operatorname*{Prox}_{\tau f}(W\mathbf{x}-\tau U\mathbf{u})\] must hold.
This relation is equivalent to \[0\in\tau\nabla f(\mathbf{x}-\mathbf{1}x^{T})+\mathbf{x}-\mathbf{1}x^{T}-W\mathbf{x}+\tau U\mathbf{u}.\] Considering the \(\mathbf{1}\) direction, \[0\in\tau\mathbf{1}^{T}\nabla f(\mathbf{x}-\mathbf{1}x^{T})+\mathbf{1}^{T}\mathbf{x}-mx^{T}-\mathbf{1}^{T}\mathbf{x}.\] Setting the new variable \(\mathbf{y}=\mathbf{x}-\mathbf{1}x^{T}\), \(x^{T}\) is expressed as \[x^{T}\in\tau\frac{1}{m}\mathbf{1}^{T}\nabla f(\mathbf{y}),\] which makes \[\begin{split}\left\|\begin{bmatrix}\Delta\mathbf{x}\\ \Delta\mathbf{u}\end{bmatrix}\right\|_{M}^{2}&=\frac{1}{\tau}\left[\left\|\Delta\mathbf{x}\right\|^{2}+\frac{1}{2}\mathrm{tr}\left(\mathbf{y}^{T}(I-W)\mathbf{y}\right)\right]\\&=\frac{1}{\tau}\left[\tau^{2}\frac{1}{m}\left\|\mathbf{1}^{T}\nabla f(\mathbf{y})\right\|^{2}+\frac{1}{2}\mathrm{tr}\left(\mathbf{y}^{T}(I-W)\mathbf{y}\right)\right].\end{split}\] Thus, the infimal displacement vector of Condat-Vu is \[\tilde{\mathbf{v}}=\begin{bmatrix}\tau\frac{1}{m}\mathbf{1}\mathbf{1}^{T}\nabla f(\mathbf{y})\\ -\frac{1}{\tau}U\mathbf{y}\end{bmatrix},\] where \(\mathbf{y}\) is \[\operatorname*{argmin}_{\mathbf{y}\in\mathbb{R}^{m\times d}}\;\left[\tau^{2}\,\frac{1}{m}\left\|\mathbf{1}^{T}\nabla f(\mathbf{y})\right\|^{2}+\frac{1}{2}\mathrm{tr}\left(\mathbf{y}^{T}(I-W)\mathbf{y}\right)\right].\] Thus, the infimal displacement vector of (PG-EXTRA) is \[\mathbf{v}_{i}=\begin{bmatrix}\frac{\tau}{m}\sum_{j=1}^{m}\nabla f_{j}(y_{j})\\ -\frac{1}{2}\left(y_{i}-\sum_{j=1}^{m}W_{ij}y_{j}\right)\end{bmatrix}\] for \(i=1,\ldots,m\), where \((y_{1},y_{2},\ldots,y_{m})\) is \[\operatorname*{argmin}_{y_{1},y_{2},\ldots y_{m}\in\mathbb{R}^{d}}\;\left\|\frac{\tau}{m}\sum_{j=1}^{m}\nabla f_{j}(y_{j})\right\|^{2}+\frac{1}{2}\sum_{i,j=1}^{m}W_{ij}\left\|y_{i}-y_{j}\right\|^{2}.\]

### Experiment on an infeasible case in PG-EXTRA

We perform an experiment on an instance of (8) using Algorithm 1. Figure 3 shows that RC-PG-EXTRA, Algorithm 1, converges to the infimal displacement vector faster in terms of communication count. Specifically, define \(f_{i}\colon\mathbb{R}^{2}\to\mathbb{R}\cup\{+\infty\}\) for \(i=1,\cdots,m\) as \[f_{i}(x)=\left\{\begin{array}{ll}0&\mbox{if }x\in C_{i}\\ \infty&\mbox{otherwise,}\end{array}\right.\] with \(C_{1}=\left\{(x,y)\mid x\leq-10\right\}\) and \(C_{2}=C_{3}=\cdots=C_{m}=\left\{(x,y)\mid x>0,\,xy\leq-1\right\}\). The network is depicted in Figure 3. We use the Metropolis constant edge weight matrix [111, 112] as our mixing matrix \(W\). The Metropolis mixing matrix is a symmetric matrix of the form \[W_{ij}=\begin{cases}\frac{1}{\max(|N_{i}|,|N_{j}|)+\epsilon}&\text{if }j\in N_{i}\\ 1-\sum_{l\in N_{i}}W_{il}&\text{if }j=i\\ 0&\text{otherwise}\end{cases}\] with \(\epsilon>0\). We choose \(\epsilon=0.05\) in our experiment.
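For reference, the Metropolis weights are straightforward to assemble from the adjacency structure. The following sketch is our own illustration; the specific topology is an assumption modeled on the description of Figure 3, with agent 1 attached only to a dense cluster formed by agents 2 through \(m\) (arrays are 0-indexed, so agent 1 corresponds to index 0):

```python
import numpy as np

def metropolis_mixing(adj, eps=0.05):
    """Metropolis constant edge weight mixing matrix.

    adj: list of neighbor sets with adj[i] = N_i (excluding i itself).
    W_ij = 1 / (max(|N_i|, |N_j|) + eps) for j in N_i, diagonal entries
    chosen so that each row sums to one, and zero elsewhere.
    """
    m = len(adj)
    W = np.zeros((m, m))
    for i in range(m):
        for j in adj[i]:
            W[i, j] = 1.0 / (max(len(adj[i]), len(adj[j])) + eps)
        W[i, i] = 1.0 - W[i].sum()
    return W

m = 40
adj = [set() for _ in range(m)]
for i in range(1, m):
    adj[i] = set(range(1, m)) - {i}      # indices 1..m-1 form a dense cluster
adj[0] = {1}                             # index 0 talks only to index 1
adj[1] |= {0}
W = metropolis_mixing(adj)
assert np.allclose(W, W.T) and np.allclose(W.sum(axis=1), 1.0)
```

Symmetry and unit row sums hold by construction, which is what the convergence condition of Corollary 7.3 is stated against.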
In this setting, the infimal displacement vector has the analytical form \[\mathbf{v}_{i}=\frac{b_{i}}{2(m-1+\epsilon)}\begin{bmatrix}\mathbf{0}\\ u_{1}-u_{2}\end{bmatrix},\quad b_{i}=\begin{cases}1&\text{if }i=1\\ -1&\text{if }i=2\\ 0&\text{if }i>2,\end{cases}\] where \(u_{1},u_{2}\in\mathbb{R}^{d}\) are the vectors defined as \[(u_{1},u_{2})=\operatorname*{argmin}_{u_{1}\in\overline{C_{1}},u_{2}\in\overline{C_{2}}}\|u_{1}-u_{2}\|.\] **Calculation of the infimal displacement vector.** From Lemma 7.4, the infimal displacement vector of (PG-EXTRA) is \[\mathbf{v}_{i}=\begin{bmatrix}\frac{\tau}{m}\sum_{j=1}^{m}\nabla f_{j}(y_{j})\\ -\frac{1}{2}\left(y_{i}-\sum_{j=1}^{m}W_{ij}y_{j}\right)\end{bmatrix},\] where \((y_{1},y_{2},\ldots,y_{m})\) is \[(y_{1},y_{2},\ldots,y_{m})=\operatorname*{argmin}_{y_{1},y_{2},\ldots y_{m}\in\mathbb{R}^{d}}\,\left\|\frac{\tau}{m}\sum_{j=1}^{m}\nabla f_{j}(y_{j})\right\|^{2}+\frac{1}{2}\sum_{i,j=1}^{m}W_{ij}\left\|y_{i}-y_{j}\right\|^{2}.\]

Figure 3: (Left) Network used in our experiment, consisting of \(m=40\) agents, with agents \(2,\ldots,40\) densely connected. (Right) Graph of \(\left\|\frac{1}{k}\left(\mathbf{x}^{k},\mathbf{u}^{k}\right)+\alpha\mathbf{v}\right\|^{2}\) against the communication count for (PG-EXTRA) and RC-PG-EXTRA, Algorithm 1.

Note that the subgradient of the indicator function is the normal cone operator \[\nabla\delta_{C}(x)=\mathbf{N}_{C}(x)=\begin{cases}\emptyset&\text{if }x\notin C\\ \{y\mid\left\langle y,z-x\right\rangle\leq 0,\ \forall z\in C\}&\text{if }x\in C.\end{cases}\] Thus, with the choice \(\nabla\delta_{C}(y)=0\), the problem for \((y_{1},y_{2},\ldots,y_{m})\) is equivalent to \[(y_{1},y_{2},\ldots,y_{m})=\operatorname*{argmin}_{y_{1}\in C_{1},y_{2},\ldots y_{m}\in C_{2}}\frac{1}{2}\sum_{i,j=1}^{m}W_{ij}\left\|y_{i}-y_{j}\right\|^{2}.\] Since \[\sum_{i,j=1}^{m}W_{ij}\left\|y_{i}-y_{j}\right\|^{2}=\frac{1}{m-1+\epsilon}\left\|y_{1}-y_{2}\right\|^{2}+\sum_{j>2}^{m}\frac{1}{m-1+\epsilon}\left\|y_{2}-y_{j}\right\|^{2}+\sum_{i,j>2,i\neq j}^{m}\frac{1}{m-2+\epsilon}\left\|y_{i}-y_{j}\right\|^{2},\] the minimizer \((y_{1},y_{2},\ldots,y_{m})\) takes the value \(y_{2}=y_{3}=\cdots=y_{m}\) with \[(y_{1},y_{2})=\operatorname*{argmin}_{y_{1}\in C_{1},y_{2}\in C_{2}}\,\left\|y_{1}-y_{2}\right\|^{2}.\] Since we chose \(\nabla\delta_{C_{1}}(y_{1})=0\) and \(\nabla\delta_{C_{2}}(y_{2})=0\), \[\mathbf{v}_{i}=\begin{bmatrix}\mathbf{0}\\ -\frac{1}{2}\left(y_{i}-\sum_{j=1}^{m}W_{ij}y_{j}\right)\end{bmatrix}=\begin{bmatrix}\mathbf{0}\\ -\frac{1}{2}\sum_{j\neq i}^{m}W_{ij}\left(y_{i}-y_{j}\right)\end{bmatrix}.\] With \(y_{2}=y_{3}=\cdots=y_{m}\), \[\mathbf{v}_{1}=\begin{bmatrix}\mathbf{0}\\ \frac{1}{2(m-1+\epsilon)}\left(y_{1}-y_{2}\right)\end{bmatrix},\quad\mathbf{v}_{2}=\begin{bmatrix}\mathbf{0}\\ \frac{1}{2(m-1+\epsilon)}\left(y_{2}-y_{1}\right)\end{bmatrix},\quad\mathbf{v}_{i}=\begin{bmatrix}\mathbf{0}\\ \mathbf{0}\end{bmatrix},\quad i>2.\] Thus, the infimal displacement vector is \[u=\operatorname*{argmin}_{u\in\overline{\{u_{1}-u_{2}\,|\,u_{1}\in C_{1},u_{2}\in C_{2}\}}}\,\left\|u\right\|,\quad\mathbf{v}_{i}=\frac{b_{i}}{2(m-1+\epsilon)}\begin{bmatrix}\mathbf{0}\\ u\end{bmatrix},\quad b_{i}=\begin{cases}1&\text{if }i=1\\ -1&\text{if }i=2\\ 0&\text{if }i>2.\end{cases}\] The distribution of \(\mathcal{I}\) used for the experiment is \[P\left(\mathcal{I}\right)=\begin{cases}0.3&\text{if }\mathcal{I}=\frac{0.7}{0.3\times(m-1)}e_{1}\\ \frac{0.7}{m-1}&\text{if }\mathcal{I}=e_{i}\text{ for some }i\geq 2\\ 0&\text{otherwise,}\end{cases}\] where 
\(e_{i}\in\mathbb{R}^{m}\) is the \(i\)th standard unit vector.

## 8 Conclusion

This work analyzes the asymptotic behavior of (RC-FPI) and establishes convergence of the normalized iterates to the infimal displacement vector, which allows us to use the normalized iterates to test for infeasibility. We also extend our analyses to the setup with non-orthogonal bases, thereby making our results applicable to the decentralized optimization algorithm (PG-EXTRA). One possible direction of future work would be to use variance reduction techniques in the style of, say, SVRG [113] or [114] to improve the convergence rate. Such techniques allow stochastic-gradient-type methods to exhibit a rate faster than \(\mathcal{O}(1/k)\), and may be applicable to the coordinate-update setup to accelerate infeasibility detection. Acknowledgments. We thank Kibeom Myoung for providing careful reviews and valuable feedback. ## Declarations The authors have been funded by the National Research Foundation of Korea and the Samsung Science and Technology Foundation and were affiliated with Seoul National University while conducting the research presented in this manuscript. All of the papers related to this submission have been adequately referenced and discussed in the prior works section.
2303.10950
Symmetric-conjugate splitting methods for linear unitary problems
We analyze the preservation properties of a family of reversible splitting methods when they are applied to the numerical time integration of linear differential equations defined in the unitary group. The schemes involve complex coefficients and are conjugated to unitary transformations for sufficiently small values of the time step-size. New and efficient methods up to order six are constructed and tested on the linear Schr\"odinger equation.
Joackim Bernier, Sergio Blanes, Fernando Casas, Alejandro Escorihuela-Tomàs
2023-03-20T09:18:18Z
http://arxiv.org/abs/2303.10950v2
# Symmetric-conjugate splitting methods for linear unitary problems ###### Abstract We analyze the preservation properties of a family of reversible splitting methods when they are applied to the numerical time integration of linear differential equations defined in the unitary group. The schemes involve complex coefficients and are conjugated to unitary transformations for sufficiently small values of the time step-size. New and efficient methods up to order six are constructed and tested on the linear Schrodinger equation. **Keywords**: Splitting methods, complex coefficients, unitary problems **MSC numbers**: 65L05, 65L20, 65M70 ## 1 Introduction We are concerned in this work with the numerical integration of the linear ordinary differential equation \[i\frac{du}{dt}+Hu=0,\qquad\quad u(0)=u_{0},\tag{1.1}\] where \(\,u\in\mathbb{C}^{N}\,\) and \(\,H\in\mathbb{R}^{N\times N}\,\) is a real matrix. A particular example of paramount importance leading to eq. (1.1) is the time-dependent Schrodinger equation once it is discretized in space. In that case \(\,H\,\) (related to the Hamiltonian of the system) can typically be split into two parts, \(\,H=A+B\,\). The equation \[y^{\prime\prime}+Ky=0\] with \(\,y\in\mathbb{R}^{d}\,\), \(\,K\in\mathbb{R}^{d\times d}\,\) can also be recast in the form (1.1) if the matrix \(\,K\,\) satisfies certain conditions [6]. Although the solution of (1.1) is given by \(\,u(t)=\mathrm{e}^{itH}u_{0}\,\), very often the dimension of \(\,H\,\) is so large that directly evaluating the action of the matrix exponential on \(\,u_{0}\,\) is computationally very expensive, and so other approximation techniques are desirable. When \(\,H=A+B\,\) and \(\,\mathrm{e}^{itA}u_{0}\,\), \(\,\mathrm{e}^{itB}u_{0}\,\) can be efficiently evaluated, then splitting methods constitute a natural option [17]. They are of the form \[S_{h}=\mathrm{e}^{iha_{0}A}\,\mathrm{e}^{ihb_{0}B}\,\cdots\,\mathrm{e}^{ihb_{2n-1}B}\,\mathrm{e}^{iha_{2n}A}\tag{1.2}\] for a time step \(\,h\,\). Here \(\,a_{j}\,\), \(\,b_{j}\,\) are coefficients chosen in such a way that \(\,S_{h}=\mathrm{e}^{ihH}+\mathcal{O}(h^{p+1})\,\) when \(\,h\to 0\,\) for a given \(\,p\geq 1\,\). After applying the Baker-Campbell-Hausdorff (BCH) formula, \(\,S_{h}\,\) can be formally expressed as \(\,S_{h}=\exp\left(ihH_{h}\right)\,\), with \(\,iH_{h}=iH_{h}^{o}+H_{h}^{e}\,\) and \[\begin{split}H_{h}^{o}&=(g_{1,1}A+g_{1,2}B)+h^{2}(g_{3,1}[A,[A,B]]+g_{3,2}[B,[A,B]])+\ldots\\ H_{h}^{e}&=hg_{2,1}[A,B]+h^{3}(g_{4,1}[A,[A,[A,B]]]+\ldots)+\ldots\end{split}\] Here \(\,[A,B]:=AB-BA\,\), \(\,g_{k,j}\,\) are polynomials of degree \(\,k\,\) in the coefficients \(\,a_{i},b_{i}\,\) verifying \(\,g_{1,1}=g_{1,2}=1\,\) (for consistency), and achieving order \(\,p\,\) requires \(\,g_{k,j}=0\,\) for \(\,k=2,\ldots,p\,\) and all \(\,j\,\). If \(\,A\,\) and \(\,B\,\) are real symmetric matrices, then \(\,[A,B]\,\) is skew-symmetric and \(\,[A,[A,B]]\,\) is symmetric. In general, all nested commutators with an even number of matrices \(\,A,B\,\) are skew-symmetric and those containing an odd number are symmetric, so that \(\,(H_{h}^{o})^{T}=H_{h}^{o}\,\) and \(\,(H_{h}^{e})^{T}=-H_{h}^{e}\,\). When the coefficients \(\,a_{j},b_{j}\,\) are real, then \(\,g_{k,j}\,\) are also real and therefore \(\,S_{h}=\mathrm{e}^{ihH_{h}}\,\) is a unitary matrix.
In addition, if the composition (1.2) is palindromic, i.e., \(\,a_{2n-j}=a_{j}\,\), \(\,b_{2n-1-j}=b_{j}\,\), \(\,j=1,2,\ldots\,\), then \(\,g_{2k,j}=0\,\) and \(\,H_{-h}=H_{h}\,\), thus leading to a time-reversible method, \(\,S_{-h}=S_{h}^{-1}\,\). In other words, if \(\,u_{n}\,\) denotes the approximation at time \(\,t=nh\,\), then \(\,S_{-h}(u_{n+1})=u_{n}\,\). As a result, one gets a very favorable long-time behavior of the error for this type of integrator [16]. Thus, in particular, \[\mathcal{M}(u):=|u|^{2}\qquad\qquad\text{(norm)}\] and \[\mathcal{H}(u):=\bar{u}^{T}Hu\qquad\qquad\text{(expected value of the energy)}\] are almost globally preserved. Recently, some preliminary results obtained with a different class of splitting methods (1.2) have been reported when they are applied to the semi-discretized Schrodinger equation [4]. These schemes are characterized by the fact that the coefficients in (1.2) are _complex numbers_. Notice, however, that in this case the polynomials \(\,g_{k,j}\in\mathbb{C}\,\), so that \(\,S_{h}=\mathrm{e}^{ihH_{h}}\,\) is _not_ unitary in general. This is so even for palindromic compositions, since \(\,g_{2\ell+1,j}\,\) are complex anyway. There is nevertheless a special symmetry in the coefficients, namely \[a_{2n-j}=\overline{a}_{j}\qquad\text{ and }\qquad b_{2n-1-j}=\overline{b}_{j},\qquad j=1,2,\ldots,\tag{1.3}\] worth considering. Methods of this class can properly be called _symmetric-conjugate_ compositions. In that case, a straightforward computation shows that the resulting composition satisfies \[\overline{S}_{h}=S_{h}^{-1}\tag{1.4}\] for real matrices \(A\) and \(B\), and in addition \[(\overline{S}_{h})^{T}=S_{-h}\tag{1.5}\] if \(A\) and \(B\) are real symmetric. In consequence, \[iH_{h}=i(H+\hat{H}_{h}^{o})+i\hat{H}_{h}^{e}\] for certain real matrices \(\hat{H}_{h}^{o}\) (symmetric) and \(\hat{H}_{h}^{e}\) (skew-symmetric). Since \(i\hat{H}_{h}^{e}\) is not real, unitarity is lost. In spite of that, the examples collected in [4] seem to indicate that this class of schemes behaves like compositions with real coefficients as regards preservation properties, at least for sufficiently small values of \(h\). Intuitively, this can be traced back to the fact that \(i\hat{H}_{h}^{e}={\cal O}(h^{p})\) and is purely imaginary. One of the purposes of this paper is to provide a rigorous justification of this behavior by generalizing the treatment done in [4] for the problem (1.1) defined in the group SU(2), i.e., when \(H\) is a linear combination of Pauli matrices. In particular, we prove here that, typically, _any consistent symmetric-conjugate splitting method applied to (1.1) when \(H\) is real symmetric is conjugated to a unitary method for sufficiently small values of \(h\)_. In fact, this property can be related to the reversibility of the map \(S_{h}\) with respect to complex conjugation, as specified next. Let \(C\) denote complex conjugation, \(C(u)=\overline{u}\) for all \(u\in\mathbb{C}^{N}\). Then, the differential equation (1.1) is \(C\)-reversible, in the sense that \(C(iHu)=-iH(C(u))\) [12, section V.1]. Moreover, since (1.4) holds, then \(C\circ S_{h}=S_{h}^{-1}\circ C\). In other words, the map \(S_{h}(u)\) is \(C\)-reversible [12] (or reversible for short). Notice that this also holds for palindromic compositions (1.2) with real coefficients. In the sequel we will refer to compositions verifying (1.3) as symmetric-conjugate or reversible methods.
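The reversibility property (1.4) is easy to verify numerically. The following sketch is our own illustration (the composition and its coefficients are chosen ad hoc to satisfy (1.3) and consistency; it is not one of the production schemes introduced later) and checks that \(\overline{S}_{h}=S_{h}^{-1}\) for arbitrary real matrices \(A\) and \(B\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N = 8
A = rng.standard_normal((N, N))     # any real matrices A, B suffice for (1.4)
B = rng.standard_normal((N, N))
h = 0.1

# A consistent symmetric-conjugate composition (1.3): a_2 = conj(a_0),
# b_1 = conj(b_0), middle coefficient real; the sums of a's and b's equal 1.
a = [0.25 + 0.1j, 0.5, 0.25 - 0.1j]
b = [0.5 + 0.3j, 0.5 - 0.3j]

S = np.eye(N, dtype=complex)
for aj, bj in zip(a[:-1], b):
    S = S @ expm(1j * h * aj * A) @ expm(1j * h * bj * B)
S = S @ expm(1j * h * a[-1] * A)

# Property (1.4): conj(S_h) = S_h^{-1}, i.e. conj(S_h) @ S_h = I.
print(np.linalg.norm(np.conj(S) @ S - np.eye(N)))   # ~ 1e-14
```

The identity holds exactly (up to round-off) because conjugating each factor of the product and using (1.3) reproduces the factors of \(S_{h}^{-1}\) in reverse order; no smallness of \(h\) is needed for (1.4) itself.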
Splitting and composition methods with complex coefficients also have interesting properties concerning the magnitude of the successive terms in the asymptotic expansion of the local truncation error. In contrast to methods with real coefficients, higher-order error terms in the expansion of a given method have essentially the same size as lower-order terms [3]. In addition, an integrator of a given order with the minimum number of flows typically achieves good efficiency, whereas with real coefficients one has to introduce additional parameters (and therefore more flows in the composition) for optimization purposes. It makes sense, then, to apply this class of schemes to equation (1.1) and to compare their performance with splitting methods involving real coefficients, since in any case the presence of complex coefficients does not lead to an increase in the overall computational cost. The structure of the paper goes as follows. In section 2 we provide further experimental evidence of the preservation properties exhibited by \(C\)-reversible splitting methods applied to different classes of matrices \(H\) by considering several illustrative numerical examples. In section 3 we analyze in detail this type of methods and validate theoretically the observed results by stating two theorems concerning consistent reversible maps. Then, in section 4 we present new symmetric-conjugate schemes up to order 6 specifically designed for the semi-discretized Schrodinger equation and other problems with the same algebraic structure. Finally, these new methods are tested in section 5 for a specific potential.

## 2 Symmetric-conjugate splitting methods in practice: some illustrative examples

To illustrate the preservation properties exhibited by symmetric-conjugate (or reversible) methods when applied to (1.1) with \(H=A+B\), we consider some low order compositions of this type. Specifically, the tests will be carried out with the following schemes: Order 3. The simplest symmetric-conjugate method corresponds to \[S_{h}^{[3,1]}={\rm e}^{ih\overline{b}_{0}B}\,{\rm e}^{ih\overline{a}_{1}A}\,{\rm e}^{ihb_{1}B}\,{\rm e}^{iha_{1}A}\,{\rm e}^{ihb_{0}B},\tag{2.1}\] with \(a_{1}=\frac{1}{2}+i\frac{\sqrt{3}}{6}\), \(b_{0}=\frac{a_{1}}{2}\), \(b_{1}=\frac{1}{2}\), and was first obtained in [1]. In addition, and as a representative of the schemes considered in section 4, we also use the following method, with \(a_{j}>0\) and \(b_{j}\in\mathbb{C}\), \(\Re(b_{j})>0\): \[S_{h}^{[3,2]}={\rm e}^{ih\overline{b}_{0}B}\,{\rm e}^{iha_{1}A}\,{\rm e}^{ih\overline{b}_{1}B}\,{\rm e}^{iha_{2}A}\,{\rm e}^{ihb_{1}B}\,{\rm e}^{iha_{1}A}\,{\rm e}^{ihb_{0}B},\tag{2.2}\] where \[a_{1}=\frac{3}{10},\quad a_{2}=\frac{2}{5},\quad b_{0}=\frac{13}{126}-i\frac{\sqrt{59/2}}{63},\quad b_{1}=\frac{25}{63}+i\frac{5\sqrt{59/2}}{126}.\] Order 4. The scheme has the same exponentials as (2.2), \[S_{h}^{[4]}={\rm e}^{ih\overline{b}_{0}B}\,{\rm e}^{ih\overline{a}_{1}A}\,{\rm e}^{ih\overline{b}_{1}B}\,{\rm e}^{iha_{2}A}\,{\rm e}^{ihb_{1}B}\,{\rm e}^{iha_{1}A}\,{\rm e}^{ihb_{0}B},\tag{2.3}\] but now \[a_{1}=\frac{1}{12}(3+i\sqrt{15}),\quad a_{2}=\frac{1}{2},\quad b_{0}=\frac{a_{1}}{2},\quad b_{1}=\frac{1}{24}(9+i\sqrt{15}).\] When the matrix \(H\) results from a space discretization of the time-dependent Schrodinger equation (for instance, by means of a pseudo-spectral method), then it is real symmetric and \(A\), \(B\) are also symmetric (in fact, \(B\) is diagonal).
It makes sense, then, start analyzing this situation, where, in addition, _all the eigenvalues of \(H\) are simple_. To proceed, we generate a \(N\times N\) real matrix with \(N=10\) and uniformly distributed elements in the interval \((0,1)\), and take \(H\) as its symmetric part. The symmetric matrix \(A\) is generated analogously, and finally we fix \(B=H-A\). Next we compute the approximations obtained by \(S_{h}^{[3,1]}\), \(S_{h}^{[3,2]}\) and \(S_{h}^{[4]}\) for different values of \(h\), determine their eigenvalues \(\omega_{j}\) and compute the quantity \[D_{h}=\max_{1\leq j\leq N}(||\omega_{j}|-1|)\] for each \(h\). Finally, we depict \(D_{h}\) as a function of \(h\). Figure 1 (left) is representative of the results obtained in all cases we have tested: all \(|\omega_{j}|\) are 1 (except round-off) for some interval \(0<h<h^{*}\), and then there is always some \(\omega_{\ell}\) such that \(|\omega_{\ell}|>1\). In other words, \(S_{h}^{[3,1]}\), \(S_{h}^{[3,2]}\) and \(S_{h}^{[4]}\) behave as unitary maps in this interval. This is precisely what happens in the group SU(2), as shown in [4]. The right panel of Figure 1 is obtained in the same situation (i.e., \(H\) real symmetric with simple eigenvalues), but now both \(A\) and \(B\) are no longer symmetric: essentially the same behavior as before is observed. Of course, when \(h<h^{*}\), both the norm of \(u\), \({\cal M}(u)\), and the expected value of the energy, \({\cal H}(u)\) are preserved for long times, as shown in [4]. Our next simulation concerns a real (but not symmetric) matrix \(H\) with all its eigenvalues _real and simple_. Again, there exists a threshold \(h^{*}>0\) such that for \(h<h^{*}\) the schemes render unitary approximations. This is clearly visible in Figure 2 (left panel). If we consider instead a completely arbitrary real matrix \(H\), then the outcome is rather different: \(D_{h}>0\) for any \(h>0\) (right panel; for this example \(D_{h}=9.79\cdot 10^{-4}\) already for \(h=0.001\)). Next we illustrate the situation when _the real matrix \(H\) has multiple eigenvalues but is still diagonalizable_. As before, we consider first the analogue of Figure 1, namely: \(H\) is symmetric, with \(A\) and symmetric matrices (Figure 3, left panel) and \(A\) and \(B\) are real, but not symmetric (right panel). In the first case we notice that, whereas all the eigenvalues of the approximations rendered by \(S_{h}^{[3,1]}\) and \(S_{h}^{[4]}\) still have absolute value 1 for some interval \(0<h<h^{*}\), this is clearly not the case of \(S_{h}^{[3,2]}\). If, on the other hand, the splitting is done is such a way that \(A\) and \(B\) are not symmetric (but still real), then \(D_{h}>0\) even for very small values of \(h\). The same behavior is observed when \(H\) is taken as a real (but not symmetric), diagonalizable matrix with multiple real eigenvalues. The different phenomena exhibited by these examples require then a detailed numerical analysis of the class of schemes involved, trying to explain in particular the role played by the eigenvalues of the matrix \(H\) in the final outcome, as well as the different behavior of \(S_{h}^{[3,1]}\) and \(S_{h}^{[3,2]}\). This will be the subject of the Figure 2: Same as Figure 1 when \(H=A+B\) is a real (but not symmetric) matrix. Left: the eigenvalues of \(H\) are real and simple. Right: the eigenvalues of \(H\) are arbitrary. 
Figure 1: Absolute value of the largest eigenvalue of the approximations \(S_{h}^{[3,1]}\) (black solid line), \(S_{h}^{[3,2]}\) (red dash-dotted line) and \(S_{h}^{[4]}\) (blue dashed line) for different values of \(h\) when \(H=A+B\) is a real symmetric matrix with simple eigenvalues. Left: \(A\) and \(B\) are also real symmetric. Right: \(A\) and \(B\) are real, but not symmetric. next section. ## 3 Numerical analysis of reversible integration schemes ### Main results We next state two theorems and two additional corollaries that, generally speaking, justify the previous experiments and explain the good behavior exhibited by reversible methods. **Theorem 3.1**: _Let \(H\in\mathbb{R}^{N\times N}\) be a real matrix and let \(S_{h}\in\mathbb{C}^{N\times N}\) be a family of complex matrices depending smoothly on \(h\in\mathbb{R}\) such that_ * \(S_{h}\) _is a reversible map in the previous sense, so that_ \[\overline{S}_{h}=S_{h}^{-1};\] * \(S_{h}\) _is consistent with_ \(\exp(ihH)\)_, i.e. there exists_ \(p\geq 1\) _such that_ \[S_{h}\underset{h\to 0}{=}\mathrm{e}^{ihH}+\mathcal{O}(h^{p+1});\] (3.1) * _the eigenvalues of_ \(H\) _are real and simple._ _Then there exist_ * \(D_{h}\)_, a family of real diagonal matrices depending smoothly on_ \(h\)_,_ * \(P_{h}\)_, a family of real invertible matrices depending smoothly on_ \(h\)_,_ _such that \(P_{h}=P_{0}+\mathcal{O}(h^{p})\), \(D_{h}=D_{0}+\mathcal{O}(h^{p})\) and, provided that \(|h|\) is small enough,_ \[S_{h}=P_{h}\,\mathrm{e}^{ihD_{h}}\,P_{h}^{-1}. \tag{3.2}\] Figure 3: Same as Figure 1 when \(H=A+B\) is a real symmetric matrix with multiple eigenvalues. Left: \(A\) and \(B\) are real symmetric matrices. Right: \(A\) and \(B\) are real, but not symmetric. **Corollary 3.2**: _In the setting of Theorem 3.1, there exists a constant \(C>0\) such that, provided that \(|h|\) is small enough, for all \(u\in\mathbb{C}^{N}\) and all eigenvalues \(\omega\in\sigma(H)\), one has_ \[\sup_{n\geq 0}\,\Big{|}|\Pi_{\omega}S_{h}^{n}u|-|\Pi_{\omega}u|\Big{|}\leq C|h|^{ p}|u|, \tag{3.3}\] _where \(\Pi_{\omega}\) denotes the spectral projector onto \(\mathrm{Ker}(H-\omega I_{N})\). Moreover, if \(H\) is symmetric, the norm and the energy are almost conserved, in the sense that, for all \(u\in\mathbb{C}^{N}\), it holds that_ \[\sup_{n\in\mathbb{Z}}\,\big{|}{\cal M}(S_{h}^{n}u)-{\cal M}(u)\big{|}\leq C|h|^{ p}|u|^{2}\qquad\mathrm{and}\qquad\sup_{n\in\mathbb{Z}}\,\big{|}{\cal H}(S_{h}^{n}u)-{ \cal H}(u)\big{|}\leq C|h|^{p}|u|^{2}, \tag{3.4}\] _where \({\cal M}(u)=|u|^{2}\) and \({\cal H}(u)=\overline{u}^{T}Hu\)._ **Proof:[Proof of Corollary 3.2]** First, we focus on (3.3). We note that by consistency, we have \[D_{0}=P_{0}^{-1}HP_{0}.\] Since the eigenvalues of \(H\) are simple, it follows that the spectral projectors are all of the form \[\Pi^{(j)}=P_{0}(e_{j}\otimes e_{j})P_{0}^{-1}, \tag{3.5}\] where \(e_{1},\ldots,e_{N}\) denotes the canonical basis of \(\mathbb{R}^{N}\). Then, we note that for all \(n\in\mathbb{Z}\), we have \[S_{h}^{n}=P_{h}\,\mathrm{e}^{inhD_{h}}\,P_{h}^{-1}.\] Therefore, since \(\mathrm{e}^{inhD_{h}}\) is uniformly bounded with respect to \(h\) and \(n\) (because \(D_{h}\) is a real diagonal matrix) and \(P_{h}=P_{0}+{\cal O}(h^{p})\), it follows that \[S_{h}^{n}=P_{0}\,\mathrm{e}^{inhD_{h}}\,P_{0}^{-1}+{\cal O}(h^{p}),\] where the implicit constant in \({\cal O}\) term does not depend on \(n\) (here and later). 
Therefore, it is enough to use the explicit formula (3.5) to prove that \[\Pi^{(j)}S_{h}^{n}=P_{0}(e_{j}\otimes e_{j})P_{0}^{-1}P_{0}\,\mathrm{e}^{inhD _{h}}P_{0}^{-1}+{\cal O}(h^{p})=\mathrm{e}^{inh(D_{h})_{j,j}}\Pi^{(j)}+{\cal O }(h^{p}).\] As a consequence, the estimate (3.3) follows directly by the triangular inequality : \[|\Pi^{(j)}S_{h}^{n}u|=|\mathrm{e}^{inh(D_{h})_{j,j}}\Pi^{(j)}u+{\cal O}(h^{p} )(u)|\leq|\mathrm{e}^{inh(D_{h})_{j,j}}\Pi^{(j)}u|+|u|{\cal O}(h^{p})=|\Pi^{(j )}u|+|u|{\cal O}(h^{p}).\] Now, we focus on (3.4). Here, since \(H\) is assumed to be symmetric, its eigenspaces are orthogonal. Therefore by the Pythagorean theorem, we have \[{\cal M}(u)=\sum_{\omega\in\sigma(H)}|\Pi_{\omega}(u)|^{2}\quad\mathrm{and} \quad{\cal H}(u)=\sum_{\omega\in\sigma(H)}\omega|\Pi_{\omega}(u)|^{2}.\] As a consequence, (3.4) follows directly of (3.3). \(\Box\) The main limitation of Theorem 3.1 is the assumption on the simplicity of the eigenvalues of \(H\). Indeed, even if this assumption is typically satisfied, it depends only on the equation we aim at solving and not of the numerical method one uses. The following theorem, which is a refinement of Theorem 3.1, remedies this point by making an assumption on the leading term of the consistency error (which is typically satisfied for generic choices of numerical integrators). **Theorem 3.3**: _Let \(H\in\mathbb{R}^{N\times N}\) be a real matrix and let \(S_{h}\in\mathbb{C}^{N\times N}\) be a family of complex matrices depending smoothly on \(h\) such that_ * \(S_{h}\) _is a reversible map, i.e._ \[\overline{S}_{h}=S_{h}^{-1};\] * \(S_{h}\) _is consistent with_ \(\exp(ihH)\)_, i.e._ \[S_{h}\mathop{=}\limits_{h\to 0}\mathrm{e}^{ihH}+ih^{p+1}R+\mathcal{O}(h^{p+2}),\] (3.6) _where_ \(p\geq 1\) _is the order of consistency and_ \(R\) _is a real matrix_1_;_ Footnote 1: The fact that \(R\) is a real matrix is a consequence of the reversibility of \(S_{h}\)_._ * \(H\) _is diagonalizable and its eigenvalues are real;_ * _for all_ \(\omega\in\sigma(H)\)_, the eigenvalues of_ \(\Pi_{\omega}R_{|E_{\omega}(H)}\) _are real and simple, where_ \(\Pi_{\omega}\) _denotes the spectral projector on_ \(E_{\omega}(H):=\mathrm{Ker}(H-\omega I_{N})\)_._ _Then there exist_ * \(D_{h}\)_, a family of real diagonal matrices depending smoothly on_ \(h\)_,_ * \(P_{h}\)_, a family of real invertible matrices depending smoothly on_ \(h\)_,_ _such that, both \(P_{0}^{-1}BP_{0}\) and \(P_{0}^{-1}HP_{0}\) are diagonal, where \(B:=\sum_{\omega\in\sigma(H)}\Pi_{\omega}\,R\,\Pi_{\omega}\), and provided that \(|h|\) is small enough, it holds that_ \[S_{h}=P_{h}\,\mathrm{e}^{ihD_{h}}\,P_{h}^{-1}. \tag{3.7}\] **Corollary 3.4**: _In the setting of Theorem 3.3, there exists a constant \(C>0\) such that, provided that \(|h|\) is small enough, for all \(u\in\mathbb{C}^{N}\), all \(\omega\in\sigma(H)\) and all \(\lambda\in\sigma(\Pi_{\omega}R_{|E_{\omega}(H)})\), we have_ \[\sup_{n\geq 0}\,\Big{|}|\mathcal{P}_{\lambda,\omega}S_{h}^{n}u|-|\mathcal{P}_ {\lambda,\omega}u|\Big{|}\leq C|h||u|,\] _where \(\mathcal{P}_{\lambda,\omega}\) denotes the projector along \(\bigoplus_{(\eta,\mu)\neq(\lambda,\omega)}E_{\eta}(\Pi_{\mu}R_{|E_{\mu}(H)})\) onto \(E_{\lambda}(\Pi_{\omega}R_{|E_{\omega}(H)})\)._ _Moreover, if \(H\) and \(R\) are symmetric, for all \(\omega\in\sigma(H)\), one gets_ \[\sup_{n\geq 0}\,\Big{|}|\Pi_{\omega}S_{h}^{n}u|^{2}-|\Pi_{\omega}u|^{2}\Big{|} \leq C|h||u|^{2},\] _and the mass and the energy are almost conserved, i.e. 
for all \(u\in\mathbb{C}^{N}\), it holds that_ \[\sup_{n\in\mathbb{Z}}\,\big{|}\mathcal{M}(S_{h}^{n}u)-\mathcal{M}(u)\big{|} \leq C|h||u|^{2}\qquad\mathrm{and}\qquad\sup_{n\in\mathbb{Z}}\,\big{|}\mathcal{ H}(S_{h}^{n}u)-\mathcal{H}(u)\big{|}\leq C|h||u|^{2},\] _where, as before, \(\mathcal{M}(u)=|u|^{2}\) and \(\mathcal{H}(u)=\overline{u}^{T}Hu\)._ **Proof:**[Proof of Corollary 3.4] The proof is almost identical to the one of Corollary 3.2. The key point is that, since both \(P_{0}^{-1}BP_{0}\) and \(P_{0}^{-1}HP_{0}\) are diagonal, then the projectors \(\mathcal{P}_{\lambda,\omega}\) are exactly the projectors \(\Pi^{(j)}\), \(1\leq j\leq N\) (given by (3.5)). Note that, contrary to Theorem 3.1, in Theorem 3.3 one does not claim that \(P_{h}=P_{0}+\mathcal{O}(h^{p})\). A priori, here, in general, the best estimate we expect is \(P_{h}=P_{0}+\mathcal{O}(h)\) (which follows directly from the smoothness of \(P_{h}\) with respect to \(h\)). It is this loss which explains why, in Corollary 3.4, the error terms are of order \(\mathcal{O}(h)\) whereas they are of order \(\mathcal{O}(h^{p})\) in Corollary 3.2. \(\Box\) Remark.Before starting the proof of these theorems, let us provide some comments about the context and the ideas involved. * In Theorem 3.1 and its proof, we are just putting \(S_{h}\) in Birkhoff normal form. The fact that \(S_{h}\) can be diagonalized is due to the simplicity of the eigenvalues of \(H\) while the fact that its eigenvalues are complex numbers of modulus \(1\) is due the reversibility of \(S_{h}\). This approach is robust and well known, in particular it can be extended to the nonlinear setting (see e.g. [12, section V.1]). Note that here, we reach convergence of the Birkhoff normal form because the system is linear. * Theorem 3.3 is a refinement of Theorem 3.1. To prove the absence of resonances due to the multiplicity of the eigenvalues of \(H\), we use the first correction to the frequencies generated by the perturbation of \(H\) (i.e., the projections of \(R\) in Theorem 3.3). This approach is typical of what one does in the proof of Nekhoroshev theorems or KAM theorems (see also [12]). * In order to give some intuition about the proof and the assumptions of Theorem 3.1, let us prove simply that, provided \(h\) is small enough, \(S_{h}\) is conjugated to a unitary matrix. Indeed, since \(S_{h}\) is reversible it writes as \[S_{h}=e^{ihH_{h}},\] where \(H_{h}=H+{\cal O}(h^{p})\) is a real matrix (provided that \(h\) is small enough). Now, since the set of the real matrices whose eigenvalues are simple and real is open in the space of the real matrices (by continuity of the eigenvalues) and \(H_{h}\) is a real perturbation of such a matrix (\(H\) by assumption), we deduce that, provided \(h\) is small enough, its eigenvalues are simple and real. This implies that \(H_{h}\) is conjugated to a real diagonal matrix and so that \(S_{h}\) is conjugated to a unitary matrix. ### Technical lemmas In the proof of the previous theorems we will make use of the following three lemmas. **Lemma 3.5**: _Let \(M\) be a complex matrix and let \(P\) be a complex invertible matrix. Then \(\mathrm{ad}_{P^{-1}MP}\) and \(\mathrm{ad}_{M}\) are similar. More precisely,_ \[\mathrm{ad}_{\mathrm{int}_{P}M}=(\mathrm{int}_{P})\mathrm{ad}_{M}(\mathrm{int} _{P})^{-1},\] _where \(\mathrm{int}_{P}M:=P^{-1}MP\). 
Here \(\mathrm{ad}_{M}\) stands for the adjoint operator: \(\mathrm{ad}_{M}X:=[M,X]=MX-XM\), for any matrix \(X\)._ **Proof:** A straightforward calculation shows that, for any \(X\), \[(\mathrm{int}_{P})\mathrm{ad}_{M}X=P^{-1}[M,X]P=[P^{-1}MP,P^{-1}XP]=\mathrm{ ad}_{\mathrm{int}_{P}M}(P^{-1}XP)=\mathrm{ad}_{\mathrm{int}_{P}M}(\mathrm{ int}_{P})X.\] \(\Box\) **Lemma 3.6**: _Let \(M\) be a complex matrix. Then \(M\) is diagonalizable if and only if the kernel and the image of \(\mathrm{ad}_{M}\) are supplementary, i.e._ \[\mathrm{Ker}_{\mathbb{C}}\;\mathrm{ad}_{M}\cap\mathrm{Im}_{\mathbb{C}}\; \mathrm{ad}_{M}=\{0\}. \tag{3.8}\] **Proof:** We can assume, in virtue of Lemma 3.5 and without loss of generality, that \(M\) is in Jordan normal form2. On the one hand, if \(M\) is diagonal, we have \(\mathrm{ad}_{M}A=((m_{i,i}-m_{j,j})A)_{i,j}\) and so the support of the matrices in \(\mathrm{Ker}_{\mathbb{C}}\mathrm{ad}_{M}\) and \(\mathrm{Im}_{\mathbb{C}}\mathrm{ad}_{M}\) are clearly disjoint (which implies (3.8)). Conversely, doing calculations by blocks it is enough to consider the case where \(M=\lambda I_{N}+\mathcal{N}\) is a Jordan matrix (i.e. \(\lambda\in\mathbb{C}\) and \(\mathcal{N}\) nilpotent). Then we just have to note that \(\mathrm{ad}_{\lambda I_{N}+\mathcal{N}}=\mathrm{ad}_{\mathcal{N}}\) and that since \(\mathrm{ad}_{\mathcal{N}}\) is nilpotent necessarily we have \(\mathrm{Ker}_{\mathbb{C}}\mathrm{ad}_{\mathcal{N}}\cap\mathrm{Im}_{\mathbb{C}} \mathrm{ad}_{\mathcal{N}}\neq\{0\}\). \(\Box\) Footnote 2: Indeed, the property (3.8) is clearly invariant by conjugation of \(\mathrm{ad}_{M}\) and by Lemma 3.5 we know that \(\mathrm{ad}_{M}\) is conjugated to the adjoint representation of any Jordan normal form of \(M\). **Lemma 3.7**: _Let \(M_{h}\) be a family of real matrices depending smoothly on \(h\) and of the form_ \[M_{h}=M_{0}+\mathcal{O}(h^{p}),\quad\mbox{ where }\quad p\geq 1.\] _If \(M_{0}\) is diagonalizable on \(\mathbb{C}\), then there exists a family of real matrices \(\chi_{h}\), depending smoothly on \(h\), such that if \(|h|\) is small enough, \(\mathrm{e}^{-h^{p}\chi_{h}}M_{h}\,\mathrm{e}^{h^{p}\chi_{h}}\) commutes with \(M_{0}\), i.e._ \[[\mathrm{e}^{h^{p}\chi_{h}}M_{h}\,\mathrm{e}^{-h^{p}\chi_{h}},M_{0}]=0.\] **Proof:** We aim at designing the family \(\chi_{h}\) as solution of the equation \[\mathrm{ad}_{M_{0}}\left(\mathrm{e}^{h^{p}\chi_{h}}M_{h}\,\mathrm{e}^{-h^{p} \chi_{h}}\right)=0.\] Thanks to the well known identity \(\mathrm{e}^{A}B\,\mathrm{e}^{-A}=\mathrm{e}^{\mathrm{ad}_{A}}B\), this equation rewrites as \[\mathrm{ad}_{M_{0}}\left(\mathrm{e}^{h^{p}\mathrm{ad}\chi_{h}}M_{h}\right)=0. \tag{3.9}\] Next we write the Taylor expansion of \(M_{h}\) at order \(p\) as \[M_{h}=M_{0}+h^{p}R_{h},\] where \(R_{h}\) is a family of real matrices depending smoothly on \(h\). Then, isolating the terms of order \(0\) (and dividing by \(h^{p}\)), the equation (3.9) leads to \[f(h,\chi_{h}):=\mathrm{ad}_{M_{0}}\left(\mathrm{e}^{h^{p}\mathrm{ad}\chi_{h}}R _{h}-\varphi_{1}(h^{p}\mathrm{ad}\chi_{t})\,\mathrm{ad}_{M_{0}}\chi_{h}\right) =0,\] where \(\varphi_{1}(z):=\frac{e^{z}-1}{z}\). We restrict ourselves to \(\chi_{h}\) in \(\mathrm{Im}_{\mathbb{R}}\,\mathrm{ad}_{M_{0}}\) and consider \(f\) as a smooth map from \(\mathbb{R}\times\mathrm{Im}_{\mathbb{R}}\,\mathrm{ad}_{M_{0}}\) to \(\mathrm{Im}_{\mathbb{R}}\,\mathrm{ad}_{M_{0}}\). 
To solve the equation \(f(h,\chi_{h})=0\) using the implicit function theorem, we just have to design \(\chi_{0}\) so that \[f(0,\chi_{0})=\mathrm{ad}_{M_{0}}R_{0}-\mathrm{ad}_{M_{0}}\chi_{0}=0\] and prove that \(\mathrm{d}_{\chi}f(0,\chi_{0})=-\mathrm{ad}_{M_{0}}:\mathrm{Im}_{\mathbb{R}}\, \mathrm{ad}_{M_{0}}\to\mathrm{Im}_{\mathbb{R}}\,\mathrm{ad}_{M_{0}}\) is invertible. Actually, these properties are clear because the first one is a consequence of the second one, whereas the second follows directly from Lemma 3.6. \(\Box\) ### Proofs of the theorems We are now in a position to prove Theorems 3.1 and 3.3. Without loss of generality, and to simplify notations, we assume that \(H\) is diagonal \[H=\begin{pmatrix}\omega_{1}I_{n_{1}}&&&\\ &\ddots&\\ &&&\omega_{d}I_{n_{d}}\end{pmatrix},\] where \(\omega_{1}<\cdots<\omega_{d}\) denote the eigenvalues of \(H\) and \(n_{1},\cdots,n_{d}\) are positive integers satisfying \(n_{1}+\cdots+n_{d}=N\). Thanks to the consistency assumption (3.6) (which is equivalent to (3.1)), provided that \(|h|\) is small enough, \(S_{h}\) rewrites as \[S_{h}=\mathrm{e}^{ihH_{h}},\quad\text{ where }\quad H_{h}=H+h^{p}R+\mathcal{O}(h^{p +1}).\] Moreover, the reversibility assumption \(S_{h}^{-1}=\overline{S}_{h}\) implies that \(H_{h}\) is a real matrix (provided that \(|h|\) is small enough). Note that, hence, we deduce that \(R\) is also a real matrix. Then, applying Lemma 3.7 to \(H_{h}\), we get a family of real matrices \(\chi_{h}\) such that, provided that \(|h|\) is small enough, \[[W_{h},H]=0,\qquad\text{ where }\qquad W_{h}=\mathrm{e}^{h^{p}\chi_{h}}H_{h} \,\mathrm{e}^{-h^{p}\chi_{h}}.\] We conclude that \(W_{h}\) is block-diagonal (with the same structure of blocks as \(H\)), i.e. there exists some \(n_{j}\times n_{j}\) real matrices \(W_{h}^{(j)}\) such that \[W_{h}=\begin{pmatrix}W_{h}^{(1)}&&&\\ &\ddots&\\ &&W_{h}^{(d)}\end{pmatrix}. \tag{3.10}\] As a consequence, if the eigenvalues of \(H\) are simple (i.e. \(d=N\) and \(n_{j}=1\) for all \(j\)) then \(W_{h}\) is diagonal. Therefore, in this case, it is enough to set \(P_{h}=\mathrm{e}^{-h^{p}\chi_{h}}\) and \(W_{h}=D_{h}\) to conclude the proof of Theorem 3.1. So, from now on, we only focus on the proof of Theorem 3.3. First, we aim at identifying the matrices on the blocks in (3.10). The Taylor expansion of \(W_{h}\) is clearly \[W_{h}=H+h^{p}B+\mathcal{O}(h^{p+1}),\qquad\text{ with }\qquad B:=R+[\chi_{0},H].\] However, since \([W_{h},H]=0\), we deduce that \([B,H]=0\) and so that \(B\) is block-diagonal. Moreover, since the matrix \([\chi_{0},H]\) is identically equal to zero on the diagonal blocks, the diagonal blocks of \(B\) are exactly those of \(R\). As a consequence, with a slight abuse of notations, we may write \[W_{h}^{(j)}=\omega I_{n_{j}}+h^{p}B^{(j)}+h^{p+1}Y_{h}^{(j)},\qquad\text{ where }\qquad B^{(j)}:=\Pi_{\omega_{j}}R_{|E_{\omega_{j}}(H)}\] and \(Y_{h}^{(j)}\) is a family of real matrices depending smoothly on \(h\). Next we aim at diagonalizing these blocks. By assumption, the eigenvalues of each matrix \(B^{(j)}\) are real and simple. Therefore, all \(B^{(j)}\) are diagonalizable. 
As a consequence, and again by applying Lemma 3.7, we get a family of real matrices \(\Upsilon_{h}^{(j)}\) such that if \(|h|\) is small enough, for all \(j\in\llbracket 1,d\rrbracket\) we have \[\left[\mathrm{e}^{h\Upsilon_{h}^{(j)}}(B^{(j)}+hY_{h}^{(j)})\mathrm{e}^{-h \Upsilon_{h}^{(j)}},B^{(j)}\right]=0.\] This means that the eigenspaces of \(B^{(j)}\) are stable by the action of \(\mathrm{e}^{h\Upsilon_{h}^{(j)}}(B^{(j)}+hY_{h}^{(j)})\mathrm{e}^{-h\Upsilon_ {h}^{(j)}}\). However, by assumption, these spaces are lines. Therefore, if \(Q^{(j)}\) is a real invertible matrix such that \(Q^{(j)}B^{(j)}(Q^{(j)})^{-1}\) is diagonal then \(Q^{(j)}\mathrm{e}^{h\Upsilon_{h}^{(j)}}(B^{(j)}+hY_{h}^{(j)})\mathrm{e}^{-h \Upsilon_{h}^{(j)}}(Q^{(j)})^{-1}\) is also diagonal. Finally, as a consequence, setting \[P_{h}:=\mathrm{e}^{-h^{p}\chi_{h}}\begin{pmatrix}\mathrm{e}^{-h\Upsilon_{h}^ {(1)}}Q^{(1)}&&\\ &\ddots&\\ &&\mathrm{e}^{-h\Upsilon_{h}^{(d)}}Q^{(d)}\end{pmatrix}\] we have proven that \(D_{h}:=P_{h}^{-1}H_{h}P_{h}\) is real diagonal, which concludes the proof of Theorem 3.3. ### Applications to reversible splitting and composition methods Theorems 3.1 and 3.3 shed light on the behavior observed in the examples collected in Section 2. Thus, suppose \(H=A+B\) is a real symmetric matrix, with \(A\), \(B\) also real. Furthermore, consider a splitting scheme \(S_{h}\) of the form (1.2) with coefficients satisfying the symmetry conditions (1.3) and consistency, \[a_{0}+\cdots+a_{2n}=1,\qquad\qquad b_{0}+\cdots+b_{2n-1}=1.\] Clearly, \(S_{h}\) is a reversible map and moreover, it is consistent with \(\mathrm{e}^{ihH}\) at least at order \(1\), so that (3.1) holds with \(p\geq 1\). Since \(H\) is real symmetric, it is diagonalizable. Therefore, if the eigenvalues of \(H\) are simple, the dynamics of \((S_{h}^{n})_{n\in\mathbb{Z}}\) is given by Theorem 3.1: for sufficiently small \(h\), there exist real matrices \(D_{h}\) (diagonal) and \(P_{h}\) (invertible) so that \(S_{h}^{n}=P_{h}\,\mathrm{e}^{inD_{h}}P_{h}^{-1}\), all the eigenvalues of \(S_{h}\) verify \(|\omega_{j}|=1\) and \(\mathcal{M}(u)\) and \(\mathcal{H}(u)\) are almost preserved for long times. This corresponds to the examples of Figure 1. The same conclusions apply as long as \(H\) is a real matrix with all its eigenvalues real and simple (Figure 2, left), whereas the general case of complex eigenvalues is not covered by the theorem, and no preservation is ensured (Figure 2, right). Suppose now that the real matrix \(H\) has multiple real eigenvalues, but is still diagonalizable, and that \(A\) and \(B\) are real and symmetric. In that case, a symmetric-conjugate splitting method satisfy both conditions (1.4) and (1.5), so that it can be written as \[S_{h}=e^{ihH_{h}},\] where \(H_{h}\) is a family of real matrices whose even terms in \(h\) are symmetric and odd terms are skew-symmetric. Suppose in addition that \(S_{h}\) is of even order (i.e., \(p\) is even in (3.6)). In that case the matrix \(R\) in Theorem 3.3 is symmetric, and so its eigenvalues are real. Moreover, since \(R\) strongly depends on the coefficients \(a_{j},b_{j}\) and the decomposition \(H=A+B\), it is very likely that typically the eigenvalues of the operators \(\Pi_{\omega}R_{|E_{\omega}(H)}\) are simple and so that the dynamics of \((S_{h}^{n})_{n\in\mathbb{Z}}\) is given by Theorem 3.3 and is therefore similar to the one of \((\mathrm{e}^{inhH})_{n\in\mathbb{Z}}\). 
Notice that this does not necessarily hold if the scheme is of odd order and/or \(A\) and \(B\) are not symmetric. This phenomenon is clearly illustrated in the examples of Figure 3 by methods \(S_{h}^{[3,2]}\) and \(S_{h}^{[4]}\). Notice, however, that method \(S_{h}^{[3,1]}\), although of odd order, works in fact better than expected from the previous considerations. The reason for this behavior resides in the following **Proposition 3.8**: _The 3th-order symmetric-conjugate splitting method_ \[S_{h}^{[3,1]}={\rm e}^{ih\overline{b}_{0}B}\,{\rm e}^{ih\overline{a}_{1}A}\,{ \rm e}^{ihb_{1}B}\,{\rm e}^{iha_{1}A}\,{\rm e}^{ihb_{0}B},\] _with \(a_{1}=\frac{1}{2}+i\frac{\sqrt{3}}{6}\), \(b_{0}=\frac{a_{1}}{2}\), \(b_{1}=\frac{1}{2}\), is indeed conjugate to a reversible integrator \(V_{h}\) of order 4, i.e., there exists a real near-identity transformation \(F_{h}\) such that \(F_{h}\,S_{h}^{[3,1]}\,F_{h}^{-1}=V_{h}={\rm e}^{ihH}+{\cal O}(h^{5})\) and \(\overline{V}_{h}=V_{h}^{-1}\)._ **Proof:** Method \(S_{h}^{[3,1]}\) constitutes in fact a particular case of a composition \(\psi_{h}={\cal S}_{\bar{\alpha}h}^{[2]}\,{\cal S}_{\alpha h}^{[2]}\), where \({\cal S}_{h}^{[2]}\) is a time-symmetric 2nd-order method and \(\alpha=a_{1}\). Specifically, \(S_{h}^{[3,1]}\) is recovered when \({\cal S}_{h}^{[2]}={\rm e}^{\frac{h}{2}B}\,{\rm e}^{hA}\,{\rm e}^{\frac{h}{2}B}\). Therefore, it can be written as \[{\cal S}_{h}^{[2]}=\exp(ihH-ih^{3}F_{3}+ih^{5}F_{5}+\cdots)\] for certain real matrices \(F_{2j+1}\). In consequence, by applying the BCH formula, one gets \(\psi_{h}={\rm e}^{W(h)}\), with \[W(h)=ihH+\frac{1}{2}h^{4}|\alpha|^{2}(\alpha^{2}-\bar{\alpha}^{2})[H,F_{3}]+ ih^{5}\big{(}w_{5,1}F_{5}+w_{5,2}[H,[H,F_{3}]]\big{)}+{\cal O}(h^{6}).\] Here \(w_{5,j}\) are polynomials in \(\alpha\). Now let us consider \[V_{h}={\rm e}^{V(h)}={\rm e}^{\lambda h^{3}F_{3}}\,{\rm e}^{W(h)}\,{\rm e}^{- \lambda h^{3}F_{3}}\] for a given parameter \(\lambda\). Then, clearly, \[V(h)={\rm e}^{\lambda h^{3}{\rm ad}F_{3}}W(h)=ihH+h^{4}\left(\frac{1}{2} \alpha^{3}-i\lambda\right)[H,F_{3}]+{\cal O}(h^{5}),\] so that by choosing \(\lambda=-\frac{i}{2}\alpha^{3}=-\frac{\sqrt{3}}{18}\), we have \(V(h)=ihH+{\cal O}(h^{5})\) and the stated result is obtained, with \(F_{h}={\rm e}^{\lambda h^{3}F_{3}}\). \(\Box\) This result can be generalized as follows: given a time-symmetric method \({\cal S}_{h}^{[2k]}\) of order \(2k\), if \(\alpha\) is chosen so that the composition \(\psi_{h}={\cal S}_{\bar{\alpha}h}^{[2k]}\,{\cal S}_{\alpha h}^{[2k]}\) is of order \(2k+1\), then \(\psi_{h}\) is conjugate to a reversible method of order \(2k+2\). Theorems 3.1 and 3.3 also allow one to explain the good behavior shown by symmetric-conjugate composition methods for this type of problems. In fact, suppose \(H\) is a real symmetric matrix and \(\Phi_{H}^{z}\) is a family of linear maps which are consistent with \({\rm e}^{izH}\) at least at order \(1\) and satisfy \[(\Phi_{H}^{z})^{-1}=\overline{\Phi_{H}^{\overline{z}}}.\] If we define \(S_{h}\) as the symmetric-conjugate composition \[S_{h}=\Phi_{H}^{\alpha_{0}h}\cdots\Phi_{H}^{\alpha_{n}h},\] where \(\alpha_{j}\) are some complex coefficients satisfying the symmetry condition \[\alpha_{n-j}=\overline{\alpha}_{j},\qquad j=1,2,\ldots\] and the consistency condition \[\alpha_{0}+\cdots+\alpha_{n}=1,\] then \(S_{h}\) is a reversible map. Moreover, it is consistent with \(\,{\rm e}^{ihH}\) at least at order \(1\). Therefore, one can apply Theorem 3.1 and Theorem 3.3 also in this case. 
Notice, in particular, that even if the maps \(\,{\rm e}^{iha_{j}A}\) and/or \({\rm e}^{ihb_{j}B}\) in the symmetric-conjugate splitting method (1.2) are not computed exactly, but only conveniently approximated (for instance, by the midpoint rule), the previous theorems still apply, so that one can expect good long term behavior from the resulting approximation. ## 4 Symmetric-conjugate splitting methods for the Schrodinger equation An important application of the previous results corresponds to the numerical integration of the time dependent Schrodinger equation ( \(\hbar=m=1\)) \[i\frac{\partial}{\partial t}\psi(x,t)=\hat{H}\psi(x,t),\qquad\quad\psi(x,0)= \psi_{0}(x), \tag{4.1}\] where \(\psi:\mathbb{R}^{3}\times\mathbb{R}\longrightarrow\mathbb{C}\). The Hamiltonian operator \(\hat{H}\) is the sum \(\hat{H}=\hat{T}+\hat{V}\) of the kinetic energy operator \(\hat{T}\) and the potential \(\hat{V}\). Specifically, \[(\hat{T}\psi)(x)=-\frac{1}{2}\Delta\psi(x,t),\qquad\quad(\hat{V}\psi)(x)=\hat{ V}(x)\psi(x,t).\] In addition, a simple computation shows that \([\hat{V},[\hat{T},\hat{V}]]\)\(\psi=|\nabla\hat{V}|^{2}\psi\), and therefore \[[\hat{V},[\hat{V},[\hat{V},\hat{T}]]]\)\(\psi=0. \tag{4.2}\] Assuming \(d=1\) and periodic boundary conditions, the application of a pseudo-spectral method in space (with \(N\) points) leads to the \(N\)-dimensional system (1.1), where \(u(0)=u_{0}\in\mathbb{C}^{N}\) and \(H\) represents the (real symmetric) \(N\times N\) matrix associated with the operator \(-\hat{H}\)[16]. Now \[H=A+B,\] where \(A\) is the (minus) differentiation matrix corresponding to the discretization of \(\hat{T}\) (a real and symmetric matrix) and \(B\) is the diagonal matrix associated to \(-\hat{V}\) at the grid points. Since \(\exp(tA)\) can be efficiently computed with the fast Fourier transform (FFT) algorithm, it is a common practice to use splitting methods of the form (1.2) to integrate this problem. In this respect, notice that property (4.2) will be inherited by the matrices \(A\) and \(B\) only if the number of discretization points \(N\) is sufficiently large to achieve spectral accuracy, i.e., \[[B,[B,[B,A]]]u=0\qquad\mbox{ if }N\mbox{ \ is large enough.} \tag{4.3}\] Assuming this is satisfied, then there is a reduction in the number of conditions necessary to construct a method (1.2) of a given order \(p\)[12, 2]. Integrators of this class are sometimes called Runge-Kutta-Nystrom (RKN) splitting methods [5]. Two further points are worth remarking. First, the computational cost of evaluating (1.2) is not significantly increased by incorporating complex coefficients into the scheme, since one has to use complex arithmetic anyway. Second, since \(\sum_{j}a_{j}=1\) for a consistent method, if \(a_{j}\in\mathbb{C}\), then both positive _and_ negative imaginary parts are present, and this can lead to severe instabilities due to the unboundedness of the Laplace operator [8, 14]. On the other hand, the spurious effects introduced by complex \(b_{j}\) can be eliminated (at least for sufficiently small values of \(h\)) by introducing an artificial cut-off bound in the potential when necessary. In view of these considerations, we next limit our exploration to symmetric-conjugate splitting methods of the form (1.2) with \(0<a_{j}<1\) and \(b_{j}\in\mathbb{C}\) with \(\Re(b_{j})>0\) to try to reduce the size of the error terms appearing in the asymptotic expansion of the modified Hamiltonian \(H_{h}\) associated with the integrator. 
For simplicity, we denote the symmetric-conjugate splitting schemes \(S_{h}\) by their sequence of coefficients as \[(a_{0},b_{0},a_{1},b_{1},\ldots,a_{r},b_{r},a_{r},\ldots,\overline{b}_{1},a_{ 1},\overline{b}_{0},a_{0}). \tag{4.4}\] As a matter of fact, since \(A\) and \(B\) are sought to verify (4.3), sequences starting with \(B\) may lead to schemes with a different efficiency, so that we also analyze methods of the form \[(b_{0},a_{0},b_{1},a_{1},\ldots,b_{r},a_{r},\overline{b}_{r},\ldots,a_{1}, \overline{b}_{1},a_{0},\overline{b}_{0}). \tag{4.5}\] Schemes (4.4) and (4.5) include integrators where the central exponential corresponds to \(A\) (when \(b_{r}=0\)) and \(B\) (when \(a_{r}=0\)), respectively. The method has \(s\) stages if the number of exponentials of \(A\) is precisely \(s\) for the scheme (4.5) or \(s+1\) for the scheme (4.4). The construction process of methods within this class is detailed elsewhere (e.g. [7, 5] and references therein), so that it is only summarized here. First, we get the order conditions a symmetric-conjugate scheme has to satisfy to achieve a given order \(p=4,5\) and 6. These are polynomial equations depending on the coefficients \(a_{j}\), \(b_{j}\), and can be obtained by identifying a basis in the Lie algebra generated by \(\{A,B\}\) and using repeatedly the BCH formula to express the splitting method as \(S_{h}=\exp(hH_{h})\), with \(H_{h}\) in terms of \(A\), \(B\) and their nested commutators. The order conditions up to order \(p\) are obtained by requiring that \(H_{h}=H+\mathcal{O}(h)^{p+1}\), and the number is 7, 11 and 16 for orders 4, 5 and 6, respectively. Second, we take compositions (4.4) and (4.5) involving the minimum number of stages required to solve the order conditions and get eventually all possible solutions with the appropriate symmetry. Sometimes, one has to add parameters, because there are no such solutions. In particular, there are no 4th-order schemes with 4 stages with both \(a_{j}>0\) and \(\Re(b_{j})>0\). Even when there are appropriate solutions, it may be convenient to explore compositions with additional stages to have free parameters for optimization. This strategy usually pays off when purely real coefficients are involved, and so it is worth to be explored also in this context. Of course, some optimization criterion related with the error terms and the computational effort has to be adopted. In our study we look at the error terms in the expansion of \(H_{h}\) at successive orders and the size of the \(b_{j}\) coefficients. Specifically, we compute for each method of order, say, \(p\), the quantities \[\Delta_{b}:=\sum_{j}|b_{j}|\qquad\text{ and }\qquad E_{f}^{(r+1)}:=s\left( \mathcal{E}_{r+1}\right)^{1/r},\qquad r=p,p+1,\ldots \tag{4.6}\] Here \(s\) is the number of stages and \(\mathcal{E}_{r+1}\) is the Euclidean norm of the vector of error coefficients in \(H_{h}\) at higher orders than the method itself. In particular, for a method of order \(6\), \(E_{f}^{(7)}\) gives an estimate of the efficiency of the scheme by considering only the error at order 7. By computing \(E_{f}^{(8)}\) and \(E_{f}^{(9)}\) for this method we get an idea of how the higher order error terms behave. It will be of interest, of course, to reduce these quantities as much as possible to get efficient schemes. Solving the polynomial equations required to construct splitting methods with additional stages is not a trivial task, especially for orders 5 and 6. 
In these cases we have used the Python function fsolve of the _SciPy_ library, with a large number of initial points in the space of parameters to start the procedure. From the total number of valid solutions thus obtained, we have selected those leading to reasonably small values of all quantities (4.6) and checked them on numerical examples. The corresponding values for the most efficient methods we have found by following this approach have been collected in Table 1, where \(\mathcal{N}\mathcal{A}_{s}^{*[p]}\) refers to a symmetric-conjugate method of type (4.4) of order \(p\) involving \(s\) stages, and \(\mathcal{NB}_{s}^{*[p]}\) is a similar scheme of type (4.5). For completeness, we have also included the most efficient integrators of order 4, 6 and 8 with real coefficients for systems satisfying the condition (4.3) (same notation without \(*\)) and also the symmetric-conjugate splitting schemes presented in [10, 11] (denoted by \(\mathcal{GB}_{s}^{*[p]}\)). They do not take into account the property (4.3) for their formulation. In Table 1 we also write the value of \(\Delta_{a}:=\sum_{j}|a_{j}|\) and \(\Delta_{b}:=\sum_{j}|b_{j}|\) for each method. Of course, by construction, \(\Delta_{a}=1\) for all symmetric-conjugate integrators. The coefficients of the most efficient schemes we have found (in boldface) are collected in Table 2. In the Appendix we provide analogous information for general schemes of orders 3, 4, 5 and 6, i.e., of splitting methods for general problems of the form \(H=A+B\), with \(a_{j}>0\) and \(b_{j}\in\mathbb{C}\) with \(\Re(b_{j})>0\). They typically involve more stages, but can be applied in more general contexts. One should take into account, however, that all these symmetric-conjugate methods have been obtained by considering the ordinary differential equation (1.1) in finite dimension, whereas the time dependent Schrodinger equation is a prototypical example of an evolutionary PDE involving unbounded operators (the Laplacian and possibly the potential). In consequence, one might arguably question the viability of using the above schemes in this setting. That this is indeed possible comes as a consequence of some previous results obtained in the context of PDEs defined in analytic semigroups. Specifically, equation (4.1) can be written in the generic form \[u^{\prime}=\hat{L}u=(\hat{A}+\hat{B})u,\qquad u(0)=u_{0}, \tag{4.7}\] with \(\hat{A}=\frac{i}{2}\Delta\) and \(\hat{B}=-i\hat{V}\). It has been shown in [13] (see also [15, 18]) that, under the two assumptions stated below, a splitting method of the form \[S_{h}=\mathrm{e}^{ha_{0}\hat{A}}\,\mathrm{e}^{hb_{0}\hat{B}}\,\ldots\, \mathrm{e}^{hb_{2n-1}\hat{B}}\,\mathrm{e}^{ha_{2n}\hat{A}} \tag{4.8}\] is of order \(p\) for problem (4.7) if and only if it is of classical order \(p\) in the finite dimensional case. The assumptions are as follows: 1. _Semi-group property_: \(\hat{A}\), \(\hat{B}\) and \(\hat{L}\) generate \(C^{0}\)-semigroups on a Banach space \(X\) with norm \(\|\cdot\|\) and, in addition, they satisfy the bounds \[\|\mathrm{e}^{t\hat{A}}\|\leq\mathrm{e}^{\omega t},\qquad\|\mathrm{e}^{t\hat{B }}\|\leq\mathrm{e}^{\omega t}\] for some positive constant \(\omega\) and all \(t\geq 0\). 2. 
_Smoothness property_: For any pair of multi-indices \(\,(i_{1},\ldots,i_{m})\,\) and \(\,(j_{1},\ldots,j_{m})\,\) with \(\,i_{1}+\cdots+i_{m}+j_{1}+\cdots+j_{m}=p+1\,\), and for all \(\,t\in[0,T]\,\), \[\|\hat{A}^{i_{1}}\hat{B}^{j_{1}}\ldots\hat{A}^{i_{m}}\hat{B}^{j_{m}}\;{\rm e }^{t\hat{L}}u_{0}\|\leq C\] for a positive constant \(\,C\,\). These conditions restrict the coefficients \(\,a_{j}\,\), \(\,b_{j}\,\) in (4.8) to be positive, however, and thus the method to be of second order at most. Nevertheless, it has been shown in [14, 8] that, if in addition \(\,\hat{L}\,\), \(\,\hat{A}\,\) and \(\,\hat{B}\,\) generate analytic semigroups on \(\,X\,\) defined in the sector \(\,\Sigma_{\phi}=\{z\in\mathbb{C}:|\arg z|<\phi\}\,\), for a given angle \(\,\phi\in(0,\pi/2]\,\) and the operators \(\,\hat{A}\,\) and \(\,\hat{B}\,\) verify \[\|{\rm e}^{z\hat{A}}\|\leq{\rm e}^{\omega|z|},\qquad\|{\rm e}^{z\hat{B}}\|\leq {\rm e}^{\omega|z|}\] for some \(\,\omega\geq 0\,\) and all \(\,z\in\Sigma_{\phi}\,\), then a splitting method of the form (4.8) of classical order \(\,p\,\) with all its \begin{table} \begin{tabular}{c|c c c c c c c} & \(\Delta_{a}\) & \(\Delta_{b}\) & \(E_{f}^{(5)}\) & \(E_{f}^{(6)}\) & \(E_{f}^{(7)}\) & \(E_{f}^{(8)}\) & \(E_{f}^{(9)}\) \\ \hline \({\cal N}{\cal A}_{6}^{*[4]}\) & 1.000 & 1.267 & 0.400 & 0.821 & 0.704 & 1.082 & 1.012 \\ \({\cal NB}_{5}^{*[4]}\) & **1.000** & **1.141** & **0.352** & **0.698** & **0.559** & **0.913** & **0.789** \\ \({\cal NB}_{6}^{*[4]}\) & **1.000** & **1.416** & **0.322** & **0.766** & **0.666** & **1.025** & **0.866** \\ \({\cal N}{\cal A}_{7}^{*[5]}\) & 1.000 & 1.662 & – & 0.695 & 0.817 & 1.013 & 1.132 \\ \({\cal N}{\cal A}_{8}^{*[5]}\) & 1.000 & 1.393 & – & 0.546 & 0.947 & 0.953 & 1.339 \\ \({\cal N}{\cal A}_{9}^{*[5]}\) & 1.000 & 1.456 & – & 0.498 & 0.970 & 1.157 & 1.357 \\ \({\cal NB}_{7}^{*[5]}\) & 1.000 & 3.196 & – & 0.833 & 0.970 & 1.143 & 1.300 \\ \({\cal NB}_{8}^{*[5]}\) & **1.000** & **1.482** & – & **0.478** & **0.670** & **1.046** & **1.031** \\ \({\cal NB}_{9}^{*[5]}\) & **1.000** & **1.618** & – & **0.403** & **0.966** & **1.331** & **1.499** \\ \({\cal N}{\cal A}_{10}^{*[6]}\) & 1.000 & 1.528 & – & – & 0.906 & 1.204 & 1.298 \\ \({\cal N}{\cal A}_{11}^{*[6]}\) & **1.000** & **2.092** & – & – & **0.656** & **1.418** & **1.643** \\ \({\cal NB}_{10}^{*[6]}\) & 1.000 & 1.516 & – & – & 1.000 & 1.212 & 1.557 \\ \({\cal NB}_{11}^{*[6]}\) & **1.000** & **1.595** & – & – & **0.646** & **1.387** & **1.394** \\ \hline \({\cal GB}_{5}^{*[4]}\) & 1.000 & 1.133 & 0.477 & 0.662 & 0.662 & 0.885 & 0.807 \\ \({\cal GB}_{9}^{*[5]}\) & 1.000 & 1.463 & – & 0.603 & 0.786 & 1.036 & 1.278 \\ \({\cal GB}_{15}^{*[6]}\) & 1.000 & 1.692 & – & – & 1.515 & 1.434 & 2.169 \\ \hline \({\cal NB}_{6}^{[4]}\) & 2.401 & 1.156 & 0.291 & – & 0.809 & – & 1.307 \\ \({\cal NB}_{11}^{[6]}\) & 2.494 & 1.206 & – & – & 0.784 & – & 1.664 \\ \({\cal NB}_{14}^{[6]}\) & 1.659 & 2.012 & – & – & 0.627 & – & 2.238 \\ \hline \end{tabular} \end{table} Table 1: 1-norm and effective errors for several splitting methods of order 4, 5 and 6 designed for problems satisfying the condition (4.3). 
coefficients \(\,a_{j}\,,\,\,b_{j}\,\) in the sector \(\,\Sigma_{\phi}\subset\mathbb{C}\,\), then \[\|(S_{h}^{n}-\mathrm{e}^{nh\hat{L}})u_{0}\|\leq Ch^{p},\qquad 0\leq nh\leq T\] \begin{table} \begin{tabular}{l l l} \hline & \multicolumn{1}{c}{\(a_{i}\)} & \multicolumn{1}{c}{\(b_{i}\)} \\ \cline{2-3} \(\mathcal{NB}_{5}^{*[4]}\) & \(a_{0}=0.17354158169943656\) & \(b_{0}=0.06421454120274125+0.0245540186592381\,i\) \\ & \(a_{1}=0.19379086394173623\) & \(b_{1}=0.20166370500451958-0.0982277975564409\,i\) \\ & \(a_{2}=1-2\sum_{i=0}^{1}a_{i}\) & \(b_{2}=\frac{1}{2}-\sum_{i=0}^{1}\Re(b_{i})+0.1491719824749133\,i\) \\ \cline{2-3} \(\mathcal{NB}_{6}^{*[4]}\) & \(a_{0}=\frac{1}{5}\) & \(b_{0}=\frac{7}{100}+0.019444288930263294\,i\) \\ & \(a_{1}=0.054855282174763084\) & \(b_{1}=0.16-0.20579973912385285\,i\) \\ & \(a_{2}=\frac{1}{2}-\sum_{i=0}^{1}a_{i}\) & \(b_{2}=0.16251793145097668+0.21219211957584155\,i\) \\ & & \(b_{3}=1-2\sum_{i=0}^{1}\Re(b_{i})\) \\ \cline{2-3} \(\mathcal{NB}_{8}^{*[5]}\) & \(a_{0}=0.13556579817637690\) & \(b_{0}=0.048-0.0045117121645322032\,i\) \\ & \(a_{1}=0.12110548685533656\) & \(b_{1}=0.159+0.039915395925895825\,i\) \\ & \(a_{2}=0.040926280383255811\) & \(b_{2}=0.08808186616153123-0.19475521098317861\,i\) \\ & \(a_{3}=\frac{1}{2}-\sum_{i=0}^{2}\Re(a_{i})\) & \(b_{3}=0.08139005735125036+0.17341123352295854\,i\) \\ & & \(b_{4}=1-2\sum_{i=0}^{3}b_{i}\) \\ \cline{2-3} \(\mathcal{NB}_{9}^{*[5]}\) & \(a_{0}=0.066\) & \(b_{0}=0.03-0.026088775868557137\,i\) \\ & \(a_{1}=0.066\) & \(b_{1}=0.065+0.0871906864166141\,i\) \\ & \(a_{2}=0.15406042184345631\) & \(b_{2}=0.087791471011534450-0.07869869176637824\,i\) \\ & \(a_{3}=0.20434260458660722\) & \(b_{3}=0.21903826707051549+0.005649631789653575\,i\) \\ & \(a_{4}=1-2\sum_{i=0}^{3}a_{i}\) & \(b_{4}=\frac{1}{2}-\sum_{i=0}^{3}\Re(b_{i})+0.3080209334852549\,i\) \\ \cline{2-3} \(\mathcal{NB}_{11}^{*[6]}\) & \(a_{0}=0.062770091\) & \(b_{0}=0.10891717046144-0.16165289456182\,i\) \\ & \(a_{1}=0.011912916558090\) & \(b_{1}=0.05673774365156+0.19084324113721\,i\) \\ & \(a_{2}=0.20435669618321\) & \(b_{2}=0.00000000664446-0.2132590752834\,i\) \\ & \(a_{3}=0.019233264988143\) & \(b_{3}=0.2404799796837+0.10112304441789\,i\) \\ & \(a_{4}=0.06593857714457\) & \(b_{4}=0.04313692053520+0.11954730647763\,i\) \\ & \(a_{5}=\frac{1}{2}-\sum_{i=0}^{4}a_{i}\) & \(b_{5}=1-2\sum_{i=0}^{4}\Re(b_{i})\) \\ \cline{2-3} \(\mathcal{NB}_{11}^{*[6]}\) & \(a_{0}=\frac{213}{2500}\) & \(b_{0}=\frac{7}{250}-0.009532915454170\,i\) \\ & \(a_{1}=0.047358568390005\) & \(b_{1}=0.08562523731685+0.0718344013568\,i\) \\ & \(a_{2}=0.1553620075936\) & \(b_{2}=0.09331583397900-0.09161071812994\,i\) \\ & \(a_{3}=0.10012117440925\) & \(b_{3}=0.11799012127542+0.0702739287203\,i\) \\ & \(a_{4}=0.10547836949919\) & \(b_{4}=0.16176918420712-0.04327349898459\,i\) \\ & \(a_{5}=1-2\sum_{i=0}^{4}a_{i}\) & \(\Re(b_{5})=\frac{1}{2}-\sum_{i=0}^{4}\Re(b_{i})-0.2203293328195\,i\) \\ \cline{2-3} \end{tabular} \end{table} Table 2: Coefficients of the most efficient symmetric-conjugate RKN splitting methods of order 4, 5 and 6. where \(C\) is a constant independent of \(n\) and \(h\). ## 5 Numerical illustration: Modified Poschl-Teller potential The so-called modified Poschl-Teller potential takes the form \[V(x)=-\frac{\alpha^{2}}{2}\frac{\lambda(\lambda-1)}{\cosh^{2}\alpha x}, \tag{5.1}\] with \(\lambda>1\), and admits an analytic treatment to compute explicitly the eigenvalues for negative energies [9]. 
For the simulations we take \(\alpha=1\), \(\lambda(\lambda-1)=10\) and the initial condition \(\psi_{0}(x)=\sigma\,{\rm e}^{-x^{2}/2}\), with \(\sigma\) a normalizing constant. We discretize the interval \(x\in[-8,8]\) with \(N=256\) equispaced points and apply Fourier spectral methods. With this value of \(N\) it turns out that \(\|([B,[B,[A,B]]])u_{0}\|\) is sufficiently close to zero to be negligible, so that we can safely apply the schemes of Table 2. If \(N\) is not sufficiently large, then the corresponding matrices \(A\) and \(B\) do not satisfy (4.3), and as a consequence, the schemes are only of order three. This can be indeed observed in practice. We first check how the errors in the norm \({\cal M}(u)\) and in the energy \({\cal H}(u)\) evolve with time according with each type of integrator. To this end we integrate numerically until the final time \(t_{f}=10^{4}\) with three 6th-order compositions involving complex coefficients: (i) the new symmetric-conjugate scheme \({\cal NB}_{11}^{*[6]}\) collected in Table 2\((h=100/909\approx 0.11)\), (ii) the palindromic scheme denoted by \({\cal B}_{16}^{[6]}\) with all \(a_{j}\) taking the same value \(a_{j}=1/16\), \(j=1,\ldots,8\) and complex \(b_{j}\) with positive real part3\((h=0.16)\), and (iii) the method obtained by composing \({\cal B}_{16}^{[6]}\) with its complex conjugate \(({\cal B}_{16}^{[6]})^{*}\), resulting in a symmetric-conjugate integrator \((h=0.32)\). The step size is chosen in such a way that all the methods require the same number of FFTs. The results are depicted in Figure 4. We see that, according with the previous analysis, the error in both unitarity and energy furnished by the new scheme \({\cal NB}_{11}^{*[6]}\) does not grow with time, in contrast with palindromic compositions involving complex coefficients. Notice also that the composition of the palindromic scheme \({\cal B}_{16}^{[6]}\) with its complex conjugate leads to a new (symmetric-conjugate) integrator with good preservation properties. On the other hand, composing a symmetric-conjugate method with its complex conjugate results in a palindromic scheme showing a drift in the error of both the norm and the energy [4]. Footnote 3: The coefficients can be found at the website [http://www.gicas.uji.es/Research/splitting-complex.html](http://www.gicas.uji.es/Research/splitting-complex.html). In our second experiment, we test the efficiency of the different schemes. To this end we integrate until the final time \(t_{f}=100\), compute the expectation value of the energy, \({\cal H}(u_{\rm app}(t))\), and measure the error as the maximum of the difference with respect to the exact value along the integration: \[\max_{0\leq t\leq t_{f}}\quad|{\cal H}(u_{\rm app}(t))-{\cal H}(u_{0})|. \tag{5.2}\] The corresponding results are displayed as a function of the computational cost measured by the number of FFTs necessary to carry out the calculations (in log-log plots) in Figure 5. Notice how the new symmetric-conjugate schemes offer a better efficiency than standard splitting methods for this problem. The improvement is particularly significant in the 6th-order case. ### Acknowledgements The work of JB is supported by ANR-22-CE40-0016 "KEN" of the Agence Nationale de la Recherche (France) and by the region Pays de la Loire (France) through the project "MasCan". SB, FC and AE-T acknowledge financial support by Ministerio de Ciencia e Innovacion (Spain) through project PID2019-104927GB-C21, MCIN/AEI/10.13039/501100011033, ERDF ("A way of making Europe"). 
The authors would also like to thank Prof. C. Lubich for his very useful remarks. ### Compliance with Ethical Standards All authors declare that they have no conflicts of interest. ## Appendix A Appendix We collect in this Appendix the most efficient symmetric-conjugate splitting methods with \(a_{j}>0\) and \(b_{j}\in\mathbb{C}\) with \(\Re(b_{j})>0\) we have found for a general problem of the form \(H=A+B\). The coefficients of the schemes in boldface in Table 3 are listed in Table 4. Methods of type (4.4) of order \(p\) involving \(s\) stages are denoted as \(\mathcal{A}_{s}^{[p]}\), whereas \(\mathcal{B}_{s}^{*[p]}\) refers to a similar scheme of type (4.5). As in Table 1, we also collect for reference the methods proposed in [10] (denoted by \(\mathcal{GB}_{s}^{*[p]}\)) and two efficient palindromic compositions of time-symmetric schemes of order 2 with real coefficients, \(\mathcal{S}_{s}^{[p]}\). At order 5, the most efficient scheme turns out to be \(\mathcal{GB}_{9}^{*[5]}\). We also include a numerical illustration on the modified Poschl-Teller potential with the same data as before. Notice in particular the improvement with respect to the 6th-order scheme \(\mathcal{S}_{10}^{[6]}\). Figure 4: Error in norm \(\mathcal{M}(u)\) (left) and in energy \(\mathcal{H}(u)\) (right) as a function of time for complex-conjugate and palindromic methods involving complex coefficients.
2307.13776
Combating the Curse of Multilinguality in Cross-Lingual WSD by Aligning Sparse Contextualized Word Representations
In this paper, we advocate for using large pre-trained monolingual language models in cross lingual zero-shot word sense disambiguation (WSD) coupled with a contextualized mapping mechanism. We also report rigorous experiments that illustrate the effectiveness of employing sparse contextualized word representations obtained via a dictionary learning procedure. Our experimental results demonstrate that the above modifications yield a significant improvement of nearly 6.5 points of increase in the average F-score (from 62.0 to 68.5) over a collection of 17 typologically diverse set of target languages. We release our source code for replicating our experiments at https://github.com/begab/sparsity_makes_sense.
Gábor Berend
2023-07-25T19:20:50Z
http://arxiv.org/abs/2307.13776v1
Combating the Curse of Multilinguality in Cross-Lingual WSD by Aligning Sparse Contextualized Word Representations ###### Abstract In this paper, we advocate for using large pre-trained monolingual language models in cross lingual zero-shot word sense disambiguation (WSD) coupled with a contextualized mapping mechanism. We also report rigorous experiments that illustrate the effectiveness of employing sparse contextualized word representations obtained via a dictionary learning procedure. Our experimental results demonstrate that the above modifications yield a significant improvement of nearly 6.5 points of increase in the average F-score (from 62.0 to 68.5) over a collection of 17 typologically diverse set of target languages. We release our source code for replicating our experiments at [https://github.com/begab/sparsity_makes_sense](https://github.com/begab/sparsity_makes_sense). ## 1 Introduction Word sense disambiguation (WSD) is a long-standing and fundamental problem of Natural Language Processing, known to be affected by the _knowledge acquisition bottleneck_Gale et al. (1992). Large pre-trained neural language models are known to effectively mitigate the problems related to the paucity of high quality, large-coverage sense annotated training data for WSD Loureiro and Jorge (2019); Loureiro et al. (2021); _inter alia_. Most recently, the knowledge acquisition bottleneck has been identified as an immense problem in the cross-lingual setting as well Pasini (2020). A straightforward solution for handling this problem is to apply large multilingual pre-trained language models in a zero-shot setting, however, this approach has a potential limitation owing to the _curse of multilinguality_Conneau et al. (2020), i.e., the inability of such models to handle the large number of languages involved during training such models to an equally good quality. The research community replied to the limitations of large massively multilingual models by developing language-specific monolingual language models.1 Table 1 provides a shortlist of recently published monolingual large pre-trained language models, related to the languages involved in the cross-lingual WSD test suit, XL-WSD Pasini et al. (2021). Footnote 1: With a slight abuse of notation, we also refer to models that support a handful of (related) languages (e.g. Slovenian and Croatian) as language-specific monolingual ones. With the prevalence of large monolingual pre-trained models, the important research question arises if their language-specific nature can be successfully exploited during zero-shot learning. Our research provides a thorough comparison of the application of large multilingual and monolingual pre-trained language models for zero-shot WSD. Another crucial aspect that we carefully investigate in this paper is the integration of sparse contextualized word representations into cross-lingual zero-shot WSD. Sparse word representations have a demonstrated ability to align with word senses Balogh et al. (2020); Yun et al. (2021). While the benefits of employing sparsity has been shown for WSD in English Berend (2020), its viability in the cross-lingual setting has not yet been verified. In order to conduct such an analysis, we propose an \begin{table} \begin{tabular}{l l} \hline \hline SO & Huggingface model identifier \\ \hline b & DeepPavlov/bert-base-by-one-p-1-ru-cased Arbiépo et al. 
(2019) \\ a & PlanIt-COn-ES/roborta-base-ca (Aumengli-Etapte et al., 2021) \\ da & Maintl/dainsl-bert-botox \\ de &bert-base-german-cased \\ es & docuchile/bert-base-spanish-wun-cased Calote et al. (2020) \\ e & EMBCHCHCH/first-BERT (Ukar and Rothstein, 2020) \\ en & lxx-asu-bbert-base-cased (Aperieri et al., 2020) \\ f & cambert-base (Martin et al., 2020) \\ g & divilares-bertino-di-base-cased (Vilares et al., 2021) \\ h & EMBCHCH/crossedingual-bert (Ukar and Rothstein, 2020) \\ h & SLTATI-HIL/mbuster-base-cc (Nemusyels, 2021) \\ i & Musicmatch/unbrute-commercial-cased-vi \\ ja & c1-tohoku/bert-base-japanosa-who-word-masking \\ k & shuight/RDK-BERT-chardt1642 \\ f & CMFORD-best-base-cased (de Vries et al., 2019) \\ s & EMBCHCHIA/slobstra \\ \# & bert-base-chinese \\ \hline \hline \end{tabular} \end{table} Table 1: Monolingual models from the transformers library Wolf et al. (2020) covering all the (non-English) languages of the XL-WSD dataset Pasini et al. (2021). algorithm for obtaining cross-lingual sparse contextualized word representations from independently trained monolingual language models. ## 2 Related work The analysis and the investigation of the transfer capabilities of large pre-trained language models (such as mBERT or XLM) across languages has spurred significant research interest Pires et al. (2019); Wu and Dredze (2019, 2020); K et al. (2020). In contrast to the availability of multilingual neural language models, a series of recent papers have argued for the creation of dedicated neural language models for different languages (see e.g. Table 1). While monolingual neural language models can more accurately model the distinct languages, models that are trained in isolation of other languages cannot directly benefit from downstream application-specific annotated training data available in different languages. Artetxe et al. (2020) proposed an approach for making monolingual models compatible with each other by first pre-training a masked language model on a source language, then freezing its parameters apart from its embedding layer that get replaced and trained for additional target languages using a standard masked language modeling objective. Note that this approach is complementary and strictly more resource intensive to ours, as it involves the pre-training of a (freezed) transformer model with respect its embedding layer for a target language. In contrast, our approach can operate on monolingual language models fully pre-trained in total isolation from the source language encoder. Also, our approach learns substantially fewer parameters in the form of an alignment matrix between the hidden representations of the contextualized target and source language spaces. Conneau et al. (2020) analyzed the multilingual patterns emerging in large pre-trained language models. The authors found that "_language universal representations emerge in pre-trained models without the requirement of any shared vocabulary or domain similarity_". That work have demonstrated that monolingual BERT models can be effectively mapped for performing zero-shot cross-lingual named entity recognition and syntactic parsing. Similarly, Wang et al. (2019); Schuster et al. (2019) also illustrated the efficacy of linear transformations for using BERT-derived representations in cross-lingual dependency parsing. WSD has been a fundamental and challenging problem in NLP for many decades, dating back to (Weaver, 1949/1955). 
The utilization of contextualized word representations was first advocated by Peters et al. (2018), later popularized by Loureiro and Jorge (2019); Loureiro et al. (2021). Bevilacqua et al. (2021) offers a survey of the recent approaches. Most recently, Rezaee et al. (2021) have explored the usage of multilingual language models (XLM) in zero-shot WSD. While the experiments in Rezaee et al. (2021) cover four related target languages (German, Spanish, French and Italian), our investigation involves a typologically diverse set of 17 target languages (beyond English) from Pasini et al. (2021). Our work also extends that line of research in important aspects, as we show that the application of monolingual neural language models can vastly improve the performance of cross-lingual zero-shot WSD. Additionally, we also provide a careful evaluation of sparse contextualized word representations in zero-shot WSD. Berend (2020) introduced sparse contextualized word representations via the application of dictionary learning, and showed that sense representations that are obtained from the co-occurrence statistics of the sparsity structure of the contextualized word representations and their sense annotations can provide significant improvement in monolingual WSD. Our work relates to that line of research by providing a mapping-based procedure, which enables the usage of such sense representations created in some source language to be applied in other target languages as well. The kind of mapping we employ can be viewed as a generalization of the approach introduced in Berend (2020) with the notable exception that in this work, we obtain sparse word representations for contextualized models as opposed to static word embeddings. ## 3 Methodology In order to allow for zero-shot transfer between monolingual language models pre-trained in isolation from each other, we need to determine a mapping between their hidden representations. We first introduce our methodology for doing so, then we integrate this to the creation of sparse contextualized word representations. ### Mapping hidden representations The alignment of word representations between independently constructed semantic spaces can be conveniently and efficiently performed via linear transformations. This has been a standard approach for non-contextualized word embeddings Mikolov et al. (2013); Xing et al. (2015); Smith et al. (2017), but it has been shown to be useful in the contextualized case as well Conneau et al. (2020). The standard approach is to obtain a collection of pairs of anchor points \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i=1}^{n}\) with \(\mathbf{x}_{i}\) and \(\mathbf{y}_{i}\) denoting the representation of semantically equivalent words in the target and source languages, respectively. The mapping \(W\) is then obtained as \[\min_{W}\sum_{i=1}^{n}\lVert W\mathbf{x}_{i}-\mathbf{y}_{i}\rVert_{2}^{2}. \tag{1}\] As we deal with contextualized models, we can obtain various representations for a word even in the same context, by considering the hidden representations from different layers of the neural language models employed. 
Additionally, as constraining the mapping matrix to be an isometric one has proven to be a useful requirement, we define our learning task to be of the form \[\min_{W\ s.t.\ W^{\intercal}W=I}\sum_{i=1}^{n}\lVert W\mathbf{x}_{i}^{(l_{t})}-\mathbf{y}_{i}^{(l_{s})}\rVert_{2}^{2}, \tag{2}\] with \(I\) denoting the identity matrix, and \(\mathbf{x}_{i}^{(l_{t})}\) and \(\mathbf{y}_{i}^{(l_{s})}\) denoting the hidden representations obtained from the \(l_{t}\)th and \(l_{s}\)th layers of the target and source language neural language models, respectively. Finding the optimal isometric \(W\) can be viewed as an instance of the orthogonal Procrustes problem Schonemann (1966), which is solved by \(W_{\perp}=UV^{\intercal}\), with \(U\) and \(V\) originating from the singular value decomposition \(U\Sigma V^{\intercal}\) of the matrix product \(Y^{\intercal}X\), where \(X\) and \(Y\) include the stacked target and source language contextual representations of pairs of semantically equivalent words. As words of the input sequences to the neural language models can be split into multiple subtokens, we followed the common practice of obtaining word-level neural representations by performing mean pooling of the subword representations. Throughout our experiments, we also relied on the RCSLS criterion Joulin et al. (2018), which offers a retrieval-based alternative for obtaining a mapping from the target to the source language representations. ### Cross-lingual sparse contextualized word representations Our approach extends the information theoretic algorithm introduced in Berend (2020) for its application in the cross-lingual zero-shot WSD setting. In order to obtain sparse contextualized representations for the source language, we first populate \(Y\in\mathbb{R}^{d\times N}\) with the \(d\)-dimensional contextualized representations of \(N\) words determined for texts in the source language, and minimize the objective \[\min_{D\in\mathcal{C},\mathbf{\alpha}_{i}\in\mathbb{R}^{k}_{\geq 0}}\sum_{i=1}^{N}\frac{1}{2}\lVert\mathbf{y_{i}}-D\mathbf{\alpha}_{i}\rVert_{2}^{2}+\lambda\lVert\mathbf{\alpha}_{i}\rVert_{1}, \tag{3}\] where \(\mathcal{C}\) denotes the convex set of \(d\times k\) matrices with column norms at most 1, \(\lambda\) is a regularization coefficient and the sparse coefficients in \(\mathbf{\alpha}_{i}\) are required to be non-negative. We used the SPAMS library Mairal et al. (2009) for calculating \(D\) and \(\mathbf{\alpha}\). Having obtained \(D\) for the source language, we determine a sparse contextualized word representation for a target language word with dense contextualized representation \(\mathbf{x_{i}}\) as \[\min_{\mathbf{\alpha}_{i}\in\mathbb{R}^{k}_{\geq 0}}\frac{1}{2}\lVert W\mathbf{x_{i}}-D\mathbf{\alpha_{i}}\rVert_{2}^{2}+\lambda\lVert\mathbf{\alpha_{i}}\rVert_{1}, \tag{4}\] where \(W\) is the alignment transformation described in Section 3.1. Eq. (4) reveals that the cross-lingual applicability of the sparse codes is ensured by the mapping transformation \(W\) and by the fact that the sparse target language representations use the same \(D\) that was determined for the source language, which also allows the efficient calculation of sparse representations at inference time. Apart from these crucial extensions enabling the use of contextualized sparse representations in the cross-lingual setting, the way we utilize them for determining sense representations and performing inference is identical to Berend (2020). A minimal sketch of the mapping-and-sparse-coding step is given below.
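To make Eqs. (2) and (4) concrete, the following sketch assumes the anchor representations have already been extracted; the use of scikit-learn's Lasso in place of the SPAMS library, as well as all function names, are our own illustrative choices rather than the original implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_orthogonal_mapping(X, Y):
    """Eq. (2): orthogonal Procrustes solution W = U V^T, where U S V^T is
    the SVD of Y^T X. X, Y: (n, d) stacked target/source contextualized
    representations of anchor word pairs."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt  # maps target-space vectors into the source space

def sparse_code(x, W, D, lam=0.05):
    """Eq. (4): non-negative lasso of the mapped target vector Wx against
    the source-language dictionary D of shape (d, k)."""
    d = D.shape[0]
    # sklearn's Lasso minimizes 1/(2n)||y - Da||^2 + alpha*||a||_1 over
    # n = d "samples", so alpha = lam / d reproduces the paper's scaling.
    lasso = Lasso(alpha=lam / d, positive=True, fit_intercept=False,
                  max_iter=10000)
    lasso.fit(D, W @ x)
    return lasso.coef_  # sparse code alpha of length k
```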
Following Berend (2020), for all sense-annotated words in the training corpus, we calculated weighted co-occurrence statistics between the words pertaining to a specific semantic category and the dimensions along which their sparse contextualized word representations have non-zero coordinates. These statistics are then transformed into pointwise mutual information (PMI) scores, resulting in a sense representation for every sense in the training sense inventory. Sense representations obtained this way measure the strength of the relation of the senses to the different (sparse) coordinates. Inference for a word with sparse representation \(\mathbf{\alpha}\) is simply taken as \(\arg\max_{s}\Phi\mathbf{\alpha}^{\intercal}\), where \(\Phi\) is the previously defined matrix of PMI values and \(s\) corresponds to the sense at whose position the above matrix-vector product takes its largest value (see the sketch below). ## 4 Experimental results All the neural language models that we relied on during our experiments were obtained from the transformers library Wolf et al. (2020). We used four NVIDIA Titan 2080 GPUs for our experiments. As the multilingual language model, we used the 24-layer transformer architecture XLM-RoBERTa (XLM-R for short) Conneau et al. (2020). We chose the cased BERT large model Devlin et al. (2019) as the monolingual model for encoding English text. As for the rest of the monolingual language models involved in our experiments, we relied on the models listed in Table 1. These monolingual models have the same size as the BERT-base model, i.e., they consist of 12 transformer blocks and employ hidden representations of 768 dimensions. For evaluation purposes, we used the extra-large cross-lingual evaluation benchmark XL-WSD, recently proposed in Pasini et al. (2021). The dataset contains a high-quality sense annotated corpus for English, the concatenation of the SemCor dataset Miller et al. (1994) and the sense definitions and example sentences from WordNet Fellbaum (1998). XL-WSD uses the unified cross-lingual sense inventory of BabelNet Navigli and Ponzetto (2012). The dataset contains 17 additional typologically diverse languages besides English (listed in Table 1). The authors also released machine translated silver standard sense annotated training corpora for all the languages, which makes the language-specific fine-tuning of monolingual models possible; however, as shown in Pasini et al. (2021), that approach resulted in inferior results compared to the application of multilingual models in the zero-shot setting. Throughout the application of sparse contextualized representations, we employ the same set of hyperparameters that were used in Berend (2020), i.e., we set the regularization coefficient to \(\lambda=0.05\) and the number of (sparse) coordinates to \(k=3000\). We made one optional change: based on development set performance, we decided on a per-language basis whether to use the normalization of PMI values Bouma (2009) during the calculation of the sense representation matrix \(\Phi\). An ablation study related to the (optional) normalization of PMI scores is reported in Table 5, Appendix B. When we do not employ the sparsification of the contextualized word representations for determining the sense representations, we follow the approach introduced in Loureiro and Jorge (2019). That is, we take the centroid of the word vectors belonging to a particular sense as the representation of that sense, and perform a nearest neighbor search during inference.
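Before turning to the experiments, here is a minimal sketch of the PMI-based sense scoring and the \(\arg\max\) inference described in Section 3.2; approximating Berend (2020)'s weighted co-occurrence statistics by summing the sparse codes per sense is our own simplifying assumption, and all helper names are illustrative.

```python
import numpy as np

def pmi_sense_matrix(codes, sense_ids, n_senses, normalize=False):
    """Phi: PMI between senses and sparse coordinates. codes: (N, k)
    non-negative sparse codes of sense-annotated tokens; sense_ids:
    length-N integer array of sense indices."""
    C = np.zeros((n_senses, codes.shape[1]))
    np.add.at(C, sense_ids, codes)          # sense/coordinate co-occurrence
    joint = C / C.sum()
    p_s = joint.sum(axis=1, keepdims=True)  # marginal sense probabilities
    p_c = joint.sum(axis=0, keepdims=True)  # marginal coordinate probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(joint / (p_s * p_c))
        if normalize:                       # optional normalization (Bouma, 2009)
            pmi = pmi / -np.log(joint)
    return np.nan_to_num(pmi, neginf=0.0, posinf=0.0)

def predict_sense(Phi, alpha, candidate_senses):
    """arg max_s (Phi alpha^T), restricted to the lemma's candidate senses."""
    scores = Phi @ alpha
    return max(candidate_senses, key=lambda s: scores[s])
```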
### Alignment of contextualized representations As the different layers of neural language models have been shown to provide different levels of utility for different tasks, we experimented with mappings between different combinations of layers of the target and source language neural language models. Since the last few layers of the neural models are generally agreed to be the most useful for semantics-related tasks Peters et al. (2018); Tenney et al. (2019); Reif et al. (2019), we decided to learn mappings between the hidden representations of any of the last four layers of the target and source language encoders. We used BERT as the language-specific encoder for the source language texts in English, but we also investigated the application of XLM-R, so that we can see the effects of replacing the multilingual encoder with one especially tailored for English. As for the target languages, we used the respective models for each language as listed in Table 1. Similarly to the source language, we also investigated the case when the target languages were encoded by the multilingual model. In what follows, we label the different experimental settings as follows: * multi\(\rightarrow\)multi means that we map the target language representations obtained by the multilingual (XLM-R) model to the representation space of the source language, also obtained by the multilingual (XLM-R) encoder, * multi\(\rightarrow\)mono means that we map the target language representations obtained by the multilingual (XLM-R) model to the representation space of the source language obtained by the monolingual (English BERT) encoder, * mono\(\rightarrow\)multi means that we map the target language representations obtained by their respective monolingual language models to the representation space of the source language obtained by the multilingual (XLM-R) encoder, * mono\(\rightarrow\)mono means that we map the target language representations obtained by their respective monolingual language models to the representation space of the source language obtained by the monolingual (English BERT) encoder. In order to obtain the cross-representational mappings, we accessed the Tatoeba corpus Tiedemann (2012) through the datasets library Lhoest et al. (2021). The Tatoeba corpus contains translated sentence pairs for several hundreds of languages, which we used for obtaining the pivot word mention pairs together with their contexts. In addition to the Tatoeba corpus, we used the word2word library Choe et al. (2020) containing word translation pairs between more than 3,500 language pairs. Denoting by \((S_{s_{i}},S_{t_{i}})\) the \(i^{\text{th}}\) translated sentence pair from the Tatoeba corpus, we treated those word occurrences \((w_{s}\in S_{s_{i}},w_{t}\in S_{t_{i}})\) as being semantically equivalent for which the \(w_{t}\in TranslationOf(w_{s})\) and the \(w_{s}\in TranslationOf(w_{t})\) relations simultaneously held according to the translation lists provided by word2word. As an example, given the German-English translation pair from Tatoeba _('de': 'Es steht ein Glas auf dem Tisch', 'en': 'There is a glass on the table')_, mutually translated word pairs such as (Glas, glass) and (Tisch, table) would be treated as contextualized translation pairs of each other. A sketch of this pair-extraction procedure is given below.
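The following sketch illustrates the mutual-translation filter just described using the word2word package's Word2word class; the naive whitespace tokenization and the anchor_pairs helper are our own illustrative simplifications.

```python
# pip install word2word
from word2word import Word2word

de2en = Word2word("de", "en")
en2de = Word2word("en", "de")

def anchor_pairs(tatoeba_pairs, n_best=5):
    """Collect (w_t, w_s) occurrences that are mutual translations of each
    other, keeping their sentence contexts for later feature extraction."""
    pairs = []
    for de_sent, en_sent in tatoeba_pairs:
        for w_t in de_sent.split():          # naive tokenization for the sketch
            for w_s in en_sent.split():
                try:
                    mutual = (w_s in de2en(w_t, n_best=n_best) and
                              w_t in en2de(w_s, n_best=n_best))
                except KeyError:             # out-of-vocabulary for word2word
                    continue
                if mutual:
                    pairs.append((w_t, de_sent, w_s, en_sent))
    return pairs

# e.g. anchor_pairs([("Es steht ein Glas auf dem Tisch",
#                     "There is a glass on the table")])
```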
One benefit of our approach for determining the contextual alignment of word pairs is that it does not require word-level alignment of the parallel sentences; hence it is better suited to lower-resource scenarios in which only parallel sentences (without word-level alignments) and a list of word translation pairs are available. Naturally, different contextual alignment approaches could be integrated into our approach at this point, which we regard as a potential future extension of our work. We evaluated the quality of the mapping learned between the target and the source language representations by defining a contextualized translation retrieval task and measuring its accuracy@1, i.e., for what fraction of the contextualized translation pairs - not seen during the determination of the mapping between the two representation spaces - we are able to rank the original translated context the highest. In the multi\(\rightarrow\)multi case, i.e., when both the target and source languages are encoded by the same multilingual model (XLM-R), it also makes sense to use the identity matrix as the operator for mapping the target language contextual text representations to the semantic space of the source language (as long as the target and source language texts are encoded by the same layer of the multilingual encoder). We also evaluated the quality of this approach in our experiments, which we refer to as the identity approach. We list the statistics of the Tatoeba corpus and the sizes of the training and test sets of contextualized translation pairs in Table 2. Our results on the top-1 contextualized translation retrieval accuracies across the different languages and combinations of target and source encoders are reported in Figure 1. The combination which uses monolingual encoders for both the target and source languages (mono\(\rightarrow\)mono) performed the best. \begin{table} \begin{tabular}{l l r r r} \hline \hline & Language & \#sentences & Train & Test \\ \hline bg & Bulgarian & 17,797 & 14,212 & 3,554 \\ ca & Catalan & 1,663 & 3,912 & 979 \\ da & Danish & 30,089 & 20,000 & 5,000 \\ de & German & 299,769 & 20,000 & 5,000 \\ es & Spanish & 207,517 & 20,000 & 5,000 \\ et & Estonian & 2,428 & 2,365 & 592 \\ eu & Basque & 2,062 & 3,956 & 990 \\ fr & French & 262,078 & 20,000 & 5,000 \\ gl & Galician & 1,013 & 2,356 & 590 \\ hr & Croatian & 2,420 & 1,946 & 487 \\ hu & Hungarian & 107,133 & 20,000 & 5,000 \\ it & Italian & 482,948 & 20,000 & 5,000 \\ ja & Japanese & 204,893 & 20,000 & 5,000 \\ ko & Korean & 3,434 & 5,632 & 1,408 \\ nl & Dutch & 72,391 & 20,000 & 5,000 \\ sl & Slovenian & 3,210 & 1,285 & 322 \\ zh & Chinese & 46,114 & 20,000 & 5,000 \\ \hline \hline \end{tabular} \end{table} Table 2: The number of sentence pairs included in the Tatoeba corpus between English and a target language, and the number of contextualized translation pairs extracted for training and testing the mappings. ### Monolingual evaluation We first conducted evaluations in the monolingual setting, i.e., we used the sense annotated training data to train and evaluate WSD models in English. The results of these experiments - depending on the encoder architecture used (BERT/XLM-R), the layer of the encoder utilized ({21,...,24}), and whether sparsification of the contextualized representations took place (Dense/Sparse) - are included in Table 3.
Unsurprisingly, the application of the language-specific BERT model achieved better scores than that of XLM-R. An interesting observation, though, is that the drop in performance is much more subtle when the contextualized representations are enhanced via sparsification, i.e., the typical loss in performance across the layers is only 3 points (apart from the final layer), as opposed to the typical loss of 4-7 points in the dense case. ### Cross-lingual zero-shot evaluation Table 4 includes the zero-shot cross-lingual WSD results for a collection of baseline approaches (Table 4(a)) from (Pasini et al., 2021), followed by our models not utilizing the sparsification of the contextualized embeddings (Table 4(b)) and the ones that additionally benefit from sparsification as well (Table 4(c)). It is useful to note that the mono\(\rightarrow\)* approaches are strictly more resource efficient during inference, as they are based on 12-layer encoders instead of the 24 layers of the multilingual XLM-R model. At this point, we separate the multi\(\rightarrow\)multi results into two groups, i.e., 1) those obtained when relying on the hidden representations from the same layer of XLM-R without mapping (or equivalently, with the identity mapping from the target to source representations); and 2) those obtained when the target and source language contextual representations could originate from different layers of the XLM-R encoder, and a non-identity (either isometric or RCSLS) mapping was employed. We keep referring to the latter as multi\(\rightarrow\)multi, and denote the former type of experiments as multi (without the \(\rightarrow\)multi suffix, as no real mappings were performed in these cases). Inspecting the first two rows of Table 4(b) and Table 4(c) reveals that enhancing the multilingual encoder towards the treatment of a particular pair of languages, by providing it a language pair-specific mapping, has a larger positive effect when using dense vectors. In fact, it increased the micro-averaged F-score over the 17 languages by 1.72 and 0.11 points for the dense and the sparse cases, respectively. Overall, the micro-averaged F-score of our final approach improved by nearly 6.5 points (cf. the first row of Table 4(b) and the last row in Table 4(c)). A 5 point average improvement is due to the replacement of the XLM-R encoder for both the source language during training and the target languages during inference (cf. the first and last rows of Table 4(b)), and an additional 1.5 points of improvement was an effect of our sparsification in the cross-lingual setting. The inspection of the third and fourth rows in both Table 4(b) and Table 4(c) reveals that using a monolingual encoder during inference helps more than applying a monolingual encoder for encoding the source language during training. Figure 1: The results of translation retrieval over the test sets of the different languages and different combinations of transformers used for the (English) source and the target languages. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{BERT} & \multicolumn{2}{c}{XLM-R} \\ Layer & Dense & Sparse & Dense & Sparse \\ \hline 21 & 74.39 & 77.45 & 69.29 & 74.51 \\ 22 & 74.87 & 77.60 & 67.87 & 74.50 \\ 23 & 74.45 & 77.86 & 67.48 & 74.26 \\ 24 & 73.58 & 76.21 & 64.50 & 70.06 \\ \hline \hline \end{tabular} \end{table} Table 3: English results expressed in F-score.
We conducted the McNemar test between our system outputs when a non-identity mapping was used between a pair of languages. Our investigation revealed that all such \(\binom{8}{2}\) pairs of system outputs from Table 4(b) and Table 4(c) differ significantly from each other with \(p<0.0007\), with only four exceptions, i.e., 1) multi\(\rightarrow\)multi and multi\(\rightarrow\)mono from Table 4(b); 2) multi\(\rightarrow\)multi and multi\(\rightarrow\)mono from Table 4(c); 3) mono\(\rightarrow\)multi from Table 4(c) and mono\(\rightarrow\)mono from Table 4(b); 4) multi\(\rightarrow\)mono from Table 4(c) and mono\(\rightarrow\)multi from Table 4(b). A minimal sketch of this significance test is given at the end of this section. Figure 2 summarizes the results of all the possible runs conducted. When using the multilingual encoder for both the target and source languages without a mapping step between the two (multi), we ran 4 different experiments per language, based on the hidden representations obtained from one of the last 4 layers of the multilingual encoder. For the remaining experiments relying on the dense and sparse representations, there were 32 and 64 experiments for each language, respectively. The 32 experiments were a result of choosing any of the 16 possible combinations of the final four layers of the target and source language encoders, coupled with the type of mapping utilized (isometric/RCSLS). For the experiments involving the sparse representations, there was an extra parameter, namely whether the normalization of the PMI scores for obtaining the sense representations was to be performed, resulting in \(2\times 32\) experiments altogether. Our ablation study in Table 5 illustrates that this extra factor of \(2\) for the sparse experiments did not provide us an unfair advantage, i.e., when fixing the value of normalization in any way, the overall results did not differ substantially. The difference in the average performance of our approach transforming sparse contextualized representations obtained by monolingual models is significant (using an unpaired t-test2, \(p<0.005\)) compared to any other configuration. Footnote 2: We used an unpaired t-test as the number of experiments was not the same in all cases, i.e., 4 experiments/language in the multi case, and either 32 or 64 experiments/language in the rest of the cases. This suggests that the mono\(\rightarrow\)mono approach has a robust advantage over the alternative variants, and that the improvements seen in Table 4 are _not_ an effect of careful hyperparameter selection, but generalize over a wide range of choices. This effect is further corroborated in Figure 3, which offers a comparison between the two systems with the best average performance, i.e., mono\(\rightarrow\)mono operating with dense vectors (results along the x-axis) and the same model with the enhancement of sparsification (results along the y-axis). Each data point corresponds to a setting with the same hyperparameter choices, and points above the diagonal line with slope one demonstrate the benefits of sparsification. \begin{table} (a) Baseline results from Pasini et al. (2021). (b) Our results based on dense sense vectors. (c) Our results based on sparse sense vectors. \end{table} Table 4: Test set results on the XL-WSD benchmark. The hyperparameters of the individual approaches (e.g. which layer of the target language encoder to align with which layer of the source language encoder) were determined based on the development set of each language. Figure 2: Overall averaged results for all the experiments conducted for the different approaches.
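The McNemar test applied above compares two systems' per-instance correctness; a minimal sketch using statsmodels is shown below, where the helper name and the boolean-vector input format are our own choices.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_pvalue(correct_a, correct_b):
    """correct_a/correct_b: boolean arrays over the same test instances,
    True where a system's predicted sense matches the gold annotation."""
    a, b = np.asarray(correct_a), np.asarray(correct_b)
    table = [[int(np.sum(a & b)),  int(np.sum(a & ~b))],
             [int(np.sum(~a & b)), int(np.sum(~a & ~b))]]
    return mcnemar(table, exact=False, correction=True).pvalue
```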
We have demonstrated the improved utility of mapping language-specific sparse contextualized representations for conducting zero-shot WSD, which requires large pre-trained language-specific text encoders for the target languages. While such models are available for all languages in XL-WSD, a variety of languages lack a dedicated language-specific pre-trained language model. As such, an important question emerges: is it possible to enjoy the benefits of mapping sparse contextualized representations for zero-shot WSD in the absence of a large pre-trained language model dedicated to the target language? To this end, we inspect the results of our multi\(\rightarrow\)mono approach in Table 4, a series of mapping-based experiments in which we acted as if the monolingual language models (other than the one for English) did not exist. In these experiments, the sense embeddings were obtained with bert-large-cased (being specialized to English), and the mapping to the non-English target languages was performed towards their XLM-R representations during the evaluation. This way, we could simulate the effects of the absence of language-specific models. The multi\(\rightarrow\)mono approach provided a substantially better average performance than the mere utilization of a multilingual encoder in the case of dense contextualized representations, as can be seen in Table 4(b). The average results of multi\(\rightarrow\)mono are slightly inferior (albeit statistically insignificantly so) to those of the multi approach for the application of sparse contextualized representations. However, when comparing the multi\(\rightarrow\)multi results with those of multi\(\rightarrow\)mono, we can see that by relying on a multilingual encoder alone, and allowing a mapping to be employed between its hidden representations pertaining to different languages, one can obtain the same (or even slightly better) performance as with the multi\(\rightarrow\)mono approach. This highlights the importance of a monolingual encoder for the target language, which seems to be more important than having access to a monolingual encoder for the source language. ## 5 Conclusions In this paper we provided a systematic investigation of the benefits of using large monolingual pre-trained language models in place of multilingual language models, such as XLM-R. We have shown that since monolingual neural language models are specifically tailored for a single (or at most a few related) languages, they can effectively mitigate the _curse of multilinguality_ typical of multilingual models, and their application can significantly improve the F-scores in zero-shot WSD. We additionally showed that the benefits of sparse contextualized word representations, obtained via a dictionary learning procedure, also carry over to the cross-lingual setting, and that they provide improvements complementary to the usage of monolingual neural language models. Figure 3: Comparison of the two best performing systems when the same hyperparameters were employed. ## Acknowledgments The research was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the Artificial Intelligence National Laboratory Program. Additionally, we are thankful for the usage of ELKH Cloud ([https://science-cloud.hu/](https://science-cloud.hu/)) that helped us achieve the results published in this paper.
2303.01685
Multi-Scale Control Signal-Aware Transformer for Motion Synthesis without Phase
Synthesizing controllable motion for a character using deep learning has been a promising approach due to its potential to learn a compact model without laborious feature engineering. To produce dynamic motion from weak control signals such as desired paths, existing methods often require auxiliary information such as phases for alleviating motion ambiguity, which limits their generalisation capability. As past poses often contain useful auxiliary hints, in this paper, we propose a task-agnostic deep learning method, namely Multi-scale Control Signal-aware Transformer (MCS-T), with an attention based encoder-decoder architecture to discover the auxiliary information implicitly for synthesizing controllable motion without explicitly requiring auxiliary information such as phase. Specifically, an encoder is devised to adaptively formulate the motion patterns of a character's past poses with multi-scale skeletons, and a decoder driven by control signals to further synthesize and predict the character's state by paying context-specialised attention to the encoded past motion patterns. As a result, it helps alleviate the issues of low responsiveness and slow transition which often happen in conventional methods not using auxiliary information. Both qualitative and quantitative experimental results on an existing biped locomotion dataset, which involves diverse types of motion transitions, demonstrate the effectiveness of our method. In particular, MCS-T is able to successfully generate motions comparable to those generated by the methods using auxiliary information.
Lintao Wang, Kun Hu, Lei Bai, Yu Ding, Wanli Ouyang, Zhiyong Wang
2023-03-03T02:56:44Z
http://arxiv.org/abs/2303.01685v1
# Multi-Scale Control Signal-Aware Transformer for Motion Synthesis without Phase ###### Abstract Synthesizing controllable motion for a character using deep learning has been a promising approach due to its potential to learn a compact model without laborious feature engineering. To produce dynamic motion from weak control signals such as desired paths, existing methods often require auxiliary information such as phases for alleviating motion ambiguity, which limits their generalisation capability. As past poses often contain useful auxiliary hints, in this paper, we propose a task-agnostic deep learning method, namely Multi-scale Control Signal-aware Transformer (MCS-T), with an attention based encoder-decoder architecture to discover the auxiliary information implicitly for synthesizing controllable motion without explicitly requiring auxiliary information such as phase. Specifically, an encoder is devised to adaptively formulate the motion patterns of a character's past poses with multi-scale skeletons, and a decoder driven by control signals to further synthesize and predict the character's state by paying context-specialised attention to the encoded past motion patterns. As a result, it helps alleviate the issues of low responsiveness and slow transition which often happen in conventional methods not using auxiliary information. Both qualitative and quantitative experimental results on an existing biped locomotion dataset, which involves diverse types of motion transitions, demonstrate the effectiveness of our method. In particular, MCS-T is able to successfully generate motions comparable to those generated by the methods using auxiliary information. ## 1 Introduction Interactively controlling a character has been increasingly demanded by various applications such as gaming, virtual reality and robotics. It remains challenging to achieve realistic and natural poses for complex motions and environments, even with a large amount of high-quality motion capture (MoCap) data for modelling Holden et al. (2017); Peng et al. (2018). Recently, deep learning techniques have been studied for controllable motion synthesis given their strong learning capability and efficient parallel structures for fast runtime. Many encouraging results have been achieved using deep architectures such as multilayer perceptron (MLP) networks Holden et al. (2017), recurrent neural networks Lee et al. (2018), generative networks Henter et al. (2020) and deep reinforcement learning architectures Peng et al. (2018). Particularly, due to their potential of delivering fast, responsive yet high-quality controllers, MLP networks have been devised for biped locomotion Holden et al. (2017), quadruped locomotion Zhang et al. (2018), daily interaction Starke et al. (2019), basketball Starke et al. (2020) and stylised motion prediction Mason et al. (2022). Since a weak control signal, which is commonly used in graphics, often corresponds to a large variation of possible motions, these studies have to rely on auxiliary phase variables in line with the character's contact states for disambiguation purposes. However, the contact states may not be available for all kinds of motions and may require manual correction during data acquisition. By contrast, recurrent neural networks, e.g. Lee et al. (2018), aim to constrain the next pose prediction subject to the past motions, which can be task-agnostic in terms of motion category and demonstrate better generalisation capability.
The key limitation of RNN based methods is that they often suffer from slow responsiveness due to the large variation of the hidden memory Starke et al. (2019). We believe that auxiliary information can be inferred from a character's past motions. As shown in Figure 1, a walking motion sequence is represented in 2D manifolds by using different attributes (e.g. joint positions and velocities). It can be observed that the phases are continuously distributed on the manifolds, which enables the inference of auxiliary information from the motion-related attributes. Figure 1: Illustration of the motion manifold of a running motion sequence. Joint positions and velocities of individual poses are projected into a 2D space by t-SNE and colored in line with their phases. It is noticed that auxiliary phases are continuously distributed on the manifold, which suggests the potential of inferring the phases from motion attributes. Nonetheless, the past poses should be used "attentionally", since not all of them are always informative, especially during motion transitions; this is the reason that RNN based methods perform poorly without explicit data augmentation of the transitional cases in motion capture (MoCap) data. Therefore, in this work, we aim to study a deep learning based task-agnostic method to produce dynamic motion from trajectory-based control signals, without explicitly using additional auxiliary information such as phase. Specifically, we propose a transformer-based encoder-decoder architecture, namely Multi-Scale Control Signal-aware Transformer (MCS-T), to attend to the motion information of past poses and trajectories with respect to various scenarios. An encoder formulates the past motion patterns of a character from multi-scale skeletons in pursuit of learning spatio-temporal patterns from different levels of dynamics. With the past motion information, the encoder is expected to formulate conventional auxiliaries implicitly. Then, a decoder guided by the control signals synthesizes and predicts the next character pose by paying trajectory-specialised attention to the encoded historical motion patterns, rather than using a long, inflexible memory. This dynamic motion modelling pipeline helps alleviate the issues of low responsiveness and slow transition, which can be observed in existing methods not using auxiliary information. Comprehensive experiments on a biped locomotion dataset containing various motion transitions (e.g., sudden jumping and uneven terrain walking) demonstrate the effectiveness of MCS-T. It produces responsive and dynamic motion, and achieves a performance comparable to that of the methods explicitly using auxiliary information, while remaining applicable to various motion categories. The main contributions of this paper can be summarised as follows: * A novel real-time motion controller, namely the Multi-Scale Control Signal-aware Transformer, is proposed to improve the responsiveness and motion dynamics over existing methods not explicitly using auxiliary information. It is also task-agnostic, compared with the methods explicitly using auxiliary information. To the best of our knowledge, ours is one of the first studies utilising a transformer-based encoder-decoder scheme for controllable motion synthesis. * A multi-scale graph modelling scheme is devised to exploit rich skeleton dynamics. * A novel control signal-aware self-attention is devised to inject control signals for motion prediction.
* Comprehensive experiments were conducted to demonstrate the effectiveness of our proposed MCS-T. ## 2 Related Work In this section, we review related studies in terms of kinematics based controllable motion synthesis, transformer based motion learning and multi-scale skeletons. ### Kinematics Based Controllable Motion Synthesis Kinematics based methods focus on the motion of character bodies, including the joints, without considering the physics that cause them to move. Four major categories of methods are reviewed as follows. _Search-based Methods_: Early studies were based on graphs [1, 13, 14], where each frame of a motion database was treated as a vertex and edges represented possible transitions between two frames. A graph search can find a path to produce an expected motion. Motion matching [13, 12] simplified the graph search by finding transitional frames directly in animation databases and produced state-of-the-art gaming animation [15, 16]. However, the matching criteria often need to be devised by experienced animators for a wide range of motion scenarios. _Recurrent Neural Network based Methods_: Fragkiadaki et al. (2015) constructed an encoder-decoder structure based on recurrent neural networks (RNNs) and directly adopted 3D body joint angles to predict character poses. Li et al. (2017) addressed the error accumulation issue by introducing a teacher forcing-like mechanism [23]. Lee et al. (2018) incorporated control signals into RNNs. To alleviate the low responsiveness and slow motion transition issues caused by the inflexible RNN memory state, comprehensive data augmentation was conducted to enrich transitional patterns. However, less motion diversity was observed during runtime, since the augmented knowledge was still limited. _Phase-based Methods_: The phase-functioned neural network [12] adopted a multilayer perceptron (MLP) to predict biped locomotion with an auxiliary foot contact phase, which clusters motions with similar timing to disambiguate motion predictions. The phase-based frameworks were further extended to quadruped locomotion [16], environmental interaction [20], basketball games [21] and martial arts [22]. Nevertheless, the acquisition of phase information relied on the expertise of animators and the contact information of characters, which may not be universally available. Mason et al. (2022) proposed a heuristic principal component analysis based strategy to compute the phase of a stylised motion, where the arms often exhibit special movements without contact states. However, it was still a task-specific solution. _Generative Methods_: Instead of predicting a single motion pose, modelling the conditional pose distribution and sampling from it can avoid averaging over vastly different poses. Ling et al. (2020) used a variational autoencoder (VAE) to estimate the next pose distribution and draw user control-conditioned samples through reinforcement learning. Normalising flows were also introduced for this purpose, modelling the motion distribution and control signal together (Henter, Alexanderson, and Beskow 2020).
(2017) have achieved great success in a wide range of tasks such as natural language processing Devlin et al. (2018) and computer vision Dosovitskiy et al. (2020); Arnab et al. (2021). Compared with traditional recurrent neural networks, self-attention mechanisms perform more effectively and efficiently to address sequential patterns. Therefore, various transformer based methods were proposed for many motion related tasks such as motion prediction Mao, Liu, and Salzmann (2020); Martinez-Gonzalez, Villamizar, and Odobez (2021); Aksan et al. (2021); Wang et al. (2021), action recognition Plizzari, Cannici, and Matteucci (2021); Mazzia et al. (2022), 3D pose estimation Zheng et al. (2021) and motion synthesis Petrovich, Black, and Varol (2021). However, there are few studies utilising transformer for controllable motion synthesis. ### Multi-scale Skeleton To better explore rich spatial skeleton representations of human poses, many studies introduced multi-scale skeletons by using higher order polynomials of adjacency matrices Liu et al. (2020), graph convolutions Jang, Park, and Lee (2022) or heuristic rules Li et al. (2020); Dang et al. (2021); Ghosh et al. (2021); Bhattacharya et al. (2021). We address the multi-scale graphs with transformers to provide multi-scale tokens with trajectories, which is the first attempt in controllable motion synthesis for more responsive motions. ## 3 Methodology Figure 2 illustrates the proposed MCS-T architecture, which addresses the motion control problem as a regression task. The motion data is first parameterised as pose and trajectory embeddings. The pose embedding is formulated by multi-scale skeleton graphs for comprehensively exploiting the past spatio-temporal relations. It encodes the motion sequence for each skeleton scale by specialised transformer encoders for latent motion representation. The representation is then utilised by a transformer decoder queried by trajectory information, i.e., control signal, for a control-conditioned integration with past motion states. Finally, a motion prediction network predicts the character's next pose and potential future trajectory. ### Multi-scale Skeleton Poses A virtual character is animated upon a skeleton of rotational joints, of which the coordinates and velocities can be defined regarding the motion. Each pose skeleton in a motion sequence can be viewed as a graph, where the joints are vertices and the bones are edges. Based on such graph representation, multi-scale skeletons can be constructed for a pose by aggregating the adjacent vertices as a pooled coarse-level vertex. As illustrated in Figure 2, two scales of skeletons in a fine-to-coarse scheme are adopted in this study. This scheme aims to comprehensively characterise the spatial patterns, by which the additional coarse-level representation enables global observations of motions and improves motion dynamics especially during a motion transition. The fine-level representation is the same as the original skeleton structure obtained from MoCap, which contains 31 joints. Specifically, we denote \(j_{i}^{p}\) and \(j_{i}^{v}\) as the vectors representing the coordinates local to the corresponding root transformation and velocity values of the fine-level vertices (i.e. joints) of the \(i\)-th frame, respectively. For the coarse-level representation, we denote \(b_{i}^{p}\) and \(b_{i}^{v}\) as the vectors representing the coordinates and velocity values of the vertices (i.e., aggregated joints) of the \(i\)-th frame, respectively. 
Particularly, for the motion prediction of the \(i\)-th frame, we construct an input \(X_{i}\), which consists of two components regarding the past pose information of the two skeleton scales: \(J_{i}\) and \(B_{i}\). In detail, we denote \(J_{i}=\{(j^{p}_{i-k},j^{v}_{i-k})\}\) and \(B_{i}=\{(b^{p}_{i-k},b^{v}_{i-k})\}\), \(k=k_{1},...,k_{K}\), as the fine and coarse sequences, respectively. In total, \(K\) past frames are adopted instead of all frames, for reasons of both efficiency and model complexity. Figure 2: Illustration of our proposed MCS-T method, which is based on an encoder-decoder architecture to formulate the past motion patterns with multi-scale skeleton representations and predict the next motion with the guidance of the control signals. ### Multi-scale Motion Encoder A motion encoder aims to formulate the past motion patterns \(X_{i}\) as a reference for predicting the future motion. Compared with the methods using auxiliary information, our encoder only depends on the past motion information, which is generally available, and can be generalized to all kinds of actions. However, trivially using all past motion information could result in issues of low responsiveness and slow motion transition. In other words, using all past information can introduce redundancy for predicting the next pose and sometimes even disturb the prediction, which may then collapse into historical motion states. Thus, a transformer-based multi-scale encoder is proposed to formulate the past motion patterns in an adaptive manner. The fine and coarse-level pose information \(J_{i}\) and \(B_{i}\) can be treated as matrices, where each row represents the position and velocity of a particular temporal motion frame. Our multi-scale encoder is based on self-attention [22] using the concepts of query, value and key, which can be formulated as: \[Q^{J}_{i}=J_{i}W^{Q,J},K^{J}_{i}=J_{i}W^{K,J},V^{J}_{i}=J_{i}W^{V,J}, \tag{1}\] \[Q^{B}_{i}=B_{i}W^{Q,B},K^{B}_{i}=B_{i}W^{K,B},V^{B}_{i}=B_{i}W^{V,B},\] where \(W^{\cdot,\cdot}\) are projection matrices containing trainable weights with an output dimension \(\gamma\), the \(J\)-related matrices formulate the fine-level pose patterns and the \(B\)-related matrices formulate the coarse-level pose patterns. Then, the temporal patterns can be computed for each level as follows: \[Z^{J}_{i}=\text{softmax}(\frac{Q^{J}_{i}K^{J\intercal}_{i}}{\sqrt{\gamma}})V^{J}_{i},Z^{B}_{i}=\text{softmax}(\frac{Q^{B}_{i}K^{B\intercal}_{i}}{\sqrt{\gamma}})V^{B}_{i}. \tag{2}\] To this end, the temporal relations of motion frames can be formulated by observing the entire sequence based on the weights obtained with the softmax function in Eq. (2). In practice, multiple independent self-attention heads can be adopted to increase the modelling capability, followed by feed-forward components; together these constitute a transformer encoder layer. By stacking multiple transformer encoder layers for each observation level, the final spatio-temporal patterns can be obtained. For the convenience of notation, we still use the symbols \(Z^{J}_{i}\) and \(Z^{B}_{i}\) to indicate the encoded sequential representations. By concatenating \(Z^{J}_{i}\) and \(Z^{B}_{i}\) in a frame-wise manner, a sequence \(\{Z_{i}\}\) can be obtained as the encoded multi-scale past motion patterns. ### Control Signal & Trajectory The trajectory of a character's movement is based on the user's control signals.
We denote a trajectory vector: \[T_{i}=(t^{p}_{i,s_{-S}},...,t^{p}_{i,s_{0}},...,t^{p}_{i,s_{S-1}},t^{d}_{i,s_{-S}},...,t^{d}_{i,s_{0}},...,t^{d}_{i,s_{S-1}}, \tag{3}\] \[t^{h}_{i,s_{-S}},...,t^{h}_{i,s_{0}},...,t^{h}_{i,s_{S-1}},t^{g}_{i,s_{-S}},...,t^{g}_{i,s_{0}},...,t^{g}_{i,s_{S-1}}),\] which represents the sampled discrete trajectory patterns for the prediction of frame \(i\). Particularly, the indices of the sampled points are specified as \(s\in\{s_{-S},...,s_{0},...,s_{S-1}\}\). In this study, we empirically adopt \(S=6\), and the sampled points are evenly distributed around the current frame to cover the trajectory 1 second before and 1 second after. In detail, the trajectory includes four aspects: * \(t^{p}_{i,s}\) represents the sampled \(s\)-th trajectory position in the 2D horizontal plane of the \(i\)-th frame. * \(t^{d}_{i,s}\) indicates the trajectory direction in the 2D horizontal plane, which is the facing direction of the character. * \(t^{h}_{i,s}\) is a sub-vector containing the trajectory heights in line with the terrain to characterise the geometry information, obtained from three locations around the sampled point: the center and the left and right offsets. * \(t^{g}_{i,s}\) is a one-hot encoded sub-vector for the action category of the sampled trajectory point. For our locomotion settings, we have five action categories: standing, walking, jogging, jumping and crouching. ### Control Signal-aware Decoder Based on the past motion embeddings from the multi-scale motion encoder, a control signal-aware decoder is proposed to formulate a latent embedding for motion prediction. The decoder incorporates the trajectory information to attend to the past encoded motion patterns through a control signal-aware attention mechanism. This keeps the decoded patterns relevant to the user's control signals. In detail, we adopt the trajectory \(T_{i}\) as a query to the past motions: \[q^{D}_{i}=T_{i}W^{Q,D},K^{D}_{i}=Z_{i}W^{K,D},V^{D}_{i}=Z_{i}W^{V,D}, \tag{4}\] where \(W^{\cdot,\cdot}\) are projection matrices containing trainable weights with an output dimension \(\gamma\). Hereafter, the past motion information with user control can be summarised into a vector as follows: \[z^{D}_{i}=\text{softmax}(\frac{q^{D}_{i}K^{D\intercal}_{i}}{\sqrt{\gamma}})V^{D}_{i}. \tag{5}\] Particularly, we call the attention in Eqs. (4)-(5) control signal-aware attention, and multiple heads of it are adopted together with feed-forward networks to characterise the motions from multiple aspects. For simplicity of notation, we continue to use \(z^{D}_{i}\) to denote this multi-head output. ### Motion Prediction Network To predict and synthesize the motion of the \(i\)-th frame, which we denote as \(Y_{i}\), an additional motion prediction network (MPN) component is introduced. \(Y_{i}\) contains the pose \(\{(j^{p}_{i},j^{v}_{i},j^{r}_{i})\}\), the trajectory \(T_{i+1}\) and the contact information \(C_{i}\). Particularly, \(j^{r}_{i}\) represents the local joint rotations in addition to positions and velocities. The prediction \(\hat{T}_{i+1}\) of \(T_{i+1}\) only covers the trajectory after the \(i\)-th frame, as the sampled trajectory points before the current frame already exist. \(C_{i}\) is a vector indicating the labels of foot contact for the heel and toe joints of the two feet. It can be used to perform Inverse Kinematics (IK) post-processing to better fit the character to the terrain geometry. Our MPN is based on feed-forward layers with the Exponential Linear Unit (ELU) activation function [12]; a small sketch of such a network follows.
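As a concrete illustration of this prediction head, the following is a minimal PyTorch sketch, assuming the three-layer, 512-unit configuration reported in the implementation details of Section 4; the class name and the input packing are our own illustrative choices.

```python
import torch
import torch.nn as nn

class MotionPredictionNetwork(nn.Module):
    """ELU feed-forward stack mapping the decoded motion summary z_i^D and
    the trajectory T_i to the predicted state Y_i (pose, future trajectory
    and foot contacts)."""
    def __init__(self, in_dim, out_dim, hidden=512, dropout=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ELU(), nn.Dropout(dropout),
            nn.Linear(hidden, hidden), nn.ELU(), nn.Dropout(dropout),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, z_dec, traj):
        # Concatenate the decoder output with the trajectory, as in Eq. (6).
        return self.net(torch.cat([z_dec, traj], dim=-1))
```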
In detail, we have an estimation \(\hat{Y}_{i}\) of \(Y_{i}\): \[\hat{Y}_{i}=\text{MPN}(z_{i}^{D},T_{i}), \tag{6}\] where the decoded output and the motion trajectory are considered as the input. Note that the trajectory information is used by the MPN in addition to the decoder, which helps the control signals to be fully formulated for providing highly responsive motion synthesis. ### MCS-T Training and Runtime Inference We define the computations of the proposed MCS-T as a function \(\mathcal{F}\) with trainable parameters \(\mathbf{\Theta}\), such that \(\hat{Y}_{i}=\mathcal{F}(X_{i},T_{i};\mathbf{\Theta})\). A mean squared error (MSE) loss with \(\ell_{1}\) regularization is adopted to optimize \(\mathbf{\Theta}\). In detail, we solve the following optimization problem during training: \[\operatorname*{arg\,min}_{\mathbf{\Theta}}\parallel Y_{i}-\boldsymbol{\mathcal{F}}(X_{i},T_{i};\mathbf{\Theta})\parallel_{2}^{2}+\lambda\lVert\mathbf{\Theta}\rVert_{1}, \tag{7}\] where \(\lambda\) is a hyper-parameter controlling the scale of the regularization. In terms of the runtime inference, a trajectory blending scheme is adopted for post-processing. In detail, the predicted trajectory positions \(\hat{t}_{i+1,s}^{p}\) and directions \(\hat{t}_{i+1,s}^{d}\), \(s=s_{0},...,s_{S-1}\), after the \(i\)-th frame are further blended with the user control signal for the \((i+1)\)-th frame's motion prediction: \[\begin{split} t_{i+1,s}^{p}&=(1-\tau_{s}^{p})\bar{t}_{i+1,s}^{p}+\tau_{s}^{p}\hat{t}_{i+1,s}^{p},\\ t_{i+1,s}^{d}&=(1-\tau_{s}^{d})\bar{t}_{i+1,s}^{d}+\tau_{s}^{d}\hat{t}_{i+1,s}^{d},\end{split} \tag{8}\] where \(\bar{t}_{i+1,s}\) is the trajectory computed from the user's control signal, and \(\tau_{s}^{p}\) and \(\tau_{s}^{d}\) are hyper-parameters controlling the blending level. That is, the user control signal is blended with higher weights in the near trajectory for more responsive motion, and with lower weights in the far trajectory in pursuit of smoother transitions. For \(t_{i+1,s}^{p}\) and \(t_{i+1,s}^{d}\) with \(s=s_{-S},...,s_{-1}\), the values are taken from the actual existing trajectory. Additionally, the trajectory height \(t_{i+1,s}^{h}\) can be derived based on \(t_{i+1,s}^{p}\) within the virtual scene, and the action category \(t_{i+1,s}^{g}\) is set directly by the user. ## 4 Experimental Results and Discussions ### Dataset We evaluate our proposed method on a public dataset [1] for a fair comparison with the state-of-the-art methods. The dataset consists of biped locomotion data with various gaits, terrains, facing directions and speeds, which helps evaluate the quality of a general character motion controller in terms of responsiveness and motion transition. A biped character with 31 joints and MoCap techniques were adopted to collect these data. In total, we obtained around 4 million samples for training. ### Implementation Details In total, \(K=5\) past frames with indices \(k_{1}=1\), \(k_{2}=10\), \(k_{3}=20\), \(k_{4}=30\) and \(k_{5}=40\) were selected as input to predict the motion of the \(i\)-th frame. Note that this setting was found to provide the best quality of prediction (see Section 4.5). Two independent transformer encoders were used for the fine-level and coarse-level motion sequences, respectively. Each of them consisted of three transformer encoder layers using six self-attention heads of dimension 186, and the feed-forward layers were of dimension \(1024\). A dropout rate of \(0.1\) was applied to the encoders. The transformer decoder used the same configuration as the encoder. A minimal sketch of this encoder-decoder core is given below.
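The following PyTorch sketch of the encoder-decoder core (Eqs. (1)-(5)) uses the configuration above; we assume 186 denotes the shared model dimension, and all module and argument names are our own illustrative choices rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class MCSTCore(nn.Module):
    def __init__(self, fine_dim, coarse_dim, traj_dim,
                 d=186, heads=6, layers=3, ff=1024, drop=0.1):
        super().__init__()
        self.fine_proj = nn.Linear(fine_dim, d)      # per-frame fine-scale features
        self.coarse_proj = nn.Linear(coarse_dim, d)  # per-frame coarse-scale features
        self.traj_proj = nn.Linear(traj_dim, d)      # control-signal query T_i
        make_enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, heads, dim_feedforward=ff,
                                       dropout=drop, batch_first=True),
            num_layers=layers)
        self.fine_enc, self.coarse_enc = make_enc(), make_enc()
        # Control signal-aware attention (Eqs. 4-5): the trajectory embedding
        # is the single query; the frame-wise concatenated multi-scale memory
        # provides the keys and values (hence kdim/vdim of 2d).
        self.csa = nn.MultiheadAttention(d, heads, kdim=2 * d, vdim=2 * d,
                                         batch_first=True)

    def forward(self, J, B, T):
        # J: (batch, K, fine_dim); B: (batch, K, coarse_dim); T: (batch, traj_dim)
        zj = self.fine_enc(self.fine_proj(J))
        zb = self.coarse_enc(self.coarse_proj(B))
        memory = torch.cat([zj, zb], dim=-1)  # frame-wise concat, (batch, K, 2d)
        q = self.traj_proj(T).unsqueeze(1)    # one trajectory query token
        z, _ = self.csa(q, memory, memory)
        return z.squeeze(1)                   # z_i^D, fed to the MPN with T_i
```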
The motion prediction network was modelled as a three-layer MLP with a hidden dimension of \(512\) and a dropout rate of \(0.3\). \(\tau_{s}^{p}=(s/S)^{0.5}\) and \(\tau_{s}^{d}=(s/S)^{2}\) were defined for Eq. (8) empirically (see Appendix for details). During training, the input and output were first normalised by their mean and standard deviation. Additionally, the input features related to the joints were all scaled by \(0.1\), which enlarged the relative proportion of the trajectory related inputs and helped produce dynamic motions in certain scenarios. In terms of the loss function, \(\lambda\) for the \(\ell_{1}\) regularization was set to 0.01. The model was implemented in PyTorch [23] and trained with an Adam optimiser [13]. The learning rate was set to \(10^{-4}\) and the batch size was 32. In total, MCS-T was trained for 20 epochs, which took around 50 hours on an Nvidia GTX 1080Ti GPU. ### Comparisons with State-of-the-art Methods Qualitative and quantitative evaluations of MCS-T were conducted against a number of baseline methods, in terms of motion quality, especially from the aspects of responsiveness and motion transition. The baseline methods include an MLP with a single past pose, an MLP with multiple past poses, the RNN Lee et al. (2018) and the PFNN Holden et al. (2017). Overall, we show that our MCS-T was able to produce motions in line with the state-of-the-art results with a task-agnostic design and to alleviate the fundamental issues of the baseline methods. More results are available in the supplementary material. _MLP with single past pose_: We trained an MLP to synthesize motion using a single past pose with trajectory information. The experimental results show that the overall motion produced was quite stiff, especially when changing direction, and could exhibit weird artifacts such as floating, as shown in the ceiling scenario in Figure 3. As expected, motion prediction from a vague control signal can be difficult without auxiliary information, and various possible predictions can exist, which leads to an average pose. _MLP with multiple past poses_: Similar to the first baseline, except that additional pose information from multiple past frames was considered for the MLP. The results show that the generated motion was improved, as the past frames provided the auxiliary information implicitly. Nonetheless, the synthesized motion suffers from the slow motion transition issue. This problem became obvious when the character was traversing rocky terrain, as shown in Figure 3. While it was able to adapt the character motion correctly to the new geometry, the motion was performed as smoothly as the regular locomotion on a flat terrain. The reason could be that using several past poses in a simple manner is hampered by the large redundant variations in the past. _RNN_: An LSTM architecture [11] was adopted for this dataset without augmentation. The past memory enables the LSTM to predict motion of higher quality. Nevertheless, it still suffered from the slow motion transition issue. As shown in Figure 3, the character could be floating when transiting between motions and was unable to jump over the obstacles timely and distinctly. The reason is that the hidden memory prevented the RNN model from quickly reaching the transitional states of a jumping motion. _Phase-functioned neural network_: Rather than relying on past poses to constrain motion prediction, PFNN [12] utilised the foot contact phase for motion disambiguation.
The qualitative results of our MCS-T were very close to those of PFNN in a wide range of scenarios (see supplementary materials). Our method does not require task-specific auxiliary information and relies only on past motion data, which is generally available. Moreover, to quantitatively evaluate whether the produced motion is responsive to control signals and transits to different motions timely, the average joint angle update per second was compared as a metric for motion dynamics. A higher joint angle update represents more dynamic motion and faster transitions between frames. The results are listed in Table 1, which indicate that MCS-T is able to produce much more agile motion than the task-agnostic RNN, while being comparable to the task-specific PFNN [12] method. ### Ablation Study An ablation study was conducted to demonstrate the effectiveness of the multi-scale skeleton representation and the control signal-aware mechanism in our encoder and decoder, respectively. The quantitative evaluation is listed in Table 1. _Multi-scale skeletons with an extra middle scale_: In addition to the two skeleton scales, we experimented with one extra scale, called the middle scale. It aggregated the joints into a level between the two existing levels. However, the three-scale scheme did not contribute to the overall performance and produced stiff motions, especially under scenarios with quick and frequent transitions such as the obstacles and ceiling scenes. The potential reason could be that the increased model complexity deteriorates the capability of motion prediction and produces sub-optimal solutions. _Multi-scale skeletons_: Without multi-scale skeletons, the motion dynamics dropped significantly, especially in the obstacles scene. Jumping motion became less responsive and sometimes the dynamics were too weak to observe. Thus, incorporating coarse-level skeletons helped exploit the motion patterns during a transition from a global perspective. _Control signal-aware decoder_: Besides the motion prediction network, our decoder is driven by the control signals as well, which are adopted as the queries of the decoding attentions. Replacing this with a conventional self-attention mechanism led to less motion dynamics. The most obvious case is the ceiling scenario, where the motion appeared jittery and unstable during the transition between walking and crouching. Figure 3: Qualitative results of our MCS-T and other baselines under four scenarios: flat, rocky, obstacles and ceiling. The left side shows the motions synthesized by MCS-T and the right side provides examples that demonstrate the baseline limitations. ### Multi-scale and Control Signal-aware Motion Attentions Our experiments show that MCS-T is able to synthesize motions of the highest quality and alleviate the slow transition issue. This lies in the attention mechanisms of MCS-T, which adaptively address the sequential motion context. The attention map of the decoder's first layer is visualised in Figure 4 to show how MCS-T attends in different cases. Figure 4 (a) is for a frame of motion transition from a jumping state to a jogging state. Most attention heads focused on the fine-level skeletons, especially in more recent frames, as the further past frames were not very relevant during this motion transition.
### Ablation Study

An ablation study was conducted to demonstrate the effectiveness of the multi-scale skeleton representation and the control signal-aware mechanism in our encoder and decoder, respectively. The quantitative evaluation is listed in Table 1. _Multi-scale skeletons with an extra middle scale_: In addition to the two skeleton scales, we experimented with an extra scale, referred to as the middle scale, which aggregated the joints into a level between the two existing ones. However, the three-scale scheme did not improve the overall performance and produced stiff motions, especially in scenarios with quick and frequent transitions such as the obstacles and ceiling scenes. A potential reason is that the increased model complexity deteriorates the motion prediction capability and yields sub-optimal solutions. _Multi-scale skeletons_: Without the multi-scale skeletons, the motion dynamics dropped significantly, especially in the obstacles scene. The jumping motion became less responsive and its dynamics were sometimes too weak to observe. Thus, incorporating coarse-level skeletons helped exploit the motion patterns during a transition from a global perspective. _Control signal-aware decoder_: Besides driving the motion prediction network, the control signals also drive our decoder, where they are adopted as the queries of the decoding attentions. Replacing this design with a conventional self-attention mechanism led to weaker motion dynamics. The most obvious case is the ceiling scenario, where the motion appeared jittery and unstable during the transition between walking and crouching.

Figure 3: Qualitative results of our MCS-T and other baselines under four scenarios: flat, rocky, obstacles and ceiling. The left side shows the motions synthesized by MCS-T and the right side provides examples that demonstrate the baseline limitations.

### Multi-scale and Control Signal-aware Motion Attentions

Our experiments show that MCS-T is able to synthesize motions of the highest quality and to alleviate the slow transition issue. This lies in the attention mechanisms of MCS-T, which adaptively address the sequential motion context. The attention map of the decoder's first layer is visualised in Figure 4 to show how MCS-T performs attention in different cases. Figure 4 (a) corresponds to a frame of motion transition from a jumping state to a jogging state. Most attention heads focused on the fine-level skeletons, especially in the more recent frames, as the earlier past frames were not very relevant during this motion transition. Additionally, two attention heads attended evenly to the coarse-scale motion, learning global motion patterns for faster motion transitions. Figure 4 (b) corresponds to a non-transitional case where the character remains in the jogging state; here the attentions are evenly distributed over all positions of the past poses, with more attention heads focusing on the coarse level. The reason could be that the coarse motion sequence provides sufficient spatio-temporal patterns for predicting this kind of motion with strong recurring patterns.
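A minimal sketch of the control signal-aware decoding attention is given below: the embedded control signal acts as the query, while the encoded multi-scale motion sequence provides the keys and values, so the returned attention weights correspond to the maps visualised in Figure 4. Only the query/key-value roles and the 6 attention heads follow the description above; the embedding dimension, the single-layer structure and all tensor shapes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ControlAwareDecoderLayer(nn.Module):
    """Cross-attention driven by the control signal (query) over the
    encoded past-motion sequence (keys/values)."""
    def __init__(self, d_model=192, n_heads=6):  # 192 chosen to be divisible by 6
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, control_emb, motion_memory):
        # control_emb:   (batch, 1, d_model)   embedded control signal
        # motion_memory: (batch, seq, d_model) encoded past poses, both scales
        out, attn = self.cross_attn(control_emb, motion_memory, motion_memory)
        return self.norm(control_emb + out), attn  # attn ~ maps in Figure 4
```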
### Limitations & Future Work

There are two major limitations of the proposed MCS-T. First, MCS-T may not always synthesize the beam walking motion well. For example, as shown in Figure 5, informed by the special terrain geometry, the character performs a hand balancing motion; however, MCS-T did not always trigger this motion. This could be due to the small percentage of beam walking motion in the training data (\(\sim 2\%\)), and imbalanced-learning strategies should be considered. Second, since MCS-T exploits the past motion history, error accumulation can occasionally occur: the character can get stuck in unnatural poses for a very short period, but escapes once new control signals are provided. Robust noise-based training could be conducted to alleviate such error accumulation. In our future work, besides addressing these limitations, we will investigate an adaptive strategy for selecting past frames, for example by exploring neural architecture search (NAS) [22] and token evaluation strategies.

## 5 Conclusion

In this paper, we present MCS-T, a transformer-based task-agnostic character motion control method. With a multi-scale graph representation, it aims to produce responsive and dynamic motions without explicitly using auxiliary information. Specifically, MCS-T adopts an encoder-decoder design, where the encoder formulates the spatio-temporal motion patterns of past poses from multi-scale perspectives and the decoder takes a control signal into account for predicting the next pose. Our experiments on a public dataset demonstrate that MCS-T produces results comparable to those of state-of-the-art methods which explicitly use auxiliary information. We also investigate the limitations of our method for future improvement.

\begin{table} \begin{tabular}{l|c|c c c c c c c c c c c c|c c c} \hline & & \multicolumn{3}{c}{Flat} & \multicolumn{3}{c}{Rocky} & \multicolumn{3}{c}{Obstacles} & \multicolumn{3}{c|}{Ceiling} & \multicolumn{3}{c}{Average} \\ Method & Phase & Full & Arm & Leg & Full & Arm & Leg & Full & Arm & Leg & Full & Arm & Leg & Full & Arm & Leg \\ \hline PFNN & ✓ & 106.5 & 100.6 & 135.9 & **128.7** & 145.0 & **156.0** & 109.1 & 110.9 & 143.9 & 139.9 & 130.9 & **187.0** & 121.1 & 121.9 & 155.7 \\ MLP w/ Single Pose & ✗ & 71.5 & 65.4 & 90.2 & 86.5 & 90.3 & 110.5 & 78.7 & 71.2 & 108.9 & 109.7 & 103.7 & 142.0 & 86.6 & 82.7 & 112.9 \\ MLP w/ Multiple Poses & ✗ & 94.0 & 88.1 & 122.7 & 95.2 & 91.3 & 131.0 & 85.7 & 76.3 & 122.6 & 115.1 & 100.8 & 161.8 & 97.5 & 89.1 & 134.5 \\ RNN & ✗ & 83.3 & 78.4 & 107.1 & 83.2 & 76.4 & 115.5 & 85.5 & 80.4 & 122.3 & 123.7 & 107.9 & 174.2 & 93.9 & 85.8 & 129.8 \\ \hline MCS-T (ours) & ✗ & **110.9** & **107.5** & **142.8** & 126.7 & **149.0** & 151.6 & **116.1** & **121.4** & **150.7** & **140.0** & 137.0 & 184.6 & **123.4** & **128.7** & **157.4** \\ + Middle scale & ✗ & 105.5 & 101.6 & 135.4 & 109.2 & 117.2 & 140.8 & 104.6 & 108.6 & 140.1 & 122.0 & 111.5 & 164.5 & 110.3 & 109.7 & 145.2 \\ - Multi-scale skeleton & ✗ & 96.3 & 87.6 & 127.1 & 112.6 & 123.8 & 143.4 & 91.9 & 89.4 & 127.2 & 124.1 & 114.1 & 167.1 & 106.2 & 103.7 & 141.2 \\ - Control signal-aware attention & ✗ & 94.6 & 87.7 & 125.2 & 105.6 & 109.1 & 140.2 & 90.7 & 85.2 & 129.3 & 137.5 & **145.8** & 173.7 & 107.1 & 107.0 & 142.1 \\ \hline \end{tabular} \end{table}

Table 1: Quantitative comparison in terms of the average joint angle update per second (degree/s) \(\uparrow\) for different methods including MCS-T under four motion scenarios: Flat, Rocky, Obstacles, and Ceiling. The angle updates are further divided into the full body with all joints, the arm joints and the leg joints. The highest value is in bold and the second highest value is underlined.

Figure 4: Visualization of the attention map of MCS-T, where the first layer of the decoder is shown, including (a) transitional and (b) non-transitional scenarios. The x-axis represents the past motion indices and the y-axis indicates the 6 attention heads.

Figure 5: Illustration of a limitation of MCS-T, where the hand balancing motion is not well synthesized when the character is walking on a beam.

## Acknowledgments

This study was partially supported by Australian Research Council (ARC) grant #DP210102674.
2306.03880
Dynamical analysis in Chameleon dark energy
We present a detailed analysis of the phase-space of the field equations in scalar field cosmology with a chameleon coupling in a spatially flat Friedmann-Lemaître-Robertson-Walker spacetime. For the matter source we assume an ideal gas with a constant equation of state parameter, while for the scalar field potential and the coupling function of the chameleon mechanism we consider four different sets which provide four different models. We adopt the $H$-normalization approach and write the field equations in terms of dimensionless variables. The asymptotic solutions are determined, from which we find that the theory can describe the main eras of cosmological history and evolution. Future attractors which describe acceleration exist; we also find past accelerated solutions related to the inflationary era, while the radiation epoch and the matter dominated era are likewise recovered by the dynamics. We conclude that the chameleon dark energy model can be used as a unified model for the elements which contribute to the dark sector of the universe.
Andronikos Paliathanasis
2023-06-06T17:31:04Z
http://arxiv.org/abs/2306.03880v1
# Dynamical analysis in Chameleon dark energy

###### Abstract

We present a detailed analysis of the phase-space of the field equations in scalar field cosmology with a chameleon coupling in a spatially flat Friedmann-Lemaître-Robertson-Walker spacetime. For the matter source we assume an ideal gas with a constant equation of state parameter, while for the scalar field potential and the coupling function of the chameleon mechanism we consider four different sets which provide four different models. We adopt the \(H\)-normalization approach and write the field equations in terms of dimensionless variables. The asymptotic solutions are determined, from which we find that the theory can describe the main eras of cosmological history and evolution. Future attractors which describe acceleration exist; we also find past accelerated solutions related to the inflationary era, while the radiation epoch and the matter dominated era are likewise recovered by the dynamics. We conclude that the chameleon dark energy model can be used as a unified model for the elements which contribute to the dark sector of the universe.

Chameleon gravity; Scalar field; Cosmology; Phase-space analysis; Dynamical analysis

## 1 Introduction

Recent cosmological data [1; 2; 3; 4; 5] indicate that the universe is in an acceleration phase, and this late-time acceleration is attributed to dark energy [6]. Dark energy is a fluid source with negative pressure, such that it provides anti-gravitating effects. The physical origin and the nature of dark energy are still unknown; however, there are various proposals in the literature, see for instance [7; 8; 9; 10] and references therein.

Scalar field models are of special interest in gravitational theories, where they play an important role in the explanation of cosmological observations. Scalar fields introduce new degrees of freedom into the gravitational field equations; these new dynamical variables provide a mechanism which can drive the evolution of the physical parameters as indicated by the cosmological data. The simplest scalar field model is the quintessence theory [11]. In this theory the equation of state parameter has the value \(-1\) as a lower bound, which is the limit of the cosmological constant, and \(+1\) as an upper bound, which describes a stiff fluid. The quintessence scalar field satisfies the null, weak and dominant energy conditions. However, because the pressure component can be negative and provide acceleration, the strong energy condition can be violated. The dynamics of the quintessence cosmological model with an exponential potential were investigated in detail in [12]. The dynamical analysis shows that the cosmological model admits a stiff fluid solution, a scaling solution which can describe acceleration, a matter dominated solution and a tracking solution in which the scalar field has the same physical behaviour as the matter source. In the absence of a matter source, the stiff fluid and the scaling solutions exist for the exponential potential. On the other hand, for a power-law potential function the tracking solution does not exist, but a de Sitter solution appears where the scalar field reaches the limit of the cosmological constant.
A scalar field similar to that of the quintessence theory has been used to describe the inflaton [14], responsible for inflation, which was introduced to solve the horizon and isotropization problems [13]. Last but not least, scalar fields can account for the higher-order degrees of freedom provided by modified theories of gravity; for instance, the quadratic inflationary model belongs to \(f\left(R\right)\)-gravity, which, after the application of a Lagrange multiplier and a conformal transformation, is equivalent to a quintessence scalar field theory [15]. Because of the importance of the quintessence theory, there is a plethora of studies in the literature which investigate different functional forms of the scalar field potential and derive new analytic and exact solutions [16; 17; 18; 19; 20; 21; 22; 23; 24; 25].

Quintessence is a simple dark energy model which, however, cannot solve various puzzles such as the cosmological tensions [26]. Thus, various scalar field models have been proposed in the literature, such as phantom scalar fields [27; 28], Brans-Dicke and scalar-tensor theories [29; 30], Galileons [31; 32], k-essence [33], tachyons [34; 35], multi-scalar field models [36; 37; 38] and others.

A geometric mechanism to introduce a minimally coupled scalar field into the Einstein-Hilbert Action Integral of General Relativity is the Weyl theory, and specifically the Weyl Integrable Spacetime (WIS) [39]. This is a torsion-free theory embedded with two conformally related metrics, whose connection preserves the conformal structure. In WIS the connection differs from the Levi-Civita connection by a scalar field [40; 41; 42]. The novelty of WIS is that, because of the conformal structure, a coupling between the scalar field and the matter source appears in the presence of matter. Hence, there exists an interaction between the matter components of the gravitational theory, and the mass of the scalar field depends upon the energy density of the matter source [43; 44]. Inspired by this property, a chameleon mechanism which generalizes the coupling between the scalar field and the matter source was proposed in [45; 46]. In chameleon theory the WIS is only the particular case for which the coupling function is exponential. The WIS limit was investigated recently in [47], while the case in which the background geometry has nonzero spatial curvature was considered in [48]. A power-law coupling was investigated in [49]; it was found that this cosmological model with a power-law potential behaves very similarly to the \(\Lambda\)CDM theory, but at low redshifts it can enhance the growth of the linear perturbations. A tachyonic-like chameleon model was investigated in [50], while some other generalizations can be found in [51; 52; 53; 54].

In this work we investigate the dynamical evolution and the asymptotic solutions of chameleon cosmology in the context of a homogeneous and isotropic spatially flat Friedmann-Lemaître-Robertson-Walker spacetime. For the matter source we consider an ideal gas, and we assume various sets for the free functions of the theory, namely the scalar field potential and the coupling function. We make use of the \(H\)-normalization approach [12], determine the stationary points of the field equations and calculate their stability properties. We wish to answer the question of whether the chameleon dark energy model can explain the main eras of cosmological history.
Furthermore, we draw conclusions about the initial value problem in this cosmological theory. The mathematical methods that we apply in this work have been successful in the classification of various cosmological models [55; 56; 57; 58; 59].

The structure of the paper is as follows. In Section 2 we introduce the cosmological theory of our consideration, which is that of General Relativity with a scalar field and a chameleon mechanism, that is, the scalar field is coupled to a perfect fluid. Furthermore, we assume that the universe is described by the spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) geometry. The field equations are of second order, with dynamical variables the scale factor, the scalar field and the energy density of the perfect fluid, which is assumed to be an ideal gas with a constant equation of state parameter. Over and above, the dynamical evolution of the physical variables depends upon two unknown functions, the scalar field potential \(V\left(\phi\right)\) and the coupling function \(f\left(\phi\right)\), responsible for the chameleon mechanism. In order to investigate the dynamics of the cosmological field equations we consider new dimensionless variables. Thus, in Section 3 we write the field equations in the equivalent form of an algebraic-differential system. We consider four different sets for the potential and the coupling functions and we determine the stationary points and their stability properties, as well as the physical properties of the asymptotic solutions at the stationary points. In Section 4 we study the case for which \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\). The cosmological constant term is introduced in Section 5, in which we select \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\). The dynamical analysis of the field equations for the functions \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}+\Lambda\) is performed in Section 6. For the fourth model of our analysis we select \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and \(f\left(\phi\right)=f_{0}\left(V_{0}e^{\lambda_{0}\phi}+\Lambda\right)^{p}\), \(p\neq 0\), and its phase-space analysis is presented in Section 7. Finally, in Section 8 we summarize our results and draw our conclusions.

## 2 Chameleon dark energy

The gravitational Action Integral of chameleon dark energy is [45]

\[S=\int\sqrt{-g}d^{4}x\left(R-\frac{1}{2}g^{\mu\kappa}\nabla_{\mu}\phi\left(x^{\nu}\right)\nabla_{\kappa}\phi\left(x^{\nu}\right)-V\left(\phi\left(x^{\nu}\right)\right)-f\left(\phi\right)L_{m}\left(x^{\nu}\right)\right), \tag{1}\]

where \(R\) is the Ricci scalar of the four-dimensional background Riemannian physical space with metric tensor \(g_{\mu\nu}\); \(\phi\left(x^{\mu}\right)\) is a scalar field with potential function \(V\left(\phi\left(x^{\mu}\right)\right)\), and \(L_{m}\left(x^{\mu}\right)\) is the Lagrangian function of the matter source. For the latter we assume an ideal gas with energy density \(\rho_{m}\), pressure component \(p_{m}\) and equation of state parameter \(w_{m}\), that is, \(p_{m}=w_{m}\rho_{m}\). Hence, the Lagrangian for the matter source is expressed as \(L_{m}\left(x^{\mu}\right)\simeq\rho_{m}\) [60]. The coupling function \(f\left(\phi\right)\) between the scalar field and the matter source describes the chameleon mechanism.
The gravitational field equations are

\[G_{\mu\nu}=T_{\mu\nu}^{eff}, \tag{2}\]

where \(G_{\mu\nu}\) is the Einstein tensor and \(T_{\mu\nu}^{eff}\) is the effective energy-momentum tensor, \(T_{\mu\nu}^{eff}=T_{\mu\nu}^{\phi}+f\left(\phi\right)T_{\mu\nu}^{m}\). The latter are defined as

\[T_{\mu\nu}^{\phi}=\nabla_{\mu}\phi\nabla_{\nu}\phi-g_{\mu\nu}\left(\frac{1}{2}g^{\kappa\zeta}\nabla_{\kappa}\phi\nabla_{\zeta}\phi+V\left(\phi\right)\right), \tag{3}\]

and

\[T_{\mu\nu}^{m}=\left(\rho_{m}+p_{m}\right)u_{\mu}u_{\nu}+p_{m}g_{\mu\nu}, \tag{4}\]

where \(u_{\mu}\) is the co-moving observer with \(g_{\mu\nu}u^{\mu}u^{\nu}=-1\). The equation of motion for the matter source is \(\nabla_{\nu}T^{eff\ \mu\nu}=0\), i.e.

\[\nabla_{\nu}\left(T^{\phi\ \mu\nu}+f\left(\phi\right)T^{m\ \mu\nu}\right)=0, \tag{5}\]

or

\[-g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\phi+V_{,\phi}+\nabla_{\mu}\left(f\left(\phi\right)\rho_{m}\right)u^{\mu}+f\left(\phi\right)\left(\rho_{m}+p_{m}\right)\nabla_{\mu}u^{\mu}=0. \tag{6}\]

Equivalently, we can write the following equations [45]

\[-g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\phi+V_{,\phi}+\left(1+\alpha\right)\rho_{m}\nabla_{\mu}f\left(\phi\right)u^{\mu}=0, \tag{7}\]

\[\nabla_{\mu}\left(\rho_{m}\right)u^{\mu}+\left(\rho_{m}+p_{m}\right)\nabla_{\mu}u^{\mu}-\alpha\rho_{m}\nabla_{\mu}\ln\left(f\left(\phi\right)\right)u^{\mu}=0. \tag{8}\]

The parameter \(\alpha\) is arbitrary, different from zero and minus one; it has been introduced only in order to write equation (6) as a system of two equations. For the spatially flat FLRW line element

\[ds^{2}=-dt^{2}+a^{2}\left(t\right)\left(dx^{2}+dy^{2}+dz^{2}\right), \tag{9}\]

in which \(a\left(t\right)\) is the scale factor and \(H=\frac{\dot{a}}{a}\) is the Hubble function, with \(\dot{a}=\frac{da}{dt}\), the field equations of the Action Integral (1) are written as follows [45]

\[3H^{2}=\frac{1}{2}\dot{\phi}^{2}+V\left(\phi\right)+\rho_{m}f\left(\phi\right) \tag{10}\]

and

\[2\dot{H}+3H^{2}=-\left(\frac{1}{2}\dot{\phi}^{2}-V\left(\phi\right)+f\left(\phi\right)p_{m}\right), \tag{11}\]

with the equations of motion

\[\ddot{\phi}+3H\dot{\phi}+V_{,\phi}+\left(1+\alpha\right)f_{,\phi}\rho_{m}=0, \tag{12}\]

and

\[\dot{\rho}_{m}+3H\left(\rho_{m}+p_{m}\right)-\alpha\dot{\phi}\left(\ln f\right)_{,\phi}\rho_{m}=0. \tag{13}\]

We have assumed that the scalar field and the matter source inherit the symmetries of the background space, that is, \(\phi=\phi\left(t\right)\) and \(\rho_{m}=\rho_{m}\left(t\right)\). Moreover, for the matter source we consider a constant equation of state parameter \(w_{m}=\frac{p_{m}}{\rho_{m}}\) with \(0\leq w_{m}<1\). In the limit \(w_{m}=0\) the fluid is pressureless, while for \(w_{m}=1\) the fluid is a stiff fluid which can be described by a massless scalar field. From the modified Klein-Gordon equation (12) we observe that the mass of the scalar field depends upon the coupling function \(f\left(\phi\right)\) and upon the energy density of the matter source.
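To make the structure of the system (10)-(13) concrete, a minimal numerical sketch is given below: it integrates the evolution equations for \(\phi\), \(H\) and \(\rho_{m}\) for exponential choices of \(V\) and \(f\), with the initial value of \(H\) fixed by the Friedmann constraint (10). All parameter values and initial data here are illustrative assumptions, not values used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative choices: V = V0*exp(l0*phi), f = f0*exp(s0*phi)
V0, l0, f0, s0, wm, alpha = 1.0, -1.0, 1.0, 0.5, 0.0, 0.1

def rhs(t, u):
    phi, dphi, H, rho = u
    V, dV = V0 * np.exp(l0 * phi), l0 * V0 * np.exp(l0 * phi)
    f, df = f0 * np.exp(s0 * phi), s0 * f0 * np.exp(s0 * phi)
    ddphi = -3 * H * dphi - dV - (1 + alpha) * df * rho             # Eq. (12)
    dH = (-3 * H**2 - 0.5 * dphi**2 + V - f * wm * rho) / 2         # Eq. (11)
    drho = -3 * H * (1 + wm) * rho + alpha * dphi * (df / f) * rho  # Eq. (13)
    return [dphi, ddphi, dH, drho]

# Initial data chosen to satisfy the Friedmann constraint (10):
phi0, dphi0, rho0 = 0.0, 0.1, 0.3
H0 = np.sqrt((0.5 * dphi0**2 + V0 * np.exp(l0 * phi0)
              + rho0 * f0 * np.exp(s0 * phi0)) / 3)
sol = solve_ivp(rhs, (0, 50), [phi0, dphi0, H0, rho0], dense_output=True)
```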
## III Phase-space analysis

We proceed to the analysis of the dynamics of the field equations (10)-(13). In particular, we investigate the stationary points of the phase-space and study their stability properties. We work in the \(H\)-normalization approach [12] and define the dimensionless variables and parameters

\[\tau=\ln a,\ x=\frac{\dot{\phi}}{\sqrt{6}H},\ y=\sqrt{\frac{V\left(\phi\right)}{3H^{2}}},\ \Omega_{m}=\frac{\rho_{m}f\left(\phi\right)}{3H^{2}}, \tag{14}\]

\[\lambda=\frac{V_{,\phi}}{V},\ \sigma=\frac{f_{,\phi}}{f},\ \Gamma\left(\lambda\right)=\frac{V_{,\phi\phi}V}{\left(V_{,\phi}\right)^{2}},\ \Delta\left(\sigma\right)=\frac{f_{,\phi\phi}f}{\left(f_{,\phi}\right)^{2}}. \tag{15}\]

In the new variables the field equations are expressed as the following algebraic-differential system of first-order differential equations

\[\frac{dx}{d\tau}=\frac{1}{2}\left(3x\left(x^{2}-y^{2}-1+w_{m}\Omega_{m}\right)-\sqrt{6}\left(\lambda y^{2}+\left(1+\alpha\right)\sigma\Omega_{m}\right)\right), \tag{16}\]

\[\frac{dy}{d\tau}=\frac{1}{2}y\left(3+\sqrt{6}\lambda x+3\left(x^{2}-y^{2}+w_{m}\Omega_{m}\right)\right), \tag{17}\]

\[\frac{d\Omega_{m}}{d\tau}=\Omega_{m}\left(\sqrt{6}\left(1+\alpha\right)\sigma x+3\left(x^{2}-y^{2}+w_{m}\left(\Omega_{m}-1\right)\right)\right), \tag{18}\]

\[\frac{d\lambda}{d\tau}=\sqrt{6}x\lambda^{2}\left(\Gamma\left(\lambda\right)-1\right), \tag{19}\]

\[\frac{d\sigma}{d\tau}=\sqrt{6}x\sigma^{2}\left(\Delta\left(\sigma\right)-1\right), \tag{20}\]

and from (10) we derive the algebraic constraint

\[1-x^{2}-y^{2}-\Omega_{m}=0. \tag{21}\]

We observe that not all the variables are independent; indeed, by definition \(\lambda=\lambda\left(\phi\right)\) and \(\sigma=\sigma\left(\phi\right)\), which means that for arbitrary functions \(V\left(\phi\right)\) and \(f\left(\phi\right)\) it follows that \(\sigma=\sigma\left(\lambda\right)\) or \(\lambda=\lambda\left(\sigma\right)\). Moreover, with the use of the constraint equation (21), the dimension of the dynamical system (16)-(20) is at most three. Furthermore, we assume a positive coupling function, \(f\left(\phi\right)\geq 0\), from which it follows that \(\Omega_{m}\geq 0\). Thus, from the constraint equation (21), the parameters \(x\) and \(y\) are bounded on the two-dimensional unit disk, i.e. \(x^{2}+y^{2}\leq 1\), that is, \(\left|x\right|\leq 1\) and \(0\leq y\leq 1\).

In the new dimensionless variables the equation of state parameter for the effective fluid, \(w_{eff}=-1-\frac{2}{3}\frac{\dot{H}}{H^{2}}\), becomes

\[w_{eff}\left(x,y,\Omega_{m};w_{m}\right)=x^{2}-y^{2}+w_{m}\Omega_{m}. \tag{22}\]

Each stationary point \(P\) of the dynamical system (16)-(20), where \(\mathbf{x}\left(P\right)=\left(x,y,\Omega_{m},\lambda,\sigma\right)^{T}\), describes an asymptotic solution for the cosmological model with effective equation of state parameter \(w_{eff}\left(P\right)\) and scale factor \(a\left(t\right)=a_{0}t^{\frac{2}{3\left(1+w_{eff}\left(P\right)\right)}}\) for \(w_{eff}\left(P\right)\neq-1\), or \(a\left(t\right)=a_{0}e^{H_{0}t}\) for \(w_{eff}\left(P\right)=-1\). At each point \(P\) we determine the stability properties of the asymptotic solution by studying the eigenvalues of the linearised system around the stationary point. Hence we can constrain the free parameters of the model and the initial conditions in order to reconstruct the cosmological history. Below we define four sets for the scalar field potential \(V\left(\phi\right)\) and the coupling function \(f\left(\phi\right)\).
(A) \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\);
(B) \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\);
(C) \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}+\Lambda\);
(D) \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and \(f\left(\phi\right)=f_{0}\left(V_{0}e^{\lambda_{0}\phi}+\Lambda\right)^{p}\).

The exponential potential has been widely studied in the literature for the description of the early and late-time acceleration phases of the universe [61; 62], while in [63; 64; 65] the cosmological constant term has been introduced into the potential function. As far as the coupling is concerned, the exponential coupling is related to the WIS [39].

## IV Model A: \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\)

For the first model we consider the exponential scalar field potential \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and the exponential coupling function \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\). For these definitions it follows that the parameters \(\lambda\) and \(\sigma\) are always constant, \(\lambda=\lambda_{0}\) and \(\sigma=\sigma_{0}\). Moreover, with the application of the constraint equation (21), the dimension of the dynamical system reduces to two. We replace \(\Omega_{m}\) from (21) and obtain the two first-order differential equations

\[\frac{dx}{d\tau}=\frac{1}{2}\left(3x\left(x^{2}-y^{2}-1+w_{m}\left(1-x^{2}-y^{2}\right)\right)-\sqrt{6}\left(\lambda y^{2}+\left(1+\alpha\right)\sigma\left(1-x^{2}-y^{2}\right)\right)\right) \tag{23}\]

and

\[\frac{dy}{d\tau}=\frac{1}{2}y\left(3+\sqrt{6}\lambda x+3\left(x^{2}-y^{2}+w_{m}\left(1-x^{2}-y^{2}\right)\right)\right). \tag{24}\]

The stationary points \(A=\left(x\left(A\right),y\left(A\right)\right)\) of the dynamical system (23), (24) in the finite regime are presented below.

\[A_{1}^{\pm}=\left(\pm 1,0\right),\]

which describe stiff fluid solutions in which the kinetic term of the scalar field dominates the cosmological fluid, that is, \(\Omega_{m}\left(A_{1}^{\pm}\right)=0\) and \(w_{eff}\left(A_{1}^{\pm}\right)=1\). The eigenvalues of the linearised system are derived to be \(e_{1}\left(A_{1}^{\pm}\right)=\frac{1}{2}\left(6\pm\sqrt{6}\lambda\right)\) and \(e_{2}\left(A_{1}^{\pm}\right)=3\left(1-w_{m}\right)\pm\sqrt{6}\bar{\sigma}\), with \(\bar{\sigma}=\left(1+\alpha\right)\sigma\). Hence, point \(A_{1}^{+}\) is an attractor for \(\lambda<-\sqrt{6}\) and \(\sqrt{6}\bar{\sigma}<-3\left(1-w_{m}\right)\), while point \(A_{1}^{-}\) is an attractor for \(\lambda>\sqrt{6}\) and \(\sqrt{6}\bar{\sigma}>3\left(1-w_{m}\right)\).

\[A_{2}=\left(-\frac{\lambda}{\sqrt{6}},\sqrt{1-\frac{\lambda^{2}}{6}}\right)\]

corresponds to a universe with \(\Omega_{m}\left(A_{2}\right)=0\) and \(w_{eff}\left(A_{2}\right)=-1+\frac{\lambda^{2}}{3}\). The point is real and physically acceptable for \(\lambda^{2}\leq 6\). When \(\lambda=0\), that is \(V\left(\phi\right)=V_{0}\), the de Sitter universe is recovered. The eigenvalues of the linearised system are \(e_{1}\left(A_{2}\right)=\frac{\lambda^{2}-6}{2}\) and \(e_{2}\left(A_{2}\right)=-3\left(1+w_{m}\right)+\lambda\left(\lambda-\bar{\sigma}\right)\), from which we conclude that the point is an attractor for \(0\leq\lambda^{2}<6\) and \(\lambda\left(\lambda-\bar{\sigma}\right)<3\left(1+w_{m}\right)\).
\[A_{3}=\left(-\sqrt{\frac{2}{3}}\frac{1+\alpha}{1-w_{m}}\sigma,0\right)\]

describes a universe with \(\Omega_{m}\left(A_{3}\right)=1-\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)^{2}}\) and \(w_{eff}\left(A_{3}\right)=\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)}+w_{m}\). The point is real and physically acceptable when \(\left|\sqrt{\frac{2}{3}}\frac{1+\alpha}{1-w_{m}}\sigma\right|\leq 1\). The stationary point describes an accelerated universe for \(2\bar{\sigma}^{2}<1-3w_{m}\). We observe that this stationary point cannot describe the de Sitter universe. The eigenvalues of the linearised system are derived to be \(e_{1}\left(A_{3}\right)=-\frac{3}{2}\left(1-w_{m}\right)+\frac{\bar{\sigma}^{2}}{1-w_{m}}\) and \(e_{2}\left(A_{3}\right)=\frac{3}{2}\left(1+w_{m}\right)+\frac{\bar{\sigma}\left(\bar{\sigma}-\lambda\right)}{1-w_{m}}\). Hence, the stationary point is an attractor in the region presented in Fig. 1.

\[A_{4}=\left(-\sqrt{\frac{3}{2}}\frac{1+w_{m}}{\lambda-\bar{\sigma}},\frac{\sqrt{\frac{3\left(w_{m}^{2}-1\right)}{\bar{\sigma}-\lambda}+2\bar{\sigma}}}{\sqrt{2\lambda-2\bar{\sigma}}}\right).\]

It has physical parameters \(\Omega_{m}\left(A_{4}\right)=\frac{\lambda\left(\bar{\sigma}-\lambda\right)-3\left(1+w_{m}\right)}{\left(\lambda-\bar{\sigma}\right)^{2}}\) and \(w_{eff}\left(A_{4}\right)=\frac{\bar{\sigma}+w_{m}\lambda}{\lambda-\bar{\sigma}}\). The eigenvalues of the linearised system are \(e_{1,2}\left(A_{4}\right)=\frac{1}{4}\left(-3\left(1-w_{m}\right)+\frac{3\bar{\sigma}\left(1+w_{m}\right)}{\lambda-\bar{\sigma}}\pm\frac{\sqrt{X_{1}+X_{2}}}{\lambda-\bar{\sigma}}\right)\) with \(X_{1}=12\bar{\sigma}\left(\lambda\left(4\lambda^{2}-9-3w_{m}\left(3+2w_{m}\right)\right)+\bar{\sigma}\left(15+12w_{m}-8\lambda^{2}+4\lambda\bar{\sigma}\right)\right)\) and \(X_{2}=9\left(w_{m}-1\right)\left(\left(7+9w_{m}\right)\lambda^{2}-24\left(1+w_{m}^{2}\right)\right)\). In Fig. 2 we present the regions in the space \(\left\{\lambda,\bar{\sigma}\right\}\) in which the stationary point \(A_{4}\) is physically acceptable and in which it is an attractor. Over and above, in Fig. 3 we present the phase-space portrait of the dynamical system (23), (24) for different values of the free parameters, chosen such that each of the points appears as an attractor in a different plot.

Figure 1: Region plot in the space of the free parameters \(\lambda\) and \(\bar{\sigma}\) for \(w_{m}=0\) (left fig.) and \(w_{m}=\frac{1}{3}\) (right fig.). The shaded areas correspond to the values of the parameters \(\left(\lambda,\bar{\sigma}\right)\) for which the stationary point \(A_{3}\) is an attractor.

The qualitative evolution of the effective equation of state parameter \(w_{eff}\) is presented in Fig. 4, while the evolution of \(\Omega_{m}\) is presented in Fig. 5. From the latter figures we observe that there exists a future attractor which describes an accelerated universe. We considered initial conditions for which the equation of state parameter of the effective fluid is that of a radiation fluid; from the radiation epoch the universe passes to a solution dominated by the matter source and ends at the accelerated solution. In Table 1 we summarize the stationary points and their physical properties for Model A.
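To illustrate this behaviour numerically, a minimal sketch is given below; it integrates the two-dimensional system (23)-(24) for Model A and tracks \(w_{eff}\) from Eq. (22). The parameter values and the initial condition are illustrative assumptions; for the values chosen here the attractor conditions quoted above select the scaling point \(A_{2}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, sbar, wm = 1.0, 0.2, 0.0   # lambda, (1+alpha)*sigma, w_m (illustrative)

def rhs(tau, u):
    x, y = u
    Om = 1.0 - x**2 - y**2      # constraint (21)
    dx = 0.5 * (3 * x * (x**2 - y**2 - 1 + wm * Om)
                - np.sqrt(6) * (lam * y**2 + sbar * Om))   # Eq. (23)
    dy = 0.5 * y * (3 + np.sqrt(6) * lam * x
                    + 3 * (x**2 - y**2 + wm * Om))         # Eq. (24)
    return [dx, dy]

sol = solve_ivp(rhs, (0, 20), [0.01, 0.01], dense_output=True, rtol=1e-8)
x, y = sol.y
w_eff = x**2 - y**2 + wm * (1 - x**2 - y**2)   # Eq. (22)
```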
## V Model B: \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\)

For the second model of our consideration we take the scalar field potential \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and the coupling function \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\). We calculate \(\sigma=\sigma_{0}\), which means that \(\sigma\) is always constant, but now \(\lambda\) is a varying parameter defined as \(\lambda=\frac{V_{0}\lambda_{0}e^{\lambda_{0}\phi}}{V_{0}e^{\lambda_{0}\phi}+\Lambda}\), that is, \(\phi=\frac{1}{\lambda_{0}}\ln\left(\frac{\lambda\Lambda}{V_{0}\left(\lambda_{0}-\lambda\right)}\right)\). This transformation is valid for \(\frac{\lambda\Lambda}{V_{0}\left(\lambda_{0}-\lambda\right)}>0\). Moreover, we calculate \(\Gamma\left(\lambda\right)=\frac{\lambda_{0}}{\lambda}\).

Figure 2: Region plot in the space of the free parameters \(\lambda\) and \(\left(1+\alpha\right)\sigma\) for \(w_{m}=0\) (left fig.) and \(w_{m}=\frac{1}{3}\) (right fig.). The shaded areas correspond to the values of the parameters \(\left(\lambda,\left(1+\alpha\right)\sigma\right)\) for which the stationary point \(A_{4}\) exists and is an attractor. We remark that point \(A_{4}\) is always an attractor when it exists.

We end up with the three-dimensional system

\[\frac{dx}{d\tau}=\frac{1}{2}\left(3x\left(x^{2}-y^{2}-1+w_{m}\left(1-x^{2}-y^{2}\right)\right)-\sqrt{6}\left(\lambda y^{2}+\left(1+\alpha\right)\sigma\left(1-x^{2}-y^{2}\right)\right)\right), \tag{25}\]

\[\frac{dy}{d\tau}=\frac{1}{2}y\left(3+\sqrt{6}\lambda x+3\left(x^{2}-y^{2}+w_{m}\left(1-x^{2}-y^{2}\right)\right)\right), \tag{26}\]

\[\frac{d\lambda}{d\tau}=\sqrt{6}x\lambda\left(\lambda_{0}-\lambda\right). \tag{27}\]

The stationary points \(B=\left(x\left(B\right),y\left(B\right),\lambda\left(B\right)\right)\) of this dynamical system and their physical properties are discussed in the following lines.

\[B_{1}^{\pm}=\left(\pm 1,0,\lambda_{0}\right),\]

which have similar physical properties to those of \(A_{1}^{\pm}\), that is, \(\Omega_{m}\left(B_{1}^{\pm}\right)=0\) and \(w_{eff}\left(B_{1}^{\pm}\right)=1\). As far as the stability properties of the points are concerned, we derive the eigenvalues of the linearised system \(e_{1}\left(B_{1}^{\pm}\right)=\frac{6\pm\sqrt{6}\lambda_{0}}{2}\), \(e_{2}\left(B_{1}^{\pm}\right)=\mp\sqrt{6}\lambda_{0}\) and \(e_{3}\left(B_{1}^{\pm}\right)=3\left(1-w_{m}\right)\pm\sqrt{6}\bar{\sigma}\), with \(\bar{\sigma}=\left(1+\alpha\right)\sigma\), from which we infer that the stationary points cannot describe stable solutions. Specifically, for \(-\sqrt{6}<\lambda_{0}<0\) and \(-\sqrt{6}\bar{\sigma}<3\left(1-w_{m}\right)\) point \(B_{1}^{+}\) is a source, while for \(0<\lambda_{0}<\sqrt{6}\) and \(\sqrt{6}\bar{\sigma}<3\left(1-w_{m}\right)\) point \(B_{1}^{-}\) is a source. For other values of the free parameters, points \(B_{1}^{\pm}\) are saddle points.

\[B_{2}=\left(-\frac{\lambda_{0}}{\sqrt{6}},\sqrt{1-\frac{\lambda_{0}^{2}}{6}},\lambda_{0}\right)\]

is the extension of point \(A_{2}\) into the three-dimensional space; therefore the physical properties and the existence conditions are the same. The eigenvalues are derived to be \(e_{1}\left(B_{2}\right)=\frac{\lambda_{0}^{2}-6}{2}\), \(e_{2}\left(B_{2}\right)=-3\left(1+w_{m}\right)+\lambda_{0}\left(\lambda_{0}-\bar{\sigma}\right)\) and \(e_{3}\left(B_{2}\right)=\lambda_{0}^{2}\).
Therefore, for \(\lambda_{0}^{2}>6\) and \(\lambda_{0}\bar{\sigma}<-3\left(1+w_{m}\right)+\lambda_{0}^{2}\) the stationary point is a source; otherwise it is a saddle point.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Point** & \(\left(\mathbf{x},\mathbf{y}\right)\) & \(\Omega_{m}\) & \(\mathbf{w}_{eff}\) & **Can be Attractor?** \\ \hline \(A_{1}^{\pm}\) & \(\left(\pm 1,0\right)\) & \(0\) & \(1\) & Yes \\ \(A_{2}\) & \(\left(-\frac{\lambda}{\sqrt{6}},\sqrt{1-\frac{\lambda^{2}}{6}}\right)\) & \(0\) & \(-1+\frac{\lambda^{2}}{3}\) & Yes \\ \(A_{3}\) & \(\left(-\sqrt{\frac{2}{3}}\frac{1+\alpha}{1-w_{m}}\sigma,0\right)\) & \(1-\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)^{2}}\) & \(\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)}+w_{m}\) & Yes \\ \(A_{4}\) & \(\left(-\sqrt{\frac{3}{2}}\frac{1+w_{m}}{\lambda-\bar{\sigma}},\frac{\sqrt{\frac{3\left(w_{m}^{2}-1\right)}{\bar{\sigma}-\lambda}+2\bar{\sigma}}}{\sqrt{2\lambda-2\bar{\sigma}}}\right)\) & \(\frac{\lambda\left(\bar{\sigma}-\lambda\right)-3\left(1+w_{m}\right)}{\left(\lambda-\bar{\sigma}\right)^{2}}\) & \(\frac{\bar{\sigma}+w_{m}\lambda}{\lambda-\bar{\sigma}}\) & Yes, always \\ \hline \hline \end{tabular} \end{table}

Table 1: Stationary points and physical properties for Model A

\[B_{3}=\left(-\sqrt{\frac{2}{3}}\frac{\bar{\sigma}}{1-w_{m}},0,\lambda_{0}\right)\]

has the same physical properties as point \(A_{3}\). The corresponding eigenvalues are \(e_{1}\left(B_{3}\right)=-\frac{3}{2}\left(1-w_{m}\right)+\frac{\bar{\sigma}^{2}}{1-w_{m}}\), \(e_{2}\left(B_{3}\right)=\frac{3}{2}\left(1+w_{m}\right)+\frac{\bar{\sigma}\left(\bar{\sigma}-\lambda\right)}{1-w_{m}}\) and \(e_{3}\left(B_{3}\right)=\frac{2\lambda_{0}\bar{\sigma}}{1-w_{m}}\). We conclude that the stationary point, when it exists, is always a saddle point.

\[B_{4}=\left(-\sqrt{\frac{3}{2}}\frac{1+w_{m}}{\lambda_{0}-\bar{\sigma}},\frac{\sqrt{\frac{3\left(w_{m}^{2}-1\right)}{\bar{\sigma}-\lambda_{0}}+2\bar{\sigma}}}{\sqrt{2\lambda_{0}-2\bar{\sigma}}},\lambda_{0}\right),\]

with asymptotic solution \(\Omega_{m}\left(B_{4}\right)=\frac{\lambda_{0}\left(\bar{\sigma}-\lambda_{0}\right)-3\left(1+w_{m}\right)}{\left(\lambda_{0}-\bar{\sigma}\right)^{2}}\) and \(w_{eff}\left(B_{4}\right)=\frac{\bar{\sigma}+w_{m}\lambda_{0}}{\lambda_{0}-\bar{\sigma}}\), and eigenvalues \(e_{1,2}\left(B_{4}\right)=e_{1,2}\left(A_{4}\right)\) and \(e_{3}=\frac{3\left(1-w_{m}\right)\lambda_{0}}{\lambda_{0}-\bar{\sigma}}\). The stability conditions are similar to those for point \(A_{4}\); indeed, when point \(B_{4}\) exists, it is always an attractor.

\[B_{5}=\left(0,1,0\right)\]

describes a de Sitter universe with \(\Omega_{m}\left(B_{5}\right)=0\) and \(w_{eff}\left(B_{5}\right)=-1\). The eigenvalues are \(e_{1}\left(B_{5}\right)=-3\), \(e_{2}\left(B_{5}\right)=-3\left(1+w_{m}\right)\) and \(e_{3}\left(B_{5}\right)=0\). Because of the zero eigenvalue we cannot directly infer the stability of the point from the linearisation. However, the point may have a stable submanifold; indeed, on the surface \(\lambda_{0}=0\) the stationary point is always an attractor, as observed in Fig. 6. The analytic submanifold may be derived with the use of the center manifold theorem, but for simplicity we omit such an analysis. However, from equation (27) with \(\lambda=\varepsilon\lambda_{\varepsilon}\), \(\varepsilon^{2}\to 0\), it follows that \(\frac{d\lambda_{\varepsilon}}{d\tau}=\sqrt{6}x\lambda_{0}\lambda_{\varepsilon}\). Therefore, for \(x\lambda_{0}>0\), \(\lambda_{\varepsilon}\) increases, while for \(x\lambda_{0}<0\), \(\lambda_{\varepsilon}\) decays.
Furthermore, in Fig. 7 we present the phase-space portraits of the three-dimensional dynamical system (25), (26) and (27) for \(\lambda_{0}=-1\), \(w_{m}=0\) and \(\bar{\sigma}^{2}=2\). We conclude that for initial conditions with \(\lambda\leq 0\) the de Sitter universe described by \(B_{5}\) is a future attractor on the surface \(x\lambda_{0}<0\).

### Analysis at Infinity

For the dynamical system (25), (26) and (27) the variables \(x\) and \(y\) are constrained by the algebraic condition \(x^{2}+y^{2}\leq 1\), but this is not true for the dynamical variable \(\lambda\), which can take values at infinity, as can be observed in Fig. 7. In order to investigate the existence of stationary points at infinity we define the new variable \(\lambda=\frac{\lambda_{b}}{\sqrt{1-\lambda_{b}^{2}}}\), from which it follows that infinity is reached when \(\lambda_{b}^{2}=1\). We also define the new independent variable \(d\tau=\sqrt{1-\lambda_{b}^{2}}d\bar{\tau}\), so that at infinity the dynamical system (25), (26) and (27) becomes

\[\frac{dx}{d\bar{\tau}}=\mp\frac{\sqrt{6}}{2}y^{2},\ \frac{dy}{d\bar{\tau}}=\pm\frac{\sqrt{6}}{2}xy,\ \frac{d\lambda_{b}}{d\bar{\tau}}=\sqrt{6}x,\ \text{for}\ \lambda_{b}\to\pm 1. \tag{28}\]

Therefore, at infinity there exists the unique stationary point \(B_{\text{inf}}=\left(0,0,\lambda_{b}\to\pm 1\right)\), which describes a matter dominated universe, \(\Omega_{m}\left(B_{\text{inf}}\right)=1\) and \(w_{eff}\left(B_{\text{inf}}\right)=w_{m}\). From (28) it is easy to see that the stationary point at infinity is a homothetic center on the surface \(\lambda_{b}^{2}=1\). However, for \(x>0\), \(\frac{d\lambda_{b}}{d\bar{\tau}}>0\), while for \(x<0\), \(\frac{d\lambda_{b}}{d\bar{\tau}}<0\), from which we can easily infer that point \(B_{\text{inf}}\) is always a saddle point. In Table 2 we summarize the results for Model B.
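As a cross-check of the quoted eigenvalues, the linearisation can be carried out symbolically. The sketch below builds the Jacobian of the system (25)-(27) and evaluates it at point \(B_{2}\); the numeric values of \(\lambda_{0}\), \(\bar{\sigma}\) and \(w_{m}\) are illustrative assumptions.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
lam0, sbar, wm = sp.Integer(-1), sp.Rational(1, 2), sp.Integer(0)

Om = 1 - x**2 - y**2
f1 = sp.Rational(1, 2) * (3*x*(x**2 - y**2 - 1 + wm*Om)
                          - sp.sqrt(6)*(lam*y**2 + sbar*Om))   # Eq. (25)
f2 = sp.Rational(1, 2) * y * (3 + sp.sqrt(6)*lam*x
                              + 3*(x**2 - y**2 + wm*Om))       # Eq. (26)
f3 = sp.sqrt(6) * x * lam * (lam0 - lam)                       # Eq. (27)

J = sp.Matrix([f1, f2, f3]).jacobian([x, y, lam])
B2 = {x: -lam0/sp.sqrt(6), y: sp.sqrt(1 - lam0**2/6), lam: lam0}
print(J.subs(B2).eigenvals())
# Expected: (lam0^2-6)/2, -3(1+wm)+lam0*(lam0-sbar), lam0^2
```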
## VI Model C: \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}+\Lambda\)

For the third model we assume that the scalar field potential and the coupling function are \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}+\Lambda\). In this case \(\lambda=\lambda_{0}\) is always constant, while \(\sigma\) is a varying parameter with \(\Delta\left(\sigma\right)=\frac{\sigma_{0}}{\sigma}\). Therefore, we have the three-dimensional dynamical system

\[\frac{dx}{d\tau}=\frac{1}{2}\left(3x\left(x^{2}-y^{2}-1+w_{m}\left(1-x^{2}-y^{2}\right)\right)-\sqrt{6}\left(\lambda y^{2}+\left(1+\alpha\right)\sigma\left(1-x^{2}-y^{2}\right)\right)\right), \tag{29}\]

\[\frac{dy}{d\tau}=\frac{1}{2}y\left(3+\sqrt{6}\lambda x+3\left(x^{2}-y^{2}+w_{m}\left(1-x^{2}-y^{2}\right)\right)\right), \tag{30}\]

\[\frac{d\sigma}{d\tau}=\sqrt{6}x\sigma\left(\sigma_{0}-\sigma\right), \tag{31}\]

where the last equation follows from (20) with \(\Delta\left(\sigma\right)=\frac{\sigma_{0}}{\sigma}\).

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Point** & \(\left(\mathbf{x},\mathbf{y},\lambda\right)\) & \(\Omega_{m}\) & \(\mathbf{w}_{eff}\) & **Can be Attractor?** \\ \hline \(B_{1}^{\pm}\) & \(\left(\pm 1,0,\lambda_{0}\right)\) & \(0\) & \(1\) & No \\ \(B_{2}\) & \(\left(-\frac{\lambda_{0}}{\sqrt{6}},\sqrt{1-\frac{\lambda_{0}^{2}}{6}},\lambda_{0}\right)\) & \(0\) & \(-1+\frac{\lambda_{0}^{2}}{3}\) & No \\ \(B_{3}\) & \(\left(-\sqrt{\frac{2}{3}}\frac{1+\alpha}{1-w_{m}}\sigma,0,\lambda_{0}\right)\) & \(1-\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)^{2}}\) & \(\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)}+w_{m}\) & No \\ \(B_{4}\) & \(\left(-\sqrt{\frac{3}{2}}\frac{1+w_{m}}{\lambda_{0}-\bar{\sigma}},\frac{\sqrt{\frac{3\left(w_{m}^{2}-1\right)}{\bar{\sigma}-\lambda_{0}}+2\bar{\sigma}}}{\sqrt{2\lambda_{0}-2\bar{\sigma}}},\lambda_{0}\right)\) & \(\frac{\lambda_{0}\left(\bar{\sigma}-\lambda_{0}\right)-3\left(1+w_{m}\right)}{\left(\lambda_{0}-\bar{\sigma}\right)^{2}}\) & \(\frac{\bar{\sigma}+w_{m}\lambda_{0}}{\lambda_{0}-\bar{\sigma}}\) & Yes, always \\ \(B_{5}\) & \(\left(0,1,0\right)\) & \(0\) & \(-1\) & Yes \\ \(B_{\text{inf}}\) & \(\left(0,0,\lambda\to\pm\infty\right)\) & \(1\) & \(w_{m}\) & No \\ \hline \hline \end{tabular} \end{table}

Table 2: Stationary points and physical properties for Model B

The stationary points \(C_{1}^{\pm}\), \(C_{2}\) and \(C_{3}\) are the extensions of points \(A_{1}^{\pm}\), \(A_{2}\) and \(A_{3}\) on the surface \(\sigma=\sigma_{0}\); their properties are summarized in Table 3.

\[C_{4}=\left(-\sqrt{\frac{3}{2}}\frac{1+w_{m}}{\lambda_{0}-\bar{\sigma}},\frac{\sqrt{\frac{3\left(w_{m}^{2}-1\right)}{\bar{\sigma}-\lambda_{0}}+2\bar{\sigma}}}{\sqrt{2\lambda_{0}-2\bar{\sigma}}},\sigma_{0}\right)\]

is the extension of point \(A_{4}\) into the three-dimensional space. The eigenvalues are \(e_{1,2}\left(C_{4}\right)=e_{1,2}\left(A_{4}\right)\) and \(e_{3}=\frac{3\left(1+w_{m}\right)\bar{\sigma}_{0}}{\left(1+\alpha\right)\left(\lambda-\sigma_{0}\right)}\), from which we can easily conclude that, when the point exists, it is always an attractor, similarly to points \(A_{4}\) and \(B_{4}\).

\[C_{5}=\left(0,0,0\right)\]

describes a universe dominated by the ideal gas, that is, \(\Omega_{m}\left(C_{5}\right)=1\) and \(w_{eff}\left(C_{5}\right)=w_{m}\). This is the new point provided by the coupling function \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}+\Lambda\). By definition \(\phi=\frac{1}{\sigma_{0}}\ln\left(\frac{\sigma\Lambda}{f_{0}\left(\sigma_{0}-\sigma\right)}\right)\), so the limit \(\sigma\to 0\) corresponds to \(\phi\to-\infty\). Therefore, for \(\sigma=0\) the coupling function becomes \(f\left(\phi\right)\to\Lambda\). The eigenvalues of the linearised system are \(e_{1}\left(C_{5}\right)=-\frac{3}{2}\left(1+w_{m}\right)\), \(e_{2}\left(C_{5}\right)=\frac{3}{2}\left(1-w_{m}\right)\) and \(e_{3}\left(C_{5}\right)=0\), from which we infer that point \(C_{5}\) always describes an unstable asymptotic solution; \(C_{5}\) is a saddle point. In Fig. 8 we present three-dimensional phase-space portraits of the dynamical system (29), (30) and (31) for different values of the free parameters, and in Fig. 9 we present the qualitative evolution of the effective equation of state for various values of the free parameters for this model.
We observe that for \(w_{m}=0\), starting from initial conditions near the radiation epoch, this model can reconstruct the cosmological history of the late universe. Moreover, the evolution of \(\Omega_{m}\) is presented in Fig. 10. For the initial conditions of the latter figures we considered the radiation epoch, that is, the effective fluid mimics the radiation fluid. The trajectories pass near the saddle point which describes the matter era and end at a universe dominated by the scalar field.

### Analysis at Infinity

We apply the same procedure as for Model B, where now we consider the variables \(\sigma=\frac{\sigma_{b}}{\sqrt{1-\sigma_{b}^{2}}}\) and \(d\tau=\sqrt{1-\sigma_{b}^{2}}d\bar{\tau}\). In the limit of infinity the dynamical system becomes

\[\frac{dx}{d\bar{\tau}}=\pm\frac{\sqrt{6}}{2}\left(1+\alpha\right)\left(x^{2}+y^{2}-1\right),\ \frac{dy}{d\bar{\tau}}=0,\ \frac{d\sigma_{b}}{d\bar{\tau}}=\sqrt{6}x\ \text{ for }\sigma_{b}\to\pm 1, \tag{32}\]

from which follows the stationary point \(C_{\text{inf}}=\left(0,1,\sigma_{b}^{2}\to 1\right)\). Point \(C_{\text{inf}}\) describes a de Sitter universe with \(\Omega_{m}\left(C_{\text{inf}}\right)=0\) and \(w_{eff}\left(C_{\text{inf}}\right)=-1\). Similarly to before, we conclude that the stationary point describes an unstable solution. We summarize the results of this Section in Table 3.
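The instability of \(C_{\text{inf}}\) can also be probed numerically. The sketch below integrates the compactified system (32) on the branch \(\sigma_{b}\to+1\), starting slightly displaced from \(C_{\text{inf}}=\left(0,1,1\right)\); the value of \(\alpha\) and the size of the perturbation are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, sign = 0.1, +1.0   # branch sigma_b -> +1; alpha is illustrative

def rhs(tbar, u):
    # Compactified system (32): x' = sign*sqrt(6)/2*(1+alpha)*(x^2+y^2-1),
    # y' = 0, sigma_b' = sqrt(6)*x.
    x, y, sb = u
    return [sign * np.sqrt(6) / 2 * (1 + alpha) * (x**2 + y**2 - 1),
            0.0,
            np.sqrt(6) * x]

sol = solve_ivp(rhs, (0, 5), [0.01, 1.0, 0.999], dense_output=True)
print(sol.y[0, -1], sol.y[2, -1])  # x grows and sigma_b drifts away from 1
```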
## VII Model D: \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and \(f\left(\phi\right)=f_{0}\left(V_{0}e^{\lambda_{0}\phi}+\Lambda\right)^{p}\)

Consider now the cosmological model with scalar field potential \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and coupling function for the chameleon mechanism \(f\left(\phi\right)=f_{0}\left(V\left(\phi\right)\right)^{p}\). For these functions it follows that \(\sigma\left(\lambda\right)=p\lambda\). Thus, the field equations are expressed by the following three-dimensional system

\[\frac{dx}{d\tau}=\frac{1}{2}\left(3x\left(x^{2}-y^{2}-1+w_{m}\left(1-x^{2}-y^{2}\right)\right)-\sqrt{6}\left(\lambda y^{2}+\lambda\bar{p}\left(1-x^{2}-y^{2}\right)\right)\right), \tag{33}\]

\[\frac{dy}{d\tau}=\frac{1}{2}y\left(3+\sqrt{6}\lambda x+3\left(x^{2}-y^{2}+w_{m}\left(1-x^{2}-y^{2}\right)\right)\right), \tag{34}\]

\[\frac{d\lambda}{d\tau}=\sqrt{6}x\lambda\left(\lambda_{0}-\lambda\right), \tag{35}\]

where \(\bar{p}=\left(1+\alpha\right)p\). In the following lines we derive the stationary points of this system and discuss their stability properties.

\[D_{1}^{\pm}=\left(\pm 1,0,\lambda_{0}\right),\]

which have similar physical properties to points \(A_{1}^{\pm}\). We calculate the eigenvalues \(e_{1}\left(D_{1}^{\pm}\right)=\frac{6\pm\sqrt{6}\lambda_{0}}{2}\), \(e_{2}\left(D_{1}^{\pm}\right)=\mp\sqrt{6}\lambda_{0}\) and \(e_{3}\left(D_{1}^{\pm}\right)=3\left(1-w_{m}\right)\pm\sqrt{6}\bar{p}\lambda_{0}\), from which it follows that for \(\bar{p}\left|\lambda_{0}\right|<\frac{3\left(1-w_{m}\right)}{\sqrt{6}}\), point \(D_{1}^{+}\) is a source when \(-\sqrt{6}<\lambda_{0}<0\), while point \(D_{1}^{-}\) is a source for \(0<\lambda_{0}<\sqrt{6}\). Otherwise the two points are saddle points.

\[D_{2}=\left(-\frac{\lambda_{0}}{\sqrt{6}},\sqrt{1-\frac{\lambda_{0}^{2}}{6}},\lambda_{0}\right),\]

with eigenvalues \(e_{1}\left(D_{2}\right)=\frac{\lambda_{0}^{2}-6}{2}\), \(e_{2}\left(D_{2}\right)=-3\left(1+w_{m}\right)+\lambda_{0}\left(\lambda_{0}-\bar{p}\lambda_{0}\right)\) and \(e_{3}\left(D_{2}\right)=\lambda_{0}^{2}\), describes an unstable scaling solution similar to that of point \(B_{2}\), with the same stability properties after replacing \(\bar{\sigma}=\bar{p}\lambda_{0}\).

\[D_{3}=\left(-\sqrt{\frac{2}{3}}\frac{\bar{p}\lambda_{0}}{1-w_{m}},0,\lambda_{0}\right)\]

is point \(B_{3}\) with \(\bar{\sigma}=\bar{p}\lambda_{0}\), and it has the same physical and stability properties.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Point** & \(\left(\mathbf{x},\mathbf{y},\sigma\right)\) & \(\Omega_{m}\) & \(\mathbf{w}_{eff}\) & **Can be Attractor?** \\ \hline \(C_{1}^{\pm}\) & \(\left(\pm 1,0,\sigma_{0}\right)\) & \(0\) & \(1\) & Yes \\ \(C_{2}\) & \(\left(-\frac{\lambda_{0}}{\sqrt{6}},\sqrt{1-\frac{\lambda_{0}^{2}}{6}},\sigma_{0}\right)\) & \(0\) & \(-1+\frac{\lambda_{0}^{2}}{3}\) & Yes \\ \(C_{3}\) & \(\left(-\sqrt{\frac{2}{3}}\frac{1+\alpha}{1-w_{m}}\sigma,0,\sigma_{0}\right)\) & \(1-\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)^{2}}\) & \(\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)}+w_{m}\) & Yes \\ \(C_{4}\) & \(\left(-\sqrt{\frac{3}{2}}\frac{1+w_{m}}{\lambda_{0}-\bar{\sigma}},\frac{\sqrt{\frac{3\left(w_{m}^{2}-1\right)}{\bar{\sigma}-\lambda_{0}}+2\bar{\sigma}}}{\sqrt{2\lambda_{0}-2\bar{\sigma}}},\sigma_{0}\right)\) & \(\frac{\lambda_{0}\left(\bar{\sigma}-\lambda_{0}\right)-3\left(1+w_{m}\right)}{\left(\lambda_{0}-\bar{\sigma}\right)^{2}}\) & \(\frac{\bar{\sigma}+w_{m}\lambda_{0}}{\lambda_{0}-\bar{\sigma}}\) & Yes, always \\ \(C_{5}\) & \(\left(0,0,0\right)\) & \(1\) & \(w_{m}\) & No \\ \(C_{\inf}\) & \(\left(0,1,\sigma\to\pm\infty\right)\) & \(0\) & \(-1\) & No \\ \hline \hline \end{tabular} \end{table}

Table 3: Stationary points and physical properties for Model C

\[D_{4}=\left(-\sqrt{\frac{3}{2}}\frac{1+w_{m}}{\lambda_{0}-\bar{p}\lambda_{0}},\frac{\sqrt{\frac{3\left(w_{m}^{2}-1\right)}{\bar{p}\lambda_{0}-\lambda_{0}}+2\bar{p}\lambda_{0}}}{\sqrt{2\lambda_{0}-2\bar{p}\lambda_{0}}},\lambda_{0}\right)\]

is point \(B_{4}\) for \(\bar{\sigma}=\bar{p}\lambda_{0}\), and it has the same physical and stability properties; that is, point \(D_{4}\) is an attractor when it exists.

\[D_{5}=\left(0,0,0\right)\]

is an asymptotic solution in which the matter source dominates the universe, similarly to point \(C_{5}\), with the same eigenvalues. Hence, point \(D_{5}\) is always a saddle point.

\[D_{6}=\left(0,1,0\right)\]

describes a de Sitter universe similar to point \(B_{5}\), for which the cosmological constant term dominates in the scalar field potential. The stability properties are similar to those of point \(B_{5}\). In Fig. 11 we present the three-dimensional phase-space portrait of the dynamical system (33), (34) and (35) for Model D. Furthermore, the qualitative evolution of the effective equation of state parameter \(w_{eff}\) is given in Fig. 12 and the dynamical evolution of \(\Omega_{m}\) is presented in Fig. 13. For these we have selected a set of initial conditions whereby the dynamical system can describe the main eras of the cosmological history. We observe that a main difference in the evolution of \(w_{eff}\) from that of Model C is that now the future attractor can be a de Sitter universe, i.e. point \(D_{6}\), instead of a scaling solution described by point \(C_{4}\).
For the initial conditions of the latter figures we considered the effective fluid to describe radiation. From the evolution of the trajectories we observe that the universe passes near the saddle point which describes the matter era and ends at a universe dominated by the scalar field, which is a future attractor.

### Analysis at Infinity

For the analysis at infinity we apply the same procedure as for Model B. At infinity, i.e. \(\lambda_{b}^{2}\to 1\), we end up with the dynamical system

\[\frac{dx}{d\bar{\tau}}=\mp\frac{\sqrt{6}}{2}\left(\bar{p}\left(1-x^{2}\right)+\left(1-\bar{p}\right)y^{2}\right),\ \frac{dy}{d\bar{\tau}}=\pm\frac{\sqrt{6}}{2}xy,\ \frac{d\lambda_{b}}{d\bar{\tau}}=\sqrt{6}x, \tag{36}\]

which means that the unique stationary point for \(\bar{p}\neq 1\) is \(D_{\inf}=\left(0,\sqrt{\frac{\bar{p}}{\bar{p}-1}},\lambda_{b}\to\pm 1\right)\). For this stationary point we derive \(\Omega\left(D_{\inf}\right)=0\) and \(w_{eff}\left(D_{\inf}\right)=-1\), which means that it describes a de Sitter solution. On the surface \(\lambda_{b}^{2}=1\) the point is a homothetic center. However, from the differential equation for the variable \(\lambda_{b}\) it follows that \(D_{\inf}\) is always a saddle point. For \(\bar{p}=1\), there exists a family of saddle stationary points \(\bar{D}_{\inf}=\left(0,y,\lambda_{b}\to\pm 1\right)\) which describe solutions with \(\Omega\left(\bar{D}_{\inf}\right)=1-y^{2}\left(\bar{D}\right)\) and \(w_{eff}\left(\bar{D}\right)=w_{m}\left(1-y^{2}\left(\bar{D}\right)\right)-y^{2}\left(\bar{D}\right)\). This is a very interesting result, because Model D can provide two de Sitter epochs: one unstable at infinity, which can be related to inflation, and one attractor related to the late-time acceleration phase. The stability analysis for Model D is summarized in Table 4.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Point** & \(\left(\mathbf{x},\mathbf{y},\lambda\right)\) & \(\Omega_{m}\) & \(\mathbf{w}_{eff}\) & **Can be Attractor?** \\ \hline \(D_{1}^{\pm}\) & \(\left(\pm 1,0,\lambda_{0}\right)\) & \(0\) & \(1\) & No \\ \(D_{2}\) & \(\left(-\frac{\lambda_{0}}{\sqrt{6}},\sqrt{1-\frac{\lambda_{0}^{2}}{6}},\lambda_{0}\right)\) & \(0\) & \(-1+\frac{\lambda_{0}^{2}}{3}\) & No \\ \(D_{3}\) & \(\left(-\sqrt{\frac{2}{3}}\frac{\bar{p}\lambda_{0}}{1-w_{m}},0,\lambda_{0}\right)\) & \(1-\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)^{2}}\) & \(\frac{2\bar{\sigma}^{2}}{3\left(1-w_{m}\right)}+w_{m}\) & No \\ \(D_{4}\) & \(\left(-\sqrt{\frac{3}{2}}\frac{1+w_{m}}{\lambda_{0}-\bar{\sigma}},\frac{\sqrt{\frac{3\left(w_{m}^{2}-1\right)}{\bar{\sigma}-\lambda_{0}}+2\bar{\sigma}}}{\sqrt{2\lambda_{0}-2\bar{\sigma}}},\lambda_{0}\right)\) & \(\frac{\lambda_{0}\left(\bar{\sigma}-\lambda_{0}\right)-3\left(1+w_{m}\right)}{\left(\lambda_{0}-\bar{\sigma}\right)^{2}}\) & \(\frac{\bar{\sigma}+w_{m}\lambda_{0}}{\lambda_{0}-\bar{\sigma}}\) & Yes, always \\ \(D_{5}\) & \(\left(0,0,0\right)\) & \(1\) & \(w_{m}\) & No \\ \(D_{6}\) & \(\left(0,1,0\right)\) & \(0\) & \(-1\) & Yes \\ \(D_{\inf}\) & \(\left(0,\sqrt{\frac{\bar{p}}{\bar{p}-1}},\lambda\to\pm\infty\right)\) & \(0\) & \(-1\) & No \\ \(\bar{D}_{\inf}\) & \(\left(0,y\left(\bar{D}\right),\lambda\to\pm\infty\right)\ \text{for}\ \bar{p}=1\) & \(1-y^{2}\left(\bar{D}\right)\) & \(w_{m}\left(1-y^{2}\left(\bar{D}\right)\right)-y^{2}\left(\bar{D}\right)\) & No \\ \hline \hline \end{tabular} \end{table}

Table 4: Stationary points and physical properties for Model D, where \(\bar{\sigma}=\bar{p}\lambda_{0}\)
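Before turning to the conclusions, a minimal numerical sketch for Model D is given below: it integrates the system (33)-(35) from radiation-like initial conditions and tracks \(w_{eff}\) from Eq. (22), mirroring the behaviour discussed for Figs. 12-13. The parameter values and initial data are illustrative assumptions, and whether the trajectory reaches the de Sitter attractor \(D_{6}\) depends on these choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam0, pbar, wm = -1.0, 0.5, 0.0   # lambda_0, (1+alpha)*p, w_m (illustrative)

def rhs(tau, u):
    x, y, lam = u
    Om = 1.0 - x**2 - y**2
    dx = 0.5 * (3*x*(x**2 - y**2 - 1 + wm*Om)
                - np.sqrt(6)*(lam*y**2 + lam*pbar*Om))        # Eq. (33)
    dy = 0.5 * y * (3 + np.sqrt(6)*lam*x
                    + 3*(x**2 - y**2 + wm*Om))                # Eq. (34)
    dlam = np.sqrt(6) * x * lam * (lam0 - lam)                # Eq. (35)
    return [dx, dy, dlam]

u0 = [0.57, 0.01, -0.9]   # x^2 ~ 1/3 mimics a radiation-like w_eff
sol = solve_ivp(rhs, (0, 30), u0, rtol=1e-8, dense_output=True)
x, y, lam = sol.y
w_eff = x**2 - y**2 + wm*(1 - x**2 - y**2)   # Eq. (22)
```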
## VIII Conclusions

In this work we investigated the phase-space of the cosmological field equations in chameleon dark energy. In particular, we performed a detailed dynamical analysis of the gravitational field equations in a spatially flat FLRW spacetime with a scalar field coupled to an ideal gas. The coupling function is responsible for the chameleon mechanism, whereby the mass of the scalar field depends upon the energy density of the ideal gas. In order to avoid the violation of the weak energy condition, we assumed the scalar field to be that of quintessence and the coupling function to be always of positive value. In such a scenario the Hubble function does not change sign. Consequently, for the study of the dynamics we followed the \(H\)-normalization approach. We defined a new set of dimensionless variables and expressed all the physical quantities in terms of these variables. In the \(H\)-normalization approach the field equations are written in the equivalent form of an algebraic-differential system where the independent variable is the radius of the FLRW geometry, that is, the scale factor. With the use of the algebraic equation the dimension of the field equations is reduced to at most three. The dynamical evolution of the physical variables depends on the selection of two unknown functions, the scalar field potential and the coupling function. We considered four different sets for the free functions; for these four models we discussed in detail the evolution of the dynamics and the physical properties of the asymptotic solutions.

For the first cosmological model of our consideration, namely Model A, we assumed that the potential and the coupling functions are exponential, that is, \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\). For the exponential coupling function the theory is equivalent to the Weyl Integrable Spacetime. Furthermore, for this selection of the unknown functions the dimension of the field equations in the dimensionless variables is reduced to two. The admitted stationary points are four: two points correspond to the quintessence model, in which the ideal gas does not contribute to the cosmological fluid, that is, we derived the stiff fluid epoch, in which the kinetic term of the scalar field dominates the field equations, and a scaling solution which can describe acceleration. For the remaining stationary points the ideal gas contributes to the cosmological fluid; these points can describe scaling solutions related to the matter era or to the late-time acceleration of the universe.

Model B was considered with \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}\). In this case the field equations form a three-dimensional system. On the two-dimensional surface we recovered the four stationary points of Model A. Moreover, we found two new stationary points which describe a de Sitter universe and a matter dominated era. In order to determine the existence of the matter dominated era we had to make use of Poincaré variables so as to investigate the asymptotic dynamics in the infinity regime. In Model C, with \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}\) and \(f\left(\phi\right)=f_{0}e^{\sigma_{0}\phi}+\Lambda\), we determined the same stationary points as in Model B, but now the de Sitter point appears in the infinity regime while the matter dominated solution exists in the finite regime. It is important to mention that the stability properties are different in each model.
Finally, for the fourth model of our consideration, with \(V\left(\phi\right)=V_{0}e^{\lambda_{0}\phi}+\Lambda\) and \(f\left(\phi\right)=f_{0}\left(V_{0}e^{\lambda_{0}\phi}+\Lambda\right)^{p}\), \(p\neq 0\), we calculated six stationary points in the finite regime which describe physical solutions corresponding to the stationary points in the finite regime of the cosmological Models A, B and C. Moreover, in the infinite regime for \(p\neq 1\) an unstable de Sitter universe exists which can be related to the early inflationary epoch of the universe, while for \(p=1\) the unstable solution at infinity describes a family of scaling solutions.

For each of the above mentioned cosmological models we presented phase-space portraits of the dynamical variables and the qualitative evolution of the effective equation of state parameter for different sets of initial conditions and values of the free parameters. The results of the stability analysis and the phase-space portraits can be used to constrain the region of the free variables for the initial condition problem. Furthermore, from the evolution of the effective equation of state parameter we conclude that these models can reproduce the main eras of the cosmological history. Also, Model D can be used to unify all the components of the dark sector of the universe, and it provides a mechanism to relate dark energy and the inflaton responsible for inflation. In future work we plan to focus on the analysis of the perturbations and to investigate whether these models can solve cosmological tensions.

###### Acknowledgements.

This work was partially financially supported by the National Research Foundation of South Africa (Grant Number 131604). The author thanks the support of Vicerrectoría de Investigación y Desarrollo Tecnológico (VRIDT) at Universidad Católica del Norte through Núcleo de Investigación Geometría Diferencial y Aplicaciones, Resolución VRIDT No. 098/2022.
2304.13232
Multi-criteria Hardware Trojan Detection: A Reinforcement Learning Approach
Hardware Trojans (HTs) are undesired design or manufacturing modifications that can severely alter the security and functionality of digital integrated circuits. HTs can be inserted according to various design criteria, e.g., nets switching activity, observability, controllability, etc. However, to our knowledge, most HT detection methods are only based on a single criterion, i.e., nets switching activity. This paper proposes a multi-criteria reinforcement learning (RL) HT detection tool that features a tunable reward function for different HT detection scenarios. The tool allows for exploring existing detection strategies and can adapt to new detection scenarios with minimal effort. We also propose a generic methodology for comparing HT detection methods fairly. Our preliminary results show an average of 84.2% successful HT detection in ISCAS-85 benchmarks.
Amin Sarihi, Peter Jamieson, Ahmad Patooghy, Abdel-Hameed A. Badawy
2023-04-26T01:40:55Z
http://arxiv.org/abs/2304.13232v1
# Multi-criteria Hardware Trojan Detection: A Reinforcement Learning Approach ###### Abstract Hardware Trojans (HTs) are undesired design or manufacturing modifications that can severely alter the security and functionality of digital integrated circuits. HTs can be inserted according to various design criteria, e.g., nets switching activity, observability, controllability, etc. However, to our knowledge, most HT detection methods are only based on a single criterion, i.e., nets switching activity. This paper proposes a multi-criteria reinforcement learning (RL) HT detection tool that features a tunable reward function for different HT detection scenarios. The tool allows for exploring existing detection strategies and can adapt to new detection scenarios with minimal effort. We also propose a generic methodology for comparing HT detection methods fairly. Our preliminary results show an average of 84.2% successful HT detection in ISCAS-85 benchmarks. Hardware Trojan, Reinforcement learning, Automated Benchmarks ## 1 Introduction Due to time-to-market constraints and increasing production costs, the integrated circuit (IC) supply chain has adopted a multi-party production model. Under this model, most microelectronic chips are produced outside of the country sec (2022), raising security concerns about the design and fabrication of chips, particularly hardware Trojan (HT) insertion attacks. Our current HT detection capabilities suffer from the following shortcomings. 1) Most detection methods perform HT detection through a one-dimensional lens, i.e., nets' switching activity - Lyu and Mishra (2020) and Gohil et al. (2022). We believe that the current detection methods might not cover real-world scenarios in which adversaries can insert HTs according to a range of criteria. 2) Available HT benchmarks suffer from significant limitations in size and variety of circuits, as well as the fact that they are all human-crafted and hence are biased by the expert mindset at the creation time - Sarihi et al. (2022)1. Footnote 1: The most referenced benchmarks are available on trust-hub.org. This paper attempts to move the HT detection research space forward by developing a multi-criteria HT detector that explores many HT detection strategies, not limited to a designer's mindset. Our Reinforcement Learning (RL) HT detector has a tunable rewarding function that helps detect HTs inserted with different strategies. The RL agent explores large circuit designs efficiently and generates test vectors to find HTs in digital circuits. Our threat model consists of a security engineer who must verify a manufactured IC's integrity before allowing it to be integrated into a bigger design. The engineer can only rely on the golden netlist to produce test vectors. Our threat model inherently differs from previous works Lyu and Mishra (2020), where the design internals are still accessible in the pre-silicon phase. Our generated test patterns are publicly available through this link2. Additionally, this paper introduces a confidence value as part of a methodology to compare HT detectors fairly. This helps security engineers decide the merits of HT detectors for specific applications. In summary, the paper's contributions are as follows: Footnote 2: The link removed for blind review. * We introduce an RL-based HT detection tool with a tunable rewarding function that can be modified and re-trained based on different criteria. 
* We introduce and use a generic methodology to make fair comparisons among HT detectors. The rest of the paper is organized as follows. We discuss previous endeavors in HT detection in Section 2. Section 3 presents our HT detection tool. We define a security metric that allows security engineers to better compare HT detectors in Section 4. Experimental evaluation of the proposed tool and analysis of the results are in Section 5. Finally, Section 6 concludes the paper. ## 2 Background and Previous Work This section reviews existing hardware Trojan (HT) detection methods. MERO is a test pattern generator that tries to trigger possible HTs by exciting rarely-active nets multiple times; it becomes less effective with larger circuits. Hasegawa et al. (2017) extract 51 features from the Trusthub benchmarks and train a Random Forest classifier; however, the studied dataset is limited. Lyu and Mishra (2020) proposed TARMAC, which maps the trigger activation problem to the clique cover problem. TARMAC requires access to the internal nets and testing each suspect circuit separately. TGRL is an RL framework where the agent decides whether to flip a bit in the test vector according to an observed probability distribution. The reward function combines the number of activated nets and their SCOAP Goldstein and Thigpen (1980) (Sandia Controllability/Observability Analysis Program) parameters. The algorithm was not tested on any HT benchmarks. DETERRENT Gohil et al. (2022) is another RL-based detector that finds the smallest set of test vectors to activate as many rare nets as possible; however, it only targets the switching activity of nets. HW2VEC Yu et al. (2021) uses Graph Neural Networks to extract structural features from graphs and produce graph embeddings. The embeddings are passed to a deep neural network to classify circuits as HT-free or HT-infected. The detector is trained on Trusthub benchmarks. Unlike the previous work, our study proposes a multi-criteria RL-based HT detector tool that can detect HTs with different insertion strategies. ## 3 RL-based HT Detection From the perspective of an RL agent for HT detection, the environment is a given circuit (or netlist) that must be determined to be clean or HT-infected. The agent interacts with the circuit (performs an action) by flipping input values to activate internal nets. The RL agent has an \(n\)-dimensional binary action space \(a_{t}=[a_{1},a_{2},...,a_{n}]\) where \(n\) is the number of circuit primary inputs. The agent may set or reset each \(a_{i}\) to transition to another state. \(a_{i}=0\) denotes that the value of the \(i^{th}\) input will remain unchanged from the previous test pattern, and \(a_{i}=1\) means that the input bit will flip. Attackers are likely to choose trigger nets with a consistent value (either \(0\) or \(1\)) most of the time. Thus, a detector aims to activate as many dormant nets as possible. We consider two different approaches for identifying such rare nets: **1) Dynamic Simulation**: We feed each circuit with \(100K\) random test patterns and record the value of each net. By logging net transitions, we populate the switching activity statistics for each net and compare them against a threshold \(\theta\) (ranging in \([0,1]\)). Nets with switching activity below \(\theta\) are considered rare nets. **2) Controllability Simulation**: This approach classifies the nets based on their _controllability3_ values. Low-switching nets have a high difference between their controllability values Sebt et al. 
(2018), _i.e._, they are mostly stuck at \(0\) or \(1\). We set a threshold value \(\eta\) as defined in Eq. 1: Footnote 3: Controllability is the difficulty of setting a particular net to the \(0\) or \(1\) logic value. \[\eta=\frac{|CC1(Net_{i})-CC0(Net_{i})|}{Max(CC1(Net_{i}),CC0(Net_{i}))} \tag{1}\] where \(CC0(Net_{i})\) and \(CC1(Net_{i})\) are the combinational controllability of \(0\) and \(1\) for \(Net_{i}\), respectively. The \(\eta\) parameter ranges in \([0,1)\) such that higher values of \(\eta\) correlate with lower net activity Sebt et al. (2018). Our RL state is mapped to the set of the collected rare nets. In a circuit with \(m\) rare nets, the state space is defined as \(State_{t}=[s_{1},s_{2},...,s_{m}]\) where \(s_{i}\) is associated with the \(i^{th}\) net in the set. Whenever an action (a test pattern) activates the \(i^{th}\) net (i.e., the net takes its rare value), \(s_{i}\) is set to \(1\) in the state vector; otherwise it stays at \(0\). As can be inferred, the action and state spaces are multi-binary. Figure 1 summarizes our tool flow. ### Rewarding Functions The agent's goal is to activate as many HT triggers as possible. Thus, part of the rewarding function should count activated rare nets. However, we should avoid over-counting situations where a rare net has successive dependent rare nets. We adopt a pruning strategy and pick the rarest net in a sequence of dependent rare nets (seen in Figure 1). This policy helps the RL agent converge to the global optimum faster. As for rewarding the agent, we consider three rewarding functions, explained in the rest of this section. In our first rewarding function (hereafter \(D1\)), we use a copy of the agent's previous state and encourage it to generate states that differ from the previous one. This pushes the agent towards finding test vectors that lead to unseen states. The pruned current and previous state vectors are passed as inputs to \(D1\); the final reward is the output. The reward function comprises an _immediate_ and a _sequential_ part. The sequential reward is computed by making a one-to-one comparison between the nets in the old and new states. The highest reward is given when an action activates a net that was not triggered in the previous state; the agent receives \(+40\) for each such net. If a rare net continues to be active in the new state, the agent is still rewarded \(+20\). The worst state transition is one in which an action causes a rare net to lose its rare value, which is rewarded \(-3\). Lastly, if the agent cannot activate a rare net after a state transition, it is rewarded \(-1\). The immediate reward is the number of activated rare nets in the new state. The final reward is a weighted mixture of the immediate and sequential rewards with tunable weights. Figure 1: The proposed toolset workflow. Algorithm 1 describes our second rewarding function \(D2\). In this case, the agent gains a reward proportional to the difficulty of the rare net it can trigger. This reward is computed using the inverse of the net switching activities (line \(4\)). If no vectors were found to trigger a net, it is rewarded \(10\times\) the greatest reward in the vector (line \(12\)). The algorithm encourages the agent to trigger the rarest nets in the circuit. In the third rewarding function (\(D3\)), rare nets are populated based on the threshold \(\eta\) in Eq. 1. When a rare net is activated, the agent is rewarded with the controllability of the rare value. This scenario aims to investigate controllability-based HT detection using an RL algorithm. 
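To make the rare-net bookkeeping and the \(D2\) reward concrete, the following is a minimal sketch; the data layout (per-net switching activities from the random simulation, and the set of rare nets a test pattern activates) is an illustrative assumption rather than the tool's actual code.

```python
# Sketch of rare-net selection (Dynamic Simulation) and a D2-style reward.
# `switching` maps each net to its switching activity over 100K random
# patterns; `activated` holds the rare nets taking their rare value under
# the current test pattern. Both are assumed inputs, not the tool's API.

def rare_nets(switching: dict[str, float], theta: float) -> set[str]:
    """Nets whose switching activity falls below the threshold theta."""
    return {net for net, activity in switching.items() if activity < theta}

def d2_reward(activated: set[str], switching: dict[str, float]) -> float:
    """Reward each triggered rare net in proportion to its rarity (D2).

    A net that never switched during random simulation earns 10x the
    largest inverse-activity reward, mirroring line 12 of Algorithm 1.
    """
    inverse = [1.0 / a for a in switching.values() if a > 0]
    cap = 10.0 * max(inverse) if inverse else 1.0
    reward = 0.0
    for net in activated:
        activity = switching[net]
        reward += (1.0 / activity) if activity > 0 else cap
    return reward
```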
## 4 The Proposed Generic HT-Detection Metric We propose the following methodology to the community for fair and repeatable comparisons among HT detection methods. The methodology yields a confidence value that can be used to conduct a fair comparison between different HT detection methods. There are \(4\) possible outcomes when an HT detection tool studies a given circuit. From the tool user's point of view, the outcomes are probabilistic events. For example, when an HT-free circuit is being tested, the detecting tool may classify it either as an infected or as a clean circuit, _i.e._, \(Prob(FP)+Prob(TN)=1\) where \(FP\) and \(TN\) stand for _False Positive_ and _True Negative_ events. Similarly, for HT-infected circuits, we have \(Prob(FN)+Prob(TP)=1\). \(FN\) and \(FP\) are the two undesirable outcomes in which detectors misclassify a circuit. Between these two, \(FN\) cases are much more dangerous, because an \(FN\) case leads to a situation in which we rely on an HT-infected chip, whereas an \(FP\) case means wasting a clean chip by either not selling or not using it. So, we need to know how the HT detection tool's user (who might be a security engineer or a company representative) prioritizes \(FN\) and \(FP\) cases. We define a parameter \(\alpha\) as the ratio of the undesirability of \(FN\) over \(FP\). The tool user determines \(\alpha\) based on the characteristics and details of the application the eventual chips will be employed in, _e.g._, the risks of using an infected chip in a device with a sensitive application versus using a chip in home appliances. Note that the user sets this value; it is not derived from the actual \(FP\) and \(FN\). After \(\alpha\) is set, it is plugged into Eq. 2 and a general confidence basis \(Conf.Val\) is computed. \[Conf.Val=\frac{(1-FP)}{(1/\alpha+FN)} \tag{2}\] This metric enables a fair comparison between HT detection methods regardless of their detection criteria and implementation methodology. The defined confidence metric combines the two undesirable cases with respect to their severity from the security engineer's point of view, and it ranges in \([\frac{0.5\alpha}{1+0.5\alpha}..\alpha]\). The closer the value is to \(\alpha\), the higher the confidence in the detector. The absolute minimum, \(Conf.Val=1/3\), occurs when \(\alpha=1\) and \(FP=FN=50\%\). This analysis assumes that \(FN\) and \(FP\) are independent probabilities. We note that for some detection methods, \(FP\) is always \(0\). For instance, test-based HT detection methods that apply a test pattern to excite HTs use a golden model (HT-free) circuit for comparison and decision-making. There is no way for such methods to falsely detect an HT in a clean circuit. However, our metric is general and captures such cases. ## 5 Experimental Evaluations Our proposed multi-criteria HT detector is developed in Python. The RL agent is trained using PPO (proximal policy optimization) Schulman et al. (2017) from the Stable Baselines library with an episode length of \(10\), so the agent resets every \(10\) steps and observes a new state. We select six circuits from ISCAS-85, namely \(c432\), \(c880\), \(c1355\), \(c1908\), \(c3540\), and \(c6288\). 
To accelerate the training of the RL agent, instead of calling time-consuming graph functions, we built adjacency matrices and dictionaries that contain the structural information of each node within the graph. This simple yet efficient technique speeds up the training and testing processes by \(3.7\times\) and \(3.2\times\), respectively. We start from \(450K\) timesteps for training in \(c432\) and increase the timesteps for each successive circuit by \(10\%\) to enable enough exploration for larger circuits. We ran the training processes in parallel for each circuit. This process took nearly \(27\) hours to train the benchmark set. In the testing phase, we ran the trained RL agent for \(20K\) episodes. To select a test vector, we set a cut-off reward of one-tenth of the collected reward in the last training episode (since we have ten steps per episode). We gather \(20K\) test vectors that surpass this reward threshold. Table 1 summarizes the detection percentages of our three detection scenarios for different HT sizes inserted in ISCAS-85 Sarihi et al. (2022). The inserted HTs in this dataset were introduced to address two issues: 1) removing inherent human bias in current HT databases and 2) providing ample HT instances for training detectors. Table 1 lists six benchmarks with HTs triggered by \(2\), \(3\), \(4\), and \(5\) input wires and reports the detection accuracy for \(D1\), \(D2\), and \(D3\) (labeled across the top of the table). The number of HTs for each case is given in Sarihi et al. (2022). From the table, \(D2\) has the best detection rate in most cases; however, exceptions exist. For instance, in \(c880\), the detection rate for \(D1\) is equal to or better than \(D2\), especially for 5-input HTs. The same happens for 3-input HTs in \(c1908\). On the other hand, \(D3\) shows its superiority in \(c3540\): except for the 3-input Trojans, \(D3\) equals or is better than the other two rewarding scenarios. This underlines the importance of \(D3\), which uses an inherently different detection criterion. Polling among the three HT detection scenarios can generally lead to satisfactory HT detection in most circuits. One interesting observation concerns the detection rate of \(c432\). When applying \(100,000\) random test vectors, we found that the rarest net in the circuit was triggered \(7\%\) of the time, which is significantly higher than in other circuits, where many nets exhibit switching activity of less than \(1\%\). This suggests that the HTs inserted in \(c432\) might be activated more easily with random test vectors. To test this hypothesis, we generated \(20,000\) additional random test vectors and applied them to the circuit, detecting \(99\%\) of the HTs. This demonstrates that the RL-based insertion attack did not have the intended impact in \(c432\). The confidence metric of our HT-detection tool proposed in Section 4 can be seen in the last row of the table, assuming \(\alpha=10\). The confidence value for each column is calculated by averaging the results of the corresponding detection scenario. The table shows that the security engineer can put the most confidence in the \(D2\) detector, since it has higher confidence values than the other detection scenarios. The confidence value only surpasses \(5\) for detectors with \(FN<10\%\). In other words, the confidence value increases sharply as detection improves, and better HT detectors are rewarded with disproportionately higher confidence values. 
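Eq. 2 is simple enough to evaluate directly; the following is a minimal sketch of the confidence computation, with illustrative rates rather than measured values.

```python
def confidence_value(fp: float, fn: float, alpha: float) -> float:
    """Confidence metric of Eq. 2; approaches alpha as FP and FN vanish."""
    return (1.0 - fp) / (1.0 / alpha + fn)

# Test-based detectors compare against a golden model, so FP = 0 and
# only the false-negative rate matters. Example values with alpha = 10:
print(confidence_value(fp=0.0, fn=0.10, alpha=10))  # 5.0
print(confidence_value(fp=0.0, fn=0.05, alpha=10))  # ~6.67
```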
## 6 Conclusion This paper emphasizes the need for multi-criteria HT detection tools and universal metrics to compare them. We propose a reinforcement learning tool for hardware Trojan detection, which features three rewarding functions that detect a wide range of HTs. Results on ISCAS-85 circuits showed that the proposed tool achieves a high detection rate for various HTs. We also present a methodology to help the community compare HT detection methods regardless of their implementation details. We applied the methodology to our HT detector and found that our tool offers the highest confidence in HT detection when using the rewarding function D2.
2305.03308
Tiny-PPG: A Lightweight Deep Neural Network for Real-Time Detection of Motion Artifacts in Photoplethysmogram Signals on Edge Devices
Photoplethysmogram (PPG) signals are easily contaminated by motion artifacts in real-world settings, despite their widespread use in Internet-of-Things (IoT) based wearable and smart health devices for cardiovascular health monitoring. This study proposed a lightweight deep neural network, called Tiny-PPG, for accurate and real-time PPG artifact segmentation on IoT edge devices. The model was trained and tested on a public dataset, PPG DaLiA, which featured complex artifacts with diverse lengths and morphologies during various daily activities of 15 subjects using a watch-type device (Empatica E4). The model structure, training method and loss function were specifically designed to balance detection accuracy and speed for real-time PPG artifact detection in resource-constrained embedded devices. To optimize the model size and capability in multi-scale feature representation, the model employed depth-wise separable convolution and atrous spatial pyramid pooling modules, respectively. Additionally, the contrastive loss was also utilized to further optimize the feature embeddings. With additional model pruning, Tiny-PPG achieved state-of-the-art detection accuracy of 87.4% while only having 19,726 model parameters (0.15 megabytes), and was successfully deployed on an STM32 embedded system for real-time PPG artifact detection. Therefore, this study provides an effective solution for resource-constrained IoT smart health devices in PPG artifact detection.
Yali Zheng, Chen Wu, Peizheng Cai, Zhiqiang Zhong, Hongda Huang, Yuqi Jiang
2023-05-05T06:17:57Z
http://arxiv.org/abs/2305.03308v3
Tiny-PPG: A Lightweight Deep Neural Network for Real-Time Detection of Motion Artifacts in Photoplethysmogram Signals on Edge Devices ###### Abstract Photoplethysmogram (PPG) signals are easily contaminated by motion artifacts in real-world settings, despite their widespread use in Internet-of-Things (IoT) based wearable and smart health devices for cardiovascular health monitoring. This study proposed a lightweight deep neural network, called Tiny-PPG, for accurate and real-time PPG artifact segmentation on IoT edge devices. The model was trained and tested on a public dataset, PPG DaLiA, which featured complex artifacts with diverse lengths and morphologies during various daily activities of 15 subjects using a watch-type device (Empatica E4). The model structure, training method and loss function were specifically designed to balance detection accuracy and speed for real-time PPG artifact detection in resource-constrained embedded devices. To optimize the model size and capability in multi-scale feature representation, the model employed depth-wise separable convolution and atrous spatial pyramid pooling modules, respectively. Additionally, the contrastive loss was also utilized to further optimize the feature embeddings. With additional model pruning, Tiny-PPG achieved state-of-the-art detection accuracy of 87.8% while only having 19,726 model parameters (0.15 megabytes), and was successfully deployed on an STM32 embedded system for real-time PPG artifact detection. Therefore, this study provides an effective solution for resource-constrained IoT smart health devices in PPG artifact detection. **Keywords**: Edge AI, IoT wearables, Photoplethysmogram, Motion artifacts ## 1 Introduction Cardiovascular diseases have emerged as the leading cause of mortality in modern society. With the advancement of sensing, electronic, and information technologies, smart wearable medical devices such as watches, glasses, and clothing have been proposed for the unobtrusive measurement of vital signs like electrocardiogram (ECG) or photoplethysmogram (PPG) [1] to monitor cardiovascular health. These devices are expected to be valuable in the prevention and precise diagnosis and treatment of cardiovascular diseases. PPG is a simple, non-invasive, and cost-effective optical technique that measures blood volume changes in the microvasculature and has been widely employed to estimate physiological parameters such as heart rate, blood pressure [2][3], oxygen saturation [4], and respiratory rate [5]. However, PPG signals are often interfered with by various noises, especially motion artifacts (MA) caused by relative movements between the sensor and the skin. MAs are difficult to remove because their frequency content overlaps with that of the PPG signal [6]. In most research, PPG artifacts are detected by dividing PPG signals into segments and classifying them into either clean or artifact categories [7][8]. However, pulse segmentation can be challenging in complex motion scenarios. Researchers have attempted to assess signal quality by extracting hand-crafted features such as pulse width, amplitude, and slope in a sliding window [9][10]. In recent years, deep learning methods have been increasingly employed in the analysis of various physiological signals.
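As background for the model structure described above, a depth-wise separable convolution factorizes a standard convolution into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution, which is the main lever for keeping the parameter count small. A minimal 1-D PyTorch sketch follows; the channel counts, kernel size, and window length are illustrative assumptions, not Tiny-PPG's actual configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv (one filter per channel) + 1x1 pointwise conv.

    For k-tap kernels this needs roughly in_ch*k + in_ch*out_ch weights
    versus in_ch*out_ch*k for a standard convolution.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# Example: a 30 s PPG window at 64 Hz (1920 samples), with 8 feature
# channels lifted to 16; sizes here are illustrative only.
x = torch.randn(1, 8, 1920)
block = DepthwiseSeparableConv1d(8, 16)
print(block(x).shape)  # torch.Size([1, 16, 1920])
```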
2308.14850
Attention Visualizer Package: Revealing Word Importance for Deeper Insight into Encoder-Only Transformer Models
This report introduces the Attention Visualizer package, which is crafted to visually illustrate the significance of individual words in encoder-only transformer-based models. In contrast to other methods that center on tokens and self-attention scores, our approach will examine the words and their impact on the final embedding representation. Libraries like this play a crucial role in enhancing the interpretability and explainability of neural networks. They offer the opportunity to illuminate their internal mechanisms, providing a better understanding of how they operate and can be enhanced. You can access the code and review examples on the following GitHub repository: https://github.com/AlaFalaki/AttentionVisualizer.
Ala Alam Falaki, Robin Gras
2023-08-28T19:11:52Z
http://arxiv.org/abs/2308.14850v1
Attention Visualizer Package: Revealing Word Importance for Deeper Insight into Encoder-Only Transformer Models ## Abstract This report introduces the "Attention Visualizer" package, which is crafted to visually illustrate the significance of individual words in encoder-only transformer-based models. In contrast to other methods that center on tokens and self-attention scores, our approach will examine the words and their impact on the final embedding representation. Libraries like this play a crucial role in enhancing the interpretability and explainability of neural networks. They offer the opportunity to illuminate their internal mechanisms, providing a better understanding of how they operate and can be enhanced. You can access the code and review examples on the following GitHub repository: [https://github.com/AlaFalaki/AttentionVisualizer](https://github.com/AlaFalaki/AttentionVisualizer). _Keywords: Attention Mechanism, Transformers, Visualization, Natural Language Processing_ ## 1 Introduction From the time of their introduction, neural networks were regarded as enigmatic systems, often resembling black boxes [1] due to the difficulty in grasping their underlying mechanisms. Nevertheless, in recent years, numerous researchers have uncovered the inner workings of these networks and clarified the factors contributing to their functionality. Various approaches exist to assist in enhancing the interpretability of different types of neural networks. Some research studies have directed their focus towards different facets of these models, as evident in the paper on extracting rules from the network [1], which emphasizes explainability. Additionally, the xNN network from [23] is designed with architecture constraints to make the networks more interpretable. Figure 1: The library's user interface operated within a Google Colab instance. (left) The input fields and settings to calculate the attention scores. (right) The visualization takes the form of a heatmap, with darker shades of red indicating higher scores. Another area of research pertains to the visualization of these models. Existing literature concentrates on visualizing the attention mechanism within vision-related tasks [25]. Furthermore, the BertViz [26] library focuses on visualizing the self-attention scores in natural language processing, and more specifically, the BERT [10] model. Visualizing neural networks holds significant importance as it provides a tangible means to demystify their core operations and enhance our understanding of their inner dynamics. These intricate networks, inspired by the human brain, consist of numerous layers and interconnected nodes, making them challenging to comprehend solely through abstract mathematical descriptions. Visualizations offer an intuitive way to represent the flow of information, the weights assigned to connections, and the transformations occurring at each layer. By translating abstract concepts into visual elements, researchers, practitioners, and even non-experts can gain insights into how neural networks learn, generalize, and make predictions. Such visualizations foster transparency, making it possible to detect potential biases, anomalies, or areas of improvement within the network's structure and performance. In essence, visualizing neural networks bridges the gap between their computational complexity and human comprehension, enabling more effective development, interpretation, and refinement of these powerful tools across various domains. 
This report introduces the "Attention Visualizer" library, designed to visualize the significance of words based on the attention mechanism scores within transformer-based models. Subsequent sections of this paper will delve into the library's design decisions, features, and its practical applications. ## 2 Design and Architecture As illustrated in Figure 1, the library showcases a user-friendly interface to input the text and obtain the attention score, or the significance of each word, in the form of a heatmap over the output. The precise score can be viewed by hovering the crosshair over the word. It is possible to customize the scoring criteria to align with diverse use cases through interaction with the controls, a topic we will explore further in the subsequent discussion. The primary emphasis of this library lies in encoder-only models, aiming to enhance understanding of how Transformer-based models create embedding representations for textual content. This constitutes the primary contrast distinguishing this work from similar visualization libraries, whose primary emphasis lies in utilizing the self-attention mechanism scores to illustrate token relationships. In contrast, our approach involves quantifying the contribution of individual words to the final representation. In the upcoming subsections, the design decisions and challenges will be discussed to shed light on the details of our approach and provide comprehensive insights into its implementation. ### Pre-Trained Model As previously stated, the library is tailored for encoder-only models, with the default model chosen for this package being the base variant of the pre-trained RoBERTa [12] model. The Transformers library [13] is employed to simplify the loading process and facilitate seamless integration of various models, enabling straightforward comparisons of their performance. One factor to consider when choosing a model is the predetermined maximum input size established during the pre-training phase. The RoBERTa model comes with a limitation of 512 tokens for input. Consequently, the library will truncate any text that exceeds this limit to the specified size. ### Tokens vs Words The tokenization process [27] involves breaking down a text into smaller chunks, called tokens. These tokens can be words, subwords, or even characters, depending on the chosen tokenization strategy. The tokens are stored within a dictionary and can be substituted with their corresponding IDs for input into the models. Although several techniques exist for tokenization, the Byte-Level BPE (Sennrich, Haddow, and Birch 2016) method stands out as a renowned approach that has demonstrated its effectiveness for transformer-based models. This method has the potential to divide a word into multiple tokens. For example, the word "tokenizing" would be represented as a combination of two tokens: "token" + "izing". This introduces a certain level of complexity for the visualization step. How do we address this, given that each token ("token," "izing") is assigned a score instead of the entire word? The solution implemented here involves selecting the token with the highest score as the score for the respective word. (For instance, if the initial token "token" has a score of 0.05 and the second token "izing" has a score of 0.01, the score of 0.05 will be chosen as the score for the word "tokenizing.")
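This token-to-word aggregation is straightforward to express in code. A minimal sketch follows, using a HuggingFace fast tokenizer whose word_ids() method maps each subword token back to its source word; the per-token scores here are random placeholders standing in for the averaged attention scores discussed in the next subsection.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
enc = tokenizer("The word tokenizing splits into subwords")

# Placeholder per-token scores; in the library these would come from
# averaging the self-attention tensor over layers and heads.
scores = torch.rand(len(enc.input_ids)).tolist()

# A word's score is the maximum score among its subword tokens.
word_scores: dict[int, float] = {}
for tok_idx, word_idx in enumerate(enc.word_ids()):
    if word_idx is None:  # special tokens (BOS/EOS) map to no word
        continue
    word_scores[word_idx] = max(word_scores.get(word_idx, 0.0),
                                scores[tok_idx])
print(word_scores)  # highest subword score per word index
```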
### Calculating the Score The self-attention scores tensor has a size of [L\(\times\)H\(\times\)N\(\times\)N], wherein N\(\times\)N signifies the self-attention mechanism through which each token's score is computed by comparing a sentence with itself. The variables L and H denote the number of layers and self-attention heads, respectively. As evident, it deviates from being a straightforward rank-1 tensor of size [N] containing one score per token. To tackle this, the package has opted for a simple approach of averaging the scores across layers and heads. Further experiments using the package highlighted the difficulty in identifying the score differences due to constraints in displaying the results. The rationale behind this is that a 10% alteration in color opacity doesn't bear the same visual significance as it does conceptually, resulting in scores exceeding 70% appearing as very dark red and being attributed greater importance. Put simply, differentiating between words with moderately high or low scores would have been challenging. We observed that employing a basic min-max normalization approach would yield a more uniform outcome. Figure 2: The impact of distinct filters on the output is showcased through four variations: no filter (top-left), excluding BOS and EOS tokens (top-right), excluding dots (bottom-left), and excluding stop words (bottom-right). ## 3 Key Features A limitation of this library might arise from the averaging process, which could potentially lead to information loss when scores are averaged across layers. To address this concern, the controls allow users to examine individual layers/attention heads or group several of them together. This can help with finding patterns in the attention scores and understanding how specific layers or attention heads contribute to the overall representation of the text. As visible in Figure 2, three options are accessible for enhancing the interpretability of the outcomes. The results are produced by averaging all the attention heads across the layers. The outcomes without any filter place significant emphasis on the special BOS (Beginning of Sentence) and EOS (End of Sentence) tokens. The "Ignore BOS/EOS" option reveals that the dots emerge as the second most important element within the texts. Only by masking the scores of dots during calculations can we achieve a more evenly distributed score, allowing us to observe trends effectively. An additional choice has been included to also exclude stop words1 from the calculation, which might be useful depending on the use case. Footnote 1: Words such as "a", "are", "is", "the", etc. It's important to emphasize that when opting for the ignoring options, the tokens are not eliminated from the text. Regardless of the chosen approach, the exact same text will be fed into the RoBERTa model. The sole distinction lies in adjusting the scores of these tokens (after averaging) to the minimum score before undergoing the normalization process. ## 4 Usage and Takeaways The library is designed to work seamlessly with both Jupyter Notebook and, consequently, Google Colab. You can install the package directly from GitHub using PIP, a Python package manager. Subsequently, you can define the visualizer object by initializing the main class. 
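A minimal usage sketch follows; the module path, class name, and method below are assumptions rather than the package's verified API, and the actual entry points may differ.

```python
# Install directly from GitHub (PIP accepts git URLs):
#   pip install git+https://github.com/AlaFalaki/AttentionVisualizer.git

# Hypothetical entry point; consult the repository for the exact names.
import AttentionVisualizer as av

visualizer = av.AttentionVisualizer()  # loads roberta-base by default
visualizer.show_page()                 # renders inputs, controls, heatmap
```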
The provided code illustrates sample usage for running an instance. 2 Footnote 2: The usage of the library is demonstrated in the following URL: [https://colab.research.google.com/github/AlaFalaki/AttentionVisualizer/blob/main/demo.ipynb](https://colab.research.google.com/github/AlaFalaki/AttentionVisualizer/blob/main/demo.ipynb) Figure 3: An overview of all attention heads in layer 1 of the network. (Appendix 1 contains images of higher quality.) We can identify several trends by examining the results shown in Figure 3, which displays the attention scores of all heads for a single layer. (Refer to Appendix 1 for higher quality images.) For example, heads 1 to 7 exhibit an increasing concentration on specific sets of words, particularly evident in head 7. This stands in contrast to the initial group of heads, which distribute attention across a larger number of words. Additionally, it's apparent that head number 8 is specifically trained to focus on hyphenated words such as "one-off", "long-standing", "20-race", etc. Attention head number 10 views all words in the text as significant. This could potentially serve as a mechanism within the model to transfer information from the initial layers to the later ones. Remarkably, the final head appears to focus on names. These could encompass names of individuals such as "Sebastian", countries like "Malaysia", events like "Prix", or even the names of weekdays such as "Thursday". ## 5 Conclusion This report delves into the significance of exploring novel approaches for visualizing models. We introduced the "Attention Visualizer" package and made the code accessible by publishing it on GitHub. The details and design decisions involved in the implementation of the library are elaborated comprehensively, along with potential enhancements for future versions, including refining the methodology for calculating attention scores. We also conducted a basic analysis to demonstrate the effectiveness of visualization by examining attention patterns within a layer of the RoBERTa model. The illustrations assisted us in pinpointing particular attention heads that focus on patterns like hyphenated words and names. It's evident that this tool has the potential to significantly aid researchers in delving into models with greater detail, thereby enhancing their understanding of the decision-making process and potential biases.
2306.02546
Symbol Preference Aware Generative Models for Recovering Variable Names from Stripped Binary
Decompilation aims to recover the source code form of a binary executable. It has many security applications such as malware analysis, vulnerability detection and code hardening. A prominent challenge in decompilation is to recover variable names. We propose a novel technique that leverages the strengths of generative models while mitigating model biases and potential hallucinations. We build a prototype, GenNm, from pre-trained generative models CodeGemma-2B and CodeLlama-7B. We finetune GenNm on decompiled functions, and mitigate model biases by incorporating symbol preference into the training pipeline. GenNm includes names from callers and callees while querying a function, providing rich contextual information within the model's input token limitation. It further leverages program analysis to validate the consistency of names produced by the generative model. Our results show that GenNm improves the state-of-the-art name recovery accuracy by 8.6 and 11.4 percentage points on two commonly used datasets, and improves the state-of-the-art from 8.5% to 22.8% in the most challenging setup where ground-truth variable names are not seen in the training dataset.
Xiangzhe Xu, Zhuo Zhang, Zian Su, Ziyang Huang, Shiwei Feng, Yapeng Ye, Nan Jiang, Danning Xie, Siyuan Cheng, Lin Tan, Xiangyu Zhang
2023-06-05T02:39:48Z
http://arxiv.org/abs/2306.02546v3
# LmPa: Improving Decompilation by Synergy of Large Language Model and Program Analysis ###### Abstract. Decompilation aims to recover the source code form of a binary executable. It has many applications in security and software engineering such as malware analysis, vulnerability detection and code reuse. A prominent challenge in decompilation is to recover variable names. We propose a novel method that leverages the synergy of large language model (LLM) and program analysis. Language models encode rich multi-modal knowledge, but their limited input size prevents providing sufficient global context for name recovery. We propose to divide the task into many LLM queries and use program analysis to correlate and propagate the query results, which in turn improves the performance of the LLM by providing additional contextual information. Our results show that 75% of the recovered names are considered good by users and our technique outperforms the state-of-the-art technique by 16.5% and 20.23% in precision and recall, respectively. ## 1. Introduction Decompilation aims to reverse engineer a binary executable, which often has no debugging or symbol information, to a source code form that is close to its original source and human-understandable. During compilation, variables at the source level are transformed to registers and memory locations at the binary level; type information is discarded; statements are broken down into instructions, relocated, and even removed; code structure may be reformed; functions may be inlined; and function boundaries, data and code boundaries are no longer explicit [55; 75]. Decompilation attempts to reverse these transformations, which entails various challenges including disassembly [24; 37; 55; 80], variable and type recovery [10; 46; 67; 87], code structure recovery [73], function boundary recovery [67; 83], and name recovery [10; 44]. Decompilation is critical in many security and software engineering tasks. For example, it is often the first step for malware analysis [53; 54; 74], in which human analysts inspect malware code to understand its behaviors. It is important for binary vulnerability analysis, where analysts want to identify critical bugs in executables [19; 56], for software supply chain analysis [32; 59], and for code reuse, in which legacy executables may need to be ported or hardened [21; 51; 68]. Its importance is evidenced by the popularity of decompilation tools such as IDA [35] and Ghidra [29], e.g., in security threat analysis [8; 14; 53; 13; 54]. There is a large body of existing work on binary reverse engineering and decompilation [11; 12; 31; 49; 66; 73]. The state-of-the-art disassembling methods achieve over 95% precision and recall [55; 65; 67]; type recovery techniques can achieve over 90% precision and recall for primitive types [10] and over 70% for user-defined types [10; 87]; and function boundary recognition can achieve 97.1% precision and recall [83]. However, the state-of-the-art _name recovery_ method DIRTY [10] only achieves 17.3% precision and 10.9% recall according to our experiment (Section 4.3). Name recovery is arguably one of the most valuable steps in decompilation, because natural language artifacts such as identifier names are crucial for human developers to effectively comprehend a piece of code. Yet, name recovery tends to be more challenging compared to a few other tasks. 
In addition to the challenges induced by the aforementioned compilation transformations, a large number of intermediate variables are introduced at the binary level and may not correspond to any source variable; function names and variable names are application- and context-dependent, such that machine instructions with few syntactical differences may have substantially different names. In DIRTY [10], researchers proposed to use language models to infer variable types and names. They trained a transformer model using a large repository of executables and their ground-truth symbol information, and then used the model to generate type and name information for a program decompiled by IDA. Note that IDA decompilation focuses on recovering basic control structure and hence largely lacks type or name information. Before DIRTY [10], there were a number of proposals using various deep learning [36; 45; 58] and probabilistic graph model based name generation [31]. More discussions are in the related work section. DIRTY's performance degrades when the complexity of the subject binary increases. This is due to a number of limitations in existing language models. For instance, they only support inputs of a limited size. Hence, DIRTY can only infer names for one function at a time and hardly considers calling contexts. In addition, although the training repository used in DIRTY has 75,656 binaries, it may not be large enough to leverage the true benefits of language models. In comparison, ChatGPT was trained on massive natural language and programming language corpora including Wikipedia, digital books, GitHub, StackOverflow, and so on (Gil et al., 2017). In this paper, we develop a novel name recovery technique leveraging the synergy between a pre-trained _large language model_ (LLM) and program analysis. LLMs are usually trained on enormous datasets with multiple modalities, including source code, binary code, and natural languages. The scale of their training is the key to their impressive performance (Bahdan et al., 2016; Goyal et al., 2017; Goyal et al., 2017). We hence propose to build on the success of SOTA pre-trained LLMs to achieve generalizability. In the meantime, existing LLMs still have the aforementioned input size limitation. For example, ChatGPT allows at most 4,096 tokens at a time. We propose to break the procedure of name recovery for a program down into multiple queries to an LLM and use program analysis to chain them together. The procedure is iterative, allowing the LLM to gradually improve over time. Specifically, we develop a name propagation algorithm that has a similar nature to type inference. Assume that in one round of queries the LLM is able to derive a meaningful name for some decompiled variable within the queried code snippet (called the _query window_); the name can then be propagated to other places in the program outside the query window, following strict program semantics. This allows LLM queries in future rounds to have more contextual information. For example, a newly generated callee function name is propagated to the invocation sites in its callers. To tolerate the non-determinism of LLM-generated names, our analysis abstracts the name of a variable to a set, instead of a singular identifier. After convergence, the distribution in the set naturally informs us of the most likely name for the variable. Our contributions are summarized as follows. * We propose a novel approach to name recovery for binary executables. 
It features an iterative algorithm involving both an LLM and a program analysis. * We develop a systematic method to construct LLM queries. The method has the capability of including up-to-date information collected from previous rounds of queries. * We develop a name propagation analysis that can propagate names predicted by the LLM to other places and even construct new meaningful names. * We devise a post-processing step that filters out meaningless names and selects appropriate names from the analysis results after convergence. * We have implemented a prototype LmPa. We evaluate it on 1258 functions from 6 popular binary analysis benchmarks. Our user study shows that 75% of the names recovered by LmPa are considered good, while the number for DIRTY is 6%. Using an automatic metric based on name similarity, LmPa achieves 33.85% precision and 31.12% recall, substantially outperforming DIRTY, which has 17.31% precision and 10.89% recall. It takes on average 8 LLM queries to name the variables in a function. The total fee for our experiments is only 30 USD. Our ablation study shows that if we directly query the LLM without the program analysis, the precision and recall degrade to 31.04% and 18.21%, respectively. ## 2. Motivation and Overview We use a motivating example to illustrate challenges in decompilation, as well as the limitations of the state of the art. We then present our method. ### Motivating Example The example is adapted from two functions in Coreutils (Coreutils, 2017). The source code of the functions is shown in Fig. 1. Function c_tolower() (defined at line 1) converts an input character to its lower case. Specifically, it is implemented with a switch-statement (line 2). If the input is an upper-case letter, the function converts it to lower case (line 6); otherwise the input character is returned unchanged (line 8). Function c_strcasecmp() (defined at line 11) takes as input two strings and compares them in a case-insensitive fashion. It declares four variables (lines 12-14): p1 and p2 are pointers to the next characters to be compared in the two input strings, respectively; c1 and c2 are two temporary variables holding the compared characters in lower case. The function iteratively compares each pair of characters (at the same position) in both strings (line 21), and stops at the first difference. It finally returns the difference (line 22). Note that before character comparison, the function calls c_tolower() (lines 17-18) to convert both characters to lower case. **Challenges in Decompilation** We compile the example with GCC and the option -O0 (i.e., no optimization), resulting in a binary program. Then we remove the debugging information and symbol information from the binary, following a typical real-world reverse-engineering scenario (Goyal et al., 2017; Goyal et al., 2017). We further use IDA (Meyer et al., 2017) to decompile the binary program, and part of the results are shown in Fig. 2. During compilation and deployment, the symbol information and high-level code structures in the source code are lost. For example, Fig. 2(a) shows the decompiled form of c_tolower() by IDA (Meyer et al., 2017), and the corresponding assembly instructions are shown in the grey box. We can observe that (1) the function name c_tolower and the variable name c are not preserved in the binary; (2) the switch-statement in lines 2-9 of Fig. 1 is translated to comparison instructions like line 37 in the grey box of Fig. 2(a); (3) the expression - 'A' + 'a' (at line 6 in Fig. 
1) is simplified to + 0x20 (at line 39 in the grey box of Fig. 2(a)). Without any symbol or structural information from the original source code, the decompiled code is not similar to the original source, but rather a direct translation of the assembly code, which is difficult to understand. Similarly, Fig. 2(b) shows the decompiled form of c_strcasecmp() and its corresponding assembly code. Note that the variables and callee functions do not have meaningful names. For example, the variable p1 at line 17 of Fig. 1 is stored in r12 at line 23 in the grey box of Fig. 2(b). IDA thus fills in a dummy name v2. Also, the callee function c_tolower() is now invoked by its address 0x4BECE3. The decompiler gives it a dummy name sub_4BECE3. Without meaningful names, it is hard to understand the decompiled function. ### Limitations of State-of-the-Art Methods DIRTY (DIRTY, 2017) leverages a transformer model to predict types and names of variables in decompiled programs. Although it demonstrates impressive results in type recovery, its name recovery results are limited. In our motivating example, it does not produce any names different from those that are already in the decompiled code. There are two possible reasons. First, limited by the input size of transformer models, DIRTY handles one function at a time and does not support information sharing across functions. Second, DIRTY assumes binaries still have function names and uses such names in training. Function names provide strong contextual information, as the model can learn the typical variable names used in a function with a particular name. For example, in its training data, line 7 in Fig. 2b would be something like v6 = c_tolower(*v2). However, in practice, stripped binaries do not have function names. Therefore, the transformer model cannot pick up enough context, and thus can hardly generate meaningful variable names. Another line of work leverages probabilistic graph models (PGMs) (Zhou et al., 2017) to rename decompiled variables with names seen in the training data. We test our motivating example on DEBIN (Zhou et al., 2017), a representative technique in this line. It could not generate desirable names either. For example, in c_tolower(), it gives variable c the name index. PGMs can be considered a more powerful form of Bayesian networks. They model type and name predicates of program artifacts as nodes, e.g., a predicate \(isInt(x)\) asserting \(x\) is of int type, and edges denote statistical dependencies between nodes, which are acquired from program semantics and training. Typing and naming patterns are hence learned and encoded as weight values in the PGMs. However, their results heavily depend on the quality of training data, and PGM inferences are largely local, lacking an ability similar to the attention mechanism in transformers. In our case, the decompiled code body of c_tolower() is too simple and does not provide much of a hint for DEBIN. However, in the caller c_strcasecmp(), individual characters in a string are passed to variable c in order. Such a behavior pattern has been seen and encoded by the PGM, but it was associated with the identifier index. We show the full function with predicted names from DEBIN in Fig. 23 of our supplementary material. ### Our Technique Existing techniques suffer from the relatively limited scale of their training. We thus propose a technique that builds on the recent advances in large language models. 
LLMs (Liang et al., 2017; Chen et al., 2017; Chen et al., 2017; DIRTY, 2017; DIRTY, 2017) are typically trained on multi-modal data of an enormous scale. They demonstrate superior capabilities in many natural language tasks and coding tasks (Chen et al., 2017; Chen et al., 2017; Chen et al., 2017; Chen et al., 2017). However, they only allow input of a limited size. Our idea is hence to query an LLM many times, requesting names for separate code snippets of a program, and use program analysis to propagate and filter the query results. The process is iterative, meaning that information acquired from past queries is used to provide additional contextual information for future queries, improving the LLM's performance. We use ChatGPT as our underlying LLM, but our technique can be easily generalized to other LLMs (e.g., GPT-4 (Zhou et al., 2017)). **Query to LLM.** ChatGPT is an online chat-bot that mimics the dialogue behavior of a human. Its input and output are natural language sentences. To leverage ChatGPT in generating variable names, we have to: (1) formulate the problem into natural language questions; and (2) automatically parse ChatGPT's response and associate the suggested name with the corresponding variable in code. We show in Fig. 3 an example of how LmPa queries ChatGPT to rename function c_tolower(). The blue and green boxes are LmPa's query and ChatGPT's response, respectively. At the beginning, LmPa briefly describes the task of predicting names in the decompiled code. Then the decompiled function is attached. After that, LmPa enumerates each variable and specifies the response format requirements. Figure 1: Motivating example. Figure 2: Decompiled code. (a) and (b) show the decompiled functions generated by IDA. The grey boxes next to them show the corresponding assembly instructions. As shown in Fig. 3, ChatGPT follows the format requirements in its response, and thus LmPa can post-process ChatGPT's answer by recognizing the format. Fig. 4(a) and Fig. 4(b) show the two functions in our motivating example, with the variables and functions renamed according to ChatGPT's initial response (using the ChatGPT website between March 6-10). For function c_tolower(), we can see that ChatGPT mistakenly considers it as a function converting digits to ASCII code, which shares some common behavior patterns with the target function. The suggested name input_parameter for variable c is not that informative either. On the other hand, for function c_strcasecmp(), ChatGPT produces a close name, compare_strings, while missing the case-insensitive part. The predicted variable names in this function are of good quality too (e.g., string1 for s1, string1_pointer for p1, and string1_char for c1). We speculate that the good results are due to the sufficient context, namely, the pairwise comparison of array elements (lines 5-11), the comparison with the literal number 0 (line 8) to break the loop, and the return value that reports the first difference. **Iterative Name Propagation.** To leverage ChatGPT's success in one place to improve its performance in other places such as c_tolower(), we further propose a name propagation technique that iteratively propagates names between functions. The key insight is that some functions might be easier for ChatGPT to understand. Information (e.g., variable/function names) derived from these functions can provide better context for other functions. 
The insight aligns with how a human reverse engineer understands a binary program (Bowman et al., 2017; Bowman et al., 2017). She typically starts from functions with special literals or well-known program idioms. The information from these functions will help her understand the other connected parts. Take Fig. 4(c) as an example of name propagation. LmPa adds a code comment at the beginning of the queried function. The comment describes how the function is used in its caller. As depicted by the red dashed arrows and the red boxes, LmPa leverages the name of the caller function (i.e., compare_strings) and the name of the argument variable (i.e., string1_pointer) to compose a comment, propagating the newly acquired contextual information. Readers may be curious why we use comments to propagate information instead of directly setting function and variable names. The reason is that ChatGPT often refuses to generate new names if variables already have non-trivial names in the code. Using comments does not have such a restraint. Note that using comments in natural language to convey program analysis results to the chat bot is a unique capability enabled by the underlying LLM. In Fig. 4(c), the changes in ChatGPT's response are highlighted in light yellow. With the additional context, ChatGPT realizes that function c_tolower() takes as input a character, and further correctly recognizes that the functionality of this function is converting a character to its lower case. Based on the correct functionality, ChatGPT generates a better name (i.e., input_char) for variable c. Similarly, in the third round shown in Fig. 4(d), LmPa conversely propagates the name convert_to_lowercase() back to its caller. ChatGPT then generates a more precise name for c_strcasecmp() (see part of the function name in yellow). This time, the case-insensitive part of the function name is recovered. The example illustrates the power of LLMs, the importance of name propagation, and the gradual improvement through multiple iterations. Figure 4: Three rounds of interactions with ChatGPT. **Round 1's query is partially shown in Fig. 3. Each box shows a function with symbols renamed according to ChatGPT's response. Red circles show the numbers of rounds. Red arrows and boxes indicate how LmPa uses information from previous responses to craft new queries. Names different from previous rounds are highlighted in yellow.** Figure 5: Workflow. In the green boxes are the main components of LmPa. **The major steps are numbered with orange circles.** ## 3. Method The overall workflow of LmPa is shown in Fig. 5. It takes as input a binary program, and outputs the decompiled program with recovered names. LmPa first leverages IDA to decompile the input binary program to C code, and then iteratively queries ChatGPT to generate names for functions and variables in the C code. Specifically, after the decompilation, LmPa first generates prompts for each function in the input C program (step 1 in Fig. 5), and then queries ChatGPT with the generated prompts via the ChatGPT API (Gupta et al., 2019), one function at a time (step 2). After LmPa obtains responses from ChatGPT, it parses the natural language outputs and maps the names proposed by ChatGPT back to the C code (step 3). Then a program analysis (_Name Propagator_ in Fig. 5) is applied to propagate good names among functions. How to determine if a name is good based on its confidence will be discussed later in Section 3.2. 
The results of propagation are further leveraged to construct the next-round queries to ChatGPT (step 4), enabling improvement over time. After convergence, the final results are post-processed by selecting the most appropriate names from those predicted over the multiple rounds (step 5). In the following, we discuss more details.

### Formalization of Problem

This section illustrates how we formulate the problem of name generation for decompiled programs. We first introduce a simple language and the abstract domains (for the program analysis) to facilitate the discussion. Then we show the iterative algorithm LmPa uses to refine variable names.

**Language.** To simplify the discussion, we use a simple language to model the decompiled C code. Our implementation is based on the Clang-AST parser and supports the most commonly used C syntax in decompiled functions. The definition of our language is shown in the top part of Fig. 6. A _program_ in our language consists of a list of function _declarations_. Each declaration consists of an identifier for the function (\(Id\)), a list of arguments (\(Args\)), and the function body (\(S\)). We use _identifier_ to refer to the dummy names (e.g., v6) in the decompiled program. Our language has three types of _statements_: \(S_{1}; S_{2}\) concatenates two statements; \(E_{1}\coloneqq E_{2}\) is the assignment statement; and \(\mathtt{return}\ E\) returns a value to the caller function. The definitions for _expressions_ are standard: \(Id\) and \(Lit\) are expressions referring to an identifier and a literal, respectively; \(E_{1}\circ E_{2}\) denotes a binary operation over two operand expressions; and \(Id(E_{1},E_{2},...)\) is a function call expression.

```
Function LmPa(program):
    nameScheme = {}
    budget = N
    while budget >= 0 do
        // Ask ChatGPT for each function
        for decl in program do
            // Ask ChatGPT for one function
            predNames = askOneFunc(decl)
            // Update the name scheme with new predictions
            for (id, pred) in predNames do
                nameScheme[(decl.id, id)].append(pred)
        // Propagate names for each function
        propagationRet = {}
        for decl in program do
            propagationRet.update(propagate(decl, program, nameScheme))
        program = updateQuery(program, propagationRet)
        budget = budget - 1
    selected = selectName(nameScheme)
    return selected
```

**Algorithm 1:** Iterative Query and Propagation
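To make the formalization concrete, the following is a minimal Python sketch of how this language could be represented as dataclasses. All class and field names are illustrative assumptions, not taken from LmPa's actual code base.

```python
from dataclasses import dataclass
from typing import List, Union

# --- Expressions ---
@dataclass
class Id:
    name: str              # a dummy identifier from the decompiler, e.g., "v6"

@dataclass
class Lit:
    value: int             # a literal

@dataclass
class BinOp:
    op: str                # binary operator, e.g., "+", "=="
    lhs: "Expr"
    rhs: "Expr"

@dataclass
class Call:
    callee: Id             # function being invoked
    args: List["Expr"]

Expr = Union[Id, Lit, BinOp, Call]

# --- Statements ---
@dataclass
class Assign:              # E1 := E2
    lhs: Expr
    rhs: Expr

@dataclass
class Return:              # return E
    value: Expr

@dataclass
class Seq:                 # S1; S2
    first: "Stmt"
    second: "Stmt"

Stmt = Union[Assign, Return, Seq]

# --- Declarations ---
@dataclass
class Decl:
    id: Id                 # function identifier
    args: List[Id]         # formal arguments
    body: Stmt             # function body S

Program = List[Decl]       # a program is a list of declarations
```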
### Interaction with ChatGPT

Both the input and output of ChatGPT are natural language sentences. Thus the key challenge is to formulate the problem of name generation into natural language questions, and to automatically parse ChatGPT's responses. Our solution is to use a prompt template that enumerates each variable we want ChatGPT to predict, and to ask ChatGPT to follow a specific output format.

Figure 6. Syntax of our language (top) and abstract domains of LmPa (bottom)

**Prompt Generation.** As shown in Fig. 3, to query for a function, LmPa first describes the task with a few natural language sentences, followed by the decompiled C code. Then LmPa enumerates individual variables in the function and sends the query. We observed that ChatGPT may _miss some variables_ when the question is too general, e.g., "What are the good names for all variables in the above function?" If a function has many variables, LmPa groups them into two separate queries to prevent the length from going beyond ChatGPT's token limit. Note that LmPa also asks for names of functions. In addition to names, LmPa guides ChatGPT to report the confidence of each prediction. This is because ChatGPT may generate dummy names (e.g., "function_input_argument") or randomly pick irrelevant names when it cannot predict a good name from the context. LmPa prunes out these low-quality names by confidence. Specifically, in prompts, LmPa instructs ChatGPT as follows: _You MUST mark your confidence as 'Confident' or 'Not Sure' for each name. If you are confident about a name, you should mark it as 'Confident'. Otherwise, if you are not sure about a name, you should mark it as 'Not Sure'._ Then LmPa simply filters out all the predictions that are marked as _Not Sure_ in post-processing. We observe that ChatGPT may overestimate its confidence for sub-optimal names but rarely underestimates it. For example, in our motivating example, ChatGPT marks the (wrongly) predicted name convert_to_ascii as _Confident_. LmPa alleviates this problem by considering the name candidate distributions returned by ChatGPT over multiple iterations. Details are in Section B of the supplementary material. Finally, LmPa requires ChatGPT to output names in a machine-readable format. Without the output format requirements, ChatGPT tends to give its answers in free-form natural language, or even a rewritten version of the program.

**Post-processing.** Although LmPa specifies the output format, ChatGPT still exhibits some variance in its answers. We manually craft a set of regular expressions for LmPa to parse the output, and LmPa retries the query one more time if the output format cannot be correctly read. Typically, we observe less than 3% format errors.
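To illustrate the interaction just described, here is a minimal sketch of prompt construction and regex-based post-processing. The exact prompt wording, answer format, and regular expressions used by LmPa are not given in the text, so the template and pattern below are assumptions for illustration only.

```python
import re

# Illustrative prompt skeleton (an assumption, not LmPa's actual template);
# it mirrors the structure described above: a short task description, the
# decompiled code, one question per symbol, and the format requirements.
PROMPT_TEMPLATE = """\
The following function was decompiled from a stripped binary.
Suggest a meaningful name for each symbol listed below.

{code}

{questions}

Answer each question on its own line, in the exact format:
Q<i>: <suggested_name> (Confident|Not Sure)
You MUST mark your confidence as 'Confident' or 'Not Sure' for each name.
"""

def build_prompt(code: str, symbols: list) -> str:
    questions = "\n".join(
        f"Q{i}: What is a good name for `{s}`?" for i, s in enumerate(symbols))
    return PROMPT_TEMPLATE.format(code=code, questions=questions)

# A hand-crafted regular expression of the kind used in post-processing;
# it tolerates minor formatting variance in ChatGPT's answers.
ANSWER_RE = re.compile(
    r"Q(?P<idx>\d+)\s*[:.]\s*`?(?P<name>[A-Za-z_][A-Za-z0-9_]*)`?"
    r".*?(?P<conf>Confident|Not Sure)",
    re.IGNORECASE)

def parse_response(text: str, symbols: list) -> dict:
    """Map each symbol to its predicted name, dropping 'Not Sure' answers."""
    names = {}
    for m in ANSWER_RE.finditer(text):
        idx = int(m.group("idx"))
        if idx < len(symbols) and m.group("conf").lower() == "confident":
            names[symbols[idx]] = m.group("name")
    return names
```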
### Name Propagation

LmPa's name propagation shares a similar nature to type inference, in which known types of some variables are used to derive types for other variables, following program semantics. For example, assume a statement x=y and x has a known type of int; type inference algorithms can determine that y also has an int type. In LmPa, good names for a variable are leveraged to derive good names for other variables. Initially, all high-confidence names from ChatGPT are considered good names, and literals are assigned good names of their own textual forms. A set of rules is used to propagate good names. For instance, a good callee function name will be propagated to its invocation sites in callers. New good names may be constructed for an expression only involving operands with good names. Different from type inference, name propagation is inclusive, meaning that a variable may have multiple good names. Therefore, the propagation proceeds by monotonically deriving more and more relations.

**Relations and Auxiliary Functions.** To facilitate the discussion, we define a few relations and functions in Fig. 7. A good name is represented by a relation. Specifically, \(GoodNameOf(name_{0},id_{0},id_{1})\) indicates a string \((name_{0})\) is considered a good name for an identifier \((id_{1})\) in a function \((id_{0})\). Similarly, \(GoodNameOf(name_{0},id_{0},e_{1})\) indicates \(name_{0}\) is a good name for an expression \(e_{1}\) in function \(id_{0}\). \(CallerOf(id_{0},id_{1})\) indicates \(id_{0}\) is a caller of \(id_{1}\). LmPa iteratively derives such relations during analysis till it reaches a fixed point. The auxiliary function \(\texttt{str()}\) maps a literal number or an operator to its string representation.

The analysis is formally defined by a set of inference rules shown in Fig. 8. Each rule is interpreted as follows: the predicates above the line are the premises of a rule; and the formula below the line depicts how new relations are inferred. Rule _Caller_ recognizes caller-callee relations. It means that if a call to function \(id_{2}\) is found in a statement of \(id_{1}\), then there is a relation \(CallerOf(id_{1},id_{2})\). That is, \(id_{1}\) is a caller of \(id_{2}\). Rules _GN-Id_ and _GN-Lit_ denote starting points of our inference. _GN-Id_ denotes that if in the function \(id_{1}\), ChatGPT predicts a name \(n\) for \(id\) with high confidence, then \(n\) is considered a good name for \(id\) in function \(id_{1}\). _GN-Lit_ specifies that the string representations of all literal values are good names. The rationale is that literals (e.g., magic numbers) are important for human reverse engineers (Barb et al., 2017; Barb et al., 2018). Rule _PropExpr_ constructs a good name for an expression \(e_{1}\circ e_{2}\) if both sub-expressions have a good name. Note that LmPa similarly constructs good names for other expressions, such as call-expressions and unary operations. Details are elided.

Figure 7. Relations and Functions for Name Propagation

Figure 8. Propagation Rules

Rules _PropCalleeName_ and _PropCalleeArg_ are inter-procedural and propagate name information from a callee function to its caller. Specifically, Rule _PropCalleeName_ denotes that a good name for the callee is considered a good name for the function invocation in the caller. Rule _PropCalleeArg_ represents how LmPa propagates the name for a formal argument in the callee to the corresponding actual argument expression in the caller.
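As a sketch of how the derivation could run, the following fixed-point loop covers only the GN-Id seeding and Rule _PropCalleeName_. The fact encoding, input shapes, and identifiers are illustrative assumptions rather than LmPa's actual data structures; the real analysis also implements GN-Lit, PropExpr, PropCalleeArg, and the symmetric caller-to-callee rules.

```python
def propagate(call_edges, fn_names, var_names):
    """Derive GoodNameOf facts up to a fixed point (heavily simplified).

    call_edges: set of (caller, callee) function-id pairs -- the CallerOf relation
    fn_names:   dict func_id -> high-confidence predicted function name
    var_names:  dict (func_id, var_id) -> high-confidence predicted variable name
    Returns a set of (name, func_id, target) facts -- the GoodNameOf relation.
    """
    # GN-Id: high-confidence ChatGPT predictions seed the analysis.
    facts = {(name, f, v) for (f, v), name in var_names.items()}
    for f, name in fn_names.items():
        facts.add((name, f, f))        # the function's own good name
    changed = True
    while changed:                     # monotone derivation: terminates
        changed = False
        for caller, callee in call_edges:
            # PropCalleeName: a good name for the callee is a good name for
            # the call expression `callee(...)` inside the caller.
            if callee in fn_names:
                fact = (fn_names[callee], caller, ("call", callee))
                if fact not in facts:
                    facts.add(fact)
                    changed = True
    return facts

# Example (identifiers are made up): a good name predicted for a callee
# flows to its call site in the caller.
facts = propagate({("main", "sub_4011E0")},
                  {"sub_4011E0": "compare_strings"}, {})
```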
For example, for Rule _PropCalleeArg_, if a formal argument is named file_descriptor in the callee function, then the expression corresponding to that argument at the invocation site may also be a file descriptor. The set of rules for propagation from a caller function to all its callees is symmetric and hence elided.

### Query Update

After name propagation, LmPa further leverages the propagated names to construct the next-round queries. The query update algorithm takes as input the query text of a function and the _GoodNameOf_ relations derived by the propagation rules, and outputs a new query for the function. A few query construction rules are presented in Fig. 9. The green boxes show the derived _GoodNameOf_ relations, and the tan boxes show the function. In Fig. 9, LmPa derives a good name for the callee function \(id_{1}\) in the context of function \(id_{0}\). It renames all the invocations of \(id_{1}\) to the good name. Note that there may be multiple good names for a function/variable; LmPa selects the one with the latest timestamp. Fig. 9 also shows how to leverage good-name information regarding an expression, including a singleton variable expression. Recall that our name propagation allows generating names for composite expressions. We cannot simply rename an identifier to utilize such information. LmPa thus propagates the information by code comments. As shown in Fig. 9, it puts the propagated _name_ in a code comment before the expression \(e_{i}\). Note that even if the related expression is a singleton variable, simply replacing its identifier with a good name may yield undesirable results. The reason is that ChatGPT tends not to rename a variable that already has a meaningful name in the code. Thus directly setting variable names in the code prevents ChatGPT from generating any new names. Fig. 9 further shows that when a caller function and its actual argument expression have good names, they can be utilized in the query of a callee of the function. Specifically, a new comment is added before the callee function describing which caller function may call it and the good name for the argument expression.

## 4. Evaluation

We develop LmPa on top of IDA Pro 7.5 and Clang 12. LmPa consists of a total of 2,770 lines of Python code and 3,214 lines of C++ code. We examine the effectiveness of LmPa by addressing the following research questions (RQs):

- **RQ1**: Can LmPa effectively help developers comprehend decompiled code? How does it compare with the SOTA?
- **RQ2**: How well do names generated by LmPa and the SOTA match their original versions in the source code?
- **RQ3**: What are the impacts of the name propagation analysis on the overall performance of LmPa?
- **RQ4**: Does LmPa scale well on real-world data?
- **RQ5**: Is LmPa resilient to the nondeterminism of LLM answers?

In addition to these RQs, we conduct four case studies to illustrate how LmPa helps in real-world use scenarios.

### Setup

**Benchmark.** We assess LmPa using six well-established real-world projects that have been extensively employed in previous studies (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2020). It is worth noting that OpenAI enforces various resource restrictions when accessing ChatGPT (Wang et al., 2018), such as query fees and intentional delays (e.g., around 20 seconds per query). Our dataset consists of 16,212 functions in total. However, evaluating all the functions from our dataset would lead to high resource consumption.
Therefore, we adhere to existing practices (Wang et al., 2018; Wang et al., 2019; Wang et al., 2020) and randomly sample a subset of 1,258 functions consisting of 4,277 variables. Detailed statistics of our dataset can be found in Section C of our supplementary material.

**Evaluation Metrics.** Assessing the degree of alignment between predicted names and ground-truth names (i.e., the variable names in the source code) presents a significant challenge because there may be many semantically equivalent names for a variable. For instance, buffer_size and buffer_n are often deemed semantically equivalent in the context of programming, yet they do not match each other. To address the issue, we propose the following two metrics for evaluation.

_Developer Preferences._ Taking into account the complexity of evaluating the semantic equivalence of symbol names, incorporating professional developers in the evaluation process is a judicious approach. To this end, we conduct a user study with a group of developers, including a number of participants with substantial reverse engineering experience. Each participant was presented with several functions, accompanied by their source code and ground-truth names. The participants were then asked to score each predicted name on a scale of 1 to 5, with higher scores reflecting better predictions. A more detailed description can be found in Section 4.2.

_Name Similarity._ While user studies can provide reliable results, they are inherently difficult to scale up. In order to automate the evaluation process, we introduce a similarity score function that quantifies the similarity between a predicted name and its corresponding ground-truth name.

\[Similarity(S_{\textit{TP}},\ S_{\textit{P}})=\frac{|LCS(S_{\textit{TP}},\ S_{\textit{P}})|}{|S_{\textit{TP}}|}\]

In the formula above, \(S_{\textit{TP}}\) and \(S_{\textit{P}}\) represent the ground-truth and predicted names, respectively. _LCS_ represents the longest common subsequence between the two input strings. Essentially, this function assesses the proportion of characters in the ground-truth name that are accurately predicted in order. For example, it yields a similarity score of 0.64 for the aforementioned buffer_size and buffer_n pair. It is important to note that the similarity function generates a score rather than a binary outcome, providing a more refined evaluation of the predictions. More importantly, outcomes derived from our user study align well with results from this automated method, as detailed in Section 4.2. This provides additional support for its validity in practice; a minimal implementation of the metric is sketched below.

Figure 9. How LmPa composes new queries. LmPa rewrites a query based on both the _GoodNameOf_ relation(s) (green box, noted as \(GN\)) and the function code body (tan box). The white boxes on the bottom show the new queries with the modification highlighted in yellow.

### RQ1: User Study

To evaluate the effectiveness of LmPa, we conduct a sizeable user study. Specifically, we randomly select 30 functions from our dataset, and all variables present in the sampled functions are examined as subjects within the study. To help participants understand the context, each function is accompanied by its respective source code and decompiled code. We task the participants with evaluating the quality of predicted names by comparing them to their ground-truth counterparts. The study encompasses four variable name prediction methods: DEBIN, DIRTY, ChatGPT without the propagation mechanism (one-shot), and LmPa.
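Before turning to the study results, note that the similarity metric above is straightforward to implement. The following minimal sketch follows the formula directly and reproduces the 0.64 score for the buffer_size / buffer_n pair discussed earlier; it is reused in a later sketch for precision and recall.

```python
def lcs_len(a: str, b: str) -> int:
    """Length of the longest common subsequence, by dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if ca == cb
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]

def similarity(ground_truth: str, predicted: str) -> float:
    """Proportion of ground-truth characters recovered, in order."""
    return lcs_len(ground_truth, predicted) / len(ground_truth)

print(round(similarity("buffer_size", "buffer_n"), 2))  # 0.64
```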
Returning to the user study: participants are instructed to rate each predicted name on a scale of 1 to 5, with the scores indicating (1) _misleading_, (2) _meaningless_, (3) _general but missing context_, (4) _acceptable_, and (5) _comparable to or better than ground truth_. We include concrete samples of the study in Section A of the supplementary material. In addition to the randomly-sampled 30 functions, we mix into the study another 8 functions with 33 variables as validation samples. In each validation sample, one of the four methods demonstrates a clear advantage over the others. These samples are used to ascertain participants' attentiveness during the study. It should be noted that results from validation questions are excluded from our final analysis. In total, we construct 528 questions, consisting of 396 testing questions and 132 validation questions. We recruit 31 participants, with 16 from our institution and the rest from three world-class CTF (Capture The Flag) teams\({}^{1}\). All participants have extensive programming experience, with 26 of them having utilized C/C++ in project development and 10 possessing over three years of hands-on expertise in reverse engineering. We ensure that at least four participants respond to each question.

Footnote 1: CTFs are renowned competitions designed to challenge participants in solving computer security problems, including reverse engineering tasks. In order to determine the world-class standing of a CTF team, we assess whether they have achieved a top-10 ranking on CTFTime (Cheng et al., 2017) at least once during the period spanning from 2013 to 2023.

**Overall Results.** Fig. 10 delineates the results of the user study, with the x-axis representing user scores and the y-axis indicating the count of predicted names corresponding to each score. It is clear that LmPa surpasses the other three methods, as the majority of its predicted names achieve scores of 4 and 5, i.e., "good names", indicating that LmPa is good at providing semantically meaningful names. ChatGPT without propagation also exhibits a relatively commendable performance compared to the baselines. However, due to the lack of a propagation mechanism and the inability to aggregate derived information, it yields fewer good names. Specifically, LmPa generates good names for 75% of the variables, and ChatGPT without propagation generates good names for 45%; the two baseline methods trail far behind. Note that the majority of DIRTY's predictions are scored 2 (i.e., meaningless names), and none of them obtain a score of 1 (i.e., misleading names). This can be attributed to DIRTY's conservative nature, which tends to generate dummy names such as a1.

**Effectiveness of the Name Similarity Metric.** Piggybacking on this experiment, we validate the effectiveness of the proposed automated metric. Specifically, for each predicted name, we calculate its similarity to the corresponding ground-truth name and compare the similarity score with the scores given by users. Fig. 11 presents the results. The x-axis represents a name similarity score threshold. The y-axis indicates the average user study score of the predicted names whose similarity scores exceed the threshold. Observe that the average user study score has a close-to-linear positive relation with the threshold. This validates that the similarity score serves as a reasonable approximation of the semantic equivalence of variable names from a user standpoint. Observe that when the threshold is 0.0, the user score is still slightly above 3.
It essentially indicates that the average user study score of all predicted variables (generated by the four subject methods) is marginally above 3.

### RQ2: Quality of Predicted Names

To assess the degree to which the names generated by LmPa correspond with their ground-truth counterparts, we employ the similarity function to gauge the prediction quality, thereby enabling the evaluation to be scaled across the entire benchmark. Fig. 11 shows that when the threshold is set at 0.6, the average human score exceeds 4, indicating that the predicted names are acceptable alternatives as rated by users. Consequently, we select a threshold of 0.6 for the similarity metric, meaning that a predicted name is deemed a "good name" if its similarity score surpasses 0.6. A good prediction is treated as a _true positive_, based on which we can further calculate the _precision_ and _recall_ of a name prediction technique (Kang et al., 2017); a minimal version of this computation is sketched at the end of this subsection.

**Overall Results.** Table 1 shows the performance of LmPa in comparison to DIRTY, the current state-of-the-art technique for predicting variable names in decompiled code. Note that DIRTY assumes the decompiled program has the ground-truth function names. _We thus provide the names of functions (only) in DIRTY's test samples_. The results of LmPa are obtained on programs without ground-truth function names. Although the setup for LmPa is more challenging, LmPa outperforms DIRTY on most datasets in terms of both precision and recall. We attribute this to the advances of the LLM and to the name propagation technique that provides more context for the queries to the LLM.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Total} & \multicolumn{2}{c}{LmPa} & \multicolumn{2}{c}{DIRTY} & \multirow{2}{*}{Overlap} \\ \cline{3-6} & & Precision & Recall & Precision & Recall & \\ \hline Coreutils & 820 & 30.86\% & 27.93\% & 11.61\% & 8.17\% & 16.95\% \\ Findutils & 743 & 32.33\% & 30.42\% & 15.60\% & 11.84\% & 14.00\% \\ SQLite & 563 & 27.77\% & 26.29\% & 15.05\% & 4.80\% & 3.20\% \\ ImageMagick & 488 & 48.73\% & 43.24\% & 11.96\% & 6.76\% & 0.00\% \\ Diffutils & 727 & 33.28\% & 31.09\% & 13.45\% & 9.49\% & 14.86\% \\ Binutils & 886 & 30.15\% & 27.77\% & 36.13\% & 24.27\% & 61.40\% \\ \hline **Average** & - & **33.85\%** & **31.12\%** & **17.31\%** & **10.89\%** & - \\ \hline \hline \end{tabular} \end{table}

Table 1. Comparison between LmPa and DIRTY. Column "Total" is the number of variables in our samples in each dataset. Column "Overlap" shows the portion of variables that overlap with the training set of DIRTY.

LmPa achieves the highest improvement on the ImageMagick dataset, with a precision over four times that of DIRTY and a recall over six times higher. Further analysis attributes the relatively higher performance to ImageMagick's heavy reliance on external library function calls, which supply an abundance of hints to the LLM. On the Binutils dataset, DIRTY slightly outperforms LmPa in terms of precision. That is because more than 60% of the variables in that dataset overlap with DIRTY's training set (see the last column of Table 1), while such overlap is lower than 17% in the other benchmarks. Note that DIRTY was trained on functions randomly sampled from Github, and thus its training data may overlap with some functions in our test sets. On the other hand, LmPa still outperforms DIRTY in terms of recall. That is because LmPa propagates program context across functions, while DIRTY makes predictions based on the local context of a function.
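As a concrete illustration, the sketch below computes precision and recall under one plausible reading of the definitions (the paper defers them to the cited reference): precision over the variables that received a confident prediction, recall over all variables. This reading is an assumption; the sketch reuses the similarity function shown earlier.

```python
def precision_recall(pairs, threshold=0.6):
    """pairs: list of (ground_truth_name, predicted_name_or_None) per variable.

    Assumed reading: a prediction is a true positive when its similarity to
    the ground truth exceeds the threshold; precision is computed over the
    variables that received a confident prediction, recall over all variables.
    """
    predicted = [(gt, p) for gt, p in pairs if p is not None]
    good = sum(1 for gt, p in predicted if similarity(gt, p) > threshold)
    precision = good / len(predicted) if predicted else 0.0
    recall = good / len(pairs) if pairs else 0.0
    return precision, recall
```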
Note that the authors of DIRTY reported better precision and recall in their paper. The reason is that they used many small projects from Github, in which variable names tend to have stronger connections with the provided ground-truth function names. In comparison, the benchmarks used in our evaluation are more complex than 80% of those used in DIRTY.

_Discussion._ Observe that the precision and recall of LmPa are not as remarkable as one would hope. However, this does not necessarily mean that LmPa cannot provide informative names. Based on our observations, the similarity metric used is relatively strict. Even if the predicted names are semantically equivalent to the ground-truth names, they may not receive a high similarity score. For example, params and options are semantically equivalent, but they have a very low similarity score. Our user study indeed indicates that over 75% of the predicted names are considered good by users. An ideal solution would be to precisely measure the semantic distance between two names. However, the substantial variations in naming conventions make the development of such a method very challenging. We leave it to future work. We further conduct a case study to show a typical failure of LmPa in Section 4.7.

**Assessment with Various Thresholds.** We further compare the performance of LmPa and DIRTY with different thresholds for a "good name". The result shows that LmPa outperforms DIRTY across the entire spectrum of threshold levels. Details can be found in Section D of our supplementary material.

### RQ3: Ablation Study

To better understand the effects of the name propagation analysis, we conduct three ablation studies. The first study compares the performance of LmPa with that of asking ChatGPT one-shot. The second study compares LmPa with a naive approach that simply appends the callee functions of the queried function to the query text. The last ablation study shows how LmPa gradually achieves better performance as the number of propagation iterations grows.

**Comparison with One-shot ChatGPT Queries.** Fig. 12 presents a comparison between LmPa and one-shot ChatGPT queries, with the left figure illustrating precision and the right figure depicting recall. Notably, LmPa achieves a slightly superior, yet generally comparable, precision in relation to one-shot ChatGPT queries. Upon closer examination, we find that, for a given variable, when ChatGPT lacks sufficient information to predict an appropriate name, it tends to generate a "dummy name". These names are subsequently eliminated through the name selection process. Consequently, only variables with adequate contextual information receive predicted names. As such, the precision primarily assesses ChatGPT's capability of predicting names for variables already rich in contextual information and is not directly related to the presence of the propagation mechanism. Nevertheless, LmPa significantly outperforms one-shot ChatGPT queries in terms of recall, achieving approximately twice the performance in most cases. This can be attributed to the effective propagation mechanism.

**Comparison with a Naive Algorithm.** We conduct a study to show that LmPa substantially outperforms a method that includes callee functions in ChatGPT queries. Details are in Section E of the supplementary material.

**Impact of the Number of Propagation Iterations.** We observe that the performance improvement is substantial in the first few rounds of analysis, and 10 rounds deliver optimal results. Details are in Section F of the supplementary material.
### RQ4: Scalability

On average, querying ChatGPT once takes 22.8 seconds, leading to a relatively high time consumption for LmPa. However, the queries can be easily parallelized, and the cost is justifiable in practice, given the one-time nature of reverse engineering efforts. Furthermore, LmPa scales well to large programs, which in fact provide more context. Details can be found in Section G of the supplementary material.

### RQ5: Robustness

Due to the nondeterministic nature of LLMs, we repeat an experiment on the Coreutils dataset eight times to illustrate the robustness of LmPa. In each run, we let LmPa propagate names for 4 iterations. The results show that LmPa has stable performance across different runs, with less than 0.04% variation, and the improvement from round to round is significantly larger than the variance. Details can be found in Fig. 22 in the supplementary material.

### Case Studies

**Performance on Unseen Programs.** ChatGPT is trained on enormous data. It is unclear whether our benchmarks have been used in ChatGPT's training. To study LmPa's performance on unseen programs, we conduct a case study on AudioFlux (Wang et al., 2017), an audio processing library project started in 2023. The results show that LmPa is equally effective, whereas the baselines have lower than 5% precision and recall. Details are in Section H of the supplementary material.

**Failure Case of LmPa.** We examine a failure case of LmPa, which received a score of 1 in our user study. Figure 13 presents the source code for this case, which is simplified for illustrative purposes. The code represents a wrapper function for memcmp, in which buf1 and buf2 are input memory buffers, while size1 and size2 denote the respective buffer sizes. The variable tmp_size2 is a copy of size2. The code utilizes tmp_size2 to store the value of size2, which will be modified later. Although LmPa accurately predicts the name of size2, it erroneously assigns the name size_diff to tmp_size2. One might wonder why tmp_size2 = size2 (line 3) does not help resolve the issue, given the name propagation analysis. Recall that, unlike inter-procedural hints, LmPa does not employ code comments to explicitly propagate intra-procedural hints. Instead, we rely on the LLM itself to detect the potential relations among variables within the same function, avoiding the submission of lengthy queries to the LLM that might end up confusing the model. In this case, ChatGPT does not correctly determine the relation between size2 and tmp_size2, and our propagation does not help either. This issue could be tackled either by devising more sophisticated propagation rules for intra-procedural hints or by adopting a more advanced LLM. In fact, we assessed the failure case utilizing a variant of LmPa built upon GPT-4 (Zhu et al., 2017). The GPT-4-based LmPa successfully determines the desired relation and predicts tmp_size2 as size2_copy, supporting our hypothesis that LmPa's performance exhibits a positive correlation with LLM quality.

**Query with Program Functionality Description.** To simulate realistic applications in which analysts roughly know a program's functionality, we provide a textual description of a program at the beginning of LmPa's query prompts and find that LmPa's performance improves. Details are in Section I of the supplementary material.

**Query to GPT-4.** We substitute ChatGPT with GPT-4 to investigate the impact of a more advanced LLM on LmPa's performance.
Due to GPT-4's slower processing speed compared with ChatGPT (Zhu et al., 2017), we randomly sample a smaller dataset from Coreutils, comprising 140 variables, and evaluate both GPT-4-driven LmPa and ChatGPT-driven LmPa on this dataset. The results are presented in Table 2. Observe that LmPa demonstrates better performance when powered by GPT-4. Specifically, with a propagation iteration count of 4, the GPT-4-driven LmPa achieves over 13% higher precision compared with its ChatGPT-driven counterpart. We attribute this to the superior capability of GPT-4. Note that precision essentially measures the LLM's performance when making confident predictions (see Section 4.3). Thus a stronger LLM leads to better performance of LmPa. Additionally, the GPT-4-driven version achieves better recall than the ChatGPT-driven one. We attribute the improvement to GPT-4's better capability of inferring good names based on local information, rendering the overall contextual information propagation more effective. It is also noteworthy that, for both GPT-4-driven LmPa and ChatGPT-driven LmPa, the propagation algorithm leads to improved results. This highlights the necessity of our name propagation analysis, regardless of the underlying LLM employed. Such results indicate that the performance of LmPa can be further enhanced as more powerful LLMs become available, while the propagation analysis continues to play an essential role in achieving optimal results.

## 5. Threats to Validity

We choose to use ChatGPT, a closed-source LLM. The reported results may hence be tied to a specific version of ChatGPT. We have logged the interactions with ChatGPT for reproducibility. In addition, our technique is independent of the LLM. Our case study shows that the performance of LmPa has a positive correlation with LLM quality, which is supposed to improve over time. LLMs including ChatGPT are trained on enormous multi-modal data. It is unclear if the benchmarks used in the paper had been used in ChatGPT's training. This is a general threat to validity for any research using LLMs. On one hand, we compile the benchmarks and generate fresh binaries, which likely differ from the binaries used in LLM training. On the other hand, we argue that LLMs are so general that they are unlikely to overfit on or memorize specific training examples. In addition, our ablation study on a very recent project (unlikely seen by ChatGPT) shows that LmPa is equally effective, whereas the baseline has substantially degraded performance. LLMs' responses are nondeterministic in general. Our ablation study shows that name prediction by ChatGPT yields largely stable results. Our user study is susceptible to human errors. To mitigate the threat, we have carefully planned the study, using validation tests as part of the study, choosing programmers with extensive experience (e.g., in reverse engineering), and having multiple users cover each test.
\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Tool} & \multicolumn{2}{c}{Iteration: 0} & \multicolumn{2}{c}{Iteration: 4} \\ \cline{2-5} & Precision & Recall & Precision & Recall \\ \hline LmPa\({}_{GPT4}\) & 50.00\% & 28.57\% & 60.95\% & 45.71\% \\ LmPa\({}_{ChatGPT}\) & 41.25\% & 23.57\% & 47.22\% & 36.43\% \\ \hline \hline \end{tabular} \end{table}

Table 2. Performance on GPT-4. LmPa\({}_{GPT4}\) and LmPa\({}_{ChatGPT}\) denote LmPa with GPT-4 and ChatGPT, respectively.

Figure 13. A failure case of LmPa

## 6. Related Work

**Binary Analysis.** Binary analysis is of fundamental importance in the field of software security and software engineering, encompassing a range of critical downstream applications such as malware analysis (Bahdan et al., 2017; Chen et al., 2017; Chen et al., 2018; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), vulnerability detection (Wang et al., 2017; Wang et al., 2017), software fingerprinting (Zhu et al., 2017), APT attack forensics (Bahdan et al., 2017; Chen et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), and software reuse (Wang et al., 2017; Wang et al., 2017). LmPa is intrinsically connected to decompilation (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), a foundational task in binary analysis. In addition to the related works discussed in Section 2.1, substantial research has been conducted in the area of decompilation, addressing topics such as type inference (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), binary-level data-flow analysis (Bahdan et al., 2017; Wang et al., 2017), function signature inference (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), and binary similarity (Kalalem and Yang, 2018; Yang et al., 2019; Yang et al., 2020; Yang et al., 2020; Yang et al., 2020; Yang et al., 2020; Yang et al., 2020). Our work is orthogonal to these existing contributions.

**Large Language Models.** Large Language Models (LLMs) have made significant breakthroughs in language understanding and generative tasks, including language translation (Yang et al., 2020; Yang et al., 2020), text summarization (Yang et al., 2020; Yang et al., 2020; Yang et al., 2020; Yang et al., 2020), question answering (Yang et al., 2020; Yang et al., 2020; Yang et al., 2020; Yang et al., 2020), and so on. LLMs developed for programming languages (Yang et al., 2020; Yang et al., 2020; Yang et al., 2020; Yang et al., 2020) have also shown their capabilities in software engineering tasks, such as code translation (Yang et al., 2020; Yang et al., 2020; Yang et al., 2020), code completion (Yang et al., 2020; Yang et al., 2020; Yang et al., 2020), and program repair (Yang et al., 2020; Yang et al., 2020; Yang et al., 2020). In this paper, we are the first to explore the potential of LLMs, especially ChatGPT, for name recovery, and we demonstrate through extensive evaluation that they can significantly improve performance on this important task.

## 7. Conclusion

We develop a novel technique for symbol name recovery in decompilation. It leverages the synergy between large language models and program analysis. It features an iterative algorithm that propagates query results from ChatGPT following program semantics. The propagation in turn provides better context for ChatGPT.
Our results show that 75% of the recovered names are considered good by users and our technique outperforms the state-of-the-art technique by 16.5% and 20.23% in precision and recall, respectively.
2303.01849
An investigation into the adaptability of a diffusion-based TTS model
Given the recent success of diffusion in producing natural-sounding synthetic speech, we investigate how diffusion can be used in speaker adaptive TTS. Taking cues from more traditional adaptation approaches, we show that adaptation can be included in a diffusion pipeline using conditional layer normalization with a step embedding. However, we show experimentally that, whilst the approach has merit, such adaptation alone cannot approach the performance of Transformer-based techniques. In a second experiment, we show that diffusion can be optimally combined with Transformer, with the latter taking the bulk of the adaptation load and the former contributing to improved naturalness.
Haolin Chen, Philip N. Garner
2023-03-03T11:06:20Z
http://arxiv.org/abs/2303.01849v1
# An Investigation Into the Adaptability of a Diffusion-Based TTS Model

###### Abstract

Given the recent success of diffusion in producing natural-sounding synthetic speech, we investigate how diffusion can be used in speaker adaptive TTS. Taking cues from more traditional adaptation approaches, we show that adaptation can be included in a diffusion pipeline using conditional layer normalization with a step embedding. However, we show experimentally that, whilst the approach has merit, such adaptation alone cannot approach the performance of Transformer-based techniques. In a second experiment, we show that diffusion can be optimally combined with Transformer, with the latter taking the bulk of the adaptation load and the former contributing to improved naturalness.

Haolin Chen, Philip N. Garner

Idiap Research Institute, Martigny, Switzerland

_Index Terms:_ Text-to-speech, speaker adaptation, diffusion model, conditional layer normalization

## 1 Introduction

Recent years have seen successful applications of adaptive text-to-speech (TTS) [1, 2, 3, 4] to synthesize personalized voices for target speakers. In the typical scenario of adaptive TTS, a source acoustic model, which is usually trained on a large multi-speaker corpus, is adapted with a small amount of adaptation data to synthesize the desired voice. Concurrently, in the general field of acoustic modeling, deep generative models (DGMs) [5, 6, 7] have demonstrated their superiority over other solutions in high-quality and fast synthesis. In particular, the more recent diffusion models [7, 8, 9] have dominated this field in terms of intelligibility and naturalness. Current research in adaptive TTS focuses on 1) enhancing the generalizability of the source model to various acoustic conditions and styles, as well as 2) improving the data and parameter efficiency of adaptation. The first can further be categorized into 1) employing pluggable reference encoders to generate representations of the acoustic information and style on different semantic levels [1, 2, 10]; and 2) ad-hoc designs of model structure that control desired features [2, 3]. Furthermore, such adaptation techniques should be based on architectures with high synthesis quality, in which respect diffusion-based acoustic models have surpassed their flow-based predecessors [5, 11], while enjoying more flexibility in network design. Since diffusion models were first applied in TTS, many works [9, 12, 13] have demonstrated how to accelerate the generative process substantially, to a speed similar to that of their fastest counterparts, without much degradation of synthesis quality. In general, we are interested in parameter-efficient adaptation techniques for diffusion-based acoustic models that enhance their generalizability. Despite diffusion models having been well explored for generic acoustic modeling, few works have exploited them in adaptive TTS systems. Guided-TTS 2 [14], the only diffusion-based adaptive TTS system we are aware of, utilizes diffusion with classifier guidance to adapt to diverse voices. However, the method lacks parameter efficiency for each target, as all parameters of the diffusion model are finetuned during adaptation, and it is not within the typical encoder-decoder framework. Since parameter-efficient adaptation techniques exist for other architectures such as the Transformer [2], and given the superior synthesis quality of diffusion models, such a method for diffusion would be of great interest to the community, enabling both parameter-efficient and high-quality adaptation.
Based on the analyses above, we investigate the adaptability of diffusion-based acoustic models, with a special focus on parameter-efficient solutions. Specifically, our experiment is based on a typical diffusion denoiser network architecture, being a bidirectional dilated convolutional neural network. Inspired by observations from HMM-based adaptation, we propose introducing conditional layer normalization (CLN) to the denoiser network. Preliminary experimental results suggest that although it is viable to adapt the diffusion decoder, simply relying on adapting diffusion is not sufficient for high-quality adaptation; it also indicates inferior generalizability and adaptability of the denoiser. We further introduce adaptive Transformer layers as part of the decoder and observe the impact of adding different numbers of such layers on the adaptation quality. Our result shows that, while CLN in the denoiser network contributes to better speech quality and speaker similarity, it must be used in combination with adaptive Transformer layers to achieve usable adaptation quality. We conclude that for this particular type of diffusion model, its best use case in an adaptive TTS system is as a post-processing net that helps refine the detail of mel-spectrograms generated by a Transformer decoder.

## 2 Adaptive Diffusion Decoder

### Diffusion-based acoustic model

In principle, diffusion models generate samples by denoising a sample from the prior distribution into real data through a diffusion process. Although taking different approaches, the learning problem of diffusion models can be expressed in terms of learning a denoiser network that predicts the noise in each diffusion step. The prevalent architecture of diffusion acoustic models comprises a Transformer-based phoneme encoder and a diffusion denoiser decoder. Here we mainly focus on the network design of the denoiser. The most widely used structure of the denoiser network is the bidirectional dilated convolutional network [7, 8, 9, 13]; other choices include the U-Net [14, 15]. As depicted in Fig. 1(a), the denoiser takes the sample from the previous step as input to predict the noise in the reverse diffusion process, conditioned on the encoded phoneme sequence \(C_{text}\) and the step embedding \(t\). The network mainly consists of an input convolution layer and \(N\) convolutional blocks with residual connections and skip outputs, after which the skip outputs are accumulated to generate the final prediction through output convolution layers.

### Conditional layer normalization for denoiser

Previous works [2, 3] find that the layer normalization in the Transformer can greatly affect the output with a light-weight adaptable scale vector \(\gamma\) and bias vector \(\beta\): \(LN(x)=\frac{x-\mu}{\sigma}*\gamma+\beta\), where \(\mu\) and \(\sigma\) are the mean and variance of the input vector \(x\), respectively. Furthermore, the two vectors can be generated by a small neural network conditioned on the speaker embedding, which can be finetuned when adapting to a new voice; this significantly reduces the number of parameters to be adapted while maintaining adaptation quality. Following [2], we refer to this module as conditional layer normalization (CLN). In particular, we are interested in integrating the CLN into the denoiser. Considering the application of the CLN in the Transformer, the operations take place on the whole hidden representation, and gradually change the prediction along the Transformer blocks.
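To make the module concrete, here is a minimal PyTorch sketch of CLN, assuming \(\gamma\) and \(\beta\) are produced by linear layers from a conditioning embedding (the speaker embedding alone, or the concatenation with the step embedding discussed below); the class and parameter names are illustrative, and the standard deviation is used for the normalizing \(\sigma\).

```python
import torch
import torch.nn as nn

class ConditionalLayerNorm(nn.Module):
    """LN(x) = (x - mu) / sigma * gamma + beta, where gamma and beta are
    predicted from a conditioning embedding rather than learned as fixed
    vectors."""

    def __init__(self, hidden_dim: int, cond_dim: int):
        super().__init__()
        self.to_gamma = nn.Linear(cond_dim, hidden_dim)  # predicts the scale
        self.to_beta = nn.Linear(cond_dim, hidden_dim)   # predicts the bias

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, hidden_dim); cond: (batch, cond_dim)
        mu = x.mean(dim=-1, keepdim=True)
        sigma = x.std(dim=-1, keepdim=True)
        gamma = self.to_gamma(cond).unsqueeze(1)         # (batch, 1, hidden)
        beta = self.to_beta(cond).unsqueeze(1)
        return (x - mu) / (sigma + 1e-5) * gamma + beta
```

During adaptation, only these two linear layers (together with the speaker embedding) would be finetuned, which is what yields the parameter efficiency discussed in this section.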
Returning to the denoiser network: the final prediction is collected by accumulating skip outputs from all convolution blocks, which is inspired by the idea of incorporating features at multiple levels to generate fine detail. Adding the CLN in or between these blocks makes it apply to only part of the features, which hinders the model from learning such hierarchical information. Moreover, the number of convolution blocks, \(N\), is usually large (\(\geq 12\)), which hampers the parameter efficiency if the CLN is placed in every block. After initial ad-hoc experiments that verified the hypotheses above, we place the CLN right after the 1-D convolution layer at the input, as in Fig. 1(a). The positioning is also inspired by HMM-based adaptation methods such as [16], in which the normalization takes place in the frequency (or cepstral) domain on the whole feature. The property of diffusion models that a single denoiser is used in all diffusion steps, by being conditioned on the step embedding, also makes this a parameter-efficient solution. Instead of being solely conditioned on the speaker embedding, the CLN takes the concatenation of both the speaker and step embeddings to control the strength of the operation, as depicted in Fig. 1(b). Since the module is shared across all diffusion steps, and the normalization should come into effect only when the sample is more refined rather than at the beginning, this mechanism enables the denoiser to automatically learn when to start functioning and at what strength during the whole reverse diffusion process. This is also inspired by [17], in which a Transformer-based denoiser is conditioned on a step embedding through CLN. The following sections describe experiments to verify the system described thus far.

Figure 1: Illustrations of the denoiser network and the conditional layer normalization (CLN).

### Experiment settings

**Implementation details.** The model architecture of the diffusion decoder used in the experiments is based on PriorGrad [9]. As an improved version of Diff-TTS [7], the first diffusion-based acoustic model, PriorGrad leverages a data-dependent prior, of which the mean and variance are phoneme-level statistics extracted from the dataset. Compared to models using a standard Gaussian prior, PriorGrad offers better synthesis quality, higher parameter efficiency, and faster convergence. For the architecture behind the diffusion decoder, we implemented that of AdaSpeech [2], including the phoneme encoder, the acoustic condition modeling module, and the variance adapter. Our implementation is based on the open-source software of the two models. We use the diffusion decoder with 12 convolution blocks, 128 residual channels, and 3.5M parameters proposed in [9]. Other model configurations follow the corresponding parts of AdaSpeech and PriorGrad unless otherwise stated. The total number of parameters to be finetuned for the diffusion decoder is 0.131M, compared to 1.184M for AdaSpeech.

**Data.** We train the source model on the two clean subsets train-clean-100 and train-clean-360 of the LibriTTS dataset [18], a multi-speaker TTS corpus, totaling 1151 speakers and 245 hours of speech. For evaluation, we select 11 speakers (7 females and 4 males) with different accents from VCTK [19], following the practice in [4]. For each speaker, 10 utterances with the same transcripts across all speakers are randomly selected as the test set. The preprocessing of speech and text data follows AdaSpeech, except using a sampling rate of 22,050 Hz.
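As an aside on the data-dependent prior mentioned above, the following sketch shows one way phoneme-level mean and variance statistics could be extracted from aligned mel-spectrograms. It is an illustrative computation under assumed input formats, not the PriorGrad reference implementation.

```python
import numpy as np
from collections import defaultdict

def phoneme_level_prior(mels, alignments):
    """Compute per-phoneme mean/variance statistics for a PriorGrad-style
    data-dependent prior (illustrative; input formats are assumptions).

    mels:       list of (frames, n_mels) arrays, one per utterance
    alignments: list of per-frame phoneme labels, same lengths as mels
    """
    frames = defaultdict(list)
    for mel, align in zip(mels, alignments):
        for frame, phone in zip(mel, align):
            frames[phone].append(frame)      # group frames by phoneme
    stats = {}
    for phone, rows in frames.items():
        rows = np.stack(rows)
        stats[phone] = (rows.mean(axis=0), rows.var(axis=0))
    return stats
```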
**Training, adaptation and inference.** Following AdaSpeech, the training process comprises two stages in which the numbers of steps are 200K and 100K, respectively. The models are trained on one NVIDIA RTX 3090 GPU using a batch size of 50,000 speech frames. For the diffusion decoder, a beta schedule with 400 steps is used for both training and inference. We use the speaker-independent prior calculated on the whole training set. Other hyperparameters follow [9] unless otherwise stated. During adaptation, the model is finetuned using 10 utterances of the target speaker for 2000 steps using a fixed learning rate of \(2\times 10^{-4}\), while only the speaker embedding and the CLN are optimized. In the inference process, a HiFi-GAN vocoder [20] trained on VCTK is used to synthesize waveforms from the generated spectrograms.

### Objective evaluation

For preliminary evaluation, we employ MOSNet [21], a neural network-based objective assessment tool for speech quality that generates a machine-rated MOS (mean opinion score), and a pretrained speaker verification model provided by SpeechBrain [22], which calculates the cosine similarity (CS) between speaker embeddings of the generated sample and the reference. The cosine score ranges from 0 to 1; a higher score means higher speaker similarity to the reference. We found the scores from the two automatic assessment tools were consistent with our subjective judgment, and will conduct human evaluation for the final settings. We compare the performance among the following settings: 1) GT mel + Vocoder, using the ground-truth mel-spectrograms to synthesize waveforms with the HiFi-GAN vocoder; 2) AdaSpeech, the Transformer-based adaptive acoustic model with CLN applied to the decoder; 3) Enc + DiffDec (decoder), a baseline system without the CLN in the denoiser which finetunes the whole decoder during adaptation, as an upper bound; 4) Enc + DiffDec (spk emb), with the same architecture as the previous one but only finetuning the speaker embedding, as a lower bound; 5) Enc + AdaDiffDec, our proposed system with the CLN in the denoiser, which is finetuned together with the speaker embedding during adaptation.

### Results and analyses

The MOSNet and cosine similarity results are shown in Table 1. It can be observed that: 1) adapting the whole diffusion decoder (#3) results in the best speech quality and speaker similarity among all settings, achieving similar performance to AdaSpeech (#2); 2) only finetuning the speaker embedding (#4) results in poor performance; 3) our proposed method (#5) only slightly outperforms the baseline (#4); nevertheless, it is much worse than finetuning the whole decoder (#3) and AdaSpeech (#2). The results indicate that simply relying on adapting the CLN in the denoiser is not sufficient for achieving a reasonable adaptation quality. Furthermore, our test listening suggests that some of the samples synthesized by the three diffusion-based systems (#3-5) are not intelligible, which explains their inferiority to AdaSpeech in the objective tests. It also implies that the diffusion decoder is sensitive to out-of-domain input, and therefore has poor generalizability and adaptability. This is very interesting since it is capable of synthesizing very high-quality and natural speech as a generic acoustic model. The phenomenon suggests that further efforts should be made to improve the adaptability of the system.

## 3 Adapting diffusion: a good choice?
Our preliminary experimental result demonstrated that the previously proposed system does not achieve usable adaptation quality, which suggests that solely adapting the diffusion decoder may not be a good choice; other components need to be introduced to the system to improve the adaptation performance while taking advantage of the high-quality synthesis of the diffusion decoder. Given the fact that Transformer-based adaptive TTS systems have achieved decent adaptation quality, we consider adding Transformer layers with CLN before the diffusion decoder to construct a decoder with a mixed architecture. The method is inspired by a common practice of utilizing DGMs as post-processing nets (post-nets) in acoustic models [6, 23] to refine the over-smoothed sample generated by a VAE (variational auto-encoder) or Transformer.

\begin{table} \begin{tabular}{l|l|c|c} \hline \hline **\#** & **Model** & **MOSNet (\(\uparrow\))** & **CS (\(\uparrow\))** \\ \hline 1 & _GT mel + Vocoder_ & 4.10 & 0.96 \\ \hline 2 & _AdaSpeech_ & 3.78 & 0.52 \\ \hline 3 & _Enc + DiffDec (decoder)_ & 3.80 & 0.50 \\ 4 & _Enc + DiffDec (spk emb)_ & 3.42 & 0.20 \\ \hline 5 & _Enc + AdaDiffDec_ & 3.58 & 0.22 \\ \hline \hline \end{tabular} \end{table}

Table 1: The MOSNet and cosine similarity (CS) scores.

Much improved adaptation quality is expected provided that the diffusion decoder works as a post-net that refines the output of AdaSpeech. We are especially interested in how much performance the additional Transformer layers can bring, and in the difference between adapting both the Transformer decoder and the diffusion decoder versus adapting the Transformer decoder alone.

### Experiments and evaluation

The model configurations in this setting are a grid search combining the following two factors: 1) the number of additional Transformer decoder layers, from 0 to 4, where 4 corresponds to the full Transformer decoder; and 2) whether or not to use the CLN in the diffusion denoiser. Since there are more than 10 systems to compare, we first conduct the objective evaluation as in previous experiments. We then further conduct subjective listening tests to verify the findings from the objective test results. We expect that more Transformer layers result in better speech quality and speaker similarity. However, it is more important to observe the impact of the CLN in diffusion on the performance in such settings. For the subjective listening tests, 10 raters were involved to rate the MOS for naturalness and SMOS for similarity of 10 samples for each system. The test utterances were randomly selected from those used in the objective tests, covering most speakers or accents. Note that the test utterances are the same across all systems. We select the two settings with 4 Transformer decoder layers, which are equivalent to using the diffusion decoder as a post-net on top of AdaSpeech and are expected to have the best performance, and compare them with the vocoder-synthesized ground truth and AdaSpeech.

### Results

The results of the objective evaluation are shown in Figure 2, where the results of the two metrics are displayed in separate plots.
Several observations can be made: 1) the additional Transformer layers significantly improve the performance compared to only using the diffusion decoder; 2) in general, both speech quality and speaker similarity improve with an increasing number of Transformer layers; 3) adding the CLN to the denoiser results in better performance in terms of both metrics in all settings; however, the difference in speaker similarity narrows when the number of Transformer layers is high. The subjective test results are shown in Table 2, where "AS" stands for AdaSpeech. It can be seen that 1) the diffusion decoder on top of the Transformer decoder (#3,4) significantly improves both perceptual speech quality and speaker similarity compared to AdaSpeech (#2), which only uses the Transformer decoder; 2) the CLN in the diffusion decoder further improves the two scores, making #3 the best among all systems; 3) the improvement in speech quality brought by the CLN is larger than that in speaker similarity, which shows only a slight difference. Overall, it has been demonstrated that, although the CLN in the denoiser network contributes to higher adaptation quality, it must be used with adaptive Transformer layers to achieve usable performance. The adaptability of the model mainly relies on the adaptive Transformer layers, which suggests the inferior generalizability and adaptability of the diffusion denoiser compared to the Transformer.

## 4 Conclusion

In this paper, we conducted an investigation into the adaptability of a typical diffusion-based acoustic model under parameter-efficient settings. We proposed conditional layer normalization for the denoiser network and tested its effectiveness for speaker adaptation. We demonstrated that, while it is feasible to adapt the diffusion decoder by this method, it must be used in combination with adaptive Transformer layers to achieve usable adaptation quality. This suggests that the diffusion decoder is less generalizable and adaptable than a Transformer. Future work in this field should focus on improving the diffusion model in the above aspects, or on utilizing the diffusion model as a post-net that refines the mel-spectrograms generated by other adaptable components.

## 5 Acknowledgments

This project received funding under NAST: Neural Architectures for Speech Technology, Swiss National Science Foundation grant 185010.

\begin{table} \begin{tabular}{l|l|c|c} \hline \hline **\#** & **Model** & **MOS** (\(\uparrow\)) & **SMOS** (\(\uparrow\)) \\ \hline 1 & _GT mel + Vocoder_ & \(4.77\pm 0.10\) & \(4.99\pm 0.02\) \\ \hline 2 & _AdaSpeech_ & \(2.38\pm 0.19\) & \(2.94\pm 0.20\) \\ \hline 3 & _AS + AdaDiffDec_ & \(3.02\pm 0.19\) & \(3.24\pm 0.20\) \\ 4 & _AS + DiffDec_ & \(2.84\pm 0.19\) & \(3.16\pm 0.21\) \\ \hline \hline \end{tabular} \end{table} Table 2: The MOS and SMOS scores with 95% confidence intervals.

Figure 2: The MOSNet and cosine similarity scores of settings with different numbers of Transformer decoder layers.
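As a concrete illustration of the parameter-efficient adaptation procedure studied above, the following is a minimal PyTorch-style sketch of finetuning only the speaker embedding and the CLN parameters while freezing the rest of the model. The module and attribute names (`speaker_embedding`, the `cln` substring, `compute_loss`) are hypothetical placeholders for illustration, not the actual implementation.

```python
import torch

# Minimal sketch: freeze everything except the speaker embedding and the
# CLN projections, then finetune on the target speaker's utterances.
def adapt_to_speaker(model, adaptation_batches, steps=2000, lr=2e-4):
    for p in model.parameters():
        p.requires_grad = False
    trainable = []
    for name, p in model.named_parameters():
        # hypothetical naming convention for the adapted parameters
        if "speaker_embedding" in name or "cln" in name:
            p.requires_grad = True
            trainable.append(p)
    opt = torch.optim.Adam(trainable, lr=lr)
    for step in range(steps):
        batch = adaptation_batches[step % len(adaptation_batches)]
        loss = model.compute_loss(batch)  # e.g., the diffusion denoising loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```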
2304.04996
Aging during Phase Separation in Long-Range Ising Model
The kinetics of domain growth and aging in conserved order-parameter systems, in the presence of short-range interactions, is widely studied. Owing to technical difficulties and the computational resources required, the dynamics is still not well established in cases where long-range interactions are involved. Here we present related results from Monte Carlo simulations of the two-dimensional long-range Ising model (LRIM). Random initial configurations, for $50:50$ compositions of up and down spins, mimicking high-temperature equilibrium states, have been quenched to temperatures inside the coexistence curve. Our analysis of the simulation data, for such a protocol, shows an interesting dependence of the aging exponent, $\lambda$, on $\sigma$, the parameter within the Hamiltonian that controls the range of interaction. To complement these results, we also discuss simulation outcomes for the growth exponent. The obtained values of $\lambda$ are compared with a well-known result for the lower bounds. For this purpose we have extracted interesting properties of the evolving structure.
Soumik Ghosh, Subir K. Das
2023-04-11T05:57:36Z
http://arxiv.org/abs/2304.04996v1
# Aging during Phase Separation in Long-Range Ising Model

###### Abstract

The kinetics of domain growth and aging in conserved order-parameter systems, in the presence of short-range interactions, is widely studied. Owing to technical difficulties and the computational resources required, the dynamics is still not well established in cases where long-range interactions are involved. Here we present related results from Monte Carlo simulations of the two-dimensional long-range Ising model (LRIM). Random initial configurations, for \(50:50\) compositions of up and down spins, mimicking high-temperature equilibrium states, have been quenched to temperatures inside the coexistence curve. Our analysis of the simulation data, for such a protocol, shows an interesting dependence of the aging exponent, \(\lambda\), on \(\sigma\), the parameter within the Hamiltonian that controls the range of interaction. To complement these results, we also discuss simulation outcomes for the growth exponent. The obtained values of \(\lambda\) are compared with a well-known result for the lower bounds. For this purpose we have extracted interesting properties of the evolving structure.

## I Introduction

Typically, following a perturbation, relaxation in an older system occurs more slowly than in a younger system. Such aging phenomena, in an evolving system during a phase transition, are often studied via the autocorrelation function [1; 2; 3; 4; 5; 6; 7; 8; 9] \[C_{ag}(t,t_{w})=\langle\psi(\vec{r},t)\psi(\vec{r},t_{w})\rangle-\langle\psi( \vec{r},t)\rangle\langle\psi(\vec{r},t_{w})\rangle. \tag{1}\] Here \(\psi\) is a space- (\(\vec{r}\)) and time-dependent order parameter, the symbols \(t\) and \(t_{w}\) representing, respectively, the observation and the waiting times. The latter is also called the age of the system. The slower decay of \(C_{ag}\) for an older system is a violation of time-translation invariance. This implies that in an "away from steady-state" situation there is no scaling or collapse of data for \(C_{ag}\) when results for different \(t_{w}\) are plotted versus \(t-t_{w}\). However, collapse is observed [1] when the data are plotted versus \(t/t_{w}\). In the limit \(t/t_{w}\rightarrow\infty\), one then discusses the scaling behaviour [1] \[C_{ag}(t,t_{w})\sim\left(\frac{t}{t_{w}}\right)^{-\lambda^{t}}. \tag{2}\] Note that during a phase transition, as a homogeneous or disordered system is quenched to a miscibility gap or ordered region of the phase diagram, domains, rich or poor in specific components, form, and their average size, \(\ell\), grows as [10] \[\ell\sim t^{\alpha}. \tag{3}\] One further defines \(\lambda=\lambda^{t}/\alpha\), such that \(C_{ag}(t,t_{w})\sim(\ell/\ell_{w})^{-\lambda}\), where \(\ell_{w}\) is the value of \(\ell\) at \(t=t_{w}\). The quantities \(\alpha\) and \(\lambda\) are two important power-law exponents in the literature on the kinetics of phase transitions [1; 6; 10; 11]. These are referred to as the growth and the aging exponents, respectively. For many systems with short-range interactions [1; 2; 3; 4; 5; 6; 7; 8; 9; 11; 12; 13; 14; 15], particularly for the Ising model (IM) [1; 3; 5; 6; 7; 8; 11; 15], these have been estimated fairly accurately. Despite the IM being one of the most popular prototype models for studies of phase transitions, progress is very limited for the long-range (LRIM) variety.
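As a concrete illustration of how Eq. (1) can be estimated from lattice configurations, the following is a minimal NumPy sketch, with the angular brackets realized as an average over lattice sites (and, in practice, over independent initial configurations). The snapshot format is a hypothetical choice for illustration, not the authors' simulation code.

```python
import numpy as np

# Sketch of the two-time autocorrelation of Eq. (1). `snapshots` is assumed
# to map times (in MCS) to L x L arrays of +/-1 spins.
def autocorrelation(snapshots, t, t_w):
    a = snapshots[t].astype(float)
    b = snapshots[t_w].astype(float)
    # the site average plays the role of the angular brackets
    return (a * b).mean() - a.mean() * b.mean()
```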
The Ising model Hamiltonian can be written in the general form [6; 9; 10] \[H=-\frac{1}{2}\Sigma_{i}\Sigma_{j\neq i}J_{ij}S_{i}S_{j}, \tag{4}\] where \(J_{ij}\) is the strength of interaction between spins \(S_{i}\) and \(S_{j}\) (\(=\pm 1\)), sitting at the lattice points \(i\) and \(j\). For \(J_{ij}>0\), one expects mostly parallel alignment of spins at low enough temperatures. For standard purposes [10], one considers \(J_{ij}=J\) and terminates the interaction at the nearest-neighbour distance. For defining the LRIM, on the other hand, a power-law variation of the strength, as a function of \(r\), the inter-site distance, is considered [16; 17]: \[J_{ij}=\frac{J}{r^{d+\sigma}}, \tag{5}\] where \(d\) is the spatial dimension and \(\sigma\) is a constant. In the equilibrium context, the value of \(\sigma\) that separates the short-range and long-range universality classes is close to two [18]. For problems associated with kinetics, such a boundary is set [16] at \(\sigma=1\). For nonconserved order-parameter dynamics, which is relevant for ordering, say, in magnetic systems, it is predicted [16] that \[\alpha=\begin{cases}\frac{1}{1+\sigma},&\text{for }\sigma<1\\ \frac{1}{2},&\text{when }\sigma>1.\end{cases} \tag{6}\] Similar predictions exist for the kinetics of phase separation, say, in a binary (A + B) mixture, with conserved order parameter, as well [16; 19]: \[\alpha=\begin{cases}\frac{1}{2+\sigma},&\text{for }\sigma<1\\ \frac{1}{3},&\text{when }\sigma>1.\end{cases} \tag{7}\] Certain logarithmic behaviour exists [16] for \(\sigma=1\). There are no such general predictions for \(\lambda\). In fact, for \(\lambda\), existing theoretical predictions are only for the nearest-neighbour case with nonconserved dynamics [1; 2]. For this quantity, not only the theoretical calculations but also the computer simulations, and their analysis, are challenging, irrespective of the range of interaction, particularly for conserved dynamics. For the long-range interaction, there has been a surge of interest in recent times [20; 21; 9; 22; 17]. However, the focus has been on the nonconserved variety. Only two studies [21; 23], including a recent one [21], to the best of our knowledge, investigated the conserved case, reporting results on domain growth. In this work, we present results on the _aging phenomena_ for the conserved variety, for a wide range of values of \(\sigma\), in \(d=2\). To analyse the results on aging, it becomes necessary to calculate \(\ell\). Our results on this latter quantity are in good agreement with the above-mentioned recent report [21]. We also discuss the results on \(\lambda\) against a well-known bound [3].

## II Model and method

We have considered spins on periodic square lattices of size \(L\times L\). Unless mentioned otherwise, the results are presented for \(L=256\), the unit being the lattice constant. Conserved order-parameter dynamics requires that the total numbers of \(+1\) (or particles of type \(A\)) and \(-1\) (or particles of type \(B\)) spins remain constant throughout the evolution of the system. To ensure this, Kawasaki spin-exchange dynamics is used in our Monte Carlo simulations [24]. In this process, two (nearest) neighbouring sites are randomly chosen and the corresponding spin states are exchanged with a certain probability following the Metropolis criterion [24]. For this purpose, the energy change needs to be calculated. With the Hamiltonian in Eq. (4), and the coupling term mentioned in Eq.
(5), this calculation is computationally demanding [25]. To minimize the cost of computation, a generalized [26] Ewald summation [26; 27] technique is used. We have carried out the calculations with our (in-house) _parallel_ codes, written with OpenMP and MPI, for even faster computation. Time in our simulations is measured in units of Monte Carlo steps (MCS), one MCS being equivalent to \(L^{2}\) trials. For each \(\sigma\) value, random initial configurations, with equal concentrations of up and down spins, are quenched to a temperature that is \(0.6\) times the corresponding critical temperature [26]. At a finite temperature there exists noise in the structure. This was removed, via a majority spin rule [11], for the calculation of lengths. The latter quantity was obtained from the domain-size distribution function. All results are presented after averaging over \(80\) initial configurations. For each value of \(\sigma\), the total run length was \(t=8\times 10^{4}\) MCS.

## III Results

In Fig. 1 we show a few plots of the autocorrelation function, with the variation of \(t/t_{w}\), for \(\sigma=0.6\). The original results are presented in the inset. The jumps there are connected with the equilibration of the domain magnetization [6]. In the main frame the results are presented by discarding this feature, keeping only the data connected with the growth of domains. Essentially, we have rescaled the ordinate after removing the points associated with the jump. That way \(C_{ag}\) appears to tend to unity, as \(t/t_{w}\to 1\), in a continuous fashion. Such a transformation does not alter the outcomes of the analysis that we perform below. It, in fact, brings better visual clarity over the relevant range. Nice collapse of data for different \(t_{w}\) values is observed. It appears from these plots that we are reasonably away from the finite-size affected region [7; 8; 12]. Note that in the finite-size affected situation, data for different \(t_{w}\) will deviate from the master curve. The solid line in this figure represents a power-law decay with the exponent value mentioned in the figure. For large values of \(t/t_{w}\), the simulation data are reasonably consistent with this line. However, to derive more accurate information on the decay of \(C_{ag}\), below we carry out a more careful analysis. We calculate the instantaneous exponent [1; 7; 8; 15; 28] \(\lambda_{i}^{t}\) as \[\lambda_{i}^{t}=-\frac{d\ln C_{ag}(t,t_{w})}{d\ln(t/t_{w})}, \tag{8}\] by anticipating that in the large \(t/t_{w}\) limit \(C_{ag}\) indeed decays in a power-law fashion. In Fig. 2(a) we present \(\lambda_{i}^{t}\) as a function of \(t_{w}/t\). Here we have included results from several values of \(\sigma\). These show a linear trend. There exist no discernible quantitative differences among the plots for different \(\sigma\). The broken line there is a fit of the simulation data for \(\sigma=0.8\) to the linear form \(\lambda_{i}^{t}=\lambda^{t}+at_{w}/t\), \(a\) being a constant. This exercise provides \(\lambda^{t}=0.98\). In part (b) of Fig. 2 we show the thus-obtained values of \(\lambda^{t}\), versus \(\sigma\). No particular dependence is evident. The variations appear random, consistent with the above statement concerning the absence of any noticeable difference. Thus, we treat \(\lambda^{t}\) as practically independent of \(\sigma\) in this range and obtain its value by averaging over the numbers for different \(\sigma\). This way we estimate \(\lambda^{t}=0.93\).
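The analysis of Eq. (8), together with the linear extrapolation just described, can be summarized in a short NumPy sketch; the input arrays are hypothetical stand-ins for the simulation data, not the authors' actual analysis code.

```python
import numpy as np

# x = t/t_w and c = C_ag(t, t_w), both arrays obtained from the simulation
# (hypothetical inputs, for illustration only).
def instantaneous_exponent(x, c):
    """lambda_i^t = -d ln C_ag / d ln(t/t_w), cf. Eq. (8)."""
    return -np.gradient(np.log(c), np.log(x))

def extrapolated_lambda_t(x, c):
    """Fit lambda_i^t = lambda^t + a * (t_w/t); return the intercept lambda^t."""
    a, lam_t = np.polyfit(1.0 / x, instantaneous_exponent(x, c), 1)
    return lam_t
```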
To obtain \(\lambda\) from \(\lambda^{t}\), we need \(\alpha\). While Eq. (7) provides the \(\sigma\)-dependence of \(\alpha\), which was confirmed by the recent simulations of Muller et al. [21], here we revisit this issue. Before moving to that, we mention that \(\lambda\) may have \(\sigma\)-dependence in this range. But the random appearance of the values indicates that the dependence is weak.

Figure 1: Plots of the autocorrelation function, \(C_{ag}(t,t_{w})\), as a function of \(t/t_{w}\), for the LRIM with \(\sigma=0.6\). Results for a few different ages have been included. The inset contains the actual data. Results in the main frame are scaled by a pre-factor, after discarding the jump, such that \(C_{ag}\) smoothly approaches \(1\) as \(t/t_{w}\to 1\). The solid line represents a power-law decay with the mentioned value of the exponent.

Figure 2: (a) The instantaneous exponent, \(\lambda^{t}_{i}\), is shown versus \(t_{w}/t\). Results from four \(\sigma\) values have been displayed. The broken line is a linear fit to the data set corresponding to \(\sigma=0.8\). This provides \(\lambda^{t}=0.98\). (b) Plot of \(\lambda^{t}\), obtained from the fits discussed in (a), versus \(\sigma\).

In Fig. 3(a) we show plots of \(\ell\) versus \(t\), on a double-log scale, for two values of \(\sigma\), viz. \(0.6\) and \(0.9\). The growth, after early transients, appears stronger for the smaller value of \(\sigma\). In part (b) of Fig. 3, we show the instantaneous exponent for \(\ell\), viz. \(\alpha_{i}\) (\(=d\ln\ell/d\ln t\)), with the variation of \(t\). The difference between the two cases is clearly visible. These outcomes, implying higher \(\alpha\) for smaller \(\sigma\), are in agreement with those in Ref. [21]. Convergence of \(\alpha_{i}\) to the theoretical values of \(\alpha\) can also be appreciated. In Fig. 4(a) we show scaling plots [10; 29] of the equal-time structure factor, \(S(k,t_{w})\), for \(t_{w}=20000\), using data from different system sizes, viz., \(L=256\) and \(512\). The chosen value of \(\sigma\) is \(0.6\). While the data for the two system sizes agree with each other, the set from the larger system appears more useful with respect to the identification of the short wave number (\(k\)) behaviour. In Fig. 4(b), we show results from \(L=512\). For \(\sigma=0.6\), results for two different times are shown with symbols. Nice scaling collapse is observed, confirming the validity of the scaling form \(S(k,t_{w})=\ell^{d}\tilde{S}(k\ell)\), \(\tilde{S}\) being a master function. The small-\(k\) behaviour appears to be of power-law [30] type: \(\sim k^{\beta}\), with \(\beta=2.9\). This way we have extracted \(\beta\) for several values of \(\sigma\). At large \(k\), \(S(k,t_{w})\) is expected to obey the Porod law [10], \(\sim k^{-3}\). Indeed, this appears to be the case. Interestingly, as opposed to the nearest-neighbour case [10], one more decay step appears that is also consistent with \(k^{-3}\). This multiple-step behaviour goes away with the increase of \(\sigma\). See the plot for \(\sigma=0.8\) in Fig. 4(b), for which the above-mentioned multiple-step decay is absent, as in Ref. [21].

Figure 3: (a) Plots of average domain lengths with the variation of time. Results for two values of \(\sigma\), viz., \(0.6\) and \(0.9\), are included. These are shown on a double-log scale. (b) The inverses of the instantaneous exponents \(\alpha_{i}\), for the same sets of data as in (a), are presented versus \(1/t\).
The broken lines are guides to the eye, showing possible convergence of the data sets to the theoretical values \(2+\sigma\), which are marked on the ordinate by asterisks (*). The data at very late times have been discarded for better visualization of the convergence.

Yeung, Rao and Desai [3] provided an expression for the lower bounds on the values of \(\lambda\): \[\lambda\geq\frac{d+\beta}{2}. \tag{9}\] In Fig. 5 we have plotted these bounds as a function of \(\sigma\). These are compared with the obtained values of \(\lambda\). Recall that we have used the formula \(\lambda=\lambda^{t}(2+\sigma)\). For \(\lambda^{t}=0.93\), the average value, the data set is represented by the bigger circles. Even though it appears that there exists a consistent violation, the differences with the lower bounds are within \(2\%\). In fact, if we use \(\lambda^{t}=0.98\), instead of \(0.93\), the value of \(\lambda^{t}\) for \(\sigma=0.8\), the agreement of \(\lambda\) with the lower bounds is almost perfect.

Figure 4: (a) Scaled structure factor, \(S(k,t)\ell^{-d}\), at \(t_{w}=20000\), is plotted versus \(k\ell\), \(k\) being the wave number, for systems of two different sizes, viz., \(L=256\) and \(512\), with \(\sigma=0.6\). (b) The same scaling plots as in (a) are shown (with symbols) for two different \(t_{w}\) values with \(L=512\). The dotted-dashed line is the corresponding plot for \(\sigma=0.8\). For better clarity of the features, the plot for \(\sigma=0.8\) is shifted. The solid lines are proportional to \(k^{-3}\). The dashed lines represent power laws with exponents \(\beta=2.9\) and \(3.4\).

Figure 5: The bound in Eq. (9) is plotted versus \(\sigma\) (see the triangles). The circles represent our estimated values of \(\lambda\), with the variation of \(\sigma\). This quantity is calculated as \(\lambda=\lambda^{t}(2+\sigma)\). In one case we have considered \(\lambda^{t}=0.93\) and in another, \(\lambda^{t}=0.98\). See text for details.

## IV Summary

In conclusion, we have studied the kinetics of phase separation [10] in the long-range Ising model [16]. We have presented results on aging phenomena obtained via Monte Carlo simulations [24] in space dimension 2, for several values of the interaction-range parameter \(\sigma\). It appears that with the increase of \(\sigma\), the aging exponent \(\lambda\) increases. We have compared the values of \(\lambda\) with the lower bounds predicted in Ref. [3]. It seems that the bounds provide the values of \(\lambda\) quite accurately. In a previous study [8], the value of \(\lambda\) for the nearest-neighbour Ising model in \(d=2\) was estimated to be approximately \(3.6\). For rather high values of \(\sigma\), we have checked that our estimates are consistent with this. However, we are uncertain whether such a crossover occurs at \(\sigma=1\), which is the boundary for the crossover of the growth exponent between the short-range and long-range classes. We have also discussed results on domain growth. Our results on this quantity are consistent with the theoretical predictions [18] and the very recently published simulation report by Muller et al. [21].

## V Acknowledgement

We acknowledge computation time on the Param Yukti supercomputer, located at JNCASR, under the National Supercomputing Mission.
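As a supplementary illustration of the structure-factor analysis in Sec. III and the bound of Eq. (9), the following is a minimal NumPy sketch; the radial binning and input format are illustrative assumptions rather than the authors' actual analysis code.

```python
import numpy as np

# `spins`: L x L array of +/-1 values from a single snapshot (hypothetical input).
def structure_factor(spins):
    """Radially averaged equal-time structure factor S(k)."""
    L = spins.shape[0]
    psi = spins - spins.mean()
    s2d = np.abs(np.fft.fft2(psi)) ** 2 / L**2
    kx, ky = np.meshgrid(np.fft.fftfreq(L), np.fft.fftfreq(L), indexing="ij")
    k = 2 * np.pi * np.hypot(kx, ky)
    dk = 2 * np.pi / L                        # bin width: one lattice wave number
    idx = np.rint(k / dk).astype(int).ravel()
    sums = np.bincount(idx, weights=s2d.ravel())
    counts = np.bincount(idx)
    s_k = sums[1:] / np.maximum(counts[1:], 1)  # drop the k = 0 bin
    return np.arange(1, len(s_k) + 1) * dk, s_k

def yrd_lower_bound(beta, d=2):
    """Yeung-Rao-Desai bound of Eq. (9): lambda >= (d + beta) / 2."""
    return 0.5 * (d + beta)
```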
2305.05779
Learning to Parallelize with OpenMP by Augmented Heterogeneous AST Representation
Detecting parallelizable code regions is a challenging task, even for experienced developers. Numerous recent studies have explored the use of machine learning for code analysis and program synthesis, including parallelization, in light of the success of machine learning in natural language processing. However, applying machine learning techniques to parallelism detection presents several challenges, such as the lack of an adequate dataset for training, an effective code representation with rich information, and a suitable machine learning model to learn the latent features of code for diverse analyses. To address these challenges, we propose a novel graph-based learning approach called Graph2Par that utilizes a heterogeneous augmented abstract syntax tree (Augmented-AST) representation for code. The proposed approach primarily focuses on loop-level parallelization with OpenMP. Moreover, we create an OMP\_Serial dataset with 18598 parallelizable and 13972 non-parallelizable loops to train the machine learning models. Our results show that our proposed approach detects parallelizable code regions with 85\% accuracy and outperforms the state-of-the-art token-based machine learning approach. These results indicate that our approach is competitive with state-of-the-art tools and capable of handling loops with complex structures that other tools may overlook.
Le Chen, Quazi Ishtiaque Mahmud, Hung Phan, Nesreen K. Ahmed, Ali Jannesari
2023-05-09T21:57:15Z
http://arxiv.org/abs/2305.05779v1
# Learning to Parallelize with OpenMP by Augmented Heterogeneous AST Representation

###### Abstract

Detecting parallelizable code regions is a challenging task, even for experienced developers. Numerous recent studies have explored the use of machine learning for code analysis and program synthesis, including parallelization, in light of the success of machine learning in natural language processing. However, applying machine learning techniques to parallelism detection presents several challenges, such as the lack of an adequate dataset for training, an effective code representation with rich information, and a suitable machine learning model to learn the latent features of code for diverse analyses. To address these challenges, we propose a novel graph-based learning approach called Graph2Par that utilizes a heterogeneous augmented abstract syntax tree (Augmented-AST) representation for code. The proposed approach primarily focuses on loop-level parallelization with OpenMP. Moreover, we create an OMP_Serial dataset with 18598 parallelizable and 13972 non-parallelizable loops to train the machine learning models. Our results show that our proposed approach detects parallelizable code regions with 85% accuracy and outperforms the state-of-the-art token-based machine learning approach. These results indicate that our approach is competitive with state-of-the-art tools and capable of handling loops with complex structures that other tools may overlook.

Machine learning, Parallelization

## 1 Introduction

The growing demand for and popularity of multi-core hardware systems over the past few decades requires the development of highly parallel programs to maximize performance. Numerous parallel programming models and frameworks (Chandra et al., 2001; Gabriel et al., 2004; Pheatt, 2008; Bik et al., 2002) have been created to facilitate the development of parallel code, but the developer's expertise in using these frameworks and familiarity with the code are crucial to achieving better performance. Loop-level auto-parallelization helps developers execute tasks within loops in parallel to speed up programs. Modern compilers typically detect loop-level parallelism statically at compile time. This process is conservative and overlooks parallelism in order to guarantee the correctness of the detected parallelism opportunities. On the other hand, dynamic auto-parallelization tools detect loop-level parallelism at runtime. The dynamic information captured by executing the programs improves the accuracy but introduces overhead. Moreover, the applicability of current auto-parallelization tools is constrained by their need to either compile or execute the programs for analysis. Therefore, a more practical way to auto-detect parallelism is needed.

Machine learning (ML) techniques are usually more feasible and cost-effective, redefining conventional software engineering problems as prediction problems. Many attempts have been made recently to apply machine learning and Natural Language Processing (NLP) techniques in software engineering, from performance optimization and passes in compilers to solving complex problems such as malicious code detection, code placement on CPU or GPU, and performance prediction. Auto-parallelization with ML techniques has also been explored in recent studies. Chen et al. (Chen et al., 2022) detect parallelism by training a multi-view model on static and dynamic code information.
The code embedding in their work is an adaptation of word2vec (Mikolov et al., 2013), a now-classic NLP technique. Ben-Nun et al. (Ben-Nun et al., 2018) introduce a Neural Code Comprehension (NCC) representation of code by using graph embeddings that are trained on unlabelled data before being used for simple code comprehension tasks. Brauckmann et al. (Brauckmann et al., 2020) show that graph-embedding methods applied to the Abstract Syntax Tree (AST) or Control Data Flow Graph (CDFG) are more efficient at downstream tasks than the state-of-the-art (NLP-inspired) methods, with a better ability to generalize to never-seen-before examples.

Despite their success, previous studies have shown common challenges in applying ML and NLP techniques to code analysis. First, constructing relevant datasets is a major pain point when attempting to solve any problem using machine learning. Only a few public benchmarks for parallelization using OpenMP are applicable to the parallelism detection task. Second, code representation is crucial for machine learning models to comprehend programs. The intuitive solution is to treat code as a natural language so that NLP models can be applied directly (Dai et al., 2019). However, such context or token representations overlook the code's structural information, which is crucial for parallelization analysis (Blume et al., 1994; Chen et al., 2022). Finally, the performance of ML models varies across different tasks.

In this work, we propose to leverage state-of-the-art machine learning techniques to detect loop parallelism and suggest four possible OpenMP pragmas to assist developers in implementing parallelization with OpenMP. We tackle the above-mentioned challenges by (a) generating a dataset containing 18598 parallelizable and 13972 non-parallelizable loops from benchmarks, GitHub projects, and synthetic data, (b) introducing a heterogeneous augmented-AST (aug-AST) representation for loops that considers both the textual and structural information of code, and (c) training a heterogeneous graph neural network on the heterogeneous aug-ASTs of the loops in our dataset. In particular, this paper makes the following contributions:

* **Dataset.** OMP_Serial: a C serial loop dataset with labels that can be used for parallelization or other code analysis purposes.
* **Method.** Introducing a heterogeneous augmented AST code representation suitable for parallelism detection and other downstream tasks.
* **Evaluation.** Comparing the proposed graph-based approach with AST and token-based code representation approaches.
* **Application.** Implementing a heterogeneous GNN on the proposed dataset and comparing the results with state-of-the-art parallelization tools.

## 2 Motivation Examples

This section demonstrates and discusses the limitations of three widely used algorithm-based auto-parallelization tools: DiscoPoP (Li et al., 2016), Pluto (Bondhugula et al., 2008), and autoPar (Quinlan and Liao, 2011). These non-ML tools are generally classified into static and dynamic (hybrid) approaches. Dynamic or hybrid parallelization tools like DiscoPoP (Li et al., 2016) identify parallelism with runtime dynamic information generated by executing the programs. Profiling and executing programs are costly in terms of time and memory. In contrast, static analysis tools such as Pluto (Bondhugula et al., 2008) and autoPar (Quinlan and Liao, 2011) examine source code statically without execution.
However, these static analysis tools tend to be overly conservative, often overlooking parallelization opportunities. In addition to this inherent limitation, the use of non-ML tools is constrained by their need for compilation or execution of the program. When applied to the OMP_Serial dataset introduced in section 4, only 10.3% and 3.7% of the C loops can be processed with autoPar (static) and DiscoPoP (dynamic), respectively. In our observation, there are four types of loops on which these tools mostly make mistakes: loops with reduction, loops with function calls, loops with reduction and function calls, and nested loops. Listings 1, 2, 3, 4 and 5 present examples of mistakes made by autoPar, Pluto, and DiscoPoP. Figure 2 illustrates the statistics of our findings regarding the number and type of loops for which these tools fail to detect parallelism.

Figure 1: Proposed methodology. Data collection and generation: our dataset contains data from GitHub crawling, benchmark collection, and synthetic data generation. Data pre-processing: we extracted loops from codes with pre-processing steps, e.g., removing comments and blank lines. We also label the data according to the extracted pragma. Code representation: we generate the AST of each loop data and convert it to our proposed augmented heterogeneous AST. Training and Prediction: we feed our processed data and corresponding labels to the HGT model for 4 different downstream tasks.

```
for (i = 0; i < 30000000; i++)
    error = error + fabs(a[i] - a[i+1]);
```
Listing 1: Parallel loop with reduction and function call. DiscoPoP, Pluto, and autoPar fail to detect the parallelism due to the \(fabs\) function call.

```
for (int i = 0; i < sum_pixels; i++) {
    fitness += abs(objeiv[i].r - individu->imagen[i].r)
             + abs(objeiv[i].g - individu->imagen[i].g)
             + abs(objeiv[i].b - individu->imagen[i].b);
}
```
Listing 2: Parallel loop with reduction and function call missed by Pluto because of the \(abs\) function call.

```
for (int i = 0; i < size; i++) {
    vector[i] = square(vector[i]);
}

float square(int x) {
    int k = 0;
    while (k < 5000)
        k++;
    return sqrt(x);
}
```
Listing 3: Parallel loop with a function call missed by autoPar because of the \(square\) function call.

```
for (int i = 0; i < N; i += step) {
    v += 2;
    v = v + step;
}
```
Listing 4: Parallel loop with reduction missed by DiscoPoP because of the reduction operations on variable \(v\).

We are motivated to explore cutting-edge machine learning techniques for a more feasible and precise solution. The evaluation in section 6 demonstrates that our proposed approach surpasses the tools we examined in detecting parallelism within complex-structure loops.

## 3 Background

The field of source code analysis encompasses a broad spectrum of topics, including bug detection, optimization, and auto-parallelization. Specifically, the parallelization of sequential programs constitutes a sub-field that concentrates on tasks such as detecting parallelism, classifying parallelization patterns, and implementing parallelization. This section delves into the background of parallelization analysis and explores machine learning approaches pertinent to this task.

### Auto-parallelization and Algorithm-based Tools

Sequential program parallelization poses considerable challenges, generally involving two phases: parallelism identification and parallelization implementation. Parallelism identification entails the analysis of sequential program fragments to identify opportunities for parallelism.
Parallelization implementation or execution involves capitalizing on the detected parallelism to fully exploit the hardware capabilities. Parallelism can be expressed through two fundamental concepts: task-level parallelism and loop-level parallelism. Task-level parallelism demarcates regions within an application that can be executed simultaneously on multiple cores or threads. Task-level parallelism methods require predefined distinct regions in the program, which can limit fine-grained opportunities. Loop-level parallelism considers loop bodies as parallel regions, where iterations can be distributed across threads (Wismuller, 2011). This work primarily focuses on loop-level parallelism. The identification of loops eligible for parallelism often relies on the program author, as modern compilers are unable to fully take advantage of parallel loop classification. However, this process imposes a significant burden on developers, particularly for extensive projects. Most dynamic approaches employ dependency analysis to record execution-order constraints between instructions, enabling more accurate automatic identification of parallelizable loops. In contrast, static methods infer dependencies by conservatively analyzing the program during compilation.

Figure 2: Category-wise loops missed by renowned parallelization assistant tools. The results are generated using the OMP_Serial dataset introduced in section 4.

Different static, dynamic, and hybrid (i.e., combining static and dynamic) tools have been developed to automatically identify parallelization opportunities. Polly (Grosser et al., 2012), an automatic parallelism detection tool, is based on static analysis, LLVM (Lattner and Adve, 2004), and the polyhedral model. Kremlin (Garcia et al., 2012) determines the critical path length within loops using dependency information and subsequently calculates a metric, namely self-parallelism, for parallelism detection. Alchemist (Zhang et al., 2009) identifies parallelization candidates by comparing the number of instructions with the read-after-write (RAW) dependencies, both of which are generated by Valgrind (Nethercote and Seward, 2007) during runtime. DiscoPoP (Li et al., 2016; Huda et al., 2016) extracts dynamic profiling and instruction dependency data from instrumented sequential programs. Information like the dependency type, the number of incoming and outgoing dependencies, and the critical path length is extracted from a data dependency graph for parallelism detection. As a hybrid tool, DiscoPoP provides comprehensive dynamic analysis statistics that complement static analysis, yielding an improved understanding conducive to detecting parallelization opportunities.

### Machine Learning-based Auto-Parallelization

Machine learning, as defined by Alpaydin et al. (Alpaydin, 2020), involves programming computers to optimize a performance criterion using example data or past experience. Despite its potential, machine learning techniques have been under-explored and infrequently employed in parallelization analysis tasks. Fried et al. (Fried et al., 2013) investigated an automatic method for classifying regions of sequential programs that could be parallelized, using benchmarks with hand-annotated OpenMP directives for training. Tournavitis et al. (Tournavitis et al., 2009) applied SVMs in conjunction with static and dynamic features extracted from source code to identify parallel regions in programs.
They used the NAS parallel benchmarks (Jin et al., 1999) and SPEC OMP benchmarks (Aslot et al., 2001) to evaluate their model. Machine learning techniques have achieved significant progress since the work of (Fried et al., 2013) and (Tournavitis et al., 2009), with recent advancements demonstrating the capabilities of deep neural networks in code representation (Cummins et al., 2021; Ma et al., 2021) and parallelization analysis (Shen et al., 2021; Chen et al., 2022).

### Code Representations

The representation of code is crucial for applying machine learning techniques in the area of code analysis. This subsection discusses commonly used code representations and their corresponding machine learning approaches.

**Token.** Programming tokens are fundamental elements that comprise the source code of a program. A token is a string of characters that can be classified as a constant, identifier, operator, reserved word, or separator according to the syntax of the programming language. Inspired by word embedding in natural language processing (NLP), various studies have focused on generating token-based embeddings that can serve as input for machine learning approaches. The state-of-the-art token embedding method, _code2vec_ (Alon et al., 2019), is trained on the task of predicting method names.

**AST.** The abstract syntax tree (AST) is one of the most viable representations for code. Every programming language has an explicit context-free grammar, allowing source code to be parsed into an abstract syntax tree (AST) that represents the source code's abstract syntactic structure. Each non-leaf node in an AST corresponds to a non-terminal in the context-free grammar that conveys structural information, while each leaf node corresponds to a terminal in the context-free grammar encoding program text. Figure 3 illustrates an example of the AST for Listing 1. An AST can be easily converted back to source code. As our work focuses on parallelism at the loop level, we concentrate on partial ASTs that represent the desired loop.

**CFG.** The control flow graph (CFG) delineates the sequence in which code statements are executed and the requirements that must be satisfied for a specific path of execution. Nodes represent statements and predicates, while directed edges connect them and indicate the flow of control. Although the edges of CFGs need not follow any specific order, as in abstract syntax trees, it is still necessary to identify each edge as true, false, or otherwise. CFGs have been employed for various purposes, such as detecting versions of well-known malicious apps and guiding fuzz testing tools. They are also now a common code representation in reverse engineering to aid program comprehension. However, control flow graphs do not reveal data flow, making them unsuitable for detecting statements that process data modified by an attacker, a limitation particularly relevant to tasks like vulnerability analysis.

**Comprehensive graph representations.** Recent works on code representation have focused on comprehensive graph representations to incorporate more information about programs. Ben-Nun et al. (Ben-Nun et al., 2018) aimed to create an embedded representation of code based on LLVM IR, introducing an intermediate representation of programs by combining NLP techniques with code dependencies. Cummins et al. (Cummins et al., 2021) expanded upon the work of Ben-Nun et al. to propose an IR graph representation called PrograML, which is both comprehensive and rich in code information.
The downstream task experiments set a new state-of-the-art standard. However, the requirements for using PrograML are stringent due to LLVM compilation, and only 31.2% of the data in our dataset can be processed with PrograML. Consequently, we adopt the AST as our base representation of code, so as to utilize all the data for training.

### Heterogeneous Graph Neural Networks (HGNN)

Graph Neural Network (GNN) models have gained success in various research domains, including biology (Zhang et al., 2021; Kim et al., 2022), natural language processing (Yao et al., 2018; Huang et al., 2019), image processing (Vasudevan et al., 2022; Shi et al., 2019), and software engineering (Allamanis et al., 2017; Kammoun et al., 2022; Huda et al., 2016; TehraniJamsaz et al., 2022). The application of GNNs relies on the ability to represent sequential data or databases as a complex structure with large-scale nodes and edges carrying structural information (Kipf & Welling, 2016). However, the homogeneous representation of these GNN models hinders their ability to represent meaningful information for prediction. Heterogeneous Graph Neural Network (HGNN) models have been proposed to overcome this challenge (Zhang et al., 2019). Compared to original GNNs, HGNNs have the following advantages. First, HGNNs allow nodes to connect to all types of neighborhood nodes. In HGNNs, we can define the connection between any types of nodes without restriction, which overcomes the drawback of several graph datasets that restrict the types of source and target nodes for each edge, such as in (Zhang et al., 2019). Second, HGNNs can accept not only different types of nodes but also nodes with different attributes. For example, with an academic graph, an HGNN allows embedding information from the profile picture and description of an Author node, as well as embedding information from the textual content of a Paper node, since a Paper has no information like a "profile picture". HGNNs propose a new mechanism for concatenating information and applying linear transformations between nodes to handle this. Third, HGNNs provide a solution for aggregating neighborhood information between neighbor nodes of different types into a more meaningful embedding per iteration of training/inference. To achieve this, HGNNs represent the learning through different types and weights of edges, in addition to the nodes. The first complete HGNN model, called HetGNN, was proposed by Zhang et al. (Zhang et al., 2019). Hu et al. (Hu et al., 2020) proposed HGT, a transformer-based HGNN model that utilizes the graphs' properties more efficiently than HetGNN (Zhang et al., 2019) by decomposing interaction and transformation matrices to capture common and specific patterns of relationships between node and edge types. Moreover, HGT allows embedding dynamic features such as the timeline of nodes and edges. Building on the work of Hu et al. (Hu et al., 2020), we adapt the original HGT model for training and inference on parallelism detection.

## 4 Dataset Selection and Analysis

In this study, we propose a dataset, OMP_Serial, built from two distinct sources: open-source projects containing OpenMP pragmas and synthetic codes with specific parallelization patterns generated by template programming. In this section, we discuss both approaches in detail.

### Open-source code data

Our primary source of data is GitHub, where we crawled around 16000 source files from over 6000 repositories.
We focused on \(C\) source files containing loops with and without OpenMP pragmas (pragmas can be either "#pragma omp parallel for" or "#pragma omp for"), ensuring that developers have intentionally used OpenMP directives in their code. To validate the data, we attempted to compile all the source code using Clang to verify its correctness. Out of the 16000 source files, we were able to compile and retain 5731 source files for further analysis and experiments. Finally, we examined the labels of the collected data using the parallelization tools Pluto, autoPar, and DiscoPoP, and observed a small number of parallel loops missed by developers.

### Data Processing

Data processing is necessary for the crawled source code. The source code is parsed to extract loops, with comments removed and pragmas extracted. The loops are initially labeled as either parallel or non-parallel based on the presence of OpenMP pragmas. Loops without a pragma are classified as non-parallel. Parallel loops with OpenMP pragmas are further divided into four categories, namely \(private\), \(reduction\), \(simd\), and \(target\), based on the extracted pragma, and verified with various parallelization tools. Consequently, the OMP_Serial dataset comprises labeled loops with their corresponding pragma clause, if present.

### Synthetic data

To ensure pattern diversity in the OMP_Serial dataset, we complemented the filtered crawled data with synthetic data. Both the crawled and synthetic data are processed as described in section 4.2. We utilized **Jinja2** (Ronacher, 2008) to generate complete C programs. For the do-all and reduction patterns, we created ten templates for each pattern and generated 20 variations of C source files from each template. We sourced the templates mainly from well-known parallel benchmarks such as the NAS Parallel Benchmarks (Jin et al., 1999), PolyBench (Pouchet & Yuki, 2017), the BOTS benchmark (Duran et al., 2009), and the Starbench benchmark (Andersch et al., 2013). To create complete C programs, we inserted randomly generated variables, constants, and operators into the templates. The variable names were generated using a combination of English alphabet letters (a-z, A-Z), digits (0-9), and underscores (_). For do-all loops, we considered the operators \(+,~{}-,~{}*,~{}/\). For reduction loops, we considered only the \(+\) and \(*\) operators, since reduction operations need to be associative and commutative for parallelization. We used DiscoPoP to verify the generated reduction and do-all templates. Loops not identified as do-all or reduction by DiscoPoP were manually checked for inter-iteration dependencies or data-race conditions. If such conditions existed in the loop body, the loops were labeled as non-parallel. More details and examples on the generation of synthetic data can be found in Appendix 5. Finally, the OMP_Serial dataset, comprising both open-source and synthetic data, is summarized in Table 1.

## 5 Approach

The representation of code is crucial for any analysis task. We propose an augmented heterogeneous AST representation for comprehending code from both semantic and structural views. We first introduce the augmented AST (aug-AST) representation based on the control flow graph (CFG) and token distance in text format. Next, we append the types of nodes and edges in the aug-AST and build the augmented heterogeneous AST graph for each data point in our OMP_Serial dataset. We use the heterogeneous graph transformer (HGT) model of Hu et al.
(2020) as our base model, taking the augmented heterogeneous AST graph as input.

### Code representation

Code representations like the AST and CFG provide crucial data for code analysis. However, a single representation is often insufficient to capture all the dependencies and parallelism. To address this issue, we propose an augmented AST that merges edges and nodes from the CFG, creating a single graph that incorporates the benefits of each distinct representation. Additionally, we address long-dependence problems by incorporating textual edges that follow the token distance map.

#### 5.1.1 Transforming the Abstract Syntax Tree

To build a joint representation, we propose an augmented AST that incorporates both the AST and the CFG. We express the AST as a heterogeneous graph \(HA=(V_{A},E_{A},\lambda_{A},\mu_{A})\), where the nodes \(V_{A}\) represent AST tree nodes and the edges \(E_{A}\) represent the corresponding tree edges, labeled as AST edges by the labeling function \(\lambda_{A}\). Each node is assigned an attribute using \(\mu_{A}\) that corresponds to the operator or operand the node represents. Furthermore, we assign an attribute to each node to reflect the tree's ordered structure (left or right). The color blocks in Figure 3 represent the heterogeneous node attributes, while the black edges represent edges from the AST.

#### 5.1.2 Merging the Control Flow Graph

To include the CFG in the joint representation, we express it as a heterogeneous graph \(GC=(V_{C},E_{C},\lambda_{C},\cdot)\). The nodes \(V_{C}\) represent statements and predicates in the loop AST. We also introduce edges from nodes shared by the AST and CFG to nodes in the AST graph. These edges are represented by yellow dashed lines in Figure 3, where node \(f1\) is a function call node shared by both the AST and the CFG. These edges enable the machine learning model to identify potential data races within the function call and explore parallelization opportunities.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Source & Type & Total Loops & Pragma Type & Loops & Function Call & Nested Loops & Avg. LOC \\ \hline \multirow{4}{*}{GitHub} & \multirow{4}{*}{Parallel} & \multirow{4}{*}{18598} & reduction & 3705 & 279 & 887 & 6.35 \\ \cline{4-8} & & & private & 6278 & 680 & 2589 & 8.51 \\ \cline{4-8} & & & simd & 3574 & 42 & 201 & 2.65 \\ \cline{4-8} & & & target & 2155 & 99 & 191 & 3.04 \\ \cline{2-8} & Non-parallel & 13972 & - & - & 3043 & 5931 & 8.59 \\ \hline \multirow{2}{*}{Synthetic} & \multirow{2}{*}{Parallel} & \multirow{2}{*}{400} & reduction & 200 & 200 & 100 & 31.59 \\ \cline{4-8} & & & private (do-all) & 200 & 200 & 100 & 28.26 \\ \cline{2-8} & Non-parallel & 700 & - & - & 0 & 0 & 6.43 \\ \hline \end{tabular} \end{table} Table 1: Statistic summary of the proposed OMP_Serial dataset, which comprises synthetic code and code collected from GitHub. Each data point in OMP_Serial represents a loop with labels indicating whether it is parallelizable or not. Parallelizable loops also include parallel pattern labels. The Loops column displays the number of loops for each type of pragma. The Function Call and Nested Loops columns represent the number of loops with function calls and nested loops for each type of pragma, respectively. Avg. LOC stands for the average length of code.

#### 5.1.3 Textual token relations

In the work of Zugner et al. (2021), it was revealed that the AST alone may miss important lexical token distance information, leading to difficulties in capturing long-distance dependence relations.
To address this issue, we add extra edges to link each leaf with its neighbors in the token representation, as shown in Figure 3. The added lexical edges (represented by red dashes) help the aug-AST track the token distance.

### Heterogeneous Graph Transformer

In this study, the input for the Heterogeneous Graph Transformer (HGT) model is the aug-AST graph generated from the original AST plus the augmented nodes and edges. An aug-AST graph is represented as a heterogeneous graph, denoted by \(G=(V,E,A,R)\). Here, \(V\) denotes the set of nodes, \(E\) denotes the set of edges, \(A\) represents the possible types of nodes in \(V\), and \(R\) represents the possible types of edges in \(E\). For a given edge \(e=(s,t)\) with source node \(s\) and target node \(t\), a meta-relation of the edge \(e\) is defined by the type of \(s\), the type of \(t\), and the type of the edge \(e\). In our work, three types of edges are considered: the parent-child edges generated by the original AST, and the augmented CFG and lexical edges added to capture the control flow information and the relationships between neighboring leaf nodes. In the original GNN model, information is updated from the \((l-1)\)-th layer to the \(l\)-th layer by formula 1. \[H^{l}[t]=\text{Aggregate}(\text{Extract}(H^{l-1}[s];H^{l-1}[t];e)) \tag{1}\] In formula 1, the _Extract_ operator extracts information from a neighbor node \(s\) to the target node \(t\) through the edge \(e\), and the _Aggregate()_ operator combines the information from all the sources that have \(t\) as their target node. In HGT, the mechanism for passing information between layers is split into three components: Heterogeneous Mutual Attention, Heterogeneous Message Passing, and Target-Specific Aggregation.

**Mutual Attention.** The input of this step is the node \(t\) and the set \(N(t)\), which represents all the source nodes of the relation \(r\). The heterogeneous mutual attention mechanism is calculated by taking the dot product between the source node \(s\) (Key vector) and the node \(t\) (Query vector). Next, the Key vector is projected using a linear projection to \(h\) attention heads, where each head is represented by a vector of dimension \(\frac{d}{h}\). Similarly, the Query vector is also projected into \(h\) Query vectors. For each head, the Query vector is compared with the projection of the Key vector using a distinct edge-based matrix \(W^{\textit{ATT}}\). Finally, the attention vector for each pair of nodes is produced by concatenating the \(h\) attention heads. The gathering of all attention vectors from the set of neighbor nodes \(N(t)\) to the target node \(t\) is shown in formula 2. \[\text{Attention}_{\textit{HGT}}(s,e,t)=\underset{\forall s\in N(t)}{\text{Softmax}}\Big(\underset{i\in[1,h]}{\big\|}\,\textit{ATT-head}^{i}(s,e,t)\Big) \tag{2}\]

**Message Passing.** While the Mutual Attention compares the Key vector of the source node with the Query vector of the target node, the Message Passing mechanism operates in parallel. The input of Message Passing is not only the edge but also its meta-relations. The formula of the Message operator is shown in formula 3, where the MSG-head function is calculated from a number of components.
\[\text{Message}_{\textit{HGT}}(s,e,t)=\underset{i\in[1,h]}{\big\|}\,\textit{MSG-head}^{i}(s,e,t) \tag{3}\] The number of components in equation 3 is equal to the number of attention heads \(h\). Similar to formula 2, the Message Passing step also needs a matrix \(W^{\textit{MSG}}\) that embeds information about the edge dependency.

Figure 3: An example of the proposed heterogeneous augmented AST (Heterogeneous aug-AST) representation of the code in Listing 1. The colored blocks indicate the heterogeneous attributes assigned to the AST nodes. The red and yellow lines represent the control flow graph (CFG) and token representation, respectively.

**Target-Specific Aggregation.** This operator combines the Attention operator calculated by formula 2 and the Message operator calculated by formula 3 to generate an update vector for each head, as shown in equation 4.
## 6 Results

In this section, we present the results of our experiments aimed at answering two research questions: 1. evaluating the performance of the proposed Heterogeneous augmented AST code representation, and 2. assessing the effectiveness of the proposed Graph2Par method for OpenMP pragma suggestion. Additional training results are provided in the appendix.

### Performance of the Heterogeneous aug-AST

We demonstrate that our proposed Heterogeneous aug-AST representation outperforms both the token-based and the original AST representations by evaluating its performance in predicting parallelism. We compare the vanilla AST and the Heterogeneous aug-AST by using them as inputs to the same HGT model. Additionally, we reproduce PragFormer, the work of Harel et al. (Harel et al., 2022), to compare the performance of the token representation and the Heterogeneous aug-AST representation. PragFormer uses a token-based representation as input to a transformer model for parallelism detection. Table 2 shows that our Heterogeneous aug-AST outperforms PragFormer in parallelism detection.

### Parallelism Discovery: Comparing with other tools

The above experiments demonstrate that the proposed Heterogeneous aug-AST representation outperforms both the original AST and token-based representations in parallelism detection. In this subsection, we continue the evaluation of the aug-AST representation by comparing it with well-known algorithm-based parallelism assistant tools: PLUTO, autoPar, and DiscoPoP. PLUTO and autoPar are algorithm-based static analysis tools, whereas DiscoPoP is an algorithm-based dynamic analysis tool. All three auto-parallelization tools can detect parallelism in the codes they can handle. However, parallelization pattern classification is not supported by all of the tools. For example, \(simd\) and \(target\) clause predictions are not supported by any of the tools at present. Therefore, we conduct a performance comparison for the task of parallelism detection.

As mentioned in section 4, loops in the OMP_serial dataset are labeled 1 when the OpenMP clauses are present and labeled 0 otherwise. Graph2Par predicts the parallelism within a loop by binary classification. PLUTO directly reports the parallelism detection results within a loop. autoPar injects OpenMP clauses like "#pragma omp parallel for", including the "private" clause and the "reduction" clause, into the programs; we mark the detection results as parallel when the injected clauses are present. DiscoPoP can detect reduction and do-all patterns within a loop, and we consider the loops detected as either do-all or reduction by DiscoPoP as parallel loops.

Different tools usually work with different sizes of data because they may require different information about the codes. DiscoPoP, for example, requires execution information for its analysis, making it work with a much smaller dataset compared with static tools like PLUTO. Therefore, we divided our test dataset into three subsets for a fair comparison between Graph2Par and the different tools:

* **Subset PLUTO:** This subset contains the loops that are in our testing set and can also be successfully processed by PLUTO. This set contains 4032 loops.
* **Subset autoPar:** This subset contains the loops that are in our testing set and can also be successfully processed by autoPar. This set contains 3356 loops.
* **Subset DiscoPoP:** This subset contains the source files that are in our testing set and can also be successfully processed by DiscoPoP. This set contains 1226 loops.

We train our Graph2Par approach on the three subsets described above separately for comparison. In each training, the evaluated subset was excluded to ensure that the model had not seen the samples before. The results are presented in Tables 3 and 4. For all three subsets, our Graph2Par model achieved better precision, recall, F1 score, and accuracy than all the other tools.

### OpenMP Clause Classification

The above results demonstrate that Graph2Par has the ability to learn the latent features of code for parallelism detection. In this subsection, we evaluate the extensibility of our Graph2Par model for predicting OpenMP pragmas, including "private", "reduction", "simd", and "target". We apply the same labeling strategy as in the parallelism detection task, where the presence of the corresponding pragma determines the label of the loop (a minimal sketch of this rule follows at the end of this subsection). We train Graph2Par on the entire OMP_serial dataset and evaluate on a separate test set. The results are presented in Table 5. We observe that our Graph2Par model performs well on the "private" and "reduction" pragma prediction tasks but struggles with the "simd" and "target" pragma prediction tasks. This is due to the limited expressiveness of the aug-AST for certain pragma patterns, as some patterns may require information beyond the control flow graph and lexical edges represented by the aug-AST. It is worth noting that the algorithm-based tools are not able to predict all of these pragmas or process every data point in our dataset. As the state-of-the-art model, PragFormer is used as a baseline for comparing the results of Graph2Par. Table 5 shows that our Graph2Par approach outperforms the SOTA token-based approach on both the "private" and "reduction" pragma prediction tasks. Overall, the results demonstrate that our Graph2Par model has the potential to be extended to other OpenMP pragma prediction tasks, but additional features and representations may be required to handle more complex patterns.
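To make the labeling strategy concrete, the following is a minimal Python sketch of the rule described above. The sample format and field handling are hypothetical, not the actual OMP_Serial pipeline; it only illustrates that each task's binary label is derived from the presence of the corresponding clause in a loop's pragma.

```
# Minimal, hypothetical sketch of the binary labeling rule: a loop is labeled
# 1 for a task when the corresponding OpenMP clause appears in its pragma
# (the empty string stands for a loop with no pragma at all).
CLAUSE_TASKS = ("private", "reduction", "simd", "target")

def label_loop(pragma: str) -> dict:
    """Derive one binary label per prediction task from a pragma string."""
    pragma = pragma.lower()
    labels = {task: int(task in pragma) for task in CLAUSE_TASKS}
    # Parallelism detection: an "omp (parallel) for" pragma marks the loop 1.
    labels["parallel"] = int("omp" in pragma and "for" in pragma)
    return labels

print(label_loop("#pragma omp parallel for reduction(+:sum)"))
# {'private': 0, 'reduction': 1, 'simd': 0, 'target': 0, 'parallel': 1}
```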
### Dealing with False Positives

From Table 4, it can be observed that our proposed Graph2Par produces some false positives, meaning that it predicts some loops that are not parallel as parallel loops. In contrast, traditional tools like PLUTO, autoPar, and DiscoPoP have zero false positives. However, Graph2Par is able to detect 1.8x, 5.2x, and 1.2x more parallel loops (true positives) in the Subset PLUTO, Subset autoPar, and Subset DiscoPoP datasets, respectively. This suggests that although Graph2Par may wrongly predict some loops as parallel, it can discover more parallelization opportunities than traditional approaches, which are often conservative and may miss such opportunities. False positives are inevitable when embracing machine learning techniques, since no model is perfect and any model can make mistakes.

Parallelizing serial programs is complex, which makes it hard to do end-to-end auto-parallelization, even with algorithm-based tools. There is more to consider in end-to-end approaches than the parallelization pattern within the code, such as the characteristics of the platform on which the code executes, as well as the input data size and data dependencies. These factors can significantly impact the performance of the parallelized code, and their consideration is crucial for achieving optimal speedup. It is therefore important to carefully analyze and tune these factors in addition to identifying the parallelism opportunities within the code. Consequently, Graph2Par handles false positives by only providing suggestions instead of generating end-to-end parallel code.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & & TP & TN & FP & FN & Precision & Recall & F1 & Accuracy(\%) \\ \hline \multirow{2}{*}{Subset PLUTO} & PLUTO & 1593 & 0 & 0 & 2439 & 100.00 & 39.51 & 56.64 & 39.51 \\ \cline{2-10} & Graph2Par & 2860 & 617 & 356 & 199 & 88.93 & 93.49 & 91.16 & 86.24 \\ \hline \multirow{2}{*}{Subset autoPar} & autoPar & 345 & 952 & 0 & 2059 & 100.00 & 14.35 & 25.10 & 38.65 \\ \cline{2-10} & Graph2Par & 1800 & 897 & 187 & 472 & 90.59 & 79.23 & 84.53 & 80.36 \\ \hline \multirow{2}{*}{Subset DiscoPoP} & DiscoPoP & 541 & 240 & 0 & 445 & 100.00 & 54.87 & 70.86 & 63.70 \\ \cline{2-10} & Graph2Par & 635 & 366 & 64 & 161 & 90.84 & 79.77 & 84.95 & 81.65 \\ \hline \end{tabular} \end{table} Table 4: Comparing the Graph2Par model with PLUTO, autoPar and DiscoPoP for the task of parallelism detection (detecting the presence of "#pragma omp for" or "#pragma omp parallel for")

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Pragma & Approach & Precision & Recall & F1-score & Accuracy \\ \hline \multirow{2}{*}{private} & Graph2Par & 0.88 & 0.87 & 0.87 & 0.89 \\ & PragFormer & 0.86 & 0.85 & 0.86 & 0.85 \\ \hline \multirow{2}{*}{reduction} & Graph2Par & 0.90 & 0.89 & 0.91 & 0.91 \\ & PragFormer & 0.89 & 0.87 & 0.87 & 0.87 \\ \hline \multirow{2}{*}{simd} & Graph2Par & 0.79 & 0.76 & 0.77 & 0.77 \\ & PragFormer & N/A & N/A & N/A & N/A \\ \hline \multirow{2}{*}{target} & Graph2Par & 0.75 & 0.74 & 0.74 & 0.74 \\ & PragFormer & N/A & N/A & N/A & N/A \\ \hline \end{tabular} \end{table} Table 5: Performance of Graph2Par for the four pragma prediction tasks.
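For reference, the Precision, Recall, F1, and Accuracy columns in Tables 4 and 5 follow the standard confusion-matrix definitions. The short sketch below reproduces the Graph2Par row of Subset PLUTO from its TP/TN/FP/FN counts as a worked example.

```
# Worked example: deriving the metric columns of Table 4 from the confusion
# matrix, using the Graph2Par row of Subset PLUTO (TP=2860, TN=617, FP=356,
# FN=199).
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {name: round(100 * value, 2)
            for name, value in (("precision", precision), ("recall", recall),
                                ("f1", f1), ("accuracy", accuracy))}

print(metrics(2860, 617, 356, 199))
# {'precision': 88.93, 'recall': 93.49, 'f1': 91.16, 'accuracy': 86.24}
```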
The suggestion provided by Graph2Par includes whether parallelism exists within a loop and, when parallelism is present, whether the loop exhibits any parallel patterns. Developers can then use this information to parallelize the loops using any framework they prefer. For example, if a developer finds that a loop is parallel and has a reduction pattern, they can easily parallelize the loop using the "#pragma omp parallel for reduction" clause of OpenMP. However, there may be scenarios where the false positives are significant and need to be reduced to avoid confusion and to save developers time. In such cases, developers may use additional tools to manually verify the parallelism suggested by Graph2Par.

### Overhead

When generating the proposed aug-AST representation for a loop, the steps mentioned in section 5 are followed. The overhead of creating an aug-AST comes from two steps: code compilation with Clang and AST traversal with tree-sitter (Brunsfeld, 2018). Both steps introduce minimal overhead. The overhead of creating an aug-AST may increase for larger codes; however, for the loops in the OMP_serial dataset, which have an average size of 6.9 lines, the overhead is minimal and on the order of milliseconds.

### Case Study

In the evaluation, we observe that our proposed model can successfully identify 48 parallel loops missed by all three algorithm-based tools. An example of one such loop is presented in Listing 6, and other examples can be found in Listings 1, 2, 3, 4, and 5 in the motivation examples. These results demonstrate the effectiveness of our Graph2Par approach in detecting parallelism opportunities that are missed by traditional algorithm-based tools.

```
for (i = 0; i < 1000; i++) {
    a[i] = i * 2;   /* independent writes to a */
    sum += i;       /* reduction on sum */
}
```
Listing 6: Parallel loop missed by DiscoPoP, PLUTO and autoPar with array and reduction

Another example is shown in Listing 7. We believe that the conservative nature of non-AI-based parallelism assistant tools may be the reason for missing such opportunities. In this specific example, although there is a reduction operation on the variable \(sum\) and memory access to the 2D array \(a\), only the \(j\) index is changing, and there are no inter-iteration dependencies. Therefore, this loop can be executed in parallel, and it is successfully detected by our Graph2Par model.

```
for (j = 0; j < 1000; j++) {
    sum += a[i][j] * v[j];   /* reduction over row i of a */
}
```
Listing 7: Parallel loop missed by DiscoPoP, PLUTO and autoPar with array and reduction

Furthermore, our proposed Graph2Par model can handle parallelism detection in nested loops effectively, which is a challenging problem due to the complex dependencies between the loops. As an example, in Listing 8, the outer parallel loop has been missed by all traditional parallelism assistant tools due to its nested structure. However, our model successfully detects that the outer-most \(for\) loop can be parallelized. By observing that each cell of the 3-d array \(a\) will eventually have the same value and that \(m\) is just a constant, we can verify that there are no loop-carried dependencies, and the loop can be safely parallelized.

```
for (i = 0; i < 12; i++) {
    for (j = 0; j < 12; j++) {
        for (k = 0; k < 12; k++) {
            tmp1 = 60 / m;          /* same value in every iteration */
            a[i][j][k] = tmp1 + 4;
        }
    }
}
```
Listing 8: Parallel loop missed by DiscoPoP, PLUTO and autoPar with nested loop

## 7 Related Work

Recent research has shown an increasing trend in employing machine learning techniques for parallelization analysis.
These studies can be broadly classified into two categories based on their code representations. Token-based code analysis studies (Fried et al., 2013; Harel et al., 2022) use natural language processing (NLP) models trained on raw code text. In contrast, recent studies such as (Shen et al., 2021; Chen et al., 2022) leverage structured graphical models over a structural representation of code, such as the Abstract Syntax Tree (AST). Compared to these works, our proposed Heterogeneous aug-AST representation is easy to process and contains rich information on nodes and edges, enabling more accurate and efficient parallelization analysis.

## 8 Conclusion

In this paper, we propose a static approach to discover parallelism in sequential programs using an augmented AST representation. To address the issue of data insufficiency, we created the OMP_Serial dataset, which can be used for other parallelization tasks as well. We evaluate the aug-AST representation using a GNN-based model, and it outperforms traditional parallelization tools as well as token-based machine learning approaches. However, there is still room for improvement in our model. Currently, Graph2Par can only detect whether a pragma is applicable to a loop or not; future research could focus on developing a model that generates complete OpenMP pragmas for sequential loops.
2307.13762
Implementing and Benchmarking the Locally Competitive Algorithm on the Loihi 2 Neuromorphic Processor
Neuromorphic processors have garnered considerable interest in recent years for their potential in energy-efficient and high-speed computing. The Locally Competitive Algorithm (LCA) has been utilized for power-efficient sparse coding on neuromorphic processors, including the first Loihi processor. With the Loihi 2 processor enabling custom neuron models and graded spike communication, more complex implementations of LCA are possible. We present a new implementation of LCA designed for the Loihi 2 processor and perform an initial set of benchmarks comparing it to LCA on CPU and GPU devices. In these experiments LCA on Loihi 2 is orders of magnitude more efficient and faster for large sparsity penalties, while maintaining similar reconstruction quality. We find this performance improvement increases as the LCA parameters are tuned towards greater representation sparsity. Our study highlights the potential of neuromorphic processors, particularly Loihi 2, in enabling intelligent, autonomous, real-time processing on small robots and satellites, where there are strict SWaP (small, lightweight, and low power) requirements. By demonstrating the superior performance of LCA on Loihi 2 compared to conventional computing devices, our study suggests that Loihi 2 could be a valuable tool in advancing these types of applications. Overall, our study highlights the potential of neuromorphic processors for efficient and accurate data processing on resource-constrained devices.
Gavin Parpart, Sumedh R. Risbud, Garrett T. Kenyon, Yijing Watkins
2023-07-25T18:43:08Z
http://arxiv.org/abs/2307.13762v1
Implementing and Benchmarking the Locally Competitive Algorithm on the Loihi 2 Neuromorphic Processor

###### Abstract.

Neuromorphic processors have garnered considerable interest in recent years for their potential in energy-efficient and high-speed computing. The Locally Competitive Algorithm (LCA) has been utilized for power-efficient sparse coding on neuromorphic processors, including the first Loihi processor (Lai et al., 2017; Chen et al., 2017). With the Loihi 2 processor enabling custom neuron models and graded spike communication, more complex implementations of LCA are possible (Chen et al., 2017). We present a new implementation of LCA designed for the Loihi 2 processor and perform an initial set of benchmarks comparing it to LCA on CPU and GPU devices. In these experiments LCA on Loihi 2 is orders of magnitude more efficient and faster for large sparsity penalties, while maintaining similar reconstruction quality. We find this performance improvement increases as the LCA parameters are tuned towards greater representation sparsity. Our study highlights the potential of neuromorphic processors, particularly Loihi 2, in enabling intelligent, autonomous, real-time processing on small robots and satellites, where there are strict SWaP (small, lightweight, and low power) requirements. By demonstrating the superior performance of LCA on Loihi 2 compared to conventional computing devices, our study suggests that Loihi 2 could be a valuable tool in advancing these types of applications. Overall, our study highlights the potential of neuromorphic processors for efficient and accurate data processing on resource-constrained devices.

© 2023 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution.
The definitive Version of Record was published in _International Conference on Neuromorphic Systems (ICONS '23)_, August 1-3, 2023, Santa Fe, NM, USA, [https://doi.org/10.1145/3589737.3605973](https://doi.org/10.1145/3589737.3605973).

## 1. Introduction

In recent years, the concept of neuromorphic computing has gained popularity in the field of artificial intelligence and machine learning. By emulating the structure and function of biological neural systems, this computing paradigm provides power-efficient and high-speed computing capabilities. Unlike traditional von Neumann computers, neuromorphic computing stores and processes information locally within the same unit, which eliminates the need to move data around. This reduced communication bandwidth allows the chip to perform operations in a highly energy-efficient manner compared to traditional computing architectures, consuming only the minimum amount of energy required to perform a given computation. Additionally, the fine-grained parallelism allows very large numbers of neurons to work simultaneously, enabling faster processing of large amounts of data. One example of neuromorphic hardware is the Loihi 2 (Chen et al., 2017) neuromorphic processor, which mimics the structure and function of the human brain, enabling efficient and parallel processing of data. Due to its intrinsic speed and energy efficiency, neuromorphic computing has the potential to revolutionize various computer vision applications, including efficient and robust classification and object detection.

Sparse coding is a key technique with applications in neuromorphic computing. It models the behavior of V1 simple cell receptive fields (Krishnan et al., 2017) and can acquire features in an unsupervised scenario for machine learning applications.
Sparse coding algorithms use an over-complete set of non-orthogonal basis functions, known as feature vectors, to find a sparse combination of non-zero activation coefficients that most accurately reconstructs each input image. The Locally Competitive Algorithm (LCA) (Krishnan et al., 2017) is a biologically plausible implementation of sparse coding. LCA has primarily been developed for computer vision, with successful applications in denoising (Chen et al., 2017), up-sampling (Krishnan et al., 2017), compression (Krishnan et al., 2017), and image classification (Chen et al., 2017; Chen et al., 2017). Furthermore, LCA is highly compatible with neuromorphic computing, as it maps the feature vectors to neurons which compete to sparsely reconstruct the input.

Despite the potential of LCA on neuromorphic computing systems, there is a lack of benchmark studies that compare the performance of different hardware platforms executing LCA. In (Chen et al., 2017), LCA on Loihi 1 is compared against a CPU, showing strong performance and efficiency improvements as the problem size increases. However, the algorithm design was substantially modified from (Krishnan et al., 2017), and other platforms like GPUs and edge devices are not considered. To address this gap, we implement fixed-size 1-layer and 2-layer LCA with varying V1 and residual thresholds on different hardware platforms, including the Loihi 2 neuromorphic processor, A100 GPU, M1 CPU, and Jetson Nano edge GPU. We measure their performance based on reconstruction error, throughput, and dynamic energy consumption. Our results show that at large sparsity penalties the Loihi 2 neuromorphic processor outperforms conventional computing devices in terms of throughput and dynamic energy consumption while maintaining the same sparsity and reconstruction quality. This study suggests that Loihi 2 is a viable option for enabling intelligent, autonomous, real-time processing on small robots, drones, and satellites with small, lightweight, and low-power requirements. Overall, this study provides valuable insights into the performance of different hardware platforms executing LCA and highlights the potential of neuromorphic computing for efficient and performant computer vision applications.

### Locally Competitive Algorithm (LCA)

The Locally Competitive Algorithm (LCA) seeks to find a sparse representation \(\mathbf{a}\) of an input vector \(\mathbf{X}\) in terms of dictionary elements \(\phi_{i}\in\Phi\). The objective of minimizing reconstruction error \(\mathbf{X}-\Phi\mathbf{a}\) and sparsity is formalized by the following energy function:

\[E(\mathbf{X},\Phi,\mathbf{a})=\min_{\{\mathbf{a},\Phi\}}\left[\frac{1}{2}||\mathbf{X}-\Phi\mathbf{a}||^{2}+\lambda||\mathbf{a}||_{1}\right] \tag{1}\]

LCA finds a local minimum of the cost function defined in Eq. (1) by introducing dynamical variables (membrane potentials) \(\mathbf{u}\) such that the output \(\mathbf{a}\) is given by a soft-threshold transfer function, whose threshold is given by the sparsity tradeoff parameter \(\lambda\) (Kumar et al., 2017):

\[\mathbf{a}=T_{\lambda}(\mathbf{u})=\begin{cases}\mathbf{u}-\lambda,&\mathbf{u}>\lambda\\ 0,&\text{otherwise}\end{cases} \tag{2}\]
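As a reference point for the equations above, the following is a minimal NumPy sketch of the energy in Eq. (1) and the soft-threshold transfer function in Eq. (2). It is illustrative only; the dictionary, input, and parameter values are placeholders rather than the benchmarked Lava, Accelerate, or Jax implementations.

```
# Minimal sketch of Eq. (1) (energy) and Eq. (2) (soft threshold).
# All shapes and values are illustrative placeholders.
import numpy as np

def energy(X, Phi, a, lam):
    """Eq. (1): reconstruction error plus an L1 sparsity penalty."""
    return 0.5 * np.sum((X - Phi @ a) ** 2) + lam * np.sum(np.abs(a))

def soft_threshold(u, lam):
    """Eq. (2): activations stay zero until the membrane potential u
    exceeds the sparsity trade-off parameter lam."""
    return np.where(u > lam, u - lam, 0.0)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(784, 784))      # complete dictionary, as in Sec. 3
Phi /= np.linalg.norm(Phi, axis=0)     # unit-length feature vectors
X = rng.normal(size=784)               # stand-in for a flattened 28x28 image
u = Phi.T @ X                          # membrane potentials before competition
print(energy(X, Phi, soft_threshold(u, lam=0.5), lam=0.5))
```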
The cost function defined in Eq. (1) is then minimized by taking the gradient of the cost function with respect to \(\mathbf{a}\) and solving the resulting set of coupled differential equations for the membrane potentials \(\mathbf{u}\):

\[\begin{split}\dot{\mathbf{u}}\propto-\frac{\partial E}{\partial\mathbf{a}}&=-\mathbf{u}+\Phi^{T}\mathbf{X}-(\Phi^{T}\Phi-I)\,T_{\lambda}(\mathbf{u})\\ &=-\mathbf{u}+\Phi^{T}\{\mathbf{X}-\Phi T_{\lambda}(\mathbf{u})\}+T_{\lambda}(\mathbf{u})\end{split} \tag{3}\]

### One and Two Layer LCA

The neurons described in Eq. (3) can be structured as either a one- or two-layer model. In the 1-layer model, the dynamics are structured as a single recurrent layer (Figure 1(a)). We refer to these neurons as V1, as they model the behavior of V1 simple cells. There is a V1 neuron for every element of the dictionary. On traditional hardware platforms, this is a more efficient implementation than the 2-layer model. In the 2-layer model, the reconstruction is separated into its own layer (Figure 1(b)). In this residual layer, there is a neuron for every element of the input \(\mathbf{X}\). This structure allows for easier implementation of convolutional LCA (Kumar et al., 2017), and provides a local update rule when performing dictionary learning, as given in (Kumar et al., 2018).

## 2. Loihi 2 Implementation

### V1 Neurons

The V1 neuron models directly implement the dynamics from Figure 1. The voltage \(\mathbf{u}\) is stored as a 24-bit signed integer. Changes to the voltage are discretized with a time constant \(\tau\):

\[\begin{split}\mathbf{u}_{t+1}&=\mathbf{u}_{t}+\tau(-\mathbf{u}_{t}+b+\mathbf{in})\\ &=\mathbf{u}_{t}(1-\tau)+\tau\,b+\tau\,\mathbf{in}\end{split} \tag{4}\]

where \(b=\Phi^{T}\mathbf{X}\) for one-layer LCA, corresponding to lateral competition (Figure 1(a)), and \(b=T_{\lambda}(\mathbf{u})\) for 2-layer LCA, which compensates for self-inhibition when the input arises from a residual error layer (Figure 1(b)). In the one-layer model, \(\tau b\) is pre-computed. In both models, the input drive \(\mathbf{in}\) connection weight is pre-scaled by \(\tau\). If a neuron is active, it fires every time-step with a graded spike equal to the soft-threshold activation. As the V1 activation is sparse across the layer, the total number of spikes per time-step remains small.

### Residual Neurons

In 2-layer LCA, the residual neurons are not inherently sparse, and their activity must be made sparse for performance. Accumulator neurons (Kumar et al., 2017) have previously been demonstrated as a way to create a spiking residual layer (Kumar et al., 2017). In our implementation we expand on accumulator neurons to take advantage of the graded spikes on Loihi 2. The residual neurons accumulate the reconstruction error \(\mathbf{e}_{t+1}=\mathbf{e}_{t}+\mathbf{X}-\mathbf{in}\) until they reach some error threshold \(\lambda_{e}\). Then the neuron fires a graded spike with magnitude \(\mathbf{e}\) and resets \(\mathbf{e}\) to zero. By increasing the error threshold until multiple time-steps are required to produce a spike, firing activity can be made more sparse in time. This sparsity dramatically increases throughput and typically does not impact the algorithm's solution quality. We explore the impact of the error threshold further in Section 4.1.

### Connections

All connection weights are fixed-point values with an 8-bit mantissa and an exponent that is constant across the connection. As a result, the precision of the weights depends on the maximum-magnitude weight in the connection. In the two-layer model, connections correspond to unit-length feature vectors whose elements do not vary in magnitude enough to be truncated. The lateral connections in the 1-layer model, whose elements are given by the inner products of feature vectors, can vary much more in comparison. Thus, in the one-layer model, weak excitation and inhibition between neurons may be lost with the limited precision. In our later experiments we only observe this behavior at relatively low V1 thresholds.

Figure 1. Structure of One and Two Layer LCA Models
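To illustrate the two neuron models just described, here is a minimal floating-point sketch of the V1 update in Eq. (4) and the accumulate-and-fire residual neuron. It deliberately ignores the 24-bit fixed-point format and the Lava process structure of the actual Loihi 2 implementation; all values are placeholders.

```
# Floating-point sketch of the V1 update (Eq. 4) and the graded-spike
# accumulator residual neuron described above. Fixed-point arithmetic and
# the on-chip process structure are intentionally omitted.
import numpy as np

TAU = 2 ** -7   # time constant used in the benchmarks (Section 3)

def v1_step(u, b, in_drive, lam):
    """Eq. (4): leaky integration; an active neuron emits a graded spike
    equal to its soft-threshold activation."""
    u = u * (1 - TAU) + TAU * b + TAU * in_drive
    spike = np.where(u > lam, u - lam, 0.0)
    return u, spike

def residual_step(e, x, in_recon, lam_e):
    """Accumulate reconstruction error until it reaches the error threshold
    lam_e, then fire a graded spike of magnitude e and reset e to zero."""
    e = e + x - in_recon
    fire = np.abs(e) >= lam_e
    spike = np.where(fire, e, 0.0)
    e = np.where(fire, 0.0, e)
    return e, spike

u, spike = v1_step(u=np.zeros(4), b=np.ones(4), in_drive=np.zeros(4), lam=0.5)
print(u, spike)
```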
## 3. Benchmarking Setup

To evaluate the performance of our implementation, we benchmark the reconstruction of \(28\mathrm{px}\times 28\mathrm{px}\) grayscale images from MNIST (Dosovitskiy et al., 2017). Reconstructions use a complete (784-element), pre-trained dictionary, shown in Figure 2. The dictionary is pre-trained on the MNIST training set using SPORCO (Bordes et al., 2016). We compare LCA on Loihi 2 with other LCA implementations for the CPU and GPU devices listed in Table 1. LCA is run for 256 iterations with a time constant of \(\tau=2^{-7}\) and one input image at a time (i.e., the batch-size = 1 regime). This emulates frame-by-frame sparse coding of a real-time input. Note that LCA will not fully converge after this many iterations, so performance should not be compared to other sparse coding algorithms. We measure reconstruction error, sparsity, dynamic power, and run-time. On the CPU and GPUs, dynamic power is calculated by measuring idle power for 10 minutes and subtracting it from the average power while running LCA for 10 minutes. By calculating dynamic power, we mitigate the impact of other running processes on the energy measurements.

### Loihi 2

The input to LCA on Loihi 2 is stored in memory that is currently difficult to modify at run-time. To simplify our implementation and limit the impact of IO, we fix the input to LCA for a given run. We instead simulate the effect of changing inputs by resetting the voltage of the V1 neurons every 256 time-steps and averaging results across a small subset of images. All voltages and activations are represented as 24-bit signed fixed-point values, with the binary point placed after the eight integer bits, i.e., \([0000\,0001\,.\,0000\,0000\,0000\,0000]_{2}\). This gives a range of \([-128_{10},127_{10}]\) with precision \(2^{-16}\). On the chip, the 1-Layer model utilizes 33 cores and the 2-Layer model uses 66. Power is measured using Lava's built-in profiler (Footnote 1: [https://github.com/Loihi2](https://github.com/Loihi2)).

### CPU and GPU

We implement 1-Layer LCA on the CPU and GPUs listed in Table 1. As these devices can perform a non-local learning rule, they do not benefit from 2-Layer LCA, so we do not evaluate it. Both the CPU and GPU implementations use 32-bit floating point values for all computation. We did not find a substantial change in run-time or reconstruction accuracy by using 16-bit values. For the GPU implementations we utilize the Jax library, and for the M1 Pro we utilize the Accelerate framework. Power is measured using powermetrics on macOS, nvidia-smi for the A100, and through onboard sensors on the Jetson Nano (Footnote 2: [https://github.com/Loihi2](https://github.com/Loihi2)).

## 4. Results

For 10 randomly sampled images, the reconstruction error and sparsity of solutions obtained using 1-layer LCA on Loihi 2 closely match those of the CPU and GPU implementations across most V1 thresholds (Figures 3 and 4).

\begin{table} \begin{tabular}{c c c c} \hline \hline Platform & Type & Process & Framework (Ver.) \\ \hline Loihi 2 & Loihi & Intel 4 & Lava (w/ Loihi ext. 0.4.1) \\ M1 Pro & CPU & TSMC N5 & Accelerate (Xcode 14.2) \\ Jetson Nano & GPU & TSMC 20nm & Jax (Bordes et al., 2016) (0.2.25) \\ A100 & GPU & TSMC N7 & Jax (Bordes et al., 2016) (0.3.21) \\ \hline \hline \end{tabular} \end{table} Table 1. Devices and libraries used for benchmarking

Figure 3. Example LCA Reconstructions for Loihi 2 and CPU/GPU Implementations. V1 Threshold \(\lambda=0.5\)

Figure 2. Pre-trained dictionary used for benchmarking
At \(\lambda<2^{-6}\), over 90% of the V1 neurons are active, and at \(\lambda>2^{4}\), no neurons are active, so we evaluate performance within this range. The reduced weight precision of the 1-layer model relative to the CPU/GPU weights results in slightly less sparse solutions at low V1 thresholds. Additionally, at these low thresholds the reconstruction error of 2-layer LCA increases notably. It is possible to reduce this error by decreasing the residual threshold, as explored in Section 4.1.

### Tuning 2-Layer LCA

It is not immediately clear that LCA converges to similar solutions with a spiking residual layer. To test this, we vary the threshold \(\lambda_{e}\) in the residual layer of 2-Layer LCA exponentially from \(2^{-16}\) to \(2^{6}\) and evaluate the reconstruction of 3 images. Figure 5 illustrates the impact of different \(\lambda_{e}\) values on sparsity, MSE, reconstruction throughput, and dynamic energy usage. As the threshold increases, the number of spikes decreases, and we observe a decrease in both energy consumption and run-time. We find the sparsity and reconstruction error are unchanged for large V1 thresholds, but as the V1 threshold decreases, lower residual thresholds must be used for optimal solutions. As there are minimal differences in the solution at larger V1 thresholds, but substantially faster and lower-energy performance, we utilize \(\lambda_{e}=2^{6}\) for the rest of our tests.

### Performance vs. Threshold

As the runtime and energy usage of Loihi 2 depend on the number of spikes per time-step, we evaluate the reconstruction of 10 images for V1 thresholds from \(2^{-6}\) to \(2^{4}\). Figure 6 depicts how the runtime and energy usage vary substantially based on the chosen threshold. At V1 thresholds \(\lambda\geq 2^{-1}\), 1-Layer LCA on Loihi 2 is faster and more efficient than LCA on all other devices. This threshold corresponds to an average sparsity of 83%. The 2-Layer implementation is very close in performance across all thresholds.

Figure 4. Sparse Coding performance across all V1 thresholds. For 2-Layer LCA, \(\lambda_{e}=64\).

Figure 5. Sparse Coding performance across all residual layer thresholds \(\lambda_{e}\). 95% confidence intervals shown.

Figure 6. Sparse Coding performance across all V1 thresholds \(\lambda\). 95% confidence intervals shown.

Energy improvements as the threshold increases are driven by increased throughput, but at high thresholds power also decreases. It is important to note that while the larger threshold values run substantially faster on Loihi 2, the reconstruction quality is reduced on all platforms (Figure 4). With a less optimal dictionary, we observed LCA was unstable on all platforms when the V1 threshold was small. This instability is expected for small \(\lambda\) and can be mitigated by reducing \(\tau\) (Kennedy et al., 2017).
In these unstable cases LCA performs similarly to the minimum V1 threshold; the Loihi 2 implementation remains power-efficient but is substantially slower than both the A100 and the M1 Pro.

## 5. Conclusion and Future Work

The benchmarking results of our LCA implementation on Loihi 2 indicate significant advantages in power efficiency and run-time over other hardware platforms. With a V1 threshold \(\lambda\geq 2^{-1}\), the solutions obtained on Loihi 2 are roughly identical to those on other devices and are obtained at least 34% faster, with an order of magnitude improvement in power efficiency. This study enables the development of intelligent, autonomous, real-time data processing systems on small robots, drones, and satellites, where there are strict SWaP requirements. By demonstrating the superior performance of LCA on Loihi 2 compared to conventional computing devices, this study provides a foundation for future research and development of neuromorphic computing systems, paving the way for a new era of power-efficient, intelligent computing.

Further improvements in speed and efficiency are possible with the Loihi 2 implementation by increasing the V1 threshold \(\lambda\), but they involve trade-offs in reconstruction quality. Additionally, while slower than the 1-Layer version, 2-Layer LCA is similarly fast and efficient across most thresholds on Loihi 2. This may enable efficient dictionary learning on the platform in the future. As our current experiments only involve up to 10 images being reconstructed with a fixed-size dictionary, future work will investigate how these differences change as the dictionary size scales up and will compare against other CPU and GPU algorithms. Additionally, it may be possible to improve upon the V1 neuron model to make it spike less frequently using sigma-delta or accumulator dynamics.

## Acknowledgments

This material is based upon work supported by the Department of Energy, Office of Science, Advanced Scientific Computing Research program, under award number 77902.
2308.11928
OFVL-MS: Once for Visual Localization across Multiple Indoor Scenes
In this work, we seek to predict camera poses across scenes in a multi-task learning manner, where we view the localization of each scene as a new task. We propose OFVL-MS, a unified framework that dispenses with the traditional practice of training a model for each individual scene and relieves gradient conflict induced by optimizing multiple scenes collectively, enabling efficient storage yet precise visual localization for all scenes. Technically, in the forward pass of OFVL-MS, we design a layer-adaptive sharing policy with a learnable score for each layer to automatically determine whether the layer is shared or not. Such sharing policy empowers us to acquire task-shared parameters for a reduction of storage cost and task-specific parameters for learning scene-related features to alleviate gradient conflict. In the backward pass of OFVL-MS, we introduce a gradient normalization algorithm that homogenizes the gradient magnitude of the task-shared parameters so that all tasks converge at the same pace. Furthermore, a sparse penalty loss is applied on the learnable scores to facilitate parameter sharing for all tasks without performance degradation. We conduct comprehensive experiments on multiple benchmarks and our new released indoor dataset LIVL, showing that OFVL-MS families significantly outperform the state-of-the-arts with fewer parameters. We also verify that OFVL-MS can generalize to a new scene with far fewer parameters while gaining superior localization performance.
Tao Xie, Kun Dai, Siyi Lu, Ke Wang, Zhiqiang Jiang, Jinghan Gao, Dedong Liu, Jie Xu, Lijun Zhao, Ruifeng Li
2023-08-23T05:32:24Z
http://arxiv.org/abs/2308.11928v1
# OFVL-MS: Once for Visual Localization across Multiple Indoor Scenes

###### Abstract

In this work, we seek to predict camera poses across scenes in a multi-task learning manner, where we view the localization of each scene as a new task. We propose OFVL-MS, a unified framework that dispenses with the traditional practice of training a model for each individual scene and relieves gradient conflict induced by optimizing multiple scenes collectively, enabling efficient storage yet precise visual localization for all scenes. Technically, in the forward pass of OFVL-MS, we design a layer-adaptive sharing policy with a learnable score for each layer to automatically determine whether the layer is shared or not. Such sharing policy empowers us to acquire task-shared parameters for a reduction of storage cost and task-specific parameters for learning scene-related features to alleviate gradient conflict. In the backward pass of OFVL-MS, we introduce a gradient normalization algorithm that homogenizes the gradient magnitude of the task-shared parameters so that all tasks converge at the same pace. Furthermore, a sparse penalty loss is applied on the learnable scores to facilitate parameter sharing for all tasks without performance degradation. We conduct comprehensive experiments on multiple benchmarks and our new released indoor dataset LIVL, showing that OFVL-MS families significantly outperform the state-of-the-arts with fewer parameters. We also verify that OFVL-MS can generalize to a new scene with far fewer parameters while gaining superior localization performance. The dataset and evaluation code are available at [https://github.com/mooncake199809/UFVL-Net](https://github.com/mooncake199809/UFVL-Net).

## 1 Introduction

Visual localization, a challenging task that aims to forecast the 6-DOF camera pose of a provided RGB image, is an integral part of several computer vision tasks, such as simultaneous localization and mapping [51, 32, 5] and structure-from-motion [11, 31]. Typically, classical structure-based visual localization frameworks [36, 34, 59, 58] construct associations between 2D keypoints and 3D scene coordinates by matching local descriptors, and afterwards use a RANSAC-based PnP algorithm [15, 25] to retrieve the camera pose. Recently, with the advancements of deep learning [57, 56, 41, 49, 48], scene coordinate regression (SCoRe) based methods [26, 13, 61, 53, 16, 10], which train a convolutional neural network (CNN) to regress the 3D scene coordinate corresponding to each pixel in the input image and calculate the camera pose with a PnP algorithm [25], establish state-of-the-art localization performance in small static scenes. Compared with structure-based methods, these methods require no database of images or local descriptors and can benefit from high-precision sensors.

While SCoRe-based methods achieve impressive results, they come with some drawbacks. Scene coordinate regression is scene-specific and must be retrained for each new scene, resulting in a linear increase in total model size with the number of scenes. After witnessing the success of SCoRe-based methods, a natural question arises: could a single SCoRe-based model predict 3D coordinates for multiple scenes concurrently and generalize to a new scene? Solving this problem is a key step toward truly deploying SCoRe-based models on autonomous robots.
A naive solution to this problem is to use a shared backbone to extract features from multiple scenes and then leverage different regression heads to regress scene coordinates for each scene. Nevertheless, jointly optimizing cross-scene localization with a fully shared backbone faces an insurmountable obstacle, i.e., gradient conflict induced by competition among different tasks for the shared parameters, resulting in inferior performance compared with learning tasks separately [27, 7, 14, 17].

Towards this end, we propose OFVL-MS, a unified SCoRe-based framework that optimizes visual localization of multiple scenes collectively. OFVL-MS is a multi-task learning (MTL) [12, 23, 29, 8, 57, 52, 33, 47] framework where the localization of each scene is treated as an individual task. OFVL-MS offers benefits in terms of model complexity and learning efficiency, since substantial parameters of the network are shared among multiple scenes, which makes the model more practical to deploy on robots.

Technically, OFVL-MS mitigates gradient conflict in both the forward and backward passes. In the forward pass, we design a layer-adaptive sharing policy to automatically determine whether each active layer of the backbone is shared or not, from which we derive task-shared parameters for efficient storage and task-specific parameters for mitigating gradient conflict. The central idea of the layer-adaptive sharing policy is to transform the layer selection of the backbone into a learnable problem, so that deciding which layers of the backbone are shared can be done during training by solving a joint optimization problem. In the backward pass, inspired by gradient homogenization algorithms in classical multi-task learning [21, 28], we introduce a gradient normalization algorithm that homogenizes the gradient magnitude of the task-shared parameters across scenes to ensure all tasks converge at a similar but optimal pace, further relieving gradient conflict. We also apply a penalty loss on the active layers to prompt all tasks to share as many parameters as possible while improving the performance of some tasks that benefit from the shared parameters, as illustrated in Sec. 4.4 and Sec. 4.7.

Experiments show that OFVL-MS achieves excellent localization performance on several benchmarks, including the 7-Scenes dataset [39], the 12-Scenes dataset [45], and our **released large indoor dataset LIVL**, in terms of median positional and rotational errors, etc. We also demonstrate that OFVL-MS can generalize to a new scene with far fewer parameters while maintaining exceptional performance. To summarize, the contributions of this work are as follows: (1) We propose OFVL-MS, a unified visual localization framework that optimizes localization tasks of different scenes collectively in a multi-task learning manner. (2) We propose a layer-adaptive sharing policy for OFVL-MS to automatically determine, rather than manually, whether each active layer of the backbone is shared or not. A penalty loss is also applied to promote layer sharing across scenes. (3) We introduce a gradient normalization algorithm to homogenize the gradient magnitudes of the task-shared parameters, enabling all tasks to converge at the same pace. (4) We publish a **new large indoor dataset LIVL** that provides a new test benchmark for visual localization. (5) We demonstrate that OFVL-MS can generalize to a new scene with much fewer parameters while retaining superior localization performance.

## 2 Related Work

**Structure-based Visual Localization**.
The structure-based methodologies [36, 34, 59, 58] utilize local descriptors to establish matches between 2D pixel positions and 3D scene coordinates for a given query image, afterwards using a PnP algorithm to recover the camera pose. However, as opposed to directly matching within an exhaustive 3D map as in [36], current state-of-the-art methods [34, 59, 58] employ image retrieval [2] to narrow down the search space and utilize advanced feature matching techniques such as Patch2pix [62], SuperGlue [35], LoFTR [42], MatchFormer [50], OAMatcher [9], and DeepMatcher [54] to generate precise 2D-2D correspondences, which are subsequently lifted to 2D-3D matches. The structure-based methods demonstrate state-of-the-art performance in large-scale scenes thanks to expeditious image retrieval techniques and feature matching algorithms, but they are limited in small-scale static scenes such as indoor scenes [26, 20]. Moreover, in lifelong localization scenarios, the size of the image and feature database increases over time due to the continuous addition of new data. As a result, the memory requirements for on-device localization in VR/AR systems may exceed the available limits.

**Learning-based Visual Localization**. Current learning-based visual localization approaches can be classified into absolute pose regression (APR) [24, 55, 22], relative pose regression (RPR) [1, 11, 44], and scene coordinate regression (SCoRe) [26, 13, 61, 53, 16]. The APR methods directly forecast the camera pose from a provided RGB image in an end-to-end way. However, such methods cannot realize accurate visual localization, as they are essentially analogous to approximate pose estimation via image retrieval [46]. The RPR methods utilize a neural network to identify the relative pose between the query image and the most similar image retrieved from the database, which, however, is time-consuming and restricts their practical application. The SCoRe approaches directly forecast the 3D scene coordinates, followed by a RANSAC-based PnP algorithm to compute the camera pose. While these methods can be optimized end-to-end and achieve impressive results, they suffer from some drawbacks: pose regression and scene coordinate regression are both scene-specific and must be retrained for new scenes, culminating in a linear increase in total model size with the number of scenes.

**Gradient Homogenization over Multi-task Learning (MTL)**. During the training process of multi-task learning (MTL), the gradient magnitudes and directions of different tasks interact in complicated ways via backpropagation, a phenomenon known as task interference. Previous methods [37, 28, 6, 21, 30, 40] reduce the matter to two categories of gradient discrepancies (i.e., magnitudes and directions of task gradients) and suggest various techniques to reconcile these differences. For gradient magnitudes, Sener et al. [37] characterize multi-task learning as multi-objective optimization and provide an upper bound for the multi-objective loss. Javaloy et al. [21] homogenize the gradient magnitudes through normalizing and scaling, ensuring training convergence. For gradient directions, Sinha et al. [40] and Maninis et al. [30] propose to make task gradients statistically indistinguishable through adversarial training.
## 3 Method

Given an RGB image, the task of visual localization seeks to estimate the rigid transformation \(T\in SE(3)\) from the camera coordinate system to the world coordinate system. Such a transformation is composed of a 3D rotation matrix \(R\in SO(3)\) and a translation vector \(t\in\mathbb{R}^{3}\).

### Overall

We propose OFVL-MS, a unified framework that jointly optimizes localization tasks of different scenes in a multi-task learning manner, where we view the visual localization of each scene as a new task. OFVL-MS is a two-stage pipeline, with scene coordinate prediction followed by a RANSAC-based PnP algorithm to calculate the camera pose \(T\). Specifically, OFVL-MS takes \(N\) RGB images \(I_{n}\in\mathbb{R}^{3\times H\times W},\ n\in\{1,2,...,N\}\) from different scenes as input and predicts dense 3D scene coordinates \(\hat{D}_{n}=\{\hat{d}_{n,i}=(\hat{x}_{n,i},\hat{y}_{n,i},\hat{z}_{n,i})|i=1,2,3,...,Q\}\) with 1D uncertainties \(\hat{U}_{n}=\{\hat{u}_{n,i}|i=1,2,3,...,Q\}\), where \(Q\) is the number of predicted 3D scene coordinates. Thus, we derive \(Q\) correspondences between 2D pixel coordinates and 3D scene coordinates. Finally, OFVL-MS utilizes a RANSAC-based PnP algorithm to calculate the 6-DOF camera pose \(T_{n}=[R_{n}|t_{n}]\). In this work, we focus on designing and optimizing OFVL-MS, which encourages all tasks to share as many parameters as possible for efficient storage deployment while maintaining superior performance for all tasks.

### Design OFVL-MS

As shown in Fig. 1, OFVL-MS is characterized by two components: the backbone and the regression layers.

**Backbone.** The backbone first utilizes a pre-layer with stride \(2\) to map the input image to a higher dimension and lower resolution, and then leverages four ResBlocks [18] with strides of \((1,1,2,2)\) and several attention modules to extract features. The backbone comprises a set of task-shared parameters \(\phi^{sh}\) for the \(N\) tasks and task-specific parameters \(\phi^{sp}_{n}\) for task \(n\) to transform each input \(I_{n}\) into an intermediate representation \(F_{n}=f(I_{n};\phi^{sh},\phi^{sp}_{n})\in\mathbb{R}^{C_{o}\times H_{o}\times W_{o}}\), where \(C_{o}\) is the dimension of \(F_{n}\), \(H_{o}=H/8\), and \(W_{o}=W/8\).

Figure 1: **Overall of OFVL-MS (using ResNet34 [18] as backbone).** OFVL-MS jointly optimizes visual localization across scenes and consists of two components, that is, the backbone and the regression layers. The layer-adaptive sharing policy and task-specific attention modules are utilized to generate more scene-related features, which are fed into the regression layers to predict scene coordinates with uncertainty. Besides, the penalty loss is proposed to facilitate OFVL-MS to share as many parameters as possible, realizing efficient storage deployment.

Figure 2: **Layer-adaptive Sharing Policy.** The scores \(s\) are utilized to determine which parameters ((\(w,b,s\)) or (\(\tilde{w},\tilde{b},s\))) are to be optimized in the current iteration.

**Regression Layer.** Additionally, each task \(n\) has a regression layer \(h\) with exclusive parameters \(\theta_{n}\), which takes \(F_{n}\) as input and predicts the 3D scene coordinates \(\hat{D_{n}}\) as well as the 1D uncertainties \(\hat{U_{n}}\) for task \(n\). In this work, instead of altering the architecture of the network or adding a fixed set of parameters, we seek a framework that enables all tasks to share as many parameters as feasible while retaining excellent performance, i.e., the proposed layer-adaptive sharing policy and gradient normalization algorithm.
We assume the layers with learnable parameters in the backbone, except for the attention modules, to be active layers, such as convolution and normalization layers, while other layers, such as ReLU and Sigmoid layers, are considered inactive layers.

**Layer-adaptive Sharing Policy.** Theoretically, when manually determining whether \(K\) active layers are shared or not, a combinatorial search over \(2^{K}\) possible networks is required. Thus, in lieu of hand-crafted weight- or layer-sharing schemes, inspired by TAPS [47], we relax the combinatorial issue into a learnable one and introduce a layer-adaptive sharing policy that automatically determines whether each of the active layers is shared or not for diverse scenes. Using a single weight and bias for each active layer, however, does not enable different tasks to share or monopolize the parameters dynamically at various iterations during training, hence limiting the adaptivity of OFVL-MS to the scenes. To tackle this issue, as shown in Fig. 2, taking a convolution layer as an example, we cast the initial weight \(w\in\mathbb{R}^{C_{out}\times C_{in}\times k\times k}\) of the convolution kernel as task-shared parameters and define two additional parameters: a task-specific weight \(\tilde{w}\in\mathbb{R}^{C_{out}\times C_{in}\times k\times k}\) and a learnable score \(s\in\mathbb{R}^{1}\), where \(C_{out}\), \(C_{in}\) and \(k\) denote the output channels, input channels, and kernel size, respectively. In the forward pass, we define an indicator function for the score to judge whether the parameters of the convolution layer are shared or not in the current iteration, formulated as:

\[\Theta(s)=\begin{cases}0&if~{}~{}s\geq\lambda\\ 1&otherwise,\end{cases} \tag{1}\]

where \(\lambda\) is a preset threshold. The task-adaptive weight \(\bar{w}\) used for the current iteration is formulated as:

\[\bar{w}=\Theta(s)w+(1-\Theta(s))\tilde{w}. \tag{2}\]

If the score \(s\) is larger than the preset threshold \(\lambda\), the task-specific parameters \(\tilde{w}\) will be activated and optimized, and vice versa. We apply the above procedure to all active layers to enable different tasks to share or monopolize the parameters dynamically at various iterations. Besides, including the additional parameters \(\tilde{w}\) in each layer does not result in a large increase in memory cost, since only the selected parameters \(\bar{w}\) and \(s\) are optimized at each iteration and all other parameters are kept offline. Compared with TAPS, our proposed sharing policy delivers the following merits: (1) we introduce an individual task-shared weight \(w\) and task-specific weight \(\tilde{w}\) for each active layer rather than a coupled weight as in TAPS, enabling the memory footprint to be agnostic to the number of tasks; (2) once the multi-task training is done, a newly added task can share task-shared parameters or task-specific parameters with any task in our setting, allowing for more flexible parameter sharing and real multi-task learning. Notably, we set the learnable score \(s\) to be task-shared, ensuring that the parameters of all scenes can be integrated into a collective model. Moreover, we calculate the summation of the absolute values of all scores as a penalty loss to enable all tasks to share as many parameters as possible, achieving efficient storage deployment. Since the indicator function \(\Theta(\cdot)\) is not differentiable, we need to modify its gradient during the backward pass, which will be presented in Appendix 2.1.
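As an illustration of how Eqs. (1)-(2) can be realized in code, below is a minimal PyTorch-style sketch of a layer-adaptive convolution. The straight-through surrogate used for the non-differentiable indicator is one common choice and only a stand-in for the paper's actual backward rule (given in its Appendix 2.1); the bias terms are omitted and all hyperparameters are placeholders.

```
# Minimal PyTorch sketch of the layer-adaptive sharing policy (Eqs. 1-2):
# a learnable score s selects between the task-shared weight w and the
# task-specific weight w_tilde. A straight-through surrogate stands in for
# the paper's backward rule; bias terms are omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv2d(nn.Module):
    def __init__(self, c_in, c_out, k, lam=0.5):
        super().__init__()
        self.w_shared = nn.Parameter(torch.randn(c_out, c_in, k, k) * 0.01)
        self.w_task = nn.Parameter(torch.randn(c_out, c_in, k, k) * 0.01)
        self.score = nn.Parameter(torch.tensor(0.0))   # learnable score s
        self.lam = lam                                 # threshold in Eq. (1)

    def forward(self, x):
        theta = (self.score < self.lam).float()        # Eq. (1): 1 -> shared
        soft = torch.sigmoid(self.lam - self.score)    # smooth surrogate
        gate = soft + (theta - soft).detach()          # hard fwd, smooth bwd
        # Eq. (2): w_bar = Theta(s) * w + (1 - Theta(s)) * w_tilde
        w_bar = gate * self.w_shared + (1.0 - gate) * self.w_task
        return F.conv2d(x, w_bar, padding=1)

layer = AdaptiveConv2d(c_in=3, c_out=8, k=3)
out = layer(torch.randn(1, 3, 32, 32))
penalty = layer.score.abs()    # this layer's contribution to the penalty loss
print(out.shape, penalty.item())
```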
Notably, as illustrated in Sec. 4.5, learning task-specific batch normalization can significantly improve the localization performance while adding few parameters, so we set the parameters of the normalization layers in the active layers as task-specific.

**Task-specific Attention Module.** We further embed an attention module into the backbone, empowering OFVL-MS to learn more scene-related features. The attention module applies a learned soft attention mask to the features, which can automatically determine the importance of the features for each task along the channel dimension, enabling self-supervised learning of more scene-related features. In this work, we adopt SENet [19] as the attention module and integrate it into the BasicBlock of each ResBlock. Each task \(n\) has task-specific attention modules with exclusive parameters.

### Optimize OFVL-MS

Since each task has its own dataset domain, we need to utilize multiple GPUs to optimize these tasks. For the sake of description, we assume that a single GPU is used to train each scene.

**Loss.** The goal of OFVL-MS is to ensure precise visual localization for all scenes while enabling different tasks to share as many parameters as possible. Therefore, we cast the training process of OFVL-MS as a joint optimization problem over the predicted scene coordinates and the scores in Eq. (1). For the \(n\)-th scene, the loss \(L_{n}\) involves two terms: the scene coordinates loss \(L_{n}^{sc}\) and the penalty loss \(L_{n}^{pe}\).

\[L_{n}=L_{n}^{sc}+\beta L_{n}^{pe}, \tag{3}\]

where \(\beta\) denotes a weight coefficient used to reconcile \(L_{n}^{sc}\) and \(L_{n}^{pe}\).

_Scene coordinates loss._ We employ the loss function proposed by KFNet [61] to maximize the logarithmic likelihood of the probability density function of the predicted scene coordinates. Specifically, the loss function of the \(n\)-th scene is formulated as:

\[L_{n}^{sc}=\frac{1}{Q}\sum_{i=1}^{Q}\left(3\log\hat{u}_{n,i}+\frac{||d_{n,i}-\hat{d}_{n,i}||_{2}^{2}}{2\hat{u}_{n,i}^{2}}\right), \tag{4}\]

where \(Q\) equals \(H/8\times W/8\); \(\hat{u}_{n,i}\) is the \(i\)-th predicted uncertainty; \(d_{n,i}\) is the \(i\)-th ground-truth scene coordinate; and \(\hat{d}_{n,i}\) is the \(i\)-th predicted scene coordinate.

_Penalty loss on the learnable scores._ The penalty loss \(L_{n}^{pe}\) motivates all tasks to share as many parameters of the active layers as possible. The loss is defined as the summation of the absolute values of the scores \(s_{n}\) for the \(n\)-th scene:

\[L_{n}^{pe}=\frac{1}{||S_{n}||}\sum_{s_{n}\in S_{n}}|s_{n}|, \tag{5}\]

where \(S_{n}\) denotes the collection of the scores and \(||S_{n}||\) denotes the number of scores. It is worth noting that the scores \(s_{n}\) of all scenes are identical since they are set as task-shared.

**Backward Pass and Gradient Normalization Algorithm.** For convenient portrayal, we denote the task-shared and task-specific parameters of OFVL-MS for the \(n\)-th scene as \(\chi_{n}^{sh}=\{\phi^{sh}\}\) and \(\chi_{n}^{sp}=\{\phi_{n}^{sp},\theta_{n}\}\). For the task-specific parameters, we define the gradients of \(\chi_{n,i}^{sp}\) for the \(n\)-th scene at the \(i\)-th iteration as \(G_{n,i}^{sp}=\nabla_{\chi_{n,i}^{sp}}L_{n,i}\), where \(L_{n,i}\) denotes the loss function for the \(n\)-th scene at the \(i\)-th iteration. Subsequently, the task-specific parameters on each GPU are optimized based on the calculated \(G_{n,i}^{sp}\). Note that when optimizing a scene with multiple GPUs, the gradients \(G_{n,i}^{sp}\) on the GPUs are averaged and the parameters are then updated accordingly.
For the task-shared parameters, the gradients of \(\chi_{n,i}^{sh}\) for the \(n\)-th scene at the \(i\)-th iteration are likewise formulated as \(G_{n,i}^{sh}=\nabla_{\chi_{n,i}^{sh}}L_{n,i}\). A straightforward scheme for optimizing the task-shared parameters is to average the gradients \(G_{n,i}^{sh}\) across all GPUs and then update the corresponding weights. While this method simplifies the optimization problem, it may also trigger gradient conflict among tasks, lowering overall performance due to unequal competition among tasks for the shared parameters, i.e., gradient magnitude disparities. Moreover, OFVL-MS is designed for jointly optimizing multiple indoor scenes, whose varied domains further intensify the gradient conflict. Inspired by [28, 40, 21, 37], we utilize a gradient normalization algorithm that homogenizes the gradient magnitudes of the task-shared parameters across all scenes, allowing all tasks to converge at the same pace and alleviating the gradient conflict. Specifically, OFVL-MS first places the gradient norms of the task-shared parameters on a common scale \(D\). Considering that the magnitude and the change rate of a gradient reflect whether the optimization direction at the current iteration is dependable, we define \(D\) as a linear combination of the task-wise gradient magnitudes: \[D=\sum_{n=1}^{N}W_{n,i}||G_{n,i}^{sh}||_{2}, \tag{6}\] where the weight \(W_{n,i}\) is given by the relative convergence of each task: \[W_{n,i}=\frac{||G_{n,i}^{sh}||_{2}/||G_{n,i-1}^{sh}||_{2}}{\sum_{j=1}^{N}||G_{ j,i}^{sh}||_{2}/||G_{j,i-1}^{sh}||_{2}}. \tag{7}\] Then, given the common scale \(D\), OFVL-MS generates the optimized gradients \(\hat{G}_{n,i}^{sh}\): \[\hat{G}_{n,i}^{sh}=D\frac{G_{n,i}^{sh}}{||G_{n,i}^{sh}||_{2}}. \tag{8}\] Ultimately, we average the gradients \(\hat{G}_{n,i}^{sh}\) over all GPUs to derive \(\hat{G}_{i}^{sh}\), ensuring that the gradients of the task-shared parameters are equivalent for all scenes. The \(\hat{G}_{i}^{sh}\) is formulated as: \[\hat{G}_{i}^{sh}=\frac{1}{N}\sum_{n=1}^{N}\hat{G}_{n,i}^{sh}. \tag{9}\] ### Pose Estimation We design the regression layer as a fully convolutional structure that predicts dense 3D scene coordinates as well as 1D uncertainties, where the uncertainty measures the prediction quality by quantifying the noise induced by both the data and the model. Based on the predicted 2D pixel coordinate-3D scene coordinate correspondences, we apply the RANSAC-based PnP algorithm to minimize reprojection errors and finally derive the camera pose \(T\). ## 4 Experiments ### Datasets **7-Scenes** [39] dataset records 41k RGB-D images and corresponding camera poses of seven different indoor environments using a handheld Kinect camera. **12-Scenes** [45] dataset, whose recorded environments are larger than those of 7-Scenes, records RGB-D images of twelve indoor environments with an iPad color camera. ### Experimental Settings **Implementation Details.** We employ the AdamW solver for optimization with a weight decay of \(0.05\). The initial learning rate is set to \(1.4\times 10^{-3}\) for 7-Scenes and \(2.4\times 10^{-3}\) for 12-Scenes, with cosine annealing. Considering that the number of images differs across scenes, we train OFVL-MS for \(200k\) iterations with a batch size of \(4\). For the layer-adaptive sharing policy, we set the threshold \(\lambda=0.5\) in Eq. (1) to determine whether each active layer of the backbone is shared or not. Besides, we set \(\beta=0.25\) in Eq. (3) to reconcile the scene coordinates loss and the penalty loss.
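Returning to the gradient normalization of Eqs. (6)-(9), the sketch below shows one way it could be carried out in a single process, assuming the per-task shared-parameter gradients have already been flattened into tensors; in the paper this happens across GPUs, and the function and variable names here are illustrative only.

```python
# Sketch of the gradient normalization of Eqs. (6)-(9), single-process version.
# Assumption: `grads` holds the flattened task-shared gradients G_n of each
# task at this iteration; `prev_norms` holds their norms from the previous one.
import torch

def normalize_shared_grads(grads, prev_norms, eps=1e-12):
    norms = [g.norm(p=2) for g in grads]
    # Eq. (7): relative-convergence weights W_n (normalized norm ratios).
    ratios = [n / (p + eps) for n, p in zip(norms, prev_norms)]
    total = sum(ratios)
    weights = [r / total for r in ratios]
    # Eq. (6): common scale D, a weighted sum of task-wise gradient magnitudes.
    D = sum(w * n for w, n in zip(weights, norms))
    # Eq. (8): rescale every task's gradient onto the common scale.
    rescaled = [D * g / (n + eps) for g, n in zip(grads, norms)]
    # Eq. (9): average over tasks to get the final shared-parameter gradient.
    return sum(rescaled) / len(rescaled), norms

g_hat, norms = normalize_shared_grads(
    [torch.randn(10), torch.randn(10) * 5.0],
    [torch.tensor(1.0), torch.tensor(1.0)],
)
print(g_hat.norm())  # both tasks now contribute gradients of equal magnitude
```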
More implementation details can be found in Appendix 2. **Evaluation Metrics.** Following previous works [53, 61, 26], we evaluate our method using the following metrics: (i) the median positional and rotational errors of the predicted pose; (ii) the percentage of images with positional and rotational errors less than 5cm and 5\({}^{\circ}\). ### Comparison with State-of-the-art Methods We design three versions of our method, OFVL-MS18, OFVL-MS34, and OFVL-MS50, using ResNet18, ResNet34, and ResNet50 [18] as backbones, respectively, and then compare the OFVL-MS family with other state-of-the-art methods on the 7-Scenes and 12-Scenes datasets, with the results reported in Table 1 and Table 2. **Localization on 7-Scenes.** We compare OFVL-MS with representative structure-based methods (AS [36], InLoc [43], HLoc [34]), APR methods (MS-Transformer [38]), and SCoRe-based methods (DSAC* [4], SCoordNet [61], HSCNet [26], FDANet [53], and VS-Net [20]). As shown in Table 1, OFVL-MS surpasses existing methods by non-trivial margins on all evaluation metrics. Specifically, OFVL-MS18/34 outperforms the structure-based method HLoc by \(12.09\%/14.27\%\) in terms of 5cm-5\({}^{\circ}\) accuracy. Besides, compared with the SCoRe-based methods HSCNet and FDANet, OFVL-MS18/34 realizes outstanding performance with improvements of \(0.4\%/2.57\%\) and \(1.51\%/3.68\%\). Compared with the cutting-edge method VS-Net, OFVL-MS18/34 also achieves higher performance. Moreover, OFVL-MS50 yields a \(0.021\)m median positional error, a \(0.69^{\circ}\) median rotational error, and \(88.72\%\) 5cm-5\({}^{\circ}\) accuracy, establishing a new state-of-the-art on the 7-Scenes dataset. Fig. 3 shows the cumulative pose error distributions of different approaches on the 7-Scenes dataset, which further demonstrates the superiority of the OFVL-MS family in visual localization. **Localization on 12-Scenes.** As illustrated in Table 2, we compare the OFVL-MS family with state-of-the-art methods on the 12-Scenes dataset. It can be observed that all methods achieve excellent results since the training trajectories closely resemble the test trajectories. Despite this, the OFVL-MS family exhibits exceptional performance, in which OFVL-MS34 realizes the most superior performance with a positional error of \(7\)mm and a localization accuracy of \(99.9\%\). \begin{table} \begin{tabular}{l|c|c c c c c c c|c} \hline \hline Methods & Metrics & Chess & Fire & Heads & Office & Pumpkin & Redkitchen & Stairs & Average \\ \hline AS [36] & **Med. Err.** & 0.03, 0.87 & 0.02, 1.01 & 0.01, 0.82 & 0.04, 1.15 & 0.07, 1.69 & 0.05, 1.72 & 0.04, 1.01 & 0.03, 1.18 \\ & **Acc.** & — & — & — & — & — & — & — & 68.7 \\ \hline InLoc [43] & **Med. Err.** & 0.03, 1.05 & 0.03, 1.07 & 0.02, 1.16 & 0.03, 1.05 & 0.05, 1.55 & 0.04, 1.31 & 0.09, 2.47 & 0.04, 1.38 \\ & **Acc.** & — & — & — & — & — & — & — & 66.3 \\ \hline HLoc [34] & **Med. Err.** & 0.02, 0.85 & 0.02, 0.94 & 0.01, 0.75 & 0.03, 0.92 & 0.05, 1.30 & 0.04, 1.40 & 0.05, 1.47 & 0.03, 1.09 \\ & **Acc.** & — & — & — & — & — & — & — & 73.1 \\ \hline MS-Transformer [38] & **Med. Err.** & 0.11, 4.66 & 0.24, 9.6 & 0.14, 12.19 & 0.17, 5.66 & 0.18, 4.44 & 0.17, 5.94 & 0.26, 8.45 & 0.18, 7.27 \\ & **Acc.** & — & — & — & — & — & — & — & — \\ \hline DSAC* [4] & **Med. Err.** & 0.02, 1.10 & 0.02, 1.24 & 0.01, 1.82 & 0.03, 1.15 & 0.04, 1.34 & 0.04, 1.68 & 0.03, 1.16 & **0.02**, 1.35 \\ & **Acc.** & — & — & — & — & — & — & — & 85.2 \\ \hline SCoordNet [61] & **Med. Err.** & 0.019, 0.63 & 0.023, 0.91 & 0.018, 1.26 & 0.026, 0.73 & 0.039, 1.09 & 0.039, 1.18 & 0.037, 1.06 & 0.029, 0.98 \\ & **Acc.** & — & — & — & — & — & — & — & — \\ \hline HSCNet [26] & **Med. Err.** & 0.02, 0.7 & 0.02, 0.9 & 0.01, 0.9 & 0.03, 0.8 & 0.04, 1.0 & 0.04, 1.2 & 0.03, 0.8 & 0.03, 0.9 \\ & **Acc.** & **97.5** & 96.7 & **100.0** & 86.5 & 59.9 & 65.5 & **87.5** & 84.8 \\ \hline FDANet [53] & **Med. Err.** & 0.018, 0.64 & 0.018, 0.73 & 0.013, 1.07 & 0.026, 0.75 & 0.036, 0.91 & 0.034, 1.03 & 0.041, 1.14 & 0.026, 0.89 \\ & **Acc.** & 95.70 & 96.10 & 99.20 & 88.08 & 65.65 & 78.32 & 62.80 & 83.69 \\ \hline VS-Net [20] & **Med. Err.** & **0.015, 0.5** & 0.019, 0.8 & 0.012, 0.7 & **0.021, 0.6** & 0.037, 1.0 & 0.036, 1.1 & 0.028, 0.8 & 0.024, 0.8 \\ & **Acc.** & — & — & — & — & — & — & — & — \\ \hline OFVL-MS18 & **Med. Err.** & 0.021, 0.67 & 0.018, 0.67 & 0.010, 0.56 & 0.030, 0.83 & 0.033, 0.96 & 0.035, 1.02 & 0.031, 0.89 & 0.025, 0.80 \\ & **Acc.** & 96.20 & 97.55 & 98.90 & 81.73 & 67.15 & 75.06 & 79.80 & 85.19 \\ \hline OFVL-MS34 & **Med. Err.** & 0.019, 0.63 & 0.017, 0.65 & **0.008, 0.53** & 0.027, 0.74 & 0.031, 0.93 & 0.032, 1.01 & 0.027, **0.69** & 0.023, 0.74 \\ & **Acc.** & 97.40 & 96.60 & **100.0** & 85.58 & 67.50 & 77.14 & 87.40 & 87.37 \\ \hline OFVL-MS50 & **Med. Err.** & **0.015, 0.50** & **0.015, 0.59** & **0.008**, 0.56 & 0.023, **0.63** & **0.030, 0.86** & **0.031, 0.99** & **0.026**, 0.76 & 0.021, **0.69** \\ & **Acc.** & 97.10 & **99.40** & **100.0** & **89.53** & **68.80** & **81.48** & 84.70 & **88.72** \\ \hline \hline \end{tabular} \end{table} Table 1: The median positional error (m), rotational error (\({}^{\circ}\)), and 5cm-5\({}^{\circ}\) accuracy (%) of different methods on the **7-Scenes dataset**. \begin{table} \begin{tabular}{l|c c} \hline \hline Methods & Med. Err. & Acc. \\ \hline DSAC* [4] & — & 99.1 \\ SCoordNet [61] & — & 98.9 \\ HSCNet [26] & 0.011, 0.50 & 99.3 \\ FDANet [53] & 0.014, 0.37 & 99.6 \\ \hline OFVL-MS18 & 0.013, 0.48 & 98.7 \\ OFVL-MS34 & 0.007, 0.25 & 99.9 \\ OFVL-MS50 & 0.008, 0.30 & 99.5 \\ \hline \hline \end{tabular} \end{table} Table 2: The median positional error (m), rotational error (\({}^{\circ}\)), and 5cm-5\({}^{\circ}\) accuracy (%) of different methods on the **12-Scenes dataset**. **Model Size Comparison.** We compare the storage space occupied by different methods to demonstrate the storage efficiency of the OFVL-MS family. Previous works typically train a separate model for each scene, resulting in a linear increase in model size with the number of scenes. In contrast, OFVL-MS consolidates multiple models with a majority of shared parameters into a single one, realizing efficient storage. As shown in Table 3, the OFVL-MS family reduces the model parameters significantly compared with other state-of-the-art methods. For the 7-Scenes dataset, the parameter size of OFVL-MS50 is only \(1/5\) of that of HSCNet, while the localization accuracy is improved by \(3.92\%\). For the 12-Scenes dataset, OFVL-MS34 achieves the best performance with much fewer parameters (only \(1/3\) of HSCNet). ### Joint Training vs Separate Training To further demonstrate the efficiency of jointly optimizing localization tasks across scenes, we also train a separate model for each scene. We choose OFVL-MS34 as the benchmark for validation. As shown in Table 4, OFVL-MS34 reduces the total model size from \(177.779\)M to \(64.403\)M by sharing parameters across all scenes.
Besides, it is remarkable that OFVL-MS34 achieves competitive performance through joint training, indicating that closely related tasks benefit from each other. ### Diverse Parameter Sharing Strategies To verify the effectiveness of the proposed layer-adaptive sharing policy, we apply three different parameter sharing strategies to OFVL-MS34 on the 7-Scenes dataset. EXP1: all parameters of the active layers are set as task-shared. EXP2: for all parameters of the active layers (both convolutional and batch normalization layers), whether they are shared is determined by the scores. EXP3: for the convolutional layers among the active layers, whether they are shared is determined by the scores, while the batch normalization layers are set as task-specific. As shown in Table 5, compared to setting all parameters as task-shared, OFVL-MS34 significantly improves the localization performance from \(79.94\%\) to \(87.37\%\) in terms of 5cm-5\({}^{\circ}\) accuracy at the expense of a small increase in model parameters, indicating that using additional task-specific parameters to learn scene-related features is critical to resolving gradient conflict. Besides, the performance of OFVL-MS is further enhanced with the BN layers set as task-specific. Moreover, when generalizing to a new scene, OFVL-MS requires only a few task-specific parameters and thus can scale up gracefully with the number of scenes. We utilize the models trained on 12-Scenes/7-Scenes and conduct generalization experiments on 7-Scenes/12-Scenes. Specifically, we freeze the task-shared parameters trained on 12-Scenes/7-Scenes, and add task-specific parameters as well as an additional regression layer for each scene of 7-Scenes/12-Scenes to predict the scene coordinates. As shown in Table 6, despite generalizing to a new scene, OFVL-MS34/50 still outperforms HSCNet and FDANet by \(0.93\%/1.95\%\) and \(2.04\%/3.06\%\) in terms of 5cm-5\({}^{\circ}\) accuracy for EXP1, illustrating that OFVL-MS avoids catastrophic forgetting and achieves genuine incremental learning. Besides, compared with the \(41.250/24.108\)M additional parameters of HSCNet and FDANet, OFVL-MS18/34/50 only needs \(5.476/12.117/9.881\)M parameters when generalizing to a new scene, realizing efficient storage. For EXP2, the OFVL-MS family yields the lowest localization errors. It is worth noting that the incremental models achieve more precise localization in most scenes except for Floor5b, resulting in a decline in 5cm-5\({}^{\circ}\) accuracy, which will be discussed in Appendix 3. Moreover, the OFVL-MS family realizes efficient storage deployment with \(5.597/6.501/5.835\)M additional parameters compared with HSCNet and FDANet. ### Ablation study To comprehensively verify the efficacy of the modules proposed in this work, various variants of OFVL-MS34 are evaluated on the 7-Scenes dataset. As shown in Table 7, all of the components contribute to the outstanding performance. EXP1: removing all task-specific attention modules results in a large drop in localization accuracy, demonstrating the strong ability of TSAM to generate more scene-related features and realize efficient scene parsing. EXP2: removing the gradient normalization algorithm leads to much lower accuracy, validating that homogenizing the gradient magnitudes of the task-shared parameters significantly alleviates the gradient conflict. EXP3: removing the penalty loss results in degraded localization accuracy, indicating that promoting informative parameter sharing across scenes improves localization performance.
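The incremental setting just described can be sketched in a few lines of PyTorch-style pseudocode; the parameter-naming convention used below to pick out the task-specific parts is an assumption for illustration, not the paper's actual API.

```python
# Sketch of adapting a trained OFVL-MS model to a new scene: freeze the
# task-shared parameters, then train only task-specific parameters and a
# fresh regression head. The name matching is an illustrative assumption.
import torch.nn as nn

def prepare_for_new_scene(backbone: nn.Module, new_head: nn.Module):
    for p in backbone.parameters():
        p.requires_grad = False  # freeze everything, including shared weights
    for name, p in backbone.named_parameters():
        # Re-enable task-specific weights, BN layers, and attention modules.
        if any(tag in name for tag in ("specific", "bn", "attention")):
            p.requires_grad = True
    trainable = [p for p in backbone.parameters() if p.requires_grad]
    trainable += list(new_head.parameters())  # the new regression layer
    return trainable
```

Only the returned parameters would be handed to the optimizer, which is what keeps the per-scene parameter growth small, as in Table 6.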
### Camera Localization on LIVL Despite the existence of publicly available datasets for visual localization, there is no dataset for large-scale indoor scenes. Thus, we introduce the challenging **LIVL** dataset containing RGB-D images tagged with 6-DoF camera poses collected across four scenes: (i) **K544**: spanning about \(12\times 9\)m\({}^{2}\); (ii) **Floor5**: spanning about \(12\times 5\)m\({}^{2}\); (iii) **Parking lot1**: spanning about \(8\times 6\)m\({}^{2}\); (iv) **Parking lot2**: spanning about \(8\times 8\)m\({}^{2}\). Each scene contains three sequences for training and one sequence for testing. Substantial motion blur and sparse texture make visual localization in the four scenes challenging. We give a visualization of the **LIVL** dataset in Fig. 4. The dataset was collected using an autonomous platform equipped with a RealSense D435 camera and a VLP-16 laser radar. The RGB and depth images are captured at a resolution of \(640\times 480\) pixels and aligned with the point clouds using timestamps. We utilize the LiDAR-based SLAM system A-LOAM [60] to compute the ground-truth poses. More details of the dataset can be found in Appendix 4. As shown in Table 8, OFVL-MS50 realizes the best performance with \(0.142\)m and \(1.42^{\circ}\) median localization errors. In particular, OFVL-MS50 yields \(0.05\)m and \(0.81^{\circ}\) localization errors in the K544 scene, which contains discriminative texture. Moreover, Floor5 and Parking lot1 are laborious for the OFVL-MS family to localize since they exhibit repetitive and sparse texture as well as illumination disturbance. Besides, we can also observe that the 5cm-5\({}^{\circ}\) accuracy is inferior due to the large scale of the LIVL dataset. Compared with the typical SCoRe-based methods SCoordNet [61] and FDANet [53], the OFVL-MS family outperforms them by non-trivial margins on all evaluation metrics while necessitating much fewer total parameters, further indicating that closely related tasks benefit from the shared parameters and demonstrating the efficacy of our OFVL-MS. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Methods & TSAM & GNA & PL & Med. Err. & Acc. & Params-t (M) \\ \hline EXP1 & ✗ & ✓ & ✓ & 0.026, 0.79 & 84.30 & **63.297** \\ EXP2 & ✓ & ✗ & ✓ & 0.025, 0.77 & 84.16 & 64.403 \\ EXP3 & ✓ & ✓ & ✗ & **0.023, 0.74** & 86.25 & 78.059 \\ EXP4 & ✓ & ✓ & ✓ & **0.023, 0.74** & **87.37** & 64.403 \\ \hline \hline \end{tabular} \end{table} Table 7: **Ablation study with various variants of OFVL-MS** on the 7-Scenes dataset. TSAM: Task-specific Attention Module, GNA: Gradient Normalization Algorithm, PL: Penalty Loss. Params-t means the total parameters of OFVL-MS34 for the seven scenes. Figure 4: **LIVL dataset.** The blue lines indicate training trajectories whereas the red lines indicate test trajectories. Table 8: The median positional error (m) and rotational error (\({}^{\circ}\)) of the OFVL-MS family on the **LIVL dataset**. Params-t means the total parameters of OFVL-MS for the four scenes. ## 5 Conclusion In this work, we introduce OFVL-MS, a unified network that achieves precise visual localization across scenes in a multi-task learning manner. OFVL-MS achieves high performance for all tasks and remains storage-efficient for model deployment through the forward pass (layer-adaptive sharing policy) and backward pass (gradient normalization algorithm) of the network. Moreover, a penalty loss is proposed to motivate OFVL-MS to share as many parameters as possible while maintaining precise localization accuracy. We demonstrate that OFVL-MS can generalize to a new scene with few task-specific parameters while realizing superior localization performance. We also publish a **new large indoor dataset LIVL** to provide a new test benchmark for the community. **Acknowledgement**. This work was supported in part by the National Natural Science Foundation of China under Grant 62073101, in part by the Science and Technology Innovation Venture Capital Project of Tiandi Technology Co., LTD. (2022-2-TD-QN009), in part by the "Ten Thousand Million" Engineering Technology Major Special Support Action Plan of Heilongjiang Province, China (SC2021ZX02A0040), and in part by the Self-Planned Task (SKLRS202301A09) of SKLRS (HIT) of China.
2305.09143
Phase locking in voltage-controlled parametric oscillator
A recent experimental demonstration of a parametric magnetization oscillation excited by applying a microwave voltage to a ferromagnetic metal will be applicable not only to a new magnetization switching method but also to bio-inspired computing. It should be, however, noted that a phase of the parametric magnetization oscillation is not uniquely locked, related to the fact that a frequency of the microwave voltage is twice the value of the magnetization oscillation. There are two possible phases in the parametric oscillation state, and which of the two is realized depends on the initial condition of the magnetization. Here, we examine two approaches to lock the phase uniquely. One is to suppress the distribution of the initial state by enhancing the perpendicular magnetic anisotropy before applying microwave voltage, and the other is to use a sweeping frequency. Through numerical simulation of the Landau-Lifshitz-Gilbert equation and quantification of locked rate, we find that the sweeping frequency is more effective to lock the phase of the parametric magnetization oscillation.
Tomohiro Taniguchi
2023-05-16T03:48:35Z
http://arxiv.org/abs/2305.09143v1
# Phase locking in voltage-controlled parametric oscillator ###### Abstract A recent experimental demonstration of a parametric magnetization oscillation excited by applying a microwave voltage to a ferromagnetic metal will be applicable not only to a new magnetization switching method but also to bio-inspired computing. It should be, however, noted that a phase of the parametric magnetization oscillation is not uniquely locked, related to the fact that a frequency of the microwave voltage is twice the value of the magnetization oscillation. There are two possible phases in the parametric oscillation state, and which of the two is realized depends on the initial condition of the magnetization. Here, we examine two approaches to lock the phase uniquely. One is to suppress the distribution of the initial state by enhancing the perpendicular magnetic anisotropy before applying microwave voltage, and the other is to use a sweeping frequency. Through numerical simulation of the Landau-Lifshitz-Gilbert equation and quantification of locked rate, we find that the sweeping frequency is more effective to lock the phase of the parametric magnetization oscillation. keywords: spintronics, parametric oscillation, voltage controlled magnetic anisotropy effect + Footnote †: journal: Journal of Magnetism and Magnetic Materials ## 1 Introduction Magnetization dynamics studied in magnetism and spintronics are mainly classified into two groups: magnetization switching and magnetization oscillation. Magnetization switching is excited by applying a magnetic field [1; 2], an electric current [3; 4; 5; 6; 7; 8], and/or a voltage [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21] to a ferromagnet, and the technique has been applied to non-volatile memory applications [22]. Magnetization oscillation, on the other hand, is driven by a microwave magnetic field [23] or by applying a direct or oscillating electric current [24; 25; 26; 27; 28; 29; 30], and is expected to be used in microwave and millimeter-wave sensors and generators [31; 32] and in bio-inspired computing [33; 34; 35; 36; 37; 38; 39]. Note that magnetization oscillation requires continuous energy injection into the ferromagnet to sustain the oscillation against the energy dissipation due to the damping torque. It has been difficult to excite an oscillation by applying a voltage because the voltage-controlled magnetic anisotropy (VCMA) effect merely changes the shape of the magnetic potential energy and does not act as an energy injector. Recently, however, an experimental demonstration of parametric magnetization oscillation through the VCMA effect was reported [40], where the magnetization oscillated with the Larmor frequency \(f_{\rm L}\) when a microwave voltage with a frequency of \(2f_{\rm L}\) was applied. Such a method might solve issues in oscillator devices driven by electric current, such as the large energy dissipation due to Joule heating. In the parametric magnetization oscillation driven by the VCMA effect, on the other hand, since the frequency of the driving voltage is twice that of the magnetization oscillation, two possible phases exist in the steady state. Accordingly, the oscillating output from the ferromagnet is not unique [41]. This situation should be avoided in some practical applications, such as physical reservoir computing [34; 36], where a one-to-one correspondence between input and output signals, called the echo state property [42], is required [39; 43].
This is in contrast with the phase locking of electric-current-driven oscillators [44; 45], where the frequency of the magnetization oscillation becomes identical to that of the input signal and the phase with respect to the input signal is uniquely locked. In this work, we perform numerical and theoretical analyses of phase locking in a parametric oscillator driven by the VCMA effect. First, we show that the phase of the oscillation depends on the initial state of the magnetization, which usually fluctuates due to thermal activation. The distribution of the initial state can be suppressed by enhancing the perpendicular magnetic anisotropy before the application of the microwave voltage. This approach is, however, not effective because a tiny difference in the initial state can lead to a different phase. The other proposal is to sweep the frequency of the microwave voltage. When the frequency of the voltage is slightly different from \(2f_{\rm L}\), the relative phase between the magnetization oscillation and the voltage is uniquely locked because of an asymmetry in the stability of the phase. The phase remains locked even after the frequency of the voltage is changed to \(2f_{\rm L}\). These results are obtained by solving the Landau-Lifshitz-Gilbert (LLG) equation both numerically and analytically. ## 2 System description ### LLG equation In Fig. 1(a), we show a schematic illustration of a ferromagnetic/nonmagnetic/ferromagnetic trilayer. The top and bottom ferromagnets are the free and reference layers, respectively. The unit vector pointing in the magnetization direction of the free layer is denoted as \(\mathbf{m}\); it has been experimentally confirmed that a macrospin model describes the magnetization dynamics due to the VCMA effect well [40]. The nonmagnetic insulating layer is thick enough that charge accumulation is generated near the interface when an electric voltage is applied. The charge accumulation modulates the electronic states and changes the magnetic anisotropy [46; 47; 48]. Accordingly, an applied voltage changes the stable state of the magnetization and drives the magnetization dynamics, which is described by the LLG equation, \[\frac{d\mathbf{m}}{dt}=-\gamma\mathbf{m}\times\mathbf{H}+\alpha\mathbf{m} \times\frac{d\mathbf{m}}{dt}, \tag{1}\] where \(\gamma\) and \(\alpha\) are the gyromagnetic ratio and the Gilbert damping constant, respectively. Figure 1: (a) Schematic illustration of a parametric oscillation of magnetization in a ferromagnetic multilayer. A unit vector pointing in the magnetization direction in the free layer is \(\mathbf{m}\). The \(z\) axis is normal to the film plane, while the \(x\) axis is parallel to an external magnetic field. The magnetization oscillates around the external magnetic field \(H_{\rm appl}\) with the Larmor frequency \(f_{\rm L}=\gamma H_{\rm appl}/(2\pi)\) when a microwave voltage with a frequency of \(2f_{\rm L}\) is applied. (b) Time evolution of \(m_{x}\) in the presence of the microwave voltage. The vertical axis represents the ratio of the frequency \(f\) of the microwave voltage to the Larmor frequency. For the parametric oscillation induced by the VCMA effect, the magnetic field \(\mathbf{H}\) is given by \[\mathbf{H}=H_{\mathrm{appl}}\mathbf{e}_{x}+H_{\mathrm{K}}m_{z}\mathbf{e}_{z}, \tag{2}\] with \[H_{\mathrm{K}}=H_{\mathrm{Ka}}\sin\left(2\pi ft\right).
\tag{3}\] Here, \(H_{\mathrm{appl}}\) is an external magnetic field applied in the \(x\) direction, while \(H_{\mathrm{K}}\) is the perpendicular magnetic anisotropy field along the \(z\) axis. In Eq. (3), \(H_{\mathrm{K}}\) has only an oscillating component \(H_{\mathrm{Ka}}\) with a frequency of \(f\), and the direct component \(H_{\mathrm{Kd}}\) is assumed to be zero for simplicity. Such a situation can be experimentally realized by applying both direct and microwave voltages to the free layer [40]. In Fig. 1(b), we show the time evolution of \(m_{x}\) obtained by solving Eq. (1) numerically, where the values of the parameters are taken from typical experiments as \(\gamma=1.764\times 10^{7}\) rad/(Oe s), \(\alpha=0.005\), \(H_{\mathrm{appl}}=720\) Oe, and \(H_{\mathrm{Ka}}=100\) Oe (see also Appendix A for the details of the numerical simulations). It is shown that \(m_{x}\) tends to zero when \(f\) is close to twice the Larmor frequency \(f_{\mathrm{L}}=\gamma H_{\mathrm{appl}}/(2\pi)\). In this case, the magnetization oscillates around the \(x\) axis (see also Fig. 2 discussed below). On the other hand, when \(f\) differs from \(2f_{\mathrm{L}}\), \(m_{x}\) saturates to \(+1\), which indicates that the magnetization relaxes to the direction of the external magnetic field. Figure 2: (a) 100 samples of the initial state prepared by solving the LLG equation with thermal activation. (b) Examples of the oscillation of \(m_{z}\) for two different initial states. The dotted line represents the oscillation of the microwave voltage. The enlarged inset indicates that the peak positions of \(m_{z}\) and the microwave voltage are slightly different. ### Dependence of phase on initial state Next, we show that the phase of the parametric oscillation is not uniquely locked. Before applying the microwave voltage, the magnetic field is given by \(\mathbf{H}=H_{\mathrm{appl}}\mathbf{e}_{x}+H_{\mathrm{Kd}}m_{z}\mathbf{e}_{z}\), where the direct component \(H_{\mathrm{Kd}}\) of the perpendicular magnetic anisotropy field consists of shape and bulk-and-interfacial magnetic anisotropy fields. The magnetization points in a direction close to an energetically stable state, \(\mathbf{m}^{(0)}=[m_{x}^{(0)},m_{y}^{(0)},m_{z}^{(0)}]=[(H_{\mathrm{appl}}/H_{ \mathrm{Kd}}),0,\pm\sqrt{1-(H_{\mathrm{appl}}/H_{\mathrm{Kd}})^{2}}]\), under the assumption \(H_{\mathrm{appl}}/H_{\mathrm{Kd}}<1\), at which the energy density \(E=-M\int d\mathbf{m}\cdot\mathbf{H}\) is minimized (\(M\): saturation magnetization). The magnetization shows a small-amplitude oscillation around this equilibrium direction \(\mathbf{m}^{(0)}\) due to thermal activation. Therefore, when we apply the voltage, the initial state of the magnetization is randomly distributed. To estimate such a distributed initial condition, we solve the LLG equation with \(\mathbf{H}=H_{\mathrm{appl}}\mathbf{e}_{x}+H_{\mathrm{Kd}}m_{z}\mathbf{e}_{z}\) and a random torque \(-\gamma\mathbf{m}\times\mathbf{h}\). The components \(h_{k}\) (\(k=x,y,z\)) of the random field \(\mathbf{h}\) satisfy the fluctuation-dissipation theorem [49], \[\langle h_{k}(t)h_{\ell}(t^{\prime})\rangle=\frac{2\alpha k_{\mathrm{B}}T}{ \gamma MV}\delta_{k\ell}\delta(t-t^{\prime}), \tag{4}\] where \(V=Sd\) is the volume of the free layer, consisting of the cross-sectional area \(S\) and thickness \(d\). In this work, we use \(M=955\) emu/cm\({}^{3}\), \(H_{\mathrm{Kd}}=6.283\) kOe, and \(d=1.1\) nm from Ref. [40], while \(S\) is assumed to be \(\pi\times 50^{2}\) nm\({}^{2}\) [40].
The temperature \(T\) is 300 K. Note that the random torque is included in the LLG equation only when we evaluate the initial state, whereas it is neglected during the calculation of the parametric oscillation in order to clarify the roles of the approaches developed in Secs. 3 and 4.2. The role of thermal activation during the parametric oscillation will be studied in Sec. 4.3. The calculated results for 100 samples of the initial state obtained by this method are shown in Fig. 2(a). Since \(H_{\mathrm{Kd}}\gg H_{\mathrm{appl}}\), the magnetization points approximately in the \(z\) direction with a slight tilt toward the \(x\) direction. Next, let us return to the parametric oscillation, where the initial condition of Eq. (1) is chosen from one of these 100 samples and the frequency of the microwave voltage is \(2f_{\mathrm{L}}\). The two solid lines in Fig. 2(b) show examples of the parametric oscillation of \(m_{z}\) with two different initial conditions. We also show the oscillation of the microwave voltage by a dotted line. As can be seen, the magnetization oscillates with half the frequency of the microwave voltage, i.e., \(f_{\mathrm{L}}\), and there are two possible phases of the magnetization with respect to the microwave voltage, depending on the initial state. Since the initial state is usually uncontrollable, this result indicates that the phase of the parametric oscillation is not uniquely locked. To solve the issue, we examine two approaches. The first is to suppress the distribution of the initial state, and the other is to use a sweeping frequency. In the following sections, we describe the details of these approaches and their effectiveness. ## 3 Suppression of initial distribution The results shown in Figs. 2(a) and 2(b) indicate that the randomness of the initial state prevents the phase from being locked uniquely. We notice that the distribution of the initial state can be suppressed by the VCMA effect for the following reason. It has been experimentally revealed [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21] that the perpendicular magnetic anisotropy is reduced when the electric field generated by an applied voltage points in one direction, while it is enhanced when the field points in the opposite direction. In other words, the perpendicular magnetic anisotropy can be either reduced or enhanced, depending on the sign of the voltage. In the switching and parametric-oscillation applications, the sign of the direct voltage is chosen so that the perpendicular magnetic anisotropy field is reduced, in order to move the magnetization away from its initial state and start its dynamics. On the other hand, when a voltage with the opposite sign is applied, the magnetization moves toward the nearest equilibrium state, i.e., \(\mathbf{m}\) moves toward \(\mathbf{m}^{(0)}\) against thermal activation. In this case, the distribution of the initial state will be suppressed. This method was used to reduce the switching error for memory applications [50], where, before applying the voltage for switching, another voltage of the opposite sign is applied. Here, we apply the method described above to suppress the distribution of the initial state. As mentioned, 100 samples of the initial state are prepared by solving the LLG equation with the magnetic field \(\mathbf{H}=H_{\mathrm{appl}}\mathbf{e}_{x}+H_{\mathrm{Kd}}m_{z}\mathbf{e}_{z}\).
We assume that \(H_{\mathrm{Kd}}\) is enhanced, before the microwave voltage is applied, by the VCMA effect caused by a direct voltage with an appropriate sign. In Fig. 3(a), we show two examples of the initial \(m_{z}\) obtained for \(H_{\mathrm{Kd}}\simeq 6.5\) (red) and 10.0 kOe (blue). It is shown that the distribution of the initial state is suppressed for large \(H_{\mathrm{Kd}}\). Using these initial conditions, we performed the numerical simulation of the LLG equation and investigated the phase in the parametric oscillation state. To quantify the uniqueness of the phase, we introduce a locked rate LR as follows: \[\text{LR}=\frac{\max[N_{+},N_{-}]}{N_{+}+N_{-}}, \tag{5}\] where \(N_{+}\) is the number of samples locked to one of the two possible phases, while \(N_{-}\) is the number of samples locked to the other. Since the phase in the parametric oscillation state is locked to either one of them, \(N_{+}+N_{-}\) equals the number of samples (100 in this study). When the phase of the magnetization becomes independent of the initial state and is uniquely locked, \(\text{LR}=1\), while LR becomes 0.5 when the two possible phases are equally realized. In Fig. 3(b), we summarize the dependence of the locked rate LR on the perpendicular magnetic anisotropy field \(H_{\text{Kd}}\) before applying the microwave voltage. Roughly speaking, the locked rate increases as \(H_{\text{Kd}}\) increases. However, the locked rate is widely distributed even for relatively large \(H_{\text{Kd}}\). The result indicates that the phase in the parametric oscillation state is sensitive to the initial state, even after its distribution is suppressed. Therefore, we conclude that this method is not effective enough for phase locking. ## 4 Phase locking by using sweeping frequency ### Theoretical aspect of proposal In this section, we attempt another method to lock the phase uniquely by using a sweeping frequency, where the frequency of the microwave voltage is initially slightly different from \(2f_{\rm L}\) and slowly changes to \(2f_{\rm L}\). The point of this method is as follows. Figure 3: (a) The \(z\) components of the initial conditions of 100 samples, where \(H_{\text{Kd}}\simeq 6.5\) (10.0) kOe for the red (blue) dots. (b) Locked rate as a function of \(H_{\text{Kd}}\). The fact that there are two possible phases in a steady state implies that the phase experiences a double-well potential whose two minima have equal depths. The situation can be confirmed by deriving an approximate equation of motion for the phase from Eq. (1). We introduce spherical coordinates \((\Theta,\Phi)\) for \(\mathbf{m}\) as \(\mathbf{m}=(m_{x},m_{y},m_{z})=(\cos\Theta,\sin\Theta\cos\Phi,\sin\Theta\sin\Phi)\). In terms of \(\Theta\) and \(\Phi\), the LLG equation is explicitly given by \[\begin{split}\frac{d\Theta}{dt}=&\gamma H_{\rm Ka }\sin(2\pi ft)\sin\Theta\sin\Phi\cos\Phi\\ &-\alpha\gamma\left[H_{\rm appl}-H_{\rm Ka}\sin(2\pi ft)\cos \Theta\sin^{2}\Phi\right]\sin\Theta,\end{split} \tag{6}\] \[\begin{split}\frac{d\Phi}{dt}=&\gamma H_{\rm appl}- \gamma H_{\rm Ka}\sin(2\pi ft)\cos\Theta\sin^{2}\Phi\\ &+\alpha\gamma H_{\rm Ka}\sin(2\pi ft)\sin\Phi\cos\Phi,\end{split} \tag{7}\] where we use the approximation \(1+\alpha^{2}\simeq 1\) for simplicity. Since we are interested in dynamics in which the magnetization oscillates with a frequency of nearly half that of the microwave voltage \(f\), we introduce \(\Psi\) as \[\Psi=\Phi-\pi ft.
\tag{8}\] Note that the phase of the microwave voltage is \(2\pi ft\), while \(\pi ft\) appears in Eq. (8). While \(\Phi-2\pi ft\) is not constant, \(\Psi=\Phi-\pi ft\) becomes approximately constant when the microwave frequency \(f\) is close to \(2f_{\rm L}\). Let us investigate this point in the following. Averaging the equation of motion with respect to the fast variable, i.e., averaging over a time period of \(1/f\), we obtain \[\begin{split}\frac{d\Psi}{dt}=&\gamma H_{\rm appl}- \pi f\\ &-\frac{\gamma H_{\rm Ka}}{4}\cos\Theta\sin 2\Psi+\frac{\alpha \gamma H_{\rm Ka}}{4}\cos 2\Psi,\end{split} \tag{9}\] where we assume that \(\Theta\) is approximately constant (\(\Theta\) here might be regarded as its averaged value in the oscillation state). We also note that the term proportional to \(\alpha H_{\rm Ka}\) is kept as the fourth term on the right-hand side of Eq. (9), although the damping constant \(\alpha\) is usually small. This is because the other term proportional to \(H_{\rm Ka}\), corresponding to the third term of Eq. (9), has a factor \(\cos\Theta=m_{x}\), which is close to zero near the parametric oscillation state, as can be seen in Fig. 1(b); therefore, it is not clear whether the fourth term in Eq. (9) is sufficiently small to ignore compared with the third term. Note that we can introduce a potential \(U\), satisfying \(d\Psi/dt=-\partial U/\partial\Psi\), from Eq. (9) as \[\begin{split} U&=-\left(\gamma H_{\rm appl}-\pi f \right)\Psi-\frac{\gamma H_{\rm Ka}}{8}\cos\Theta\cos 2\Psi-\frac{\alpha\gamma H_{ \rm Ka}}{8}\sin 2\Psi\\ &=-\left(\gamma H_{\rm appl}-\pi f\right)\Psi-\frac{\gamma H_{\rm Ka }}{8}\sqrt{\alpha^{2}+\cos^{2}\Theta}\cos\left(2\Psi-\delta\right),\end{split} \tag{10}\] with \(\cos\delta=\cos\Theta/\sqrt{\alpha^{2}+\cos^{2}\Theta}\) and \(\sin\delta=\alpha/\sqrt{\alpha^{2}+\cos^{2}\Theta}\). When the frequency \(f\) of the microwave voltage equals twice the Larmor frequency \(f_{\rm L}=\gamma H_{\rm appl}/(2\pi)\), i.e., \(f=2f_{\rm L}\), the potential \(U\) becomes a double-well potential described by \(-\cos(2\Psi-\delta)\). In this case, the potential has minima at \(\Psi=\Psi_{0}=\delta/2\) and \(\Psi_{0}+\pi\), and the values of the potential at these points are the same, i.e., the double-well potential is symmetric; see Fig. 4(a), where such a symmetric potential is schematically shown by a red solid line. This is consistent with the results shown in Fig. 2(b), where the two possible phases in the parametric oscillation state differ by nearly \(\pi\). Note also that this result confirms the validity of keeping the fourth term in Eq. (9): if we neglected this term proportional to \(\alpha\), \(\delta\) would become zero, i.e., \(\Psi_{0}=0\) and \(\pi\) would be predicted as the values of \(\Psi\) in the parametric oscillation state; however, this contradicts the numerical simulation shown in the inset of Fig. 2(b), where the peak positions of \(m_{z}\) and the microwave voltage are slightly different. Thus, the fourth term in Eq. (9) should be kept. Equation (10) also indicates that the potential becomes asymmetric, due to the term \(-(\gamma H_{\rm appl}-\pi f)\Psi\), when \(f\neq 2f_{\rm L}\), i.e., the depths of the two minima are different; see Fig. 4(a), where such an asymmetric potential is schematically shown by a blue solid line for the case of \(f=2f_{\rm L}\times 0.98\).
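The shape of this potential is easy to inspect numerically. The following sketch evaluates the normalized potential \(U/(\gamma H_{\rm appl})\) of Eq. (10) for the symmetric and tilted cases; the \(\cos\Theta\) values are taken from the caption of Fig. 4(a), and everything else follows the parameters quoted above.

```python
# Numerical sketch of the averaged phase potential U(Psi) in Eq. (10).
import numpy as np

gamma, alpha = 1.764e7, 0.005          # rad/(Oe s), Gilbert damping
H_appl, H_Ka = 720.0, 100.0            # Oe
f_L = gamma * H_appl / (2.0 * np.pi)   # Larmor frequency

def U(psi, f, cos_theta):
    amp = np.sqrt(alpha**2 + cos_theta**2)
    delta = np.arctan2(alpha, cos_theta)  # cos(delta), sin(delta) as in the text
    return (-(gamma * H_appl - np.pi * f) * psi
            - (gamma * H_Ka / 8.0) * amp * np.cos(2.0 * psi - delta))

psi = np.linspace(-np.pi, np.pi, 4001)
u_sym = U(psi, 2.0 * f_L, 0.004) / (gamma * H_appl)          # f = 2 f_L
u_tilt = U(psi, 2.0 * 0.98 * f_L, 0.581) / (gamma * H_appl)  # f = 0.98 x 2 f_L
# For the symmetric case, the two minima at delta/2 and delta/2 + pi are
# equally deep; the tilted case adds the linear term and breaks this symmetry.
print(psi[np.argmin(u_sym)], u_sym.min())
```

Plotting `u_sym` and `u_tilt` reproduces the qualitative picture of Fig. 4(a): equal wells at \(f=2f_{\rm L}\) and a tilted, shallower double well otherwise.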
Note also that the depths of the local minima become shallow due to the term \(-(\gamma H_{\rm appl}-\pi f)\Psi\). For example, while the depth of the normalized potential, \(U/(\gamma H_{\rm appl})\), for \(f=2f_{\rm L}\) is on the order of \(10^{-4}\), that for \(f\neq 2f_{\rm L}\) is nearly one order of magnitude shallower; see Fig. 4(a) and its inset, as well as Appendix B. Recall that Eq. (9) was obtained after applying an averaging technique to a fast variable, i.e., while, for example, \(m_{x}=\cos\Theta\) is not constant in the steady state [see Fig. 1(b)], we replaced it in Eq. (9) with an averaged value. The component \(m_{x}\) oscillates around the averaged value, and thus the oscillation trajectory of the magnetization around the \(x\) axis is slightly distorted from a circle. This is because the magnetic fields in the \(y\) and \(z\) directions are different, i.e., the oscillating magnetic field due to the VCMA effect appears in the \(z\) direction only. The small-amplitude oscillation of \(m_{x}\), which was hidden by the averaging technique, acts as a perturbation and prevents \(\Psi\) from settling at a minimum, i.e., \(\Psi\) also oscillates around the local minima. When the depth of a local minimum is shallow, this perturbation might move \(\Psi\) to a deeper local minimum. In this case, the phase of the magnetization will be uniquely locked to the value giving the deeper local minimum of the potential. Figure 4: (a) Schematic illustration of potential maps for \(f=2f_{\rm L}\) (red) and \(f\neq 2f_{\rm L}\) (\(f=2f_{\rm L}\times 0.98\)) (blue). The potential \(U\) is normalized by \(\gamma H_{\rm appl}\). The averaged \(\cos\Theta\) is estimated from the numerical simulation of the LLG equation as \(\cos\Theta\simeq 0.004\) for \(f=2f_{\rm L}\) and \(0.581\) for \(f=2f_{\rm L}\times 0.98\). Triangles indicate the positions of local minima. The inset is an enlarged view of the potential for \(f=2f_{\rm L}\times 0.98\) near its local minimum. (b) An example of the time evolution of the microwave-voltage frequency, \(f/(2f_{\rm L})\), where \(\tau_{\rm w}=500\) ns and \(\tau_{\rm s}=200\) ns. The frequency is slightly different from \(2f_{\rm L}\) when the time \(t\) is less than the waiting time \(\tau_{\rm w}\), and saturates to \(2f_{\rm L}\) on the time scale of the sweeping time \(\tau_{\rm s}\). (c) Time evolution of \(m_{z}\) when the frequency is \(2rf_{\rm L}\). (d) Locked rate as a function of \(\tau_{\rm w}\) and \(\tau_{\rm s}\). This point is mathematically formulated as follows. The steady-state solutions of Eq. (9) are generally given as \(\Psi_{1}=\Psi_{0}\), \(\Psi_{2}=(\pi/2)-\Psi_{0}\), \(\Psi_{3}=\Psi_{1}+\pi\), and \(\Psi_{4}=\Psi_{2}+\pi\), where \(\Psi_{0}\) is \[\Psi_{0}=\frac{1}{2}\sin^{-1}\left[\frac{4(\gamma H_{\rm appl}-\pi f)}{\gamma H_{\rm Ka }\sqrt{\cos^{2}\Theta+\alpha^{2}}}\right]+\frac{\delta}{2}. \tag{11}\] When \(f=2f_{\rm L}\), \(\Psi_{0}\) reproduces the \(\delta/2\) mentioned above. The potential \(U\) has minima at \(\Psi=\Psi_{1}\) and \(\Psi_{3}\) and maxima at \(\Psi=\Psi_{2}\) and \(\Psi_{4}\). Let us define the depths of the potential near \(\Psi=\Psi_{1}\) and \(\Psi_{3}\) as \(\Delta U_{1}=U(\Psi=\Psi_{2})-U(\Psi=\Psi_{1})\) and \(\Delta U_{3}=U(\Psi=\Psi_{2})-U(\Psi=\Psi_{3})\), respectively. When \(f\neq 2f_{\rm L}\), these depths differ as (see also Appendix B) \[\Delta U_{3}-\Delta U_{1}=\frac{\pi\left(\gamma H_{\rm appl}-\pi f\right)}{2}.
\tag{12}\] Therefore, when \(f<2f_{\rm L}=\gamma H_{\rm appl}/\pi\), the potential well near \(\Psi=\Psi_{3}\) is deeper than that near \(\Psi=\Psi_{1}\). Thus, it is relatively easy for a perturbation to move \(\Psi\) from \(\Psi_{1}\) to \(\Psi_{3}\), but difficult for it to return from \(\Psi_{3}\) to \(\Psi_{1}\); hence \(\Psi\) will stay at \(\Psi_{3}\) in this case. On the other hand, when \(f>2f_{\rm L}\), \(\Psi\) will stay at \(\Psi_{1}\). Accordingly, when the frequency \(f\) of the microwave voltage differs from \(2f_{\rm L}\), the phase of the magnetization is uniquely locked. Starting from \(f\neq 2f_{\rm L}\), let us consider sweeping the frequency \(f\) to \(2f_{\rm L}\). Initially, \(\Psi\) is locked to a unique value, \(\Psi_{1}\) or \(\Psi_{3}\), depending on whether \(f>2f_{\rm L}\) or \(f<2f_{\rm L}\). When \(f\) becomes \(2f_{\rm L}\), the depths of the potential wells, \(\Delta U_{1}\) and \(\Delta U_{3}\), become deeper than those for \(f\neq 2f_{\rm L}\), as mentioned above. Therefore, \(\Psi\) hardly moves between the two wells after \(f\) becomes \(2f_{\rm L}\); rather, \(\Psi\) stays at \(\Psi_{1}\) or \(\Psi_{3}\), as determined by whether \(f\) was initially higher or lower than \(2f_{\rm L}\). Accordingly, the phase will be uniquely locked by using a sweeping frequency. Let us verify this proposal, as shown in the next subsection. ### Results of numerical simulation In this work, we use the following form of the sweeping frequency,
The transient time to a steady oscillation state is on the order of 100 ns, as implied from Fig. 1(b). When the waiting time is comparable or shorter than such a value, the frequency of the microwave voltage starts to change before the magnetization reaches a steady state. In this case, the phase is not uniquely locked by the microwave voltage having the frequency of \(2rf_{\mathrm{L}}\); therefore, even after the frequency is changed to \(2f_{\mathrm{L}}\), the phase is not uniquely locked. Regarding these results, the waiting time is a relatively important factor to lock the phase of the magnetization. ### Role of thermal activation The remaining issue is a role of thermal activation in the parametric oscillation state. As mentioned above, thermal activation is only included in the evaluation of the initial state. In reality, however, it also affects the magnetization dynamics in the presence of the microwave voltage. It is known that thermal activation induces a transition between two minima in a double-well potential [49]. As a result, the phase cannot stay stably in one of the two minima. In particular, when the potential shape is symmetric, the probability to find a system in one of the two stable states will be 50 %, i.e., the locked rate will be 0.5. We confirm it by solving the LLG equation by adding a random torque even after applying a microwave voltage, where the frequency of the voltage obeys Eq. (13). In Fig. 5(a), we show examples of the magnetization oscillation for \(T=300\) K, \(\tau_{\rm w}=200\) ns, and \(\tau_{\rm s}=100\) ns. It shows that, even at finite temperature, the phase of the magnetization oscillation has two possible values. The values of the phase between two samples is slightly different from \(\pi\), which might be due to an instantaneous phase disturbance by thermal activation and/or time after saturating the frequency of the microwave voltage is still too short to lock the phase. To evaluate the locked rate at finite temperature, we solve the LLG equation with nonzero random torque for \(0\leq t\leq\tau_{\rm w}+\tau_{\rm s}+t_{\rm r}\) (see also A for the definition of the locked rate at finite temperature). Here, \(t_{\rm r}\) is a characteristic running time for applying the microwave voltage after the frequency becomes sufficiently close to \(2f_{\rm L}\). In Fig. 5(b), we summarize the locked rate as a function of the running time. It shows that the locked rate, shown by black open circles, is Figure 5: (a) Examples of oscillation of \(m_{z}\), where thermal activation is included in the LLG equation even after the microwave voltage is applied. (b) Locked rates as a function of running time \(t_{\rm r}\) for various oscillating magnetic anisotropy field \(H_{\rm Ka}\) and volume \(V\). The thickness \(d\) of the ferromagnet is fixed to 1.1 nm, while the radius \(r\) is changed (\(V=\pi r^{2}d\)). The values of \(H_{\rm Ka}\) and \(r\) are 100 Oe and 50 nm for black open circles, 500 Oe and 50 nm for blue triangles, 100 Oe and 400 nm for red squares, and 500 Oe and 400 nm for black solid circles. approximately 0.5 for a wide range of the running time (see also Appendix A). Therefore, even if we use the sweeping frequency, the phase will not be uniquely locked when thermal activation exists. A solution to this issue might be to use a large amplitude \(H_{\rm Ka}\) for the oscillating perpendicular magnetic anisotropy. As implied from Eq. (10), \(H_{\rm Ka}\) determines the depth of the minima of the potential. 
In fact, the depth of the potential well introduced in Sec. 4.1 for \(f=2f_{\rm L}\) is \(\Delta U_{1}=\Delta U_{2}=\gamma H_{\rm Ka}\cos^{2}\Theta/(4\sqrt{\alpha^{2}+ \cos^{2}\Theta})\) (see also Appendix B). Therefore, a transition between minima may possibly be suppressed when a large \(H_{\rm Ka}\) is induced by the VCMA effect. Note that and an enhancement of the VCMA efficiency is rapidly growing, and now the controllable range of the perpendicular magnetic anisotropy field is on the order of 1.0 kOe [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. Therefore, while the value \(H_{\rm Ka}=100\) Oe used here is smaller than that used in an experiment [40] (\(H_{\rm Ka}\) in the experiment of Ref. [40] was about 503 Oe), we examine to evaluate the locked rate for a relatively large value, \(H_{\rm Ka}=500\) Oe. We found that the locked rate becomes larger than 0.6, as shown by blue triangles in Fig. 5(b). This is an approximately 10 % improvement from the results obtained for \(H_{\rm Ka}=100\) Oe [shown by black open circles in Fig. 5(b)]. It also indicates that the locked rate is unchanged for a long running time limit. It indicates that a large \(H_{\rm Ka}\) contributes to fix the phase after the frequency of the microwave voltage becomes \(2f_{\rm L}\). This is consistent with the above estimation of the depth for the potential well, where it is proportional to \(H_{\rm Ka}\) when \(f=2f_{\rm L}\). On the other hand, the locked rate depends on the running time for its short limit. This is because the frequency does not perfectly saturate to \(2f_{\rm L}\) yet, and thus, the depth of the potential is relatively shallow, as discussed in Sec. 4.1. Another solution to enhance the locked rate is to suppress the effect of thermal activation by using a large-volume sample. For example, the volume \(V\) of a device in Ref. [19] is relatively large due to a large radius of 400 nm. The red squares in Fig. 5(b) shows the locked rate for a device with the radius of 400 nm. The value of \(H_{\rm Ka}=100\) Oe is the same with that used to obtain the black open circles in the same figure. The locked rate is found to become close to 0.9. In addition, by combining two approaches, i.e., large oscillating magnetic anisotropy field \(H_{\rm Ka}\) and volume, the locked rate reaches 1, as shown by black solid circles in Fig. 5(b). The results indicate that there are various approaches to enhance the locked rate. ## 5 Conclusion In conclusion, we studied the parametric magnetization oscillation induced by the microwave-voltage driven VCMA effect. As reported in the previous work [40], the magnetization oscillates with the Larmor frequency \(f_{\rm L}\) when the frequency \(f\) of the microwave voltage is \(2f_{\rm L}\). Since the frequency of the microwave voltage is twice the value of the magnetization oscillation, it does not uniquely fix the phase of the magnetization; instead, there are two possible phases locked by the microwave voltage. The phenomenon was confirmed by the numerical simulation of the LLG equation with various initial conditions. To lock the phase uniquely, we examined two approaches. The first one is to enhance the perpendicular magnetic anisotropy before applying the microwave voltage. We found that, although this method contributes to suppress the distribution of the initial state, it is not effective enough to lock the frequency uniquely. 
The locked rate, which quantifies the uniqueness of the phase in the parametric oscillation state, remains widely distributed even for relatively large perpendicular magnetic anisotropy fields. The other approach is to use a sweeping frequency, where the frequency of the microwave voltage initially differs slightly from \(2f_{\rm L}\). Using this method, the phase is locked uniquely for \(f\neq 2f_{\rm L}\) due to the asymmetric shape of the potential. Even after the frequency \(f\) is changed to \(2f_{\rm L}\), the phase remains locked. The usefulness of this method for phase locking was verified by numerical simulation of the LLG equation and quantification of the locked rate. Thermal activation, however, causes random transitions between the two phases. This issue can be mitigated by using a large oscillating magnetic anisotropy field and/or a large-volume sample. ## Data availability Data will be made available on request. ## Acknowledgement The author is grateful to Takayuki Nozaki for discussion. The work is supported by JSPS KAKENHI Grant Number 20H05655. ## Appendix A Details of numerical simulation In this work, the LLG equation is solved by the \(4^{\rm th}\)-order Runge-Kutta method with a time increment \(\Delta t=1\) ps. The locked rates in Secs. 3 and 4.2 are estimated as follows. Let us denote the solution of the LLG equation for the \(\ell\)-th initial condition (\(\ell=1,2,\cdots,100\)) as \({\bf m}_{\ell}(t)\). The solution \({\bf m}_{1}\) for the first sample is used as a reference. We solve the LLG equation for \(0\leq t\leq t_{\rm max}\) and evaluate the difference between \({\bf m}_{\ell}\) (\(\ell=2,3,\cdots,100\)) and the reference sample, \(\delta_{\ell}=|{\bf m}_{\ell}(t_{\rm max})-{\bf m}_{1}(t_{\rm max})|\), where \(t_{\rm max}\) is 5.0 \(\mu\)s in this work. Since there are only two possible values of the phase, we can conclude that the \(\ell\)-th sample has the same phase as the reference when \(\delta_{\ell}\) is sufficiently small. On the other hand, if \(\delta_{\ell}\) is large, the two samples have different phases. Although we use the condition \(\delta_{\ell}\leq\epsilon\) with \(\epsilon=10^{-5}\) to judge two samples as having the same phase, the appropriate value of \(\epsilon\) depends on that of \(t_{\rm max}\). This is because \(\delta_{\ell}\) becomes small as \(t_{\rm max}\) increases when two samples have the same phase; therefore, when \(t_{\rm max}\) is shorter than 5.0 \(\mu\)s, for example, \(\epsilon=10^{-5}\) might be too strict a criterion for automatically judging the phases as identical. Because the oscillation amplitude of \(m_{z}\) in the parametric oscillation state is close to 1, as shown in Fig. 2(b), it might be sufficient to use \(\epsilon=1\) for the judgement of the same phase. Let us denote the number of samples satisfying \(\delta_{\ell}\leq\epsilon\) as \(n\). If \(n\geq 50\), more than half of the 100 samples have the same phase as the reference sample. In this case, the locked rate is defined as \((n+1)/100\), where the \(+1\) counts the reference sample. On the other hand, if \(n<50\), \(100-(n+1)\) samples have the phase opposite to that of the reference sample. In this case, the locked rate is defined as \(1-[(n+1)/100]\). The locked rate in Sec. 4.3 is evaluated similarly. In this case, one might consider that \(t_{\rm max}\) should be \(\tau_{\rm w}+\tau_{\rm s}+t_{\rm r}\).
It should, however, be noted that the random torque might result in a large instantaneous \(\delta_{\ell}\) even if the \(\ell\)th sample has the same phase as the reference sample. In such a case, the judgement of whether the sample has the same phase as the reference sample might become ambiguous. Therefore, after solving the LLG equation with the random torque for \(0\leq t\leq\tau_{\rm w}+\tau_{\rm s}+t_{\rm r}\), we continue to solve it for a further 2.0 \(\mu\)s with the random torque removed. Then, we evaluate \(\delta_{\ell}\) and judge whether the \(\ell\)th sample has the same or the opposite phase with respect to the reference sample. In this procedure, we assume that the phase at \(t=\tau_{\rm w}+\tau_{\rm s}+t_{\rm r}\) is unchanged after thermal activation is removed. ## Appendix B Depth of potential To derive Eq. (12), we estimate the values of the potential at the four steady-state points as \[U(\Psi=\Psi_{1})= -\left(\gamma H_{\rm appl}-\pi f\right)\Psi_{0}-\frac{\gamma H_{\rm Ka}}{8}\sqrt{\alpha^{2}+\cos^{2}\Theta}\sqrt{1-\frac{16(\gamma H_{\rm appl}-\pi f)^{2}}{\gamma^{2}H_{\rm Ka}^{2}(\alpha^{2}+\cos^{2}\Theta)}}, \tag{12}\] \[U(\Psi=\Psi_{2})= -\frac{\left(\gamma H_{\rm appl}-\pi f\right)}{2}\left\{\cos^{-1}\left[\frac{4(\gamma H_{\rm appl}-\pi f)}{\gamma H_{\rm Ka}\sqrt{\alpha^{2}+\cos^{2}\Theta}}\right]+\cos^{-1}\left[\frac{\alpha}{\sqrt{\alpha^{2}+\cos^{2}\Theta}}\right]\right\}+\frac{\gamma H_{\rm Ka}}{8}\sqrt{\alpha^{2}+\cos^{2}\Theta}\cos\left\{\sin^{-1}\left[\frac{4(\gamma H_{\rm appl}-\pi f)}{\gamma H_{\rm Ka}\sqrt{\alpha^{2}+\cos^{2}\Theta}}\right]+\sin^{-1}\left[\frac{\alpha}{\sqrt{\alpha^{2}+\cos^{2}\Theta}}\right]\right\}, \tag{13}\] \[U(\Psi=\Psi_{3})= U(\Psi=\Psi_{1})-\left(\gamma H_{\rm appl}-\pi f\right)\pi, \tag{14}\] \[U(\Psi=\Psi_{4})= U(\Psi=\Psi_{2})-\left(\gamma H_{\rm appl}-\pi f\right)\pi. \tag{15}\] From these values, \(\Delta U_{1}=U(\Psi=\Psi_{2})-U(\Psi=\Psi_{1})\) and \(\Delta U_{3}=U(\Psi=\Psi_{2})-U(\Psi=\Psi_{3})\), as well as \(\Delta U_{3}-\Delta U_{1}\), are evaluated. Since the mathematical expressions of \(\Delta U_{1}\) and \(\Delta U_{3}\) are complicated, we estimate the depths numerically. Using the values of the parameters in the main text and in the figure caption of Fig. 4(a), we find that \(\Delta U_{1}/(\gamma H_{\rm appl})\simeq 8.7\times 10^{-5}\simeq 10^{-4}\) for \(f=2f_{\rm L}\) and \(\Delta U_{1}/(\gamma H_{\rm appl})\simeq 1.5\times 10^{-5}\) for \(f=2f_{\rm L}\times 0.98\).
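As an illustration of the locked-rate estimation procedure described in Appendix A, the following minimal Python sketch classifies an ensemble of final magnetization vectors against the reference sample; the random two-phase ensemble at the bottom is a hypothetical stand-in for actual solutions of the LLG equation.

```python
import numpy as np

def locked_rate(m_final, eps=1e-5):
    """Locked rate from final magnetization vectors m_final (shape N x 3).

    The first sample serves as the reference; sample l is judged to have
    the same phase as the reference when delta_l <= eps (Appendix A).
    """
    N = m_final.shape[0]
    delta = np.linalg.norm(m_final - m_final[0], axis=1)   # delta_l
    n = int(np.sum(delta[1:] <= eps))                      # matching samples
    if n >= N // 2:
        return (n + 1) / N       # majority shares the reference phase
    return 1.0 - (n + 1) / N     # majority has the opposite phase

# Hypothetical stand-in for LLG solutions: two phases, populated 80/20.
rng = np.random.default_rng(0)
same = rng.random(100) < 0.8
m_final = np.where(same[:, None], [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0])
print(locked_rate(m_final))  # approximately 0.8
```

Note that the result is independent of whether the reference sample happens to lie in the majority or minority phase, since the two cases are handled symmetrically.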
2304.06159
Probability-Based Estimation
We develop a theory of estimation when in addition to a sample of $n$ observed outcomes the underlying probabilities of the observed outcomes are known, as is typically the case in the context of numerical simulation modeling, e.g. in epidemiology. For this enriched information framework, we design unbiased and consistent ``probability-based'' estimators whose variance vanishes exponentially fast as $n\to\infty$, as compared to the power-law decline of classical estimators' variance.
Jobst Heitzig
2023-04-12T21:06:27Z
http://arxiv.org/abs/2304.06159v1
# Probability-Based Estimation ###### Abstract We develop a theory of estimation when in addition to a sample of \(n\) observed outcomes the underlying probabilities of the observed outcomes are known, as is typically the case in the context of numerical simulation modeling, e.g. in epidemiology. For this enriched information framework, we design unbiased and consistent "probability-based" estimators whose variance vanishes exponentially fast as \(n\to\infty\), as compared to the power-law decline of classical estimators' variance. ## Problem statement There is a discrete probability space with finite outcome set \(\Omega\) and probability weight function \(p:\Omega\to[0,1],\omega\mapsto p(\omega)\), \(\sum_{\omega\in\Omega}p(\omega)=1\). There is also an event \(A\subseteq\Omega\) the probability of which, \(\pi=\sum_{\omega\in A}p(\omega)\), we want to estimate. We don't know \(p\) (and hence \(\pi\)), but we do know \(\Omega\) and \(A\); in particular we know the number \(m=|A|\) of outcomes in \(A\). We have access to a sampler which draws iid samples \(\omega_{1},\ldots,\omega_{n}\) from \((\Omega,p)\) and which in addition (!) gives us the corresponding probabilities \(x_{1}=p(\omega_{1}),\ldots,x_{n}=p(\omega_{n})\). _How to "best" make use of this additional information? E.g., what consistent (and maybe also unbiased) estimator of \(\pi\) has the smallest standard error given this information?_ ## Use case: costly simulations In an important class of use cases in which this occurs, each \(\omega\) is a possible trajectory of some stochastic dynamical system that we can simulate, and the simulator allows us to compute \(p(\omega)\) iteratively by multiplying up the probabilities of the changes performed in individual time steps. \(A\) encodes some macroscopic event that we are interested in, such as: the system tips, an epidemic gets detected, the system converges back to a certain attractor, etc. ## Application: Epidemic spreading on a network Assume a network (graph) \(G=(V,E)\) and an SI infection process where initially all nodes are susceptible, at discrete time \(t\geqslant 0\) node \(v\in V\) has a basic probability \(p_{1}(v,t)\) of getting infected, and, independently, for each edge \(e=\{v,v^{\prime}\}\in E\) with infected \(v\), node \(v^{\prime}\) has a transmission probability \(p_{2}(v,v^{\prime},t)\) of getting infected (e.g., [1]). Finally, there is a sequence \(((s_{j},\tau_{j}))_{j}\) with sentinel nodes \(s_{j}\in V\) and testing time points \(\tau_{j}\leqslant T\). The event \(A\) is the fact that an outbreak has been detected by one of the latter tests, i.e., for at least one \(j\), \(s_{j}\) is infected at time \(\tau_{j}\). If the network is complex, there is no simple analytical solution for \(P(A)\), hence we assume the SI process has been simulated \(n\) times from \(t=1\) to \(t=T\) and \(\omega_{i}=(\omega_{i,v,t})_{v\in V,\,t\in\{1,\ldots,T\}}\) is the binary matrix encoding whether each node \(v\) was infected at each time \(t\). As the simulator can easily track the probability \(x_{i}\) of each realized trajectory \(\omega_{i}\), this information can be used in estimating \(\pi\). ## Toy example. As a simple analytically tractable example assume \(G\) is a chain of \(L+1\) nodes \(v=0\ldots L\), \(p_{1}(0,0)=p_{1}>0\), \(p_{1}(v,t)=0\) for all other \(v,t\), \(p_{2}(v,v^{\prime},t)\equiv p_{2}>0\), and there is only one test, at \(s_{1}=L\) at time \(\tau_{1}=T\). Then \(A\) is the event that node \(L\) is infected at time \(T\).
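To illustrate how a simulator can return each trajectory together with its probability, here is a minimal Python sketch of the chain toy model just described, in which \(p(\omega)\) is accumulated by multiplying the probability of every realized step; all function and variable names are illustrative.

```python
import random

def sample_chain_trajectory(L, T, p1, p2, rng=None):
    """Draw one trajectory of the SI chain toy model; return (omega, p).

    omega is the tuple of first-infection times of nodes 1..L reached by
    time T; p is the exact probability of this realized trajectory.
    """
    rng = rng or random.Random()
    if rng.random() >= p1:          # node 0 never gets seeded at t = 0
        return None, 1.0 - p1
    p, times, frontier = p1, [], 0  # frontier = highest infected node
    for t in range(1, T + 1):
        if frontier == L:
            break                   # no stochastic events remain
        if rng.random() < p2:       # edge (frontier, frontier+1) transmits
            p *= p2
            frontier += 1
            times.append(t)
        else:
            p *= 1.0 - p2
    return tuple(times), p

# The event A (node L infected by time T) holds iff omega is not None
# and len(omega) == L.
omega, x = sample_chain_trajectory(L=3, T=10, p1=0.9, p2=0.3)
print(omega, x)
```

For a trajectory that fully infects the chain at time \(t_{L}\), the accumulated probability is \(p_{1}q_{2}^{t_{L}-L}p_{2}^{L}\), matching the closed form derived next.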
The only \(\omega\) that have positive probability are those where for each infected node \(v\) at \(t\), all \(v^{\prime}\leqslant v\) are infected at \(t\), \(v\) remains infected at all \(t^{\prime}\geqslant t\), and either \(v\) was already infected at \(t-1\), or \(v+1\) is not yet infected at \(t\). Let us encode such an \(\omega\) by the tuple of time points \(t_{1}<\cdots<t_{L}\) at which nodes \(1\ldots L\) get first infected, where \(t_{v}\in[v,\infty)\). With \(q_{2}=1-p_{2}\), the probability of this \(\omega\) is \[p(\omega) =p(t_{1},\ldots,t_{L}) \tag{1}\] \[=p(t_{1},\ldots,t_{L-1})q_{2}^{t_{L}-t_{L-1}-1}p_{2} \tag{2}\] \[=p_{1}q_{2}^{t_{L}-L}p_{2}^{L}. \tag{3}\] The event \(A\) corresponds to \(t_{L}\leqslant T\) and has thus probability \[\pi =\sum_{1\leqslant t_{1}<\cdots<t_{L}\leqslant T}p_{1}q_{2}^{t_{L}-L}p_{2}^{L}=p_{1}(p_{2}/q_{2})^{L}\sum_{t_{L}=L}^{T}\binom{t_{L}-1}{L-1}q_{2}^{t_{L}} \tag{4}\] \[=p_{1}\bigg{(}1-p_{2}^{L}q_{2}^{T+1-L}\binom{T}{L-1}\,{}_{2}F_{1}(1,T+1;T+2-L;q_{2})\bigg{)}, \tag{5}\] where \({}_{2}F_{1}\) is the hypergeometric function. As we can see, this is already a rather complicated formula even for this simplest case of a network and just one test. Later we will also need the fact that the opposite event \(\neg A\) has probability \[1-\pi =1-p_{1}+\sum_{1\leqslant t_{1}<\cdots<t_{L},\ t_{L}>T}p_{1}q_{2}^{t_{L}-L}p_{2}^{L} \tag{6}\] \[=1-p_{1}+p_{1}(p_{2}/q_{2})^{L}\sum_{t_{L}=T+1}^{\infty}\binom{t_{L}-1}{L-1}q_{2}^{t_{L}}. \tag{7}\] ## 2 Benchmark: relative frequency As is well-known, without knowledge of the probabilities \(x_{i}\), the most straightforward estimator \(\hat{\pi}_{0}\) of \(\pi\) is the relative frequency \[\hat{\pi}_{0}=|\{i:\omega_{i}\in A\}|/n. \tag{8}\] That estimator is unbiased, consistent, and has variance \[\nu_{0}=\pi(1-\pi)/n, \tag{9}\] which can be estimated by the plug-in estimator \[\hat{\nu}_{0}=\hat{\pi}_{0}(1-\hat{\pi}_{0})/n. \tag{10}\] Since the estimator is unbiased, its standard error is simply \(\sqrt{\nu_{0}}=\sqrt{\pi(1-\pi)/n}\), a very well-known fact. Any estimator using also the additional information given by the \(x_{i}\) must be compared against this benchmark. An obvious improvement is to use \[\hat{\pi}_{0,\max}=\max(\hat{\pi}_{0},\sum_{\omega\in O\cap A}p(\omega)), \tag{11}\] where \(O=\{\omega_{1},\ldots,\omega_{n}\}\) is the set of observed outcomes. This clearly has a smaller standard error (if only negligibly smaller), but it is _not unbiased_ and surely not optimal in any sense yet. ## 3 Idea 1: use a weighted sum of the observed probabilities Let \[q(\omega) =1-p(\omega), \tag{12}\] \[O =\{\omega_{1},\ldots,\omega_{n}\}, \tag{13}\] the latter being the set of observed outcomes (counting each distinct outcome only once!), and note that we know \(p(\omega)\) for each \(\omega\in O\) (it equals one of the \(x_{i}\)). Then the following is a consistent and unbiased estimator of \(\pi\): \[\hat{\pi}_{1}=\sum_{\omega\in O\cap A}\frac{p(\omega)}{1-q(\omega)^{n}}. \tag{14}\] It is consistent because for \(n\to\infty\), \(O\to\Omega\) almost surely, and \([1-p(\omega)]^{n}\to 0\) for all \(\omega\) with \(p(\omega)>0\).
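Concretely, both \(\hat{\pi}_{0}\) and \(\hat{\pi}_{1}\) are easy to compute from the sample of outcome/probability pairs; the following Python sketch (with illustrative names) makes explicit that \(\hat{\pi}_{1}\) sums over each _distinct_ observed outcome in \(A\) only once.

```python
def pi_hat_0(sample, in_A):
    """Relative frequency; sample is a list of (omega, x) pairs."""
    return sum(in_A(w) for w, _ in sample) / len(sample)

def pi_hat_1(sample, in_A):
    """Probability-based estimator: sum p(w)/(1-(1-p(w))^n) over O ∩ A."""
    n = len(sample)
    O = dict(sample)  # deduplicated observed outcomes with their p(w) = x
    return sum(x / (1.0 - (1.0 - x) ** n)
               for w, x in O.items() if in_A(w))
```

Here in_A is a predicate for membership in \(A\), and outcomes must be hashable so that they can be deduplicated.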
It is unbiased because \[\mathbb{E}\hat{\pi}_{1} =\sum_{\omega\in A}\frac{p(\omega)}{1-q(\omega)^{n}}\mathbb{E}\mathbb{1}_{O}(\omega) \tag{15}\] \[=\sum_{\omega\in A}\frac{p(\omega)}{1-q(\omega)^{n}}(1-q(\omega)^{n}) \tag{16}\] \[=\sum_{\omega\in A}p(\omega)=\pi, \tag{17}\] where \(\mathbb{1}_{O}\) is the indicator function of \(O\) and \(1-q(\omega)^{n}\) is the probability that \(\omega\in O\). What is its standard error? We have \[\mathbb{E}\hat{\pi}_{1}^{2} =\sum_{\omega,\omega^{\prime}\in A}\frac{p(\omega)}{1-q(\omega)^{n}}\frac{p(\omega^{\prime})}{1-q(\omega^{\prime})^{n}}\mathbb{E}(\mathbb{1}_{\omega\in O}\mathbb{1}_{\omega^{\prime}\in O}) \tag{18}\] \[=\sum_{\omega,\omega^{\prime}\in A,\ \omega\neq\omega^{\prime}}\frac{p(\omega)}{1-q(\omega)^{n}}\frac{p(\omega^{\prime})}{1-q(\omega^{\prime})^{n}}\times\] \[\qquad\times(1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n})\] \[\qquad+\sum_{\omega\in A}\frac{p(\omega)^{2}}{(1-q(\omega)^{n})^{2}}(1-q(\omega)^{n}) \tag{19}\] \[=\sum_{\omega,\omega^{\prime}\in A}\frac{p(\omega)}{1-q(\omega)^{n}}\frac{p(\omega^{\prime})}{1-q(\omega^{\prime})^{n}}\times\] \[\qquad\times(1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n})\] \[\qquad+\sum_{\omega\in A}\frac{p(\omega)^{2}}{(1-q(\omega)^{n})^{2}}(q(\omega)^{n}-[1-2p(\omega)]^{n}). \tag{20}\] (The final bracket in the second line equals \(1-P(\omega\notin O)-P(\omega^{\prime}\notin O)+P(\omega,\omega^{\prime}\notin O)\).) The exact variance of \(\hat{\pi}_{1}\) is then \[\nu_{1} =\mathbb{E}\hat{\pi}_{1}^{2}-\pi^{2} \tag{21}\] \[=\sum_{\omega,\omega^{\prime}\in A}p(\omega)p(\omega^{\prime})\times\] \[\qquad\times\bigg{[}\frac{1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n}}{(1-q(\omega)^{n})(1-q(\omega^{\prime})^{n})}-1\bigg{]}\] \[\qquad+\sum_{\omega\in A}p(\omega)^{2}\frac{q(\omega)^{n}-[1-2p(\omega)]^{n}}{(1-q(\omega)^{n})^{2}}. \tag{22}\] Toy example. In our toy example from the introduction, a numerical estimation of \(\nu_{0}\) and \(\nu_{1}\) shows that for \(L=10\), \(T=20\), already at \(n=10\) we have \(\nu_{1}<\nu_{0}\), improving fast as \(n\) grows. Asymptotic variance. For large \(n\), we have \[\nu_{1} \approx\sum_{\omega,\omega^{\prime}\in A}p(\omega)p(\omega^{\prime})[1-p(\omega)-p(\omega^{\prime})]^{n}+\sum_{\omega\in A}p(\omega)^{2}q(\omega)^{n} \tag{23}\] \[\leqslant m^{2}\tilde{p}^{2}(1-2\underline{p})^{n}+m\tilde{p}^{2}(1-\underline{p})^{n}\sim m\tilde{p}^{2}(1-\underline{p})^{n}, \tag{24}\] where \(\underline{p}=\min_{\omega\in A}p(\omega)\) and \(\tilde{p}=\max_{\omega\in A}p(\omega)\). This bound declines exponentially fast with \(n\) rather than just as an \(O(1/n)\) like for the relative frequency! In other words, asymptotically for \(n\to\infty\), \(\hat{\pi}_{1}\) will vastly outperform \(\hat{\pi}_{0}\), but we don't know when that asymptotics kicks in. For large \(\Omega\), it seems likely that a very large \(n\) will be needed for \(\hat{\pi}_{1}\) to outperform \(\hat{\pi}_{0}\). Dependence of variance on distribution. If the probability mass within \(A\) is distributed equally among \(m\) different \(\omega\), then \[\nu_{1} =\pi^{2}\bigg{[}\frac{1-2[1-\pi/m]^{n}+[1-2\pi/m]^{n}}{(1-[1-\pi/m]^{n})^{2}}-1\bigg{]}+\pi^{2}\frac{[1-\pi/m]^{n}-[1-2\pi/m]^{n}}{m(1-[1-\pi/m]^{n})^{2}} \tag{25}\] \[=\pi^{2}\frac{(m-1)[1-2\pi/m]^{n}-m[1-\pi/m]^{2n}+[1-\pi/m]^{n}}{m(1-[1-\pi/m]^{n})^{2}}.
\tag{26}\] Variance estimation. The quantity \[\sum_{\omega\in A}p(\omega)^{2}q(\omega)^{n} \tag{27}\] occurring in the above approximation can be estimated without bias by \[\sum_{\omega\in A\cap O}\frac{p(\omega)^{2}q(\omega)^{n}}{1-q(\omega)^{n}}. \tag{28}\] Similarly, \(\nu_{1}\) can be estimated without bias by \[\hat{\nu}_{1} =\sum_{\omega,\omega^{\prime}\in A\cap O}p(\omega)p(\omega^{\prime})\bigg{[}\frac{1}{(1-q(\omega)^{n})(1-q(\omega^{\prime})^{n})}\] \[\quad\quad-\frac{1}{1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n}}\bigg{]}\] \[\quad+\sum_{\omega\in A\cap O}p(\omega)^{2}\frac{q(\omega)^{n}-[1-2p(\omega)]^{n}}{(1-q(\omega)^{n})^{3}}. \tag{29}\] Dual and combined estimators. While \(\hat{\pi}_{1}\) estimates \(\pi\) based on \(O\cap A\), one can of course also estimate \(1-\pi\) based on \(O\setminus A\) in the same fashion. This gives another unbiased estimator of \(\pi\): \[\hat{\pi}_{1}^{\prime}=1-\sum_{\omega\in O\setminus A}\frac{p(\omega)}{1-q(\omega)^{n}}. \tag{30}\] Now it seems that a suitable (convex) combination of \(\hat{\pi}_{0}\), \(\hat{\pi}_{1}\) and \(\hat{\pi}_{1}^{\prime}\) should still be unbiased and have even smaller variance. _But which combination is optimal?_ If the three estimators were independent, the following convex combination would have minimal variance: \((\hat{\pi}_{0}/\nu_{0}+\hat{\pi}_{1}/\nu_{1}+\hat{\pi}_{1}^{\prime}/\nu_{1}^{\prime})/(1/\nu_{0}+1/\nu_{1}+1/\nu_{1}^{\prime})\). Since we don't know \(\nu_{1},\nu_{1}^{\prime}\), we can only use their estimates, leading to the estimator \[\hat{\pi}_{1}^{\prime\prime}=\frac{\hat{\pi}_{0}/\hat{\nu}_{0}+\hat{\pi}_{1}/\hat{\nu}_{1}+\hat{\pi}_{1}^{\prime}/\hat{\nu}_{1}^{\prime}}{1/\hat{\nu}_{0}+1/\hat{\nu}_{1}+1/\hat{\nu}_{1}^{\prime}} \tag{31}\] (where \(\hat{\nu}_{1}^{\prime}\) is like \(\hat{\nu}_{1}\) with \(O\setminus A\) in place of \(A\cap O\)). ### Generalization to mean estimation If the goal is to estimate the expected value \(\mu=\mathbb{E}X\) of an observable random variable \(X:\Omega\to\mathbb{R}\) rather than the probability of an event, one can use \[\hat{\mu}_{1}=\xi+\sum_{\omega\in O}\frac{p(\omega)(X(\omega)-\xi)}{1-q(\omega)^{n}} \tag{32}\] for an arbitrary reference point \(\xi\), which still gives an unbiased estimate.
_What choice of \(\xi\) minimizes the variance of \(\hat{\mu}_{1}\)?_ The variance is \[\nu_{1}^{\prime} =\mathbb{E}\hat{\mu}_{1}^{2}-\mu^{2}\] \[=\xi^{2}+2\xi(\mu-\xi)-\mu^{2}\] \[\quad+\sum_{\omega,\omega^{\prime}}p(\omega)X(\omega)p(\omega^{\prime})X(\omega^{\prime})\times\] \[\quad\quad\times\frac{1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n}}{(1-q(\omega)^{n})(1-q(\omega^{\prime})^{n})}\] \[\quad-2\xi\sum_{\omega,\omega^{\prime}}p(\omega)X(\omega)p(\omega^{\prime})\times\] \[\quad\quad\times\frac{1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n}}{(1-q(\omega)^{n})(1-q(\omega^{\prime})^{n})}\] \[\quad+\xi^{2}\sum_{\omega,\omega^{\prime}}p(\omega)p(\omega^{\prime})\frac{1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n}}{(1-q(\omega)^{n})(1-q(\omega^{\prime})^{n})}\] \[\quad+\sum_{\omega}p(\omega)^{2}X(\omega)^{2}\frac{q(\omega)^{n}-[1-2p(\omega)]^{n}}{(1-q(\omega)^{n})^{2}}\] \[\quad-2\xi\sum_{\omega}p(\omega)^{2}X(\omega)\frac{q(\omega)^{n}-[1-2p(\omega)]^{n}}{(1-q(\omega)^{n})^{2}}\] \[\quad+\xi^{2}\sum_{\omega}p(\omega)^{2}\frac{q(\omega)^{n}-[1-2p(\omega)]^{n}}{(1-q(\omega)^{n})^{2}} \tag{33}\] and its derivative w.r.t. \(\xi\) is \[\partial_{\xi}\nu_{1}^{\prime} =2\mu-2\xi\] \[\quad-2\sum_{\omega,\omega^{\prime}}p(\omega)X(\omega)p(\omega^{\prime})\times\] \[\quad\quad\times\frac{1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n}}{(1-q(\omega)^{n})(1-q(\omega^{\prime})^{n})}\] \[\quad+2\xi\sum_{\omega,\omega^{\prime}}p(\omega)p(\omega^{\prime})\times\] \[\quad\quad\times\frac{1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n}}{(1-q(\omega)^{n})(1-q(\omega^{\prime})^{n})}\] \[\quad-2\sum_{\omega}p(\omega)^{2}X(\omega)\frac{q(\omega)^{n}-[1-2p(\omega)]^{n}}{(1-q(\omega)^{n})^{2}}\] \[\quad+2\xi\sum_{\omega}p(\omega)^{2}\frac{q(\omega)^{n}-[1-2p(\omega)]^{n}}{(1-q(\omega)^{n})^{2}}, \tag{34}\] which is zero if \[\xi= \Bigg{[}\mu-\sum_{\omega,\omega^{\prime}}p(\omega)X(\omega)p(\omega^{\prime})\times\] \[\quad\quad\times\frac{1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n}}{(1-q(\omega)^{n})(1-q(\omega^{\prime})^{n})}\] \[\quad-\sum_{\omega}p(\omega)^{2}X(\omega)\frac{q(\omega)^{n}-[1-2p(\omega)]^{n}}{(1-q(\omega)^{n})^{2}}\Bigg{]}\Bigg{/}\] \[\quad\Bigg{[}1-\sum_{\omega,\omega^{\prime}}p(\omega)p(\omega^{\prime})\times\] \[\quad\quad\times\frac{1-q(\omega)^{n}-q(\omega^{\prime})^{n}+[1-p(\omega)-p(\omega^{\prime})]^{n}}{(1-q(\omega)^{n})(1-q(\omega^{\prime})^{n})}\] \[\quad\quad-\sum_{\omega}p(\omega)^{2}\frac{q(\omega)^{n}-[1-2p(\omega)]^{n}}{(1-q(\omega)^{n})^{2}}\Bigg{]}. \tag{35}\] For large \(n\), this is approximately \(\mu\). This implies that a good choice of \(\xi\) is an independent estimate of \(\mu\) such as the sample mean \(\xi=\sum_{i}X(\omega_{i})/n\). Getting back to the original case of probability estimation, where \(X\) is the indicator function \(1_{A}\), we now see that a further improvement of \(\hat{\pi}_{1}\) is \[\hat{\pi}_{1}^{\prime} =\hat{\pi}_{0}+\sum_{\omega\in O}\frac{p(\omega)(1_{A}(\omega)-\hat{\pi}_{0})}{1-q(\omega)^{n}} \tag{36}\] \[=\hat{\pi}_{1}+\left[1-\sum_{\omega\in O}\frac{p(\omega)}{1-q(\omega)^{n}}\right]\hat{\pi}_{0}.
\tag{37}\] ## 4 Idea 2: estimate the mean outcome probability We note that \(\pi=m\xi\) where \(\xi=\sum_{\omega\in A}p(\omega)/m\) is the average probability of the outcomes in \(A\). Also, \(\xi\) can be interpreted as the expected value of \(p(\omega)\) when an \(\omega\in A\) is drawn uniformly (!) at random (rather than with relative probabilities \(p(\omega)\)). Each \(x_{i}\) with \(\omega_{i}\in A\) can be seen as an estimate of \(\xi\). W.l.o.g. let us order the sample so that \(\omega_{1},\ldots,\omega_{k}\in A\) and \(\omega_{k+1},\ldots,\omega_{n}\notin A\). Then also each weighted average \(\sum_{i=1}^{k}w_{i}x_{i}\) of the \(k\) values \(x_{1},\ldots,x_{k}\), with \(\sum_{i}w_{i}=1\), is an estimate of \(\xi\). To make such an estimate unbiased, we need to choose the averaging weights \(w_{i}\) taking account of the fact that the \(\omega_{i}\) were _not_ sampled uniformly from \(A\) but using the distribution given by \(p\). The correct averaging weight \(w_{i}\) for \(x_{i}\) must thus be proportional to the ratio between the uniform probability \(1/m\) and the actually used probability \(p(\omega_{i})/\pi\). In other words, we need \(w_{i}\propto(1/m)/(p(\omega_{i})/\pi)\propto 1/x_{i}\). This results in the estimators \[\hat{\xi} =\frac{\sum_{i=1}^{k}\frac{1}{x_{i}}x_{i}}{\sum_{i=1}^{k}\frac{1}{x_{i}}}=\frac{k}{\sum_{i=1}^{k}\frac{1}{x_{i}}}, \tag{38}\] \[\hat{\pi}_{2} =m\hat{\xi}=\frac{mk}{\sum_{i=1}^{k}\frac{1}{x_{i}}}. \tag{39}\] In other words, rather than using the arithmetic mean of the \(x_{i}\) to estimate \(\xi\), we use the harmonic mean. Indeed, the expected value of \(\hat{\xi}\) is \[\mathbb{E}\hat{\xi}=\sum_{\omega_{1},\ldots,\omega_{n}}\left(\prod_{i}p(\omega_{i})\right)\frac{\sum_{i}1_{A}(\omega_{i})}{\sum_{i}\frac{1_{A}(\omega_{i})}{p(\omega_{i})}}. \tag{40}\] Variance. Because \(\hat{\xi}\) is the harmonic mean of the \(x_{i}\), which are an iid sample from the distribution given by \(p^{\prime}(\omega)=p(\omega)/\pi\) on \(A\), it is unbiased and its variance \(u\) is \[u=\frac{\theta^{4}\sigma^{2}}{k}=\frac{\pi^{2}}{m^{4}k}\left[\sum_{\omega\in A}\frac{1}{p^{\prime}(\omega)}-m^{2}\right], \tag{41}\] where \[\theta =1/E_{p^{\prime}}\left[\frac{1}{X}\right]=1/\sum_{\omega\in A}p^{\prime}(\omega)\frac{1}{X(\omega)}=1/\sum_{\omega\in A}\frac{1}{\pi}=\frac{\pi}{m}, \tag{42}\] \[\sigma^{2} =E_{p^{\prime}}\left[\frac{1}{X}-\frac{1}{\theta}\right]^{2}=\sum_{\omega\in A}p^{\prime}(\omega)\left[\frac{1}{p(\omega)}-\frac{m}{\pi}\right]^{2} \tag{43}\] \[=\frac{1}{\pi}\sum_{\omega\in A}p(\omega)\left[\frac{1}{p(\omega)^{2}}-\frac{2m}{\pi p(\omega)}+\frac{m^{2}}{\pi^{2}}\right] \tag{44}\] \[=\frac{1}{\pi}\sum_{\omega\in A}\frac{1}{p(\omega)}-\frac{m^{2}}{\pi^{2}}. \tag{45}\] From the sampled \(x_{i}\), this variance can be estimated using standard methods, e.g., using the jackknife (leave-one-out) method: \[\hat{u}=\frac{k-1}{k}\sum_{i=1}^{k}\left(\hat{\xi}-\frac{k-1}{\sum_{j\neq i}\frac{1}{x_{j}}}\right)^{2}=\frac{k-1}{k}\sum_{i=1}^{k}\left(1-\frac{k-1}{k-\frac{\hat{\xi}}{x_{i}}}\right)^{2}\hat{\xi}^{2}. \tag{46}\] For large \(k\), this is approximately \[\hat{u}\approx\frac{k-1}{k^{3}}\sum_{i=1}^{k}\left(1-\frac{\hat{\xi}}{x_{i}}\right)^{2}\hat{\xi}^{2}.
\tag{47}\] The variance of \(\hat{\pi}_{2}\) is then \[\nu_{2}=m^{2}u=\frac{\pi^{2}}{m^{2}k}\left[\sum_{\omega\in A}\frac{1}{p^{\prime}(\omega)}-m^{2}\right], \tag{48}\] which can be estimated as \[\hat{\nu}_{2}=m^{2}\hat{u}=\frac{k-1}{k}\sum_{i=1}^{k}\left(1-\frac{k-1}{k-\frac{\hat{\xi}}{x_{i}}}\right)^{2}\hat{\pi}_{2}^{2}. \tag{49}\] ## 5 Generalization to importance sampling Assume now that the \(\omega_{i}\) are not drawn from the "distribution of interest" \(p\) but from some other "sampling" distribution \(p^{\prime}\), that both \(x_{i}=p(\omega_{i})\) and \(y_{i}=p^{\prime}(\omega_{i})\) are known, and that we still want to estimate \(\pi=\sum_{\omega\in A}p(\omega)\). Put \(z_{i}=x_{i}/y_{i}\). The relative frequency estimator of \(\pi\) is then replaced by the standard estimator from importance sampling [2], \[\hat{\pi}_{0}=\frac{\sum_{i=1}^{k}z_{i}}{\sum_{i=1}^{n}z_{i}}, \tag{50}\] for which we do not need to know the \(x_{i}\) or the \(y_{i}\) but only the \(z_{i}\). Put \(q^{\prime}(\omega)=1-p^{\prime}(\omega)\). Our novel estimators \(\hat{\pi}_{1}\) and \(\hat{\pi}_{2}\) should then be defined as \[\hat{\pi}_{1}=\sum_{\omega\in O\cap A}\frac{p(\omega)}{1-q^{\prime}(\omega)^{n}}, \tag{51}\] \[\hat{\pi}_{2}=m\hat{\xi},\quad\hat{\xi}=\frac{\sum_{i=1}^{k}\frac{1}{y_{i}}x_{i}}{\sum_{i=1}^{k}\frac{1}{y_{i}}}=\frac{Z}{W},\quad Z=\sum_{i=1}^{k}z_{i},\quad W=\sum_{i=1}^{k}\frac{1}{y_{i}}, \tag{52}\] and their variances can be calculated or estimated as \[\nu_{1} =\sum_{\omega,\omega^{\prime}\in A}p(\omega)p(\omega^{\prime})\times\] \[\qquad\times\bigg{[}\frac{1-q^{\prime}(\omega)^{n}-q^{\prime}(\omega^{\prime})^{n}+[1-p^{\prime}(\omega)-p^{\prime}(\omega^{\prime})]^{n}}{(1-q^{\prime}(\omega)^{n})(1-q^{\prime}(\omega^{\prime})^{n})}-1\bigg{]}\] \[\qquad+\sum_{\omega\in A}p(\omega)^{2}\frac{q^{\prime}(\omega)^{n}-[1-2p^{\prime}(\omega)]^{n}}{(1-q^{\prime}(\omega)^{n})^{2}}, \tag{53}\] \[\hat{\nu}_{1} =\sum_{\omega,\omega^{\prime}\in A\cap O}p(\omega)p(\omega^{\prime})\bigg{[}\frac{1}{(1-q^{\prime}(\omega)^{n})(1-q^{\prime}(\omega^{\prime})^{n})}\] \[\qquad-\frac{1}{1-q^{\prime}(\omega)^{n}-q^{\prime}(\omega^{\prime})^{n}+[1-p^{\prime}(\omega)-p^{\prime}(\omega^{\prime})]^{n}}\bigg{]}\] \[\qquad+\sum_{\omega\in A\cap O}p(\omega)^{2}\frac{q^{\prime}(\omega)^{n}-[1-2p^{\prime}(\omega)]^{n}}{(1-q^{\prime}(\omega)^{n})^{3}}, \tag{54}\] \[\hat{\nu}_{2} =m^{2}\frac{k-1}{k}\sum_{i=1}^{k}\left(\hat{\xi}-\frac{\sum_{j\neq i}z_{j}}{\sum_{j\neq i}\frac{1}{y_{j}}}\right)^{2} \tag{55}\] \[=m^{2}\frac{k-1}{k}\sum_{i=1}^{k}\left(\frac{Z}{W}-\frac{Z-z_{i}}{W-\frac{1}{y_{i}}}\right)^{2}. \tag{56}\] As in the standard theory of importance sampling, one can now ask how the sampling distribution \(p^{\prime}\) should be chosen to minimize \(\nu_{1}\) or \(\nu_{2}\), assuming that one has some influence on the choice of \(p^{\prime}\). For large \(n\), we have roughly \[\nu_{1}\approx\sum_{\omega\in A}p(\omega)^{2}q^{\prime}(\omega)^{n}. \tag{57}\] Let's see whether we can find the optimal \(p^{\prime}\) simply via first-order conditions. Shifting an infinitesimal sampling probability mass \(dp^{\prime}\) from \(p^{\prime}(\omega)\) to \(p^{\prime}(\omega^{\prime})\) changes this by \[d\nu_{1}\approx np(\omega)^{2}q^{\prime}(\omega)^{n-1}-np(\omega^{\prime})^{2}q^{\prime}(\omega^{\prime})^{n-1}.
\tag{58}\] Setting this to zero for all \(\omega\in A\) would imply that \(p(\omega)^{2}q^{\prime}(\omega)^{n-1}\) is constant, hence \[p^{\prime}(\omega)=1-Cp(\omega)^{-2/(n-1)} \tag{59}\] for some constant \(C\), hence \[1 =\sum_{\omega\in A}p^{\prime}(\omega)=|A|-C\sum_{\omega\in A}p(\omega)^{-2/(n-1)}, \tag{60}\] \[C =(|A|-1)/\sum_{\omega\in A}p(\omega)^{-2/(n-1)}, \tag{61}\] \[p^{\prime}(\omega) =1-\frac{|A|-1}{|A|}\frac{p(\omega)^{-2/(n-1)}}{\langle p(\omega^{\prime})^{-2/(n-1)}\rangle_{\omega^{\prime}\in A}}, \tag{62}\] which might be smaller than \(0\). So the optimal \(p^{\prime}\) will likely be a boundary solution with some \(p^{\prime}(\omega)=0\) in general rather than an interior solution given by the above equation. Ansatz: \(p^{\prime}(\omega)=0\) whenever \(p(\omega)<\alpha\) for some \(\alpha\), and \[p^{\prime}(\omega)=1-Cp(\omega)^{-2/(n-1)} \tag{63}\] whenever \(p(\omega)\geqslant\alpha\), hence \[p^{\prime}(\omega)=1-\frac{|A^{\prime}(\alpha)|-1}{|A^{\prime}(\alpha)|}\frac{p(\omega)^{-2/(n-1)}}{\langle p(\omega^{\prime})^{-2/(n-1)}\rangle_{\omega^{\prime}\in A^{\prime}(\alpha)}}, \tag{64}\] where \(A^{\prime}(\alpha)=\{\omega\in A:p(\omega)\geqslant\alpha\}\) and \(\alpha\) is the smallest value for which all \(p^{\prime}(\omega)\) thus computed are non-negative. This is probably the smallest \(\alpha\) for which \[\frac{|A^{\prime}(\alpha)|-1}{|A^{\prime}(\alpha)|}\frac{\alpha^{-2/(n-1)}}{\langle p(\omega^{\prime})^{-2/(n-1)}\rangle_{\omega^{\prime}\in A^{\prime}(\alpha)}}\leqslant 1. \tag{65}\] Because always \(\alpha^{-2/(n-1)}/\langle p(\omega^{\prime})^{-2/(n-1)}\rangle_{\omega^{\prime}\in A^{\prime}(\alpha)}>1\), the factor \((|A^{\prime}(\alpha)|-1)/|A^{\prime}(\alpha)|\) needs to compensate for this to get the product \(\leqslant 1\), hence the resulting set \(A^{\prime}(\alpha)\) is likely small, i.e., only a few \(\omega\) with the largest \(p(\omega)\) get a positive \(p^{\prime}(\omega)\). Since for these largest \(p(\omega)\), the values \(p(\omega)^{-2/(n-1)}\) are all close to \(1\), the resulting \(p^{\prime}(\omega)\) are all approximately \(1/|A^{\prime}(\alpha)|\). In other words, selecting a suitable number of \(\omega\in A\) with the largest \(p(\omega)\) and then sampling uniformly from them is close to optimal. ## 6 Application: Hypothesis Testing in Epidemic Control Assume now that we want to test the hypothesis \(H_{0}\) that an epidemic outbreak of SI type has occurred in a population into which the respective disease is introduced from outside with a known probability \(p_{1}\) per time step and individual, and within which it can be transmitted with a known probability \(p_{2}\) whenever two individuals meet. Assume further that we know the contact network and have performed a number of tests for infection at certain nodes and time points, all of which turned out negative. We can then simulate \(n\) potential outbreaks and corresponding sets of tests, giving trajectories \(\omega_{i}\) and corresponding probabilities \(x_{i}\), and observe which simulations resulted in all tests being negative, \(\omega_{i}\in A\), and which resulted in at least one test being positive, \(\omega_{i}\in\Omega\setminus A\). Using the methods designed above, one can then estimate the probability \(\pi\) of all tests being negative under the hypothesis \(H_{0}\) of an outbreak having occurred. If this probability is below the chosen level of the test, say \(0.01\), one would then reject the hypothesis and conclude that no outbreak has occurred.
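The following self-contained Python sketch compares the estimators on a small, explicitly specified probability space (the distribution and sample size are arbitrary illustrations): with the outcome probabilities revealed, \(\hat{\pi}_{1}\) essentially recovers \(\pi\) exactly once all outcomes of \(A\) have been observed, while \(\hat{\pi}_{0}\) retains its \(O(1/\sqrt{n})\) noise.

```python
import random

# Small explicit probability space: outcomes 0..7, event A = {0, 1, 2}.
p = [0.30, 0.15, 0.05, 0.20, 0.10, 0.08, 0.07, 0.05]
A = {0, 1, 2}
pi_true = sum(p[w] for w in A)   # = 0.50
m = len(A)

rng = random.Random(1)
n = 200
sample = [(w, p[w]) for w in rng.choices(range(8), weights=p, k=n)]

# Relative frequency pi_hat_0.
pi0 = sum(w in A for w, _ in sample) / n

# Probability-based pi_hat_1 over distinct observed outcomes in A.
O = dict(sample)
pi1 = sum(x / (1 - (1 - x) ** n) for w, x in O.items() if w in A)

# Harmonic-mean estimator pi_hat_2 = m * k / sum(1 / x_i).
xs = [x for w, x in sample if w in A]
pi2 = m * len(xs) / sum(1 / x for x in xs)

print(f"true={pi_true:.4f}  pi0={pi0:.4f}  pi1={pi1:.4f}  pi2={pi2:.4f}")
```

For this sample size the correction factors \(1-(1-x)^{n}\) are already nearly \(1\), so \(\hat{\pi}_{1}\) is dominated by the exact sum of the observed probabilities in \(A\).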
### Funding This work was supported by the German Bundesministerium für Bildung und Forschung, FKZ 01K11812, as part of the Forschungsnetz Zoonosen.
2310.04809
Leveraging LLVM's ScalarEvolution for Symbolic Data Cache Analysis
While instruction cache analysis is essentially a solved problem, data cache analysis is more challenging. In contrast to instruction fetches, the data accesses generated by a memory instruction may vary with the program's inputs and across dynamic occurrences of the same instruction in loops. We observe that the plain control-flow graph (CFG) abstraction employed in classical cache analyses is inadequate to capture the dynamic behavior of memory instructions. On top of plain CFGs, accurate analysis of the underlying program's cache behavior is impossible. Thus, our first contribution is the definition of a more expressive program abstraction coined symbolic control-flow graphs, which can be obtained from LLVM's ScalarEvolution analysis. To exploit this richer abstraction, our main contribution is the development of symbolic data cache analysis, a smooth generalization of classical LRU must analysis from plain to symbolic control-flow graphs. The experimental evaluation demonstrates that symbolic data cache analysis consistently outperforms classical LRU must analysis both in terms of accuracy and analysis runtime.
Valentin Touzeau, Jan Reineke
2023-10-07T14:02:55Z
http://arxiv.org/abs/2310.04809v2
# Leveraging LLVM's ScalarEvolution for Symbolic Data Cache Analysis ###### Abstract While instruction cache analysis is essentially a solved problem, data cache analysis is more challenging. In contrast to instruction fetches, the data accesses generated by a memory instruction may vary with the program's inputs and across dynamic occurrences of the same instruction in loops. We observe that the plain control-flow graph (CFG) abstraction employed in classical cache analyses is inadequate to capture the dynamic behavior of memory instructions. On top of plain CFGs, accurate analysis of the underlying program's cache behavior is impossible. Thus, our first contribution is the definition of a more expressive program abstraction coined symbolic control-flow graphs, which can be obtained from LLVM's ScalarEvolution analysis. To exploit this richer abstraction, our main contribution is the development of _symbolic data cache analysis_, a smooth generalization of classical LRU must analysis from plain to symbolic control-flow graphs. The experimental evaluation demonstrates that symbolic data cache analysis consistently outperforms classical LRU must analysis both in terms of accuracy and analysis runtime. cache analysis, chains of recurrences, data caches, symbolic analysis ## I Introduction Due to technological developments, the latency of accesses to DRAM-based main memory is much higher than the latency of arithmetic and logic computations on processor cores. This "memory gap" is commonly tackled by a hierarchy of caches between the processor cores and main memory. In the presence of caches, the latency of a memory access may vary widely depending on the level of the memory hierarchy that is able to serve the access. Hits to the first-level cache take just a few processor cycles, while accesses that miss in all cache levels and thus need to be served by main memory can take hundreds of cycles. This variability is a challenge in the context of real-time systems, where it is necessary to bound a program's worst-case execution time (WCET) [1] to guarantee that safety-critical applications meet all of their deadlines. For accurate WCET analysis it is thus imperative to take caches into account. The timing variability induced by caches also introduces security challenges. Implementations of cryptographic algorithms have been shown to be vulnerable to cache timing attacks [2] and cache analysis [3, 4, 5] may help to uncover such vulnerabilities or prove their absence. Cache analysis aims to statically characterize a program's cache behavior by classifying memory accesses in the program as guaranteed cache hits or misses. One perspective on cache analysis is that it is the composition of two phases: 1. A transformation of the program under analysis into a simpler program abstraction: a control-flow graph (CFG) whose edges are decorated with memory accesses. 2. An analysis of this decorated CFG that classifies accesses as "always hit", "always miss", or "unknown". For instruction cache analysis this two-phase approach works well, as CFGs accurately capture most programs' instruction fetch sequences. For data cache analysis, however, a plain CFG abstraction can be highly inaccurate. Consider for example the following simple loop:

```
for (int x = 0; x < 100; x++)
    sum += A[x];
```

In each iteration of the loop a different address is accessed, and so the corresponding edge in the CFG needs to be conservatively decorated with all possible addresses.
The order in which the array elements are accessed is lost and it becomes impossible to make accurate predictions about the program's cache behavior. A program abstraction that more precisely captures a program's memory access behavior is thus needed. Our first contribution is the definition of symbolic control-flow graphs in Section IV, which is our formalization of the output of LLVM's ScalarEvolution analysis [6, 7]. Symbolic CFGs accurately capture the link between loop iterations and accessed memory blocks via chains of recurrences [8, 9] in a manner that is amenable to static analysis. To exploit this more expressive program abstraction our main contribution is the development of _symbolic data cache analysis_ in Section V, a smooth generalization of Ferdinand's classical LRU must analysis [10, 11] from plain to symbolic control-flow graphs. To fully realize the potential of symbolic data cache analysis we further introduce a context-sensitive analysis combining loop peeling and unrolling in Section VI and various implementation tricks in Section VII. The experimental evaluation on the PolyBench benchmark suite in Section VIII demonstrates that symbolic cache analysis compares favorably to classical LRU must analysis both in terms of accuracy and analysis runtime. ## II Background ### _Caches_ Caches are fast but small memories that buffer parts of the large but slow main memory in order to bridge the speed gap between the processor and main memory. Caches consist of _cache lines_, which store data at the granularity of memory blocks \(b\in\mathcal{B}\). Memory blocks usually comprise a power-of-two number of bytes \(BS\), e.g. 64 bytes, so that the block \(block(a)\) that address \(a\) maps to is determined by truncating the least significant bits of \(a\), i.e., \(block(a)=\lfloor a/BS\rfloor\). In order to facilitate an efficient cache lookup, the cache is organized in _sets_ such that each memory block maps to a unique cache set \(set(b)=b\bmod\mathit{NS}\), where \(\mathit{NS}\) is the number of sets. The number of cache lines \(k\) in each cache set is called the _associativity_ of the cache. If an accessed block resides in the cache, the access _hits_ the cache. Upon a cache _miss_, the block is loaded from the next level of the hierarchy. Then, another memory block has to be evicted due to the limited size of the cache. The block to evict is determined by the _replacement policy_. In this paper, we assume the least-recently-used (LRU) policy, which replaces the block that has been accessed least recently. A memory block \(b\) hits in an LRU cache of associativity \(k\) if \(b\) has been accessed previously and less than \(k\) distinct blocks in the same cache set have been accessed since the last access to \(b\). LRU is generally considered to be the most predictable replacement policy [12]. In this paper, we refer to the _age_ of block \(b\) as the number of distinct blocks in the same cache set that have been accessed since the last access to \(b\). Thus, an access to block \(b\) hits the cache if and only if its age is less than the associativity \(k\). ### _Control-Flow Graphs as a Program Representation_ Control-flow graphs (CFGs) are a program representation commonly employed in compilers and static analysis tools. 
A CFG is a directed graph \(\mathcal{G}=(V,E,v_{0})\), whose vertices \(V\) correspond to control locations in the program including the initial control location \(v_{0}\in V\), and whose edges \(E\) represent the possible control flow between the graph's vertices. For the purpose of cache analysis, CFGs are used to represent the possible sequences of memory accesses generated by the underlying program. To this end, each edge of the CFG is decorated with the set of memory addresses that may be accessed when control passes along that edge. As defined above, CFGs over-approximate the behavior of the program they represent as they do not capture the functional semantics of the instructions. In particular, all paths through the graph are assumed to be feasible even if, in reality, some are not. Also, and this is particularly problematic for data cache analysis, the CFG representation does not capture the dependence of the accessed memory addresses on the loop iterations. We will see in Section III how this may lead to gross overapproximations of the number of cache misses. To overcome this issue, we introduce symbolic control-flow graphs in Section IV. ### _Ferdinand's May and Must Cache Analysis_ The aim of Ferdinand's may and must cache analyses [10, 11] is to classify memory accesses in a CFG as definite hits or definite misses. As noted before, under LRU replacement, an access results in a cache hit if and only if the age of the accessed block is less than the cache's associativity. Instead of computing all reachable concrete cache states, must and may analysis operates on abstract cache states, which maintain upper and lower bounds on the age of each memory block. Each block's bounds hold independently of the ages of other blocks. This allows for a compact representation of large sets of concrete cache states. For example, the abstract must cache state \(\lambda b.\infty\) that maps every block to age bound \(\infty\) compactly represents all possible concrete cache states. As the correlation between the ages of different blocks is lost, the resulting analysis is not exact. However, recent work [13, 14, 15] has shown that the loss in precision due to this abstraction is small in practice. Our symbolic data cache analysis, introduced in Section V, can be seen as a smooth generalization of Ferdinand's must analysis to symbolic control-flow graphs. ## III Illustrative Example As an illustrative example of the drawbacks of cache analysis performed on plain CFGs, consider the simple program in Figure 1(a). The first loop of our example program iterates across array \(A\) in the forward direction, while the second loop iterates across the same array in the opposite direction. ### _Intuitive Cache Analysis_ Let us intuitively analyze the program's cache behavior. For this analysis, we will assume a tiny set-associative cache with LRU replacement consisting of 2 cache sets, an associativity of 4, and cache lines of size 8 bytes. Thus, the cache has a capacity of \(2\cdot 4\cdot 8=64\) bytes. Also assume that integers are of size 4 bytes, and so the cache can hold 16 array cells. Assuming an initially empty cache, the first loop does not exhibit any temporal locality, as each array cell is only touched once. However, it does exhibit spatial locality, as pairs of adjacent array cells may reside in the same memory blocks. Thus, every other iteration of the first loop will result in a cache hit. The second loop accesses the same array cells as the first.
Now it depends on the cache geometry whether and to what extent this temporal locality can be exploited. Under our assumptions, the cache will contain array cells \(A[84],A[85],\ldots,A[99]\) after the first loop has terminated. Thus, the first 16 iterations of the second loop hit the cache. The remaining iterations profit only from spatial locality as the first loop did, hitting in every other iteration. ### _Traditional Cache Analysis_ Under Ferdinand's must cache analysis [10, 11] and recent exact analyses [13, 14] the program is abstracted via its CFG and the CFG's edges are annotated with the sets of memory blocks that may be accessed while executing the corresponding part of the program, as discussed in Section II-B. Figure 1(b) shows the plain CFG abstraction for our example program. While this abstraction is adequate for instruction cache analysis as the same instructions are accessed in each loop iteration, it is inadequate for data cache analysis, as the link between the loop iteration and the accessed address is lost. As a consequence, it is impossible to predict any of the memory accesses in the program to be cache hits or misses. If the entire set of memory blocks that can potentially be accessed fits into the cache, then persistence analysis [16, 17, 18, 19, 15] may deduce that each of these blocks results in at most one cache miss. However, in our example, the array \(A\) does not fully fit into the cache, and so persistence analysis is of no use here. ### _Symbolic Control-Flow Graphs and Cache Analysis_ Symbolic Control-Flow Graphs. We have seen that the plain CFG abstraction is inadequate for data cache analysis, because the link between loop iterations and accessed memory blocks is lost. Thus, our first step towards accurate data cache analysis is to employ what we coin _symbolic CFGs_, a simple yet powerful program representation that concisely captures the link between loop iterations and accessed data. Symbolic CFGs are our formalization of the output of LLVM's ScalarEvolution Analysis [6, 7]. Figure 1(c) shows a symbolic CFG for our example program. In a symbolic CFG--where possible--the addresses of memory accesses are expressed in terms of the loop iterations of their enclosing loops. To this end, symbolic CFGs make it explicit when a loop is entered and when a new loop iteration begins. These transitions are indicated by annotating edges with \(\mathit{entry}_{i}\) and \(\mathit{backedge}_{i}\), where \(i\) is the identifier of a loop. Consider the edge annotated with \(A[99-j]\). This is to be interpreted as follows: In an execution of the program, let \(\sigma(j)\) be the number of times that \(\mathit{backedge}_{j}\) has been traversed since the last time \(\mathit{entry}_{j}\) has been taken. Then, the accessed address is \(A[99-\sigma(j)]\). For some loops, ScalarEvolution is also able to derive the exact number of times that a loop's back edges are taken from entry to exit. To express such information, symbolic CFGs may contain \(\mathit{assume}_{i,e}\) statements, where \(e\) is an expression that may refer to loop variables other than \(i\) itself. An edge annotated with \(\mathit{assume}_{i,e}\) can only be taken if the value of \(i\) is equal to the value of expression \(e\). In our example, the back edges of both loops are taken exactly 100 times, and so the exit edges of both loops are annotated accordingly with assume statements. We define symbolic CFGs in Section IV.
There we also discuss multivariate chains of recurrences [8, 9, 20], which are used to represent access expressions and loop bounds. Symbolic Cache Analysis. Symbolic CFGs are useful for data cache analysis as they capture a program's memory access behavior more precisely than plain CFGs. In fact, in our example, the symbolic CFG perfectly captures the sequence of memory accesses generated by the program. It remains to define a static analysis that can efficiently exploit this information. Simply applying Ferdinand's must analysis would not be fruitful as the underlying abstraction does not capture the relation between loop iterations and cache states. A relatively straightforward approach would be to virtually unroll the loops for the sake of the analysis, resulting in an exploded plain CFG in which each edge could once more be annotated with a concrete memory access. Ferdinand's must analysis could then be employed successfully on this exploded plain CFG. However, this approach would be very costly, in particular for programs with large loop bounds. We are thus seeking a precise analysis whose runtime is independent of the loop bounds of the program. To this end, our first basic idea is symbolic cache states that capture how cache states depend on the loop iteration. To motivate symbolic cache states, consider Figures 2(a) and 2(b), which show the concrete cache states at the ends of iterations \(15\) and \(17\) of the first loop from our example program. As we assume cache lines of size \(8\) bytes, each line contains two cells of the array. We represent each memory block by the first array cell mapping to that block. Our idea is to represent memory blocks _symbolically_ in terms of the values of loop variables. For example, \(A[14]\) can be expressed as \(A[i-1]\) if \(i\)'s value is \(15\). If we represent the states from Figures 2(a) and 2(b) in this way we arrive at the symbolic cache state depicted in Figure 2(c). Furthermore, the _same_ symbolic state will be reached at the end of each odd loop iteration, starting from iteration \(15\). Like Ferdinand's must analysis, our symbolic data cache analysis determines upper bounds on the ages of memory blocks. However, instead of associating bounds with concrete memory blocks, it associates these bounds with symbolic memory blocks.

Fig. 1: Simple program and its plain and symbolic control-flow-graph abstractions.

A peculiar consequence of this abstraction is that symbolic cache states also need to be updated when the value of a loop variable changes. For example, if the back edge of the first loop is taken to move from iteration \(15\) (\(17,19,\dots\)) to iteration \(16\) (\(18,\dots\)), then the symbolic cache state needs to be updated to account for incrementing \(i\). The resulting symbolic cache state is depicted in Figure 2(d). We show how to lift Ferdinand's analysis to symbolic cache analysis in Section V. In our example, one can observe that the symbolic cache states "stabilize" in odd and even loop iterations after the cache has been filled in the first \(16\) iterations. Thus the analysis needs to distinguish the first \(16\) loop iterations from the rest, and odd from even loop iterations in the remainder of the execution. This can be achieved by context-sensitive analysis [21, 22, 23]. In Section VI we introduce a context-sensitive analysis that can be configured to virtually peel and unroll the loops appropriately for a given cache configuration.
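The intuitive analysis of Section III-A is easy to validate by simulating the example's access sequence through a concrete LRU cache model; the following short Python sketch (a stand-alone illustration, independent of the analysis itself) uses the assumed geometry of 2 sets, associativity 4, and 8-byte lines.

```python
NS, K, BS, INT = 2, 4, 8, 4   # sets, associativity, line size, int size

def simulate(addresses, sets=None):
    """Count hits of an address trace in an LRU cache; mutates `sets`."""
    sets = [[] for _ in range(NS)] if sets is None else sets
    hits = 0
    for a in addresses:
        b = a // BS                # memory block of the access
        s = sets[b % NS]           # its cache set, most recent in front
        if b in s:
            hits += 1
            s.remove(b)
        elif len(s) == K:
            s.pop()                # evict the least recently used block
        s.insert(0, b)
    return hits, sets

fwd = [x * INT for x in range(100)]          # first loop:  A[0..99]
bwd = [x * INT for x in range(99, -1, -1)]   # second loop: A[99..0]
h1, state = simulate(fwd)
h2, _ = simulate(bwd, state)
print(h1, h2)  # 50 58
```

The simulation reports 50 hits for the first loop (spatial locality only) and 58 for the second (16 from temporal locality plus 42 from spatial locality), matching the analysis above.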
## IV Symbolic Control-Flow Graphs We have seen the intuition behind symbolic control-flow graphs in Section III-C. One aspect that has been left undefined there is the shape of expressions used to represent memory accesses and loop bounds. We fill this gap in Section IV-A, which is then used in the formal definition of symbolic control-flow graphs in Section IV-B. In Section IV-C we provide a semantics for symbolic CFGs, which will allow us to make formal correctness statements about the symbolic data cache analysis introduced in Section V. ### _Multivariate Chains of Recurrences_ We employ _multivariate chains of recurrences_ [8, 9, 20] (short: MCRs) as the formalism for expressions. Given a subset of a program's loop variables \(S\subseteq\mathit{LoopVar}\), the set \(M(S)\) of MCRs over \(S\) is given by the following grammar: \[e:=n\in\mathbb{Z}\] \[\mid e_{1}\,\mathit{bop}\,e_{2}\,\,\text{where}\,\,\,\mathit{bop}\in\{+,-,\cdot\},\text{and}\,\,e_{1},e_{2}\in M(S)\] \[\mid\{e_{1},+,e_{2}\}_{i}\,\,\text{where}\,\,i\in S,e_{1}\in M(S\setminus\{i\}),e_{2}\in M(S)\] Thus, expressions can (i) be constants; (ii) they can be formed from subexpressions via addition, subtraction, and multiplication; and (iii) they can be _add recurrences_ of the form \(\{e_{1},+,e_{2}\}_{i}\). Given an environment \(\sigma:\mathit{LoopVar}\rightarrow\mathbb{N}\) assigning loop variables to their values, an MCR can be evaluated as follows: \[\llbracket n\rrbracket_{\sigma} :=n\] \[\llbracket e_{1}\,\,\mathit{bop}\,\,e_{2}\rrbracket_{\sigma} :=\llbracket e_{1}\rrbracket_{\sigma}\,\,\mathit{bop}\,\,\llbracket e_{2}\rrbracket_{\sigma}\] \[\llbracket\{e_{1},+,e_{2}\}_{i}\rrbracket_{\sigma} :=\llbracket e_{1}\rrbracket_{\sigma}+\sum_{k=0}^{\sigma(i)-1}\llbracket e_{2}\rrbracket_{\sigma[i\mapsto k]}\] By \(\sigma[i\mapsto v]\) we denote the function that maps \(i\) to \(v\) and otherwise is the same as \(\sigma\). Thus, in an add recurrence \(e_{1}\) can be seen as the initial value, and \(e_{2}\) as the increment. For example: \[\llbracket\{23,+,4\}_{i}\rrbracket_{\sigma}=\llbracket 23\rrbracket_{\sigma}+\sum_{k=0}^{\sigma(i)-1}\llbracket 4\rrbracket_{\sigma[i\mapsto k]}=23+4\cdot\sigma(i)\] Memory accesses whose addresses cannot be expressed in this way are modeled as _unknown_ accesses (denoted \(\mathbf{X}\) below); such unknown accesses are only rarely needed in the analysis of real-time applications, in which dynamic data structures are uncommon. ### _Symbolic Control-Flow Graphs_ A _symbolic CFG_ is a tuple \(\mathcal{G}=(V,E,\mathit{LoopVar},v_{0})\), where \(V\) is a set of vertices and \(E\subseteq V\times\mathcal{D}\times V\) is a set of edges, \(\mathit{LoopVar}\) is a set of loop variables, and \(v_{0}\in V\) is a vertex with no incoming edges marking the program entry.
Edges are decorated with _accesses_\(\mathcal{A}\) and _statements_\(\mathcal{S}\), i.e., \(\mathcal{D}=\mathcal{S}\cup\mathcal{A}\): * Accesses are MCRs or unknowns: \(\mathcal{A}:=M(\mathit{LoopVar})\cup\{\mathbf{X}\}\) * Statements either mark the entry to a loop (\(\mathit{entry}_{i}\)), a back edge of a loop (\(\mathit{backedge}_{i}\)), or an assumption on the value of a loop variable (\(\mathit{assume}_{i,e}\)): \[\mathcal{S} :=\{\mathit{entry}_{i},\mathit{backedge}_{i}\mid i\in\mathit{ LoopVar}\}\] \[\cup\{\mathit{assume}_{i,e}\mid i\in\mathit{LoopVar},e\in M( \mathit{LoopVar}\setminus\{i\})\}\] ### _Semantics of Symbolic Control-Flow Graphs_ The state of an execution of a symbolic control-flow graph consists of two parts: The program state \(\sigma_{p}\in\Sigma_{p}\) and the cache state \(\sigma_{c}\in\Sigma_{c}\). We represent the program state by a map \(\sigma_{p}\) that maps loop variables to their values. Each loop variable counts the number of times that the loop back edge has been taken since last entering the loop. The program semantics of a symbolic CFG is then captured by a transformer \(\mathit{update}_{\mathcal{S}}\) that captures the effects of statements on program states. \[\mathit{update}_{\mathcal{S}}(\sigma_{p},s):=\] \[\begin{cases}\sigma_{p}[i\mapsto 0]&\text{if }s=\mathit{entry}_{i}\\ \sigma_{p}[i\mapsto\sigma_{p}(i)+1]&\text{if }s=\mathit{backedge}_{i}\\ \bot_{p}&\text{if }s=\mathit{assume}_{i,\mathit{expr}}\wedge\sigma_{p}(i) \neq\llbracket\mathit{expr}\rrbracket_{\sigma_{p}}\\ \sigma_{p}&\text{if }s=\mathit{assume}_{i,\mathit{expr}}\wedge\sigma_{p}(i) =\llbracket\mathit{expr}\rrbracket_{\sigma_{p}}\end{cases}\] Note that we use the special value \(\bot_{p}\) to represent unreachable program states, i.e. those not satisfying an assume statement. We represent cache states as maps \(\sigma_{c}\) from memory blocks to ages, i.e. \(\sigma_{c}\) tracks the age of each memory block in its cache set: \(\sigma_{c}\in\mathcal{B}\rightarrow\mathbb{N}\). The LRU replacement policy is then captured by the following transformer: \[\mathit{update}_{LRU}(\sigma_{c},b):=\lambda b^{\prime}\in \mathcal{B}.\] \[\begin{cases}0&\text{if }b=b^{\prime}\\ \sigma_{c}(b^{\prime})&\text{else if }\mathit{set}(b)\neq\mathit{set}(b^{ \prime})\\ \sigma_{c}(b^{\prime})&\text{else if }\sigma_{c}(b)\leq\sigma_{c}(b^{\prime})\\ \sigma_{c}(b^{\prime})+1&\text{otherwise}\end{cases}\] To paraphrase the above definition: (i) The accessed block \(b\) attains age \(0\). (ii) The ages of blocks in other cache sets (\(\mathit{set}(b)\neq\mathit{set}(b^{\prime})\)) do not change. (iii) If the accessed block \(b\) is younger than block \(b^{\prime}\), then \(b\) has already been accounted for in the age of \(b^{\prime}\), and thus the age of \(b^{\prime}\) should not increase. (iv) Otherwise, \(b\) maps to the same cache set as \(b^{\prime}\) and is older than \(b^{\prime}\) and thus the access increases the age of \(b^{\prime}\). 
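The age-based formulation of \(\mathit{update}_{LRU}\) translates directly into code; the following Python sketch (illustrative only) represents a cache state as a map from blocks to ages, with untracked blocks implicitly at age \(\infty\).

```python
NS = 2  # number of cache sets (illustrative)

def update_lru(ages, b):
    """Age-based LRU update: b gets age 0; blocks in the same set that
    were younger than b age by one; all other blocks are unchanged."""
    old = ages.get(b, float("inf"))  # age of the accessed block
    new = dict(ages)
    for bp, age in ages.items():
        if bp % NS == b % NS and age < old:
            new[bp] = age + 1
    new[b] = 0
    return new

state = {}
for b in [0, 2, 4, 0]:  # all blocks map to set 0 when NS = 2
    state = update_lru(state, b)
print(state)  # {0: 0, 2: 2, 4: 1}
```

In the final state, the re-access of block 0 rejuvenated it to age 0 and aged only the blocks that had been accessed since its previous access, exactly as cases (i)-(iv) above describe.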
The complete state of the system is a pair \((\sigma_{p},\sigma_{c})\) and we can capture its evolution upon arbitrary CFG decorations by combining the previous transformers into a single one and accounting for unknown accesses: \[\mathit{update}((\sigma_{p},\sigma_{c}),d):=\] \[\begin{cases}\{(\mathit{update}_{\mathcal{S}}(\sigma_{p},d),\sigma_{c})\}&\text{if }d\in\mathcal{S}\\ \{(\sigma_{p},\mathit{update}_{LRU}(\sigma_{c},\mathit{block}(\llbracket d\rrbracket_{\sigma_{p}})))\}&\text{if }d\in\mathcal{A}\setminus\{\mathbf{X}\}\\ \{(\sigma_{p},\mathit{update}_{LRU}(\sigma_{c},b))\mid b\in\mathcal{B}\}&\text{if }d=\mathbf{X}\end{cases}\] where \(\mathit{block}\) maps addresses to the corresponding memory blocks (see Section II-A). Note that \(\mathit{update}((\sigma_{p},\sigma_{c}),d)\) maps to sets of states to capture the non-determinism introduced by unknown accesses. We lift \(\mathit{update}\) to sets of states as follows: \[\mathit{update}(S,d):=\{(\sigma_{p}^{\prime},\sigma_{c}^{\prime})\mid(\sigma_{p},\sigma_{c})\in S\] \[\wedge(\sigma_{p}^{\prime},\sigma_{c}^{\prime})\in\mathit{update}((\sigma_{p},\sigma_{c}),d)\wedge\sigma_{p}^{\prime}\neq\bot_{p}\}\] We drop unreachable states (where \(\sigma_{p}^{\prime}=\bot_{p}\)) here. We define the set of reachable states at each control location \(R^{C}:V\rightarrow\mathcal{P}(\Sigma_{p}\times\Sigma_{c})\) as the least solution to the following set of equations: \[R^{C}(v_{0}) =\{(\lambda i.0,\sigma_{c})\mid\sigma_{c}\in\Sigma_{c}\} \tag{1}\] \[\forall v\in V\setminus\{v_{0}\}:R^{C}(v) =\bigcup_{(u,d,v)\in E}\mathit{update}(R^{C}(u),d) \tag{2}\] Equation (1) captures that initially all loop variables are zero, while the initial cache state can be arbitrary. Equation (2) captures that the reachable states at node \(v\) are determined by the reachable states at \(v\)'s predecessor nodes \(u\) updated according to the CFG decoration between \(u\) and \(v\).

Fig. 2: Cache states that arise during the execution of the first loop.

In keeping with abstract interpretation literature [24], we refer to \(R^{C}\) as the _collecting semantics_. ## V Symbolic Data Cache Analysis Explicitly computing the collecting semantics \(R^{C}\) would be very costly and only possible at all if all loops were bounded. In this section, we lift Ferdinand's must analysis to symbolic control-flow graphs to obtain a tractable analysis. ### _Abstract Domain_ As described earlier, Ferdinand's must analysis maps memory blocks to an upper bound on their maximum age in order to classify memory accesses as hits. Our analysis relies on a similar map, except that it maps symbolic blocks, represented via MCRs, to such age bounds. Our abstract domain is thus \[\widehat{\sigma}\in\widehat{\textit{SymCache}}=M(\textit{LoopVar})\hookrightarrow\{0,\dots,k-1,\infty\},\] where \(\hookrightarrow\) indicates that symbolic cache states are partial functions. We refer to the domain of a cache state \(\widehat{\sigma}\), i.e., the set of MCRs for which \(\widehat{\sigma}\) provides an age bound, as \(\textit{dom}(\widehat{\sigma})\). If our analysis maps an MCR \(e\) to age \(x\) at program point \(v\), it means that the memory block containing the address given by \(\llbracket e\rrbracket_{\sigma_{p}}\) has age at most \(x\) for any program state \(\sigma_{p}\) reachable at \(v\).
This set of program and cache states associated with an abstract state \(\widehat{\sigma}\) is captured by the concretization function \(\gamma\): \[\gamma(\widehat{\sigma}):=\{(\sigma_{p},\sigma_{c})\mid\forall e\in\textit{ dom}(\widehat{\sigma}):\sigma_{c}(\textit{block}(\llbracket e\rrbracket_{\sigma_{p}})) \leq\widehat{\sigma}(e)\} \tag{3}\] Similarly to the definition of the collecting semantics (see Equations (1) and (2)), which uses set unions to capture all possible behaviors of the program, we need a join operator on the abstract domain to summarize states from several incoming CFG edges. This join operator \(\sqcup\) conservatively keeps, for each MCR, the maximum of the two upper bounds provided by the joined states: \(\widehat{\sigma}_{1}\sqcup\widehat{\sigma}_{2}=\lambda e\in\textit{dom}( \widehat{\sigma}_{1})\cap\textit{dom}(\widehat{\sigma}_{2}).\max\{\widehat{ \sigma}_{1}(e),\widehat{\sigma}_{2}(e)\}\). This join operator is correct with respect to the concretization function: **Lemma 1** (Join Correctness).: _For all \(\widehat{\sigma}_{1},\widehat{\sigma}_{2}\in\widehat{\textit{SymCache}}\):_ \[\gamma(\widehat{\sigma}_{1})\cup\gamma(\widehat{\sigma}_{2})\subseteq\gamma( \widehat{\sigma}_{1}\sqcup\widehat{\sigma}_{2})\] The proofs of all lemmas and theorems can be found in the appendix. ### _Abstract Transformers_ To reflect the cache updates upon memory accesses, we provide two abstract transformers: \(\textit{update}_{\mathcal{A}\setminus\{\textbf{X}\}}\), for accesses to MCRs, and \(\widehat{\textit{update}}_{\textbf{X}}\), for unknown accesses. Unknown accesses can potentially increase the age of any block in the cache. Thus: \[\widehat{\textit{update}}_{\textbf{X}}(\widehat{\sigma}):=\lambda e^{\prime }\in\textit{dom}(\widehat{\sigma}).\begin{cases}\widehat{\sigma}(e^{\prime}) +1&\text{ if }\widehat{\sigma}(e^{\prime})+1<k\\ \infty&\text{ otherwise}\end{cases}\] It is easy to prove that this transformer is correct: **Lemma 2** (Unknown Access Transformer Correctness).: _For all \(\widehat{\sigma}\in\widehat{\textit{SymCache}}\), we have:_ \[\textit{update}(\gamma(\widehat{\sigma}),\textbf{X})\subseteq\gamma( \widehat{\textit{update}}_{\textbf{X}}(\widehat{\sigma}))\] The \(\widehat{\textit{update}}_{\mathcal{A}\setminus\{\textbf{X}\}}\) transformer is similar to the one used by Ferdinand's must analysis; it rejuvenates the accessed symbolic block, and increases the ages of blocks in the same cache set that are younger than the accessed block. The main difference lies in the fact that contrary to concrete memory blocks, which have a fixed address, it is not always obvious whether two symbolic blocks map to the same cache set or even to the same block. We thus rely on an auxiliary function _alias_, which, given two symbolic blocks, determines their alias relation. There are six possible alias relations between two MCRs: 1. "Same block" \(sb\): they map to the same memory block. 2. "Same set" \(ss\): they map to the same cache set. 3. "Different set" \(ds\): they map to different cache sets. 4. "Different block" \(db\): they map to different blocks. 5. "Same set, diff. block" \(ssdb\): conjunction of \(ss\) and \(db\). 6. "Same block or different set" \(sb\)+\(ds\): disjunction of \(ds\) and \(sb\); can also be seen as the complement of \(ssdb\). As shown in [25], these relations form a lattice, whose Hasse diagram is shown in Figure 3. 
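Before turning to how the alias relation is computed, note that the two domain operations defined above, the join and the unknown-access transformer, are directly executable. A minimal sketch, using a plain dict from MCRs to age bounds as our own encoding of \(\widehat{\mathit{SymCache}}\):

```python
INF = float("inf")  # the symbolic age bound "infinity"

def join(s1, s2):
    """Must-join: keep only MCRs bounded in both states,
    with the weaker (maximum) of the two upper bounds."""
    return {e: max(s1[e], s2[e]) for e in s1.keys() & s2.keys()}

def update_unknown(s, k):
    """Unknown access: every tracked symbolic block may age by one;
    bounds that reach the associativity k degrade to infinity."""
    return {e: a + 1 if a + 1 < k else INF for e, a in s.items()}
```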
The alias relation of two MCRs \(e_{1}\) and \(e_{2}\) can be determined as follows, where \(\textit{BS}\) is the size of memory blocks (in bytes) and \(\textit{NS}\) is the number of cache sets: \[\textit{alias}(e_{1},e_{2}):=\begin{cases}sb&\text{if }e_{1}-e_{2}=n\in\mathbb{Z}\wedge n=0\\ ds&\text{else if }e_{1}-e_{2}=n\in\mathbb{Z}\ \wedge\\ &\textit{BS}\leq n\bmod(\textit{NS}\cdot\textit{BS})\leq(\textit{BS}\cdot\textit{NS})-\textit{BS}\\ sb\text{+}\textit{ds}&\text{else if }e_{1}-e_{2}=n\in\mathbb{Z}\ \wedge-\textit{BS}<n<\textit{BS}\\ \top&\text{otherwise}\end{cases}\] We assume a modulo operation based on _floored division_, i.e., \(a\bmod n:=a-n\cdot\lfloor a/n\rfloor\), so that \(0\leq a\bmod n<n\) for \(n>0\). The alias relation between \(e_{1}\) and \(e_{2}\) is determined by computing the difference \(n\) of the two expressions. If the difference between \(e_{1}\) and \(e_{2}\) is not a constant expression, then no relation is established (last case). Otherwise, different relations can be deduced depending on the value of \(n\):

1. If \(n\) is \(0\), we can deduce \(sb\).
2. Addresses whose difference is a multiple of the way size (\(\textit{NS}\cdot\textit{BS}\)) are guaranteed to be in the same cache set. Conversely, if the difference between \(e_{1}\) and \(e_{2}\) is more than \(\mathit{BS}\) "away" from being a multiple of the way size, then \(e_{1}\) and \(e_{2}\) must map to different sets.
3. If \(e_{1}\) and \(e_{2}\) are close, i.e., less than a block size apart, they either map to the same block or to different sets.

Other aliasing relations, such as \(\mathit{ssdb}\) and \(\mathit{db}\), could also be deduced, but are not useful in the following.

Fig. 3: Lattice of alias relations.

Using \(\mathit{alias}\) to deduce the relation between symbolic blocks, we can formally define the transformer \(\widehat{\mathit{update}}_{\mathcal{A}\setminus\{\mathbf{X}\}}\) to apply when performing the memory access associated with MCR \(e\): \[\widehat{\mathit{update}}_{\mathcal{A}\setminus\{\mathbf{X}\}}(\widehat{\sigma},e):=\lambda e^{\prime}\in\mathit{dom}(\widehat{\sigma})\cup\{e\}.\begin{cases}0&\text{if }\mathit{alias}(e,e^{\prime})=sb\\ \widehat{\sigma}(e^{\prime})&\text{else if }\mathit{alias}(e,e^{\prime})\in\{ds,sb\text{+}ds\}\\ \widehat{\sigma}(e^{\prime})&\text{else if }\widehat{\sigma}(e)\leq\widehat{\sigma}(e^{\prime})\\ \widehat{\sigma}(e^{\prime})+1&\text{else if }\widehat{\sigma}(e^{\prime})+1<k\\ \infty&\text{otherwise}\end{cases}\] Unsurprisingly, the transformer closely resembles the definition of its concrete counterpart \(\mathit{update}_{LRU}\). (i) As in the concrete case, the accessed symbolic block is rejuvenated to age 0, as are all symbolic blocks that represent the same block. (ii) A symbolic block that is in the \(sb\text{+}ds\) relation to the accessed block retains its age, which is safe, as seen by the following case distinction: either the block is actually the accessed block and it should get age \(0\), or it maps to a different set and its age should be unchanged (first two cases of \(\mathit{update}_{LRU}\)). (iii) If the accessed symbolic block \(e\) is younger than symbolic block \(e^{\prime}\), then \(e\) has already been accounted for in the age of \(e^{\prime}\), and thus the age of \(e^{\prime}\) should not increase. (iv) The age of a block cannot increase by more than one upon a single access, so the fourth case is always safe. (v) We do not distinguish ages beyond \(k\), as it is not helpful for classifying accesses as hits or misses; instead we summarize these with the safe upper bound \(\infty\).
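The alias computation itself is directly executable once the difference of the two MCRs has been reduced to a constant. A small sketch, where n stands for that constant difference (or None when the difference is not constant):

```python
def alias(n, BS, NS):
    """Alias relation for a constant address difference n between two MCRs;
    returns one of 'sb', 'ds', 'sb+ds', or 'top'."""
    if n is None:
        return "top"               # difference not constant: no relation established
    if n == 0:
        return "sb"                # identical addresses, hence same block
    r = n % (NS * BS)              # Python's % is the floored-division modulo
    if BS <= r <= NS * BS - BS:
        return "ds"                # far from a way-size multiple: different sets
    if -BS < n < BS:
        return "sb+ds"             # within one block size: same block or different set
    return "top"
```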
As for the join operator and for unknown accesses, we prove that the access transformer is correct:

**Lemma 3** (MCR Access Transformer Correctness).: _For all \(\widehat{\sigma}\in\widehat{\mathit{SymCache}}\) and \(e\in\mathcal{A}\), we have:_ \[\mathit{update}(\gamma(\widehat{\sigma}),e)\subseteq\gamma(\widehat{\mathit{update}_{\mathcal{A}\setminus\{\mathbf{X}\}}}(\widehat{\sigma},e))\]

The \(\widehat{\mathit{update}_{\mathcal{A}\setminus\{\mathbf{X}\}}}\) transformer described above captures the effect of memory accesses. As the symbolic cache states are tied to the program state via the concretization function given in (3), changes to the loop variables need to be accounted for by appropriately adapting our symbolic cache states. We thus provide a second transformer, \(\widehat{\mathit{update}_{\mathcal{S}}}\), which captures the effect of program statements on symbolic cache states. We define \(\widehat{\mathit{update}_{\mathcal{S}}}\) separately for each type of statement. The case of a back edge is arguably the most interesting one. Each symbolic block \(e\) needs to be replaced by its shifted version when \(i\) is incremented, so that the expression preserves its original value, which is achieved as follows: \[\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma},\mathit{backedge}_{i}):=\{(\mathit{Sh}(e,i),b)\mid(e,b)\in\widehat{\sigma}\} \tag{4}\] For example, \(\mathit{Sh}(\{A,+,4\}_{i},i)=\{A-4,+,4\}_{i}\), which corresponds to replacing \(A[i]\) by \(A[i-1]\) upon incrementing \(i\). One might wonder whether the set defined in Equation (4) actually defines a function. This is indeed the case for MCRs in normal form [9, 20], for which \(\mathit{Sh}(\cdot,i)\) is bijective. Entering a loop entails resetting the corresponding loop variable \(i\) to zero. However, unless the prior value of \(i\) is known, there is no way of rewriting expressions involving the variable \(i\) accordingly. Thus, in such cases the information for the corresponding MCRs is discarded: \[\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma},\mathit{entry}_{i}):=\{(e,b)\mid(e,b)\in\widehat{\sigma}\wedge i\not\in e\}\] Finally, assume statements allow the analysis to substitute the corresponding loop variable by the assumed expression. This allows the analysis to retain information across multiple loops or in nested loops, e.g. in our running example, where data cached in the first loop is reused in the second loop. \[\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma},\mathit{assume}_{i,\mathit{expr}}):=\mathit{red}(\{(e^{\prime},b)\mid(e,b)\in\widehat{\sigma}\wedge e^{\prime}=\mathit{Sub}(e,i,\mathit{expr})\neq\mathit{fail}\}),\] where \(\mathit{red}(S):=\{(e,b)\mid(e,b)\in S\wedge\forall(e,b^{\prime})\in S:b^{\prime}\geq b\}\). The substitution may result in multiple expressions becoming equal, e.g., \(\mathit{Sub}(\{0,+,2\}_{i},i,10)=\mathit{Sub}(\{10,+,1\}_{i},i,10)\). Then \(\mathit{red}(S)\) keeps the best bound and thereby ensures that the resulting relation is still a function.
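For flat, single-level chains of recurrences, the shift operation and the back-edge transformer admit a compact sketch. Real MCRs may nest, so the integer base assumed below is a simplification of ours:

```python
def shift(mcr, i):
    """Sh(e, i): rewrite a chain (base, stride, var) representing {base,+,stride}_var
    so it keeps its value after loop variable i is incremented,
    e.g. {A,+,4}_i becomes {A-4,+,4}_i."""
    base, stride, var = mcr
    return (base - stride, stride, var) if var == i else mcr

def update_backedge(state, i):
    """Abstract back-edge transformer: shift every tracked MCR; bounds are unchanged."""
    return {shift(e, i): bound for e, bound in state.items()}
```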
The abstract transformer for statements is also correct:

**Lemma 4** (Statement Transformer Correctness).: _For all \(\widehat{\sigma}\in\widehat{\mathit{SymCache}}\) and \(s\in\mathcal{S}\), we have:_ \[\mathit{update}(\gamma(\widehat{\sigma}),s)\subseteq\gamma(\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma},s))\]

### _Analysis Correctness and Termination_

We can now merge the statement and access transformers into a single one that deals with the three kinds of decorations: \[\widehat{\mathit{update}}(\widehat{\sigma},d):=\begin{cases}\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma},d)&\text{if }d\in\mathcal{S}\\ \widehat{\mathit{update}_{\mathcal{A}\setminus\{\mathbf{X}\}}}(\widehat{\sigma},d)&\text{if }d\in\mathcal{A}\setminus\{\mathbf{X}\}\\ \widehat{\mathit{update}_{\mathbf{X}}}(\widehat{\sigma},d)&\text{if }d=\mathbf{X}\end{cases}\] Similarly to the collecting semantics, we define the abstract semantics as the least solution of the following equations: \[\widehat{R}(v_{0})=\emptyset \tag{5}\] \[\forall v\in V\setminus\{v_{0}\}:\widehat{R}(v)=\bigsqcup_{(u,d,v)\in E}\widehat{\mathit{update}}(\widehat{R}(u),d) \tag{6}\] Equations (5) and (6) are the abstract counterparts of Equations (1) and (2). We can now state the main correctness theorem about our analyzer, which follows by standard abstract interpretation arguments from Lemmas 1, 2, 3, and 4:

**Theorem 1** (Analysis Correctness).: _For all \(v\in V\), we have:_ \[R^{C}(v)\subseteq\gamma(\widehat{R}(v))\]

## VI Loop Peeling and Unrolling

A common problem that cache analyses by abstract interpretation suffer from is the loss of precision due to joins at the entry of loops. Indeed, the memory blocks loaded before a loop and within a loop usually differ. As a consequence, the abstract cache states entering the loop and those arriving via back edges from within the loop often have few, if any, memory blocks in common. A sound analysis thus cannot conclude that any blocks are cached at the beginning of the loop body. One can avoid this issue by _loop peeling_, where the analysis distinguishes the first few iterations of the loop from the rest of the loop and maintains separate analysis information for each of these iterations. This allows the analysis to capture the "warm-up effect" commonly observed in loops iterating across arrays. The example in Figure 4 shows a loop for which the first 16 loop iterations are peeled, which is the optimal amount of peeling for our example from Section III. Another problem that the basic analysis described in Section V suffers from is the lack of alignment information when establishing the alias relations between MCRs. For example, one cannot tell whether \(A[i]\) and \(A[i+1]\) map to the same block if no information about the alignment of \(A[i]\) is available. Indeed, it can happen that \(A[i]\) and \(A[i+1]\) are separated by a block boundary when \(A[i]\bmod BS=BS-1\). The necessary alignment information can be obtained by _unrolling loops_, i.e. distinguishing consecutive loop iterations from each other. In the example in Figure 4 the loop is unrolled twice, distinguishing even from odd loop iterations. In our example from Section III we assumed a block size of \(8\) bytes and array cells of size \(4\) bytes. Given knowledge of the base address of the array \(A\), loop unrolling fully determines the alignment of accesses to \(A[i]\).
### _Context-Sensitive Analysis_

Given peeling and unrolling depths \(\mathit{MaxPeel}\geq 0\) and \(\mathit{MaxUnroll}>0\), we define the following set of tags: \[\mathit{Tags}:=\{\mathit{peel}_{x}\mid 0\leq x<\mathit{MaxPeel}\}\cup\{\mathit{unroll}_{x}\mid 0\leq x<\mathit{MaxUnroll}\}\] These correspond to the nodes in the graph in Figure 4. We then define contexts as functions that associate a tag with each loop variable, i.e., \(\mathit{Ctxts}=\mathit{LoopVar}\rightarrow\mathit{Tags}\). Then, \(\mathit{peel}_{x}\) means that the loop variable has value \(x\), and \(\mathit{unroll}_{x}\) means that the value of the loop variable is in \(\{\mathit{MaxPeel}+\mathit{MaxUnroll}\cdot n+x\mid n\in\mathbb{N}\}\). To avoid the precision loss at joins we lift our abstract domain to a context-sensitive domain \(\widehat{\mathit{SymCaches}}\) that associates a symbolic cache state with each context: \[\widehat{\mathit{SymCaches}}=\mathit{Ctxts}\hookrightarrow\widehat{\mathit{SymCache}}\] These abstract states are updated as follows upon statements: \[\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma},\mathit{entry}_{i}):=\lambda\mathit{ctx}\in\mathit{Ctxts}.\] \[\begin{cases}\bigsqcup_{t\in\mathit{Tags}}\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma}(\mathit{ctx}[i\mapsto t]),\mathit{entry}_{i})&\text{if }\mathit{ctx}(i)=\mathit{peel}_{0}\\ \bot&\text{otherwise}\end{cases}\] Entering loop \(i\) corresponds to setting the loop variable \(i\) to zero. Thus, independently of the previous tag for \(i\), the new tag for \(i\) will be \(\mathit{peel}_{0}\). The abstract value for this context is obtained by merging the values of all predecessor contexts, where \(i\) may be arbitrary (first case). Contexts in which the tag for \(i\) is not \(\mathit{peel}_{0}\) are unreachable via entry edges (second case). To define the update upon back edges we first capture the structure of the graph in Figure 4 via its set of edges \(\mathcal{E}\): \[\mathcal{E}:= \{(\mathit{peel}_{x},\mathit{peel}_{x+1})\mid 0\leq x<\mathit{MaxPeel}-1\}\] \[\cup \{(\mathit{peel}_{\mathit{MaxPeel}-1},\mathit{unroll}_{0})\}\] \[\cup \{(\mathit{unroll}_{x},\mathit{unroll}_{x+1})\mid 0\leq x<\mathit{MaxUnroll}-1\}\] \[\cup \{(\mathit{unroll}_{\mathit{MaxUnroll}-1},\mathit{unroll}_{0})\}\] The set \(\mathcal{E}\) captures how contexts evolve when taking back edges. Based on \(\mathcal{E}\) we define \(\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma},\mathit{backedge}_{i})\): \[\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma},\mathit{backedge}_{i}):=\lambda\mathit{ctx}\in\mathit{Ctxts}.\] \[\bigsqcup_{\begin{subarray}{c}\mathit{ctx}(i)=t^{\prime}\\ (t,t^{\prime})\in\mathcal{E}\end{subarray}}\widehat{\mathit{update}_{\mathcal{S}}}(\widehat{\sigma}(\mathit{ctx}[i\mapsto t]),\mathit{backedge}_{i})\] Assume statements and memory accesses do not modify loop variables. Thus, the update is simply applied pointwise to each context.

### _Refining Alias Relations using Context Information_

Contexts provide information about the values of loop variables, which can be used to deduce the alignment of MCRs. To do so, we rely on an auxiliary function \(\mathit{eval}_{\bmod}\left(e,\mathit{ctx}\right)\) that partially evaluates an MCR \(e\) in context \(\mathit{ctx}\), obtaining one of the following results:

* \(\mathit{Exact}(n)\), if the MCR is known to be exactly equal to \(n\) in context \(\mathit{ctx}\).
* \(\mathit{Mod}(n,p)\), if the MCR is known to be equal to \(n\) modulo \(p\) in context \(\mathit{ctx}\).
* _Unknown_ if no such statement can be deduced.

Fig. 4: Peeling and unrolling contexts and their corresponding loop iterations.

We omit \(\mathit{eval}_{\bmod}\) here for brevity; its definition is provided in the appendix. Using \(\mathit{eval}_{\bmod}\), we can refine the _alias_ function and use the context to deduce alignment relations. Given two MCRs \(e_{1}\) and \(e_{2}\), and a context \(\mathit{ctx}\), we refine _alias_ as follows: \[\mathit{alias}(e_{1},e_{2},\mathit{ctx}):=\begin{cases}\mathit{sb}&\text{if }n=e_{1}-e_{2}\in\mathbb{Z}\wedge a_{1}\sqsubseteq\mathit{Mod}(n_{1},\mathit{BS})\\ &\land\ a_{2}\sqsubseteq\mathit{Mod}(n_{2},\mathit{BS})\wedge n-n_{1}+n_{2}=0\\ \mathit{ss}&\text{if }n=e_{1}-e_{2}\in\mathbb{Z}\wedge a_{1}\sqsubseteq\mathit{Mod}(n_{1},\mathit{NS}\cdot\mathit{BS})\\ &\land\ a_{2}\sqsubseteq\mathit{Mod}(n_{2},\mathit{NS}\cdot\mathit{BS})\\ &\land\ (n-n_{1}+n_{2})\bmod(\mathit{NS}\cdot\mathit{BS})=0\\ \mathit{ds}&\text{if }n=e_{1}-e_{2}\in\mathbb{Z}\wedge a_{1}\sqsubseteq\mathit{Mod}(n_{1},\mathit{NS}\cdot\mathit{BS})\\ &\land\ a_{2}\sqsubseteq\mathit{Mod}(n_{2},\mathit{NS}\cdot\mathit{BS})\\ &\land\ (n-n_{1}+n_{2})\bmod(\mathit{NS}\cdot\mathit{BS})\neq 0\\ \mathit{alias}(e_{1},e_{2})&\text{otherwise}\end{cases}\] where \(a_{1}=\mathit{eval}_{\bmod}\left(e_{1},\mathit{ctx}\right)\) and \(a_{2}=\mathit{eval}_{\bmod}\left(e_{2},\mathit{ctx}\right)\), \(\mathit{Exact}(k)\sqsubseteq\mathit{Mod}(n,m)\) if \(k=n\bmod m\), and \(\mathit{Mod}(n^{\prime},m^{\prime})\sqsubseteq\mathit{Mod}(n,m)\) if \(m|m^{\prime}\) and \(n=n^{\prime}\bmod m\). This refined alias function first looks at the difference \(e_{1}-e_{2}\) just like the non-refined version, except that the conditions to derive some relations are relaxed if the alignments (\(a_{1}\) and \(a_{2}\)) of \(e_{1}\) and \(e_{2}\) are known. In the first case, \(n_{1}\) and \(n_{2}\) are the offsets of \(e_{1}\) and \(e_{2}\) in their respective blocks. Thus, one can deduce the address of the block that \(e_{1}\) maps to (\(e_{1}-n_{1}\)), and compare it to the address of the block that \(e_{2}\) maps to (\(e_{2}-n_{2}\)). The equality of block addresses can be rewritten as \(n-n_{1}+n_{2}=0\). If the equality holds, then \(e_{1}\) and \(e_{2}\) map to the same block. The second case is similar, but we check an equality on cache sets instead of blocks. We thus consider alignments relative to sets, by evaluating \(e_{1}\) and \(e_{2}\) modulo \(\mathit{NS}\cdot\mathit{BS}\). The equality is also checked modulo the same value because addresses that are \(\mathit{NS}\cdot\mathit{BS}\) apart map to the same set. The third case is analogous, except we check for expressions mapping to different sets instead of the same one. Finally, in cases where \(\mathit{eval}_{\bmod}\) fails to evaluate \(e_{1}\) and \(e_{2}\) precisely, we rely on the version of _alias_ from Section V-B as a fallback.
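The two \(\sqsubseteq\) checks on alignment facts stated above are simple congruence tests. A sketch, with our own encoding of \(\mathit{Exact}(k)\) as ("exact", k) and \(\mathit{Mod}(n, m)\) as ("mod", n, m):

```python
def leq_mod(a, n, m):
    """a ⊑ Mod(n, m): does the alignment fact a imply congruence to n modulo m?"""
    if a[0] == "exact":          # Exact(k) ⊑ Mod(n, m) iff k ≡ n (mod m)
        return a[1] % m == n % m
    if a[0] == "mod":            # Mod(n', m') ⊑ Mod(n, m) iff m | m' and n ≡ n' (mod m)
        _, np_, mp = a
        return mp % m == 0 and np_ % m == n % m
    return False                 # Unknown implies nothing
```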
## VII Implementation

We implemented the symbolic analysis in LLVMTA [26, 27, 28], a WCET analysis tool based on the LLVM compiler infrastructure. In particular, LLVMTA relies on LLVM to compile the program, which itself uses ScalarEvolution [6, 7] to perform optimizations. It was thus convenient to reuse this framework and convert ScalarEvolution expressions to our own MCR representation, upon which we added support for the shifting and substitution operations. The main difficulty arising when converting ScalarEvolution expressions to MCRs is that ScalarEvolution (SCEV) expressions do not only contain integer constants but also LLVM values that belong to the LLVM intermediate representation (IR). Consider an array \(A\) that is allocated on the stack in a function \(f\) and then passed down to another function \(g\) accessing \(A[i]\). A SCEV expression for such an access would typically look like \(\{\%A,+,4\}_{i}\), where \(\%A\) is a parameter of \(f\). We rely on debug information to determine the register containing the value of \(\%A\), and then query a dedicated constant value analysis to get the register value. This allows us to translate information available at the IR level down to the machine-code level at which our analysis is performed. Several tricks are implemented to make the analysis more efficient. First, we rely on hash consing ([https://en.wikipedia.org/wiki/Hash_consing](https://en.wikipedia.org/wiki/Hash_consing)) of MCRs to reduce the memory footprint of the analysis: when building an MCR, we check whether it was already built before, and return a pointer to the old MCR when possible. In addition to saving memory, this allows us to cache and reuse the results of all operations involving MCRs. Another trick to speed up the analysis is to avoid representing a symbolic cache state \(\widehat{\sigma}\in\widehat{\mathit{SymCache}}\) as a single map from MCRs to ages. Instead, a cache state is split into several maps, which we call "virtual sets". We use one virtual set per physical cache set to store expressions that are known to map to this cache set. An additional virtual set is used for expressions whose corresponding cache set is unknown. When looking for "same block" MCRs (e.g. in \(\widehat{\mathit{update}}_{\mathcal{A}\setminus\{\mathbf{X}\}}\)), MCRs that map to a different virtual set than the accessed MCR can be excluded from the check, saving time. Virtual sets can also be shared between abstract states. Upon a memory access, if the cache set to which the accessed MCR maps is known, only the corresponding virtual set is modified. The remaining virtual sets can thus be shared between the old and the new abstract state, saving memory and avoiding copies. Regarding the values of \(\mathit{MaxPeel}\) and \(\mathit{MaxUnroll}\), it is not possible to choose fixed values that would work well for every benchmark due to the presence of nested loops. For example, it is possible to peel the first 256 iterations of a single loop, but doing so for each loop of a loop nest of depth 3 would lead to the creation of \(256^{3}\) different contexts, blowing up the analysis complexity. We thus introduce the notion of a _peeling budget_ in the analysis, which indicates the number of peeling contexts to create per loop nest. This budget is first spent on the innermost loop, then on the second innermost loop if it is possible to fully peel the innermost one, and so on. For example, consider a loop nest of depth 2, with loop bounds of 20 and 50 for the outer and inner loops, respectively. A peeling budget of 200 would lead to fully peeling the inner loop, because the loop bound of the inner loop is less than the current budget. Then the budget remaining for the outer loop would be \(200/50\), leading to a \(\mathit{MaxPeel}\) value of 4 for the outer loop. We could introduce a similar notion for computing the \(\mathit{MaxUnroll}\) value associated with each loop. Because this seemed unnecessary in many benchmarks, we chose to only unroll the innermost loop.
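Our reading of this budget policy can be sketched as follows; the behavior beyond the worked example in the text, in particular the integer division used for partially covered outer loops, is an assumption:

```python
def assign_max_peel(bounds_innermost_first, budget):
    """Distribute a peeling budget over a loop nest, innermost loop first.
    E.g. bounds [50, 20] with budget 200 give [50, 4]: the inner loop is
    fully peeled, and 200 // 50 = 4 iterations of the outer loop are peeled."""
    peel = []
    for bound in bounds_innermost_first:
        if budget >= bound:
            peel.append(bound)   # fully peel this loop...
            budget //= bound     # ...and split the budget across its iterations
        else:
            peel.append(budget)  # partial peeling exhausts the budget
            budget = 0
    return peel
```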
## VIII Experimental Evaluation

The aim of our experiments is to evaluate the following three aspects of our contributions:

1. The gain in accuracy obtained by performing cache analysis over a symbolic CFG.
2. Scalability when increasing the dataset sizes.
3. Scalability in terms of the cache geometry.

First, we demonstrate the properties of our analysis on the illustrative example from Section III. Then, we present experiments designed to assess the accuracy gain due to the symbolic approach and its scalability. All experiments are performed assuming a set-associative cache consisting of 8 cache sets, 8 cache ways, and cache lines of 64 bytes. We qualitatively contrast our work with other related work in Section IX. In this evaluation we use the PolyBench [29] benchmarks. PolyBench has the advantage of providing a parametric dataset size, i.e. one can adapt the sizes of the data structures the algorithms iterate over. PolyBench provides five dataset sizes: _mini_, _small_, _medium_, _large_, and _extra large_, which is convenient to assess the scalability of our approach.

### _Behavior of the Symbolic Analysis on Illustrative Example_

To verify that the symbolic analysis is behaving as expected, we analyze multiple variants of the program in Figure 1(a) from Section III. In all experiments, we use an array of \(12\cdot 1024=12288\) integers, but we vary the number of loop iterations in both loops between \(4\) and \(12288\), iterating back and forth across prefixes of the array. We then compare the following analyses:

* The symbolic analysis in optimal settings: we peel the exact number of iterations (1024) required to fill the cache, and we unroll enough iterations (128) to obtain perfect cache alignment information.
* Ferdinand's must analysis [10, 11] under the same settings, i.e. using the same \(MaxPeel\) and \(MaxUnroll\) values.
* Ferdinand's analysis where both loops are fully peeled.

In each of these analyses we configure LLVMTA to compute a bound on the number of cache misses. We use Ferdinand's analysis as a baseline, as the symbolic analysis can be seen as a lifted version of Ferdinand's analysis to symbolic CFGs, and thus the observed differences can be directly attributed to operating symbolically. Figure 5(a) shows the number of predicted misses when increasing the loop bounds. As expected, for low values of the loop bounds all analyses fully peel the loops and achieve the same perfect results: the first loop incurs one miss in every 16 iterations, as 16 consecutive integers of 4 bytes fit in a 64-byte cache line. The second loop does not lead to any additional misses because the accessed data fits entirely in the cache. Once the loop bounds are big enough to fill the cache, to the right of the dashed vertical line, the predicted number of misses increases by 2 for every 16 loop iterations for both the symbolic analysis and Ferdinand's analysis if the loops are fully peeled. This is due to additional misses at the end of the second loop, which accesses blocks that were evicted at the end of the first loop. Indeed, the results of the symbolic analysis and of Ferdinand's analysis under full peeling are exact. However, when the loop bounds exceed the number of peeled iterations, Ferdinand's analysis is unable to classify any access as a hit anymore. As a consequence, the bound on the number of potential misses increases with every access: spatial locality is not exploited because the analysis does not know the offset of the accesses inside a cache line. Figure 5(b) shows the analysis runtime of the three analyses in terms of the loop bounds. Once the loop bounds exceed the \(MaxPeel\) value, the analysis cost remains constant.
Conversely, when increasing the value of \(MaxPeel\) to match the loop bound, Ferdinand's analysis gets more and more expensive, quickly exceeding the cost of the symbolic analysis.

### _Accuracy of the Symbolic Analysis_

In order to evaluate the benefits of the symbolic analysis in more realistic cases, we analyze the PolyBench benchmarks (with the default dataset size _large_), and compare its accuracy with Ferdinand's analysis. The cache configuration is fixed, but we vary the values of \(MaxPeel\) and \(MaxUnroll\). Indeed, both analyses perform very differently in terms of running time and accuracy when varying the peeling and unrolling settings, and comparing the two for a fixed setting would thus be difficult. We therefore set a runtime limit of one hour per benchmark and retain, for each analysis and each benchmark, the best result achievable within this time. Figure 6 shows that in these conditions, the symbolic analysis always outperforms Ferdinand's analysis. The geometric mean of the ratios of the bounds computed by the symbolic and non-symbolic analysis across all benchmarks is \(0.335\), significantly improving analysis accuracy.

### _Scalability Evaluation_

We claim that the symbolic analysis runtime is largely independent of the number of loop iterations, as long as the number of loop iterations exceeds the number of peeled iterations. To support this claim, we ran the analysis using the same cache configuration and peeling/unrolling settings (\(MaxPeel=1024\), \(MaxUnroll=128\)) for all the dataset sizes available in PolyBench. Figure 7 shows the analysis runtime for each benchmark and dataset size. Notice that the dataset size has a smaller impact on the analysis runtime than the benchmark itself, which suggests that the complexity of a benchmark's access patterns is more important than the number of accesses generated by the benchmark. As expected, analysis times for the _large_ and _extra large_ datasets are usually very close to each other even though the number of memory accesses in the XL case is 6.25 times higher on average. For the smaller dataset sizes the loop bounds often do not reach the peeling settings, and thus the analysis cost still increases moving from XS to S, and sometimes also from S to M and L.

### _Impact of the Cache Geometry_

To evaluate the impact of the cache geometry on the analysis runtime we designed two experiments. In the first experiment, we investigate the impact of the associativity on the analysis runtime. We fix the cache line size to 64 bytes and the number of cache sets to 8, as in the previous experiments, and analyze associativities 8, 16, 32, and 64, corresponding to cache sizes of 4, 8, 16, and 32 KB, respectively. We run the symbolic analysis on all benchmarks of PolyBench for the _large_ dataset. To enable the analysis to exploit the increased cache size, we double the peeling budget each time we double the associativity. Figure 8 shows the geometric mean of the slowdowns relative to an analysis with associativity 8. We observe a slowdown of 2.56, 10.7, and 70 at associativity 16, 32, and 64, respectively. In the second experiment, we investigate the impact of the number of cache sets on the analysis runtime. Thus, we fix the cache line size to 64 bytes and the associativity to 8, and perform analyses for 8, 16, 32, 64, and 128 cache sets, corresponding to cache sizes of 4, 8, 16, 32 and 64 KB, respectively. Again, we double the peeling budget each time we double the number of cache sets.
Figure 9 shows the geometric mean of the slowdowns relative to an analysis with 8 cache sets. We observe a slowdown of 2.07, 5.99, 23.8, and 125 at 16, 32, 64, and 128 cache sets, respectively. In both experiments, we observe that the analysis runtime increases superlinearly with the cache size. Indeed, there are two effects at play here that are each individually expected to induce a linear slowdown: (i) the peeling budget is proportional to the cache size and thus the number of contexts increases linearly, and (ii) the abstract cache states grow linearly in the cache size. The effect of (ii) on the analysis runtimes is less pronounced when increasing the number of cache sets than when increasing the associativity due to the use of virtual sets, and we observe smaller slowdowns there.

Fig. 5: Accuracy and analysis time comparison on the running example.

Fig. 6: Accuracy comparison under a time constraint of 1 hour.

Fig. 7: Analysis runtimes for increasing dataset sizes.

Fig. 8: Geometric mean of slowdowns relative to an analysis with associativity 8 across PolyBench for the _large_ dataset.

Fig. 9: Geometric mean of slowdowns relative to an analysis with 8 cache sets across PolyBench for the _large_ dataset.

## IX Related Work

Static cache analysis has received considerable attention in the context of WCET analysis. In the following, we focus on work targeted at data cache analysis. For a broader review of the literature consider the survey paper by Lv et al. [30]. At a high level, work on static cache analysis can be partitioned into classifying and bounding analyses:

* _Classifying analyses_ [10, 11, 31, 32, 33, 34, 35, 13, 14, 36, 37] classify individual accesses in the program as hits or misses. Ferdinand's may and must analysis and our symbolic analysis fall into this class.
* _Bounding analyses_ [38, 39, 11, 40, 41, 42, 19, 43] compute bounds on the number of misses that occur in a program fragment or in a subset of the program's accesses.

Let us first discuss related classifying analyses. We have already extensively discussed Ferdinand's LRU must analysis [10, 11] throughout the paper. It relies on a plain CFG abstraction, and precise analysis results for data caches are only possible if loops are fully unrolled. Sen and Srikant [31] build upon LRU must analysis and make two contributions: (i) They introduce a new domain to analyze the set of memory addresses associated with a static memory reference called _circular linear progressions_. (ii) They introduce a new approach to context-sensitive analysis in which a loop is partitioned into \(n\) same-length regions that are further split into two parts. The first part is analyzed in "expansion mode", meaning that it is fully virtually unrolled, distinguishing all individual iterations, while the second part is analyzed in "summary mode". To achieve accurate results, the approach requires an unrolling value that is proportional to the number of loop iterations, similarly to Ferdinand's analysis. Hahn and Grund [25, 44] introduce _relational cache analysis_, which tracks relations between memory accesses in the program following the lattice in Figure 3, similarly to our analysis. Wegener [23] proposes to judiciously apply loop peeling and unrolling to relational cache analysis. Their work is able to detect the exploitation of spatial and temporal locality within a given loop iteration (or within a sequence of loop iterations in case of unrolling).
The fundamental limitation of [25, 44, 23] that our approach overcomes is that their analyses never track more than a single symbol for each static memory reference (per unrolled iteration of the loop) in the program, whereas our analysis may dynamically generate an unbounded number of symbols for the same static reference due to the shifting operation upon loop back edges. As a consequence, in our example program, the temporal locality in the second loop would be entirely missed by relational cache analysis. The other major difference lies in our use of LLVM's ScalarEvolution framework to determine access expressions and loop bounds. Let us now turn to bounding analyses. Kim et al. [38] determine a bound on the number of memory blocks accessed in a program. If at most \(m\) distinct blocks are accessed, and these fully fit into the cache, then at most \(m\) misses may occur. Such a cache persistence [19] argument only works in cases where the amount of accessed data is smaller than the cache itself, which is often not the case, e.g. in our illustrative example and in the entire PolyBench suite for larger dataset sizes. Huynh et al. [40] present a persistence analysis that takes a different perspective, separately considering each memory block accessed in the program. For each such block, the analysis determines whether it is persistent, i.e., whether accesses to that block can result in more than one miss. This persistence classification is furthermore performed at different spatial and temporal scopes, e.g. distinguishing different intervals of loop iterations. As a result the analysis may be highly accurate. However, the analysis complexity is at least linear in _both_ the number of distinct memory blocks accessed by the program and the dynamic number of accesses performed (\(>10^{11}\) for several PolyBench benchmarks for the XL dataset), whereas our analysis is independent of both of these. The approach of Sotin et al. [43] consists in encoding the program semantics and the cache replacement policy in a formula whose integral solutions correspond to cache misses, and in discharging this counting problem to an external solver [45]. The approach is however limited to counting misses associated with a single static memory reference inside a loop. Ad hoc extensions handling non-linear accesses, several accesses in the same loop, and nested loops are suggested, but it is not clear whether these approaches can be combined to handle larger classes of programs. Finally, there is a long and rich history of analytical cache models [46, 47, 48, 49, 50, 51, 52, 53, 54, 55] that determine the exact number of misses generated by loop nests. A common limitation of this line of work is that it cannot handle programs with input-dependent branches or memory accesses.

## X Conclusions and Future Work

We have introduced _symbolic data cache analysis_, a novel analysis that systematically exploits a richer program abstraction than prior work, namely _symbolic control-flow graphs_, which can be obtained from LLVM's ScalarEvolution analysis. The experimental evaluation demonstrates that this new analysis outperforms classical LRU must analysis both in terms of accuracy and analysis runtime. As a proof of concept, we have lifted the classical LRU must analysis to the symbolic level. Other existing analyses operating on plain CFGs could similarly be made symbolic, e.g. persistence analyses or classifying analyses for various replacement policies.
It would also be interesting to investigate whether exact cache analysis on symbolic CFGs is possible along the lines of recent exact cache analyses on plain CFGs. Another direction for future work is to apply the idea of symbolic cache analysis to even richer program abstractions, e.g. modeling operations on heap data structures. ## Acknowledgments This project has received funding from the European Research Council under the EU's Horizon 2020 research and innovation programme (grant agreement No. 101020415).
2307.07717
Deep ANN-based Touch-less 3D Pad for Digit Recognition
The Covid-19 pandemic has changed the way humans interact with their environment. Common touch surfaces such as elevator switches and ATM switches are hazardous to touch as they are used by countless people every day, increasing the chance of getting infected. So, a need for touch-less interaction with machines arises. In this paper, we propose a method of recognizing the ten decimal digits (0-9) by writing the digits in the air near a sensing printed circuit board using a human hand. We captured the movement of the hand by a sensor based on projective capacitance and classified it into digits using an Artificial Neural Network. Our method does not use pictures, which significantly reduces the computational requirements and preserves users' privacy. Thus, the proposed method can be easily implemented in public places.
Pramit Kumar Pal, Debarshi Dutta, Attreyee Mandal, Dipshika Das
2023-07-15T05:42:53Z
http://arxiv.org/abs/2307.07717v1
# Deep ANN-based Touchless 3D Pad for Digit Recognition

###### Abstract

The Covid-19 pandemic has changed the way humans interact with their environment. Common touch surfaces such as elevator switches and ATM switches are hazardous to touch as they are used by countless people every day, increasing the chance of getting infected. So, a need for touch-less interaction with machines arises. In this paper, we propose a method of recognizing the ten decimal digits (0-9) by writing the digits in the air near a sensing printed circuit board using a human hand. We captured the movement of the hand by a sensor based on projective capacitance and classified it into digits using an Artificial Neural Network. Our method does not use pictures, which significantly reduces the computational requirements and preserves users' privacy. Thus, the proposed method can be easily implemented in public places.

_Keywords_: artificial neural network, gesture recognition, microcontroller, projective capacitance.

+ Footnote †: journal: Biological Engineering Research and Review, 2021; 8(2): 01-425 ISSN: 2349-3232

## Introduction

The Covid-19 pandemic has put many restrictions on the daily lives of humans. Due to the highly transmissible nature of the virus, people need to sanitize their hands constantly. The chances of contracting the virus from surfaces frequently used by the general public are substantial. These surfaces are not limited to ATM keypads or elevator keypads; any surface that people frequently touch with their fingers raises the need for a touchless way of interacting with such systems. We have developed a touchless system for entering numeric digits by use of mutual capacitance technology. Mutual capacitance is usually used for touch screens in mobile devices. However, by appropriately modifying the technology, we can use it for touchless human-computer interaction. The system we have developed is capable of registering 3D human hand gestures [10]. The use of an Artificial Neural Network makes it robust and reusable, i.e., the same system can be used for recognizing other symbols besides numeric digits. This method can be an alternative to the conventional methods of inputting digits into a system.

## Related Works

The exploitation of capacitance for human-computer interfacing is not new. It has been in use for decades. Today, almost all modern smartphones, tablet PCs, and computers with a touch screen use projected capacitance technology for the touch screen. Another everyday use of capacitance in human-computer interfaces is the touchpad, based on the capacitive sensing technique. In [3], Raphael et al. used capacitive sensing methods for performing simple gestures like picking or navigating objects on a screen. The authors placed four electrodes around the four sides of a tablet screen. In [2], Lee et al. built a dodecahedron-shaped input device that provided 24 degrees of freedom. Each of the 12 flat surfaces consisted of a capacitive sensor. The authors demonstrated the device's feasibility in three-dimensional modeling tasks. In [1], the authors made use of electric field sensing to track human hands. In [15], the author used inertial sensors in smartphones to interpret characters from mid-air gestures, an approach that lacked visual feedback for the user performing the gestures. Reference [7], however, used motion tracking and a hidden Markov model for interpreting characters performed by the user.
In [9], the author used cameras to recognize characters written in mid-air. The downside of using cameras is that they cannot be used in extreme lighting conditions (either very dark or bright); they require optimum lighting conditions for proper functioning. Camera sensors are also expensive to replace as their manufacturing costs are high. The use of cameras further poses a risk to the user's privacy. In references [8] and [6], the user wore a device on their wrist to recognize gestures performed by them. In [14], the authors studied the feasibility of large-area array sensing using projected capacitance as the underlying principle. In [5], the authors developed an 8x8 capacitive sensor array to capture gestures performed for rehabilitation purposes. Here, the capacitive sensor array uses mutual capacitance as the underlying technology. The previous works primarily focus on detecting hand gestures from a visual data feed using camera-based gesture recognition systems, or on using capacitive sensing to either track hand movements or recognize predefined hand gestures. Some works obtained gesture input through devices worn on the user's hand. Using the projected capacitance technique to recognize alphanumeric characters, as done in this paper, is, to our knowledge, completely new. The recognized characters can then be sent to another machine where alphanumeric inputs are necessary for operation.

## Background

Projected capacitance [5] technology is regularly used in modern touch screen devices like smartphones and tablets. However, in these devices, the sensing range is minimal, and thus they require physical touches to initiate an action. Since touch screens only detect finger positions when they are touched, by suitably modifying the technique and using deep learning [13], projected capacitance can be used to recognize touch-less hand gestures [6], which is the main contribution of this paper. Fig: I shows the flow diagram of our system, from capturing the user's gesture input to classifying the gestures into digits [14].

## Prototype overview

Our prototype consists of three main parts: the sensing printed circuit board, the processing board, and the communication link to the PC via a USB to RS232 converter. The complete block diagram representation of the prototype is shown in Fig: II. The sensing printed circuit board consists of five electrodes etched on a copper-clad board made of the FR4 substrate material. This printed circuit board is responsible for sensing the position of the user's hand in front of it. As the etched electrodes form a mutual capacitance, the board is sensitive to small changes in capacitance caused by the presence of a human hand near it [3]. The processing board converts this small change in capacitance into a signal that is fed to the Arduino Nano, which communicates with the PC interface via the USB to RS232 converter at a baud rate of 115200. The mutual capacitor formed by the sensing electrodes is part of an oscillator whose frequency is influenced by the presence of the human hand when it is within the sensing range of the electric fields emanating from the electrodes [1].
When inside the sensing region, the hand cuts electrostatic field lines and diverts them in another direction so that the electric flux density falling on the receiving electrodes decreases, as shown in Fig: III. Of the five electrodes etched on the sensing printed circuit board, shown in Fig: IV, the four electrodes surrounding the center electrode are connected to the high-impedance inputs of four separate oscillators. These four receiving electrodes are named Pad-A, Pad-B, Pad-C, and Pad-D; they are responsible for giving spatial information about the position of the human hand. The center electrode, or transmitting electrode, is connected to a low-impedance output of the oscillators by a NAND gate. When the hand gradually approaches the electrodes, the capacitance between the electrodes decreases, increasing the frequency of the oscillators. As we need three-dimensional coordinates from the system, each of the five electrodes must be scanned one after another in each sampling cycle. One such sampling cycle is represented in Fig: V. The five electrodes are scanned 80 times per second, i.e., at a sample rate of 80 Hz. Fig: VI shows one such oscillator for a single electrode. The oscillators are turned on by a logic high at the ENABLE pin of the oscillator shown in Fig: VI. Each oscillator is tuned to a frequency of 1.7 MHz by the resistor R shown in Fig: VI. Under the influence of the human hand, the oscillators' frequency changes by a few hundred parts per million (100 parts per million = 0.01%). To track the position of the hand, this small change in frequency needs to be measured accurately, which is achieved by a phase-frequency comparator: it compares the frequency of the four oscillators with a reference oscillator, realized by a voltage-controlled oscillator, and converts the change in frequency to a voltage signal. Unlike small changes in capacitance, the voltage changes can be measured easily by the ADC module of the Arduino Nano [12]. The oscillators' output is scaled down by a frequency divider, as shown in Fig: II, so that the reference oscillator and the sensing oscillators are in the same frequency range. Being very sensitive, the oscillators are susceptible to environmental factors like input voltage and temperature. The reference frequency therefore needs to be adjusted by small amounts so that the output from the phase-frequency comparator is minimal when a human hand is not present. The Arduino Nano adjusts the reference frequency individually for each pad before scanning by sending a control voltage to the voltage-controlled oscillator through a 12-bit DAC. The output of the phase-frequency comparator has a high-frequency component, which is filtered out by a simple first-order RC low-pass filter with a cutoff frequency of 72.3 Hz, low enough to suppress unwanted high-frequency noise yet high enough to capture very tiny movements of the hand. During the scanning, or multiplexing, of each of the four receiving electrodes, a sample-and-hold circuit samples and holds the filtered output voltage of the phase-frequency comparator, which is then read by the analog-to-digital converter of the Arduino Nano. All the processes, from setting the frequency of the reference oscillator and scanning each electrode to communicating with the graphical user interface running on the personal computer, are handled by the Arduino Nano. The Arduino Nano consists of an 8-bit ATMEGA328 microcontroller running at 16 MHz.
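The per-cycle control flow described above can be summarized in pseudo-firmware form. This sketch only illustrates the sequencing; the dac, adc, and ref_voltage objects are hypothetical stand-ins for the actual Arduino peripherals:

```python
PADS = ["A", "B", "C", "D"]          # the four receiving electrodes

def scan_cycle(dac, adc, ref_voltage):
    """One sampling cycle, repeated 80 times per second: retune the
    reference oscillator for each pad via the 12-bit DAC, then read the
    sampled-and-held phase-frequency comparator output via the ADC."""
    readings = {}
    for pad in PADS:
        dac.write(ref_voltage[pad])  # per-pad calibration of the reference
        readings[pad] = adc.read(pad)
    return readings
```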
## Gesture Capture

We can get three-dimensional coordinates for the static position of the hand from the outputs of the four sensing pads, as shown in Fig: VII, where a human hand is in the sensing range of the pads. The output of each pad, namely Pad-A, Pad-B, Pad-C, and Pad-D, can be seen in Fig: VIII as output values that can be differentiated from each other by their colors. From the three-dimensional coordinates, the graphical user interface shown in Fig: X can draw the gestures performed in front of the sensing pads in real time as the hand is moved in three-dimensional space by the user, as depicted in Fig: VII. The graphical user interface [1] for visualizing the gestures drawn by the user in real time was developed using Python 3.9 and the PyQt5 libraries. The algorithm for the GUI is shown in Fig: IX. The X, Y, and Z coordinates are calculated from the raw output signals of the four electrodes as follows: \[X\;Coordinate=(A-D) \tag{1}\] \[Y\;Coordinate=(B-C) \tag{2}\] \[Z\;Coordinate=(A+B+C+D)/4 \tag{3}\] where \(A\), \(B\), \(C\), and \(D\) represent the outputs of the four electrodes, and the X, Y, and Z coordinates are given by equations (1), (2), and (3), respectively. The graphical user interface starts drawing the gestures on the interface canvas when the user has brought his/her hand to a distance of 4 cm or less from the sensing pads. The gesture is recognized as complete when the user's hand moves more than 4 cm away from the sensing electrodes. Upon completion of each gesture, the software automatically captures a screenshot of the canvas region of the graphical user interface, which can be seen in Fig: X as the black region. The captured image is converted to grayscale and resized to (28 x 28 x 1) pixels.

## Deep Learning Models

### Deep CNN with Data Augmentation

We propose a deep learning model for classifying gestures as digits (0-9). This model is based on the CNN (Convolutional Neural Network) architecture [11]. CNNs are deep learning models with widespread use in computer vision and image classification. Fig: XI shows a diagrammatic representation of the various layers of the model. Our model has an input layer of 28x28x1 followed by multiple sub-layers of convolutional filters, max-pooling layers, and batch normalization layers, ending in a densely connected network. ReLU (Rectified Linear Unit) was chosen as the activation function after each convolution layer due to its nonlinear nature. \[f(x)=\max(0,x) \tag{4}\] The equation for the ReLU activation function is given in equation (4). The dense output layer uses the SoftMax activation function, shown in equation (5), to obtain the probability of the different classes. \[P(y=j\mid\theta)=\frac{e^{\theta_{j}}}{\sum_{k=1}^{K}e^{\theta_{k}}} \tag{5}\] where \(K\) is the number of classes and the net input \(\theta=w_{0}x_{0}+w_{1}x_{1}+\cdots+w_{k}x_{k}=\sum_{i=0}^{k}w_{i}x_{i}=w^{T}x\). The CNN model shown in Fig: XI is trained on the augmented dataset by the Adam optimizer with a learning rate of 0.001 and a mini-batch size of 64. The augmentation involves random rotation of images between 0 and 180 degrees, random zooming in on the images, and shifting the height and width of the images. Data augmentation ultimately introduced more variation into the dataset and thus helped the model perform better. Fig: XIV shows the epoch versus loss and accuracy of the CNN model, and Fig: XVIII shows the confusion matrix for the validation data.
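Putting equations (1)-(3) and the 4 cm rule together, the coordinate computation and gesture segmentation can be sketched as follows. The threshold value and the sign convention for z (larger meaning closer) are assumptions that depend on the analog front end:

```python
def hand_position(a, b, c, d):
    """Equations (1)-(3): map the four pad readings to (x, y, z)."""
    return a - d, b - c, (a + b + c + d) / 4.0

def segment_gesture(samples, z_near):
    """Collect (x, y) points while the hand is within range; the gesture
    is complete once z indicates the hand is farther than ~4 cm again."""
    stroke = []
    for a, b, c, d in samples:   # pad readings arrive at the 80 Hz scan rate
        x, y, z = hand_position(a, b, c, d)
        if z >= z_near:          # assumed: larger z means a closer hand
            stroke.append((x, y))
        elif stroke:
            return stroke        # hand withdrew: gesture finished
    return stroke
```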
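A minimal Keras sketch of the training setup just described follows. The paper specifies only the layer types, optimizer, learning rate, mini-batch size, and kinds of augmentation, so the filter counts and augmentation magnitudes below are illustrative assumptions rather than the authors' exact model:

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Layer types from the text; filter counts and dense width are assumptions.
model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one output per digit 0-9
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Augmentation kinds from the text; the zoom/shift magnitudes are assumptions.
augment = ImageDataGenerator(rotation_range=180, zoom_range=0.2,
                             width_shift_range=0.1, height_shift_range=0.1)
# model.fit(augment.flow(x_train, y_train, batch_size=64), epochs=30)
```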
### Deep CNN without Data Augmentation

The deep CNN model proposed in Fig: XI was also trained and validated with our training and testing data, but without data augmentation applied to the dataset. Fig: XV shows the epoch versus loss and accuracy of the CNN model without data augmentation, and Fig: XIX shows the confusion matrix for the validation data. The sequential layers in this CNN model were the same as in the model proposed above in Fig: XI.

### Multi-layer Perceptron

The Multi-layer Perceptron (MLP) is a type of artificial neural network widely used for classification and regression tasks. The task of classifying images is complex and requires complex deep learning models. The pictorial representation of the MLP model evaluated by us is shown in Fig: XII. The MLP is trained using the Adam optimizer, with a learning rate of 0.001 and a batch size of 32.

### Recurrent Neural Network

An RNN [11] is a deep learning model with memory, mainly used to process time-series data. Here we have used a unidirectional LSTM (Long Short-Term Memory), which is a type of RNN. The layers of the RNN model we evaluated are shown in Fig: XIII. The training was done with the Adam optimizer, with a learning rate of 0.001 and a batch size of 32. The data augmentation involves random rotation of images between 0 and 180 degrees, random zooming in on the images, and shifting the height and width of the images.

## Results and Analysis

In an experiment, we performed training and testing on the different deep learning models and compared their results. The different architectures on which the comparisons are made are as follows:

* CNN with Data Augmentation
* CNN without Data Augmentation
* RNN
* MLP (Multi-Layer Perceptron)

The training and testing performance for the CNN model with data augmentation is shown in Fig: XIV. Fig: XV shows the training and testing performance for the CNN model without data augmentation. Fig: XVI shows the training and testing performance for the RNN with data augmentation applied, and Fig: XVII shows the training and testing performance of the MLP model. The confusion matrices for the different models are shown in Fig: XVIII for the CNN with data augmentation, Fig: XIX for the CNN model without data augmentation, Fig: XX for the RNN with data augmentation, and Fig: XXI for the MLP model. The density of colors in each grid represents the number of correct or incorrect classifications for the corresponding models.
FIGURE XVIII-XXI: CONFUSION MATRICES FOR THE CNN WITH DATA AUGMENTATION, THE CNN WITHOUT DATA AUGMENTATION, THE RNN WITH DATA AUGMENTATION, AND THE MLP MODEL

Fig: XXII shows the comparison between the different models in the form of a clustered bar chart, and Table 1 shows the comparison in performance between the different models. From this comparison, the CNN model with data augmentation performed best among all models, with or without data augmentation, in both classification accuracy and validation loss, at 97.03% and 0.0967, respectively. For the same CNN model, when data augmentation was not applied, the model suffered from overfitting, which is observable in Fig: XV. Thus, for the final prototype, the CNN model with data augmentation was selected as the best-performing model.
The trained model was exported and used in live classification software with a graphical user interface, which receives data from the hardware prototype and infers which digits the user drew. The confidence percentage of each classification result shows how confidently the model recognizes the digits from the user's gestures. Such an instance of digit classification is shown in Fig: X.

## Conclusion

This paper presents a novel system enabling touch-less hand gesture recognition for digit classification using a deep learning model. Different ANNs with data augmentation were proposed to classify digits based on user gestures. The comparisons between the various models show that the CNN with data augmentation performs best in classification accuracy and reliability. Future work can involve tracking and recognition of more complex gestures such as alphabets and special characters. Since optical RGB sensors have been widely used for gesture recognition, the fusion of multiple sensors could further improve accuracy and reliability, and thus overall system performance. Our approach can be used in systems where touching a surface is unhygienic or dangerous, considering the ongoing global COVID-19 pandemic.
2303.04931
An Observer-Based Key Agreement Scheme for Remotely Controlled Mobile Robots
Remotely controlled mobile robots are important examples of Cyber-Physical Systems (CPSs). Recently, these robots are being deployed in many safety critical applications. Therefore, ensuring their cyber-security is of paramount importance. Different control schemes that have been proposed to secure such systems against sophisticated cyber-attacks require the exchange of secret messages between their smart actuators and the remote controller. Thus, these schemes require pre-shared secret keys, or an established Public Key Infrastructure (PKI) that allows for key agreement. Such cryptographic approaches might not always be suitable for the deployment environments of such remotely controlled mobile robots. To address this problem, in this paper, we consider a control theoretic approach for establishing a secret key between the remotely controlled robot and the networked controller without resorting to traditional cryptographic techniques. Our key agreement scheme leverages a nonlinear unknown input observer and an error correction code mechanism to allow the robot to securely agree on a secret key with its remote controller. To validate the proposed scheme, we implement it using a Khepera-IV differential drive robot and evaluate its efficiency and the additional control cost incurred by it. Our experimental results confirm the effectiveness of the proposed key establishment scheme.
Amir Mohammad Naseri, Walter Lucia, Amr Youssef
2023-03-08T23:00:30Z
http://arxiv.org/abs/2303.04931v2
# An Observer-Based Key Agreement Scheme for Remotely Controlled Mobile Robots

###### Abstract

Remotely controlled mobile robots are important examples of Cyber-Physical Systems (CPSs). Recently, these robots are being deployed in many safety critical applications. Therefore, ensuring their cyber-security is of paramount importance. Different control schemes that have been proposed to secure such systems against sophisticated cyber-attacks require the exchange of secret messages between their smart actuators and the remote controller. Thus, these schemes require pre-shared secret keys, or an established Public Key Infrastructure (PKI) that allows for key agreement. Such cryptographic approaches might not always be suitable for the deployment environments of such remotely controlled mobile robots. To address this problem, in this paper, we consider a control theoretic approach for establishing a secret key between the remotely controlled robot and the networked controller without resorting to traditional cryptographic techniques. Our key agreement scheme leverages a nonlinear unknown input observer and an error correction code mechanism to allow the robot to securely agree on a secret key with its remote controller. To validate the proposed scheme, we implement it using a Khepera-IV differential drive robot and evaluate its efficiency and the additional control cost incurred by it. Our experimental results confirm the effectiveness of the proposed key establishment scheme.

+ Footnote †: This work is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).

## 1 Introduction

Over recent years, the application of mobile robots in different safety critical domains, such as defence and space, search and rescue, health care, and industry 4.0, has gained increasing interest (Tzafestas, 2013; Lewis and Ge, 2018; Fragapane et al., 2020). The research community has also been active in increasing the potential applications of these robots in networked and distributed control system setups (Santilli et al., 2021; Klancar et al., 2017; Liu et al., 2018; Wang et al., 2021). On the other hand, such developments raise concerns about the security and privacy of these systems (Dutta and Zielinska, 2021; Li et al., 2022; Wang et al., 2022). Mass adoption of robots leads to an increase in the possibilities of cyberattacks against these systems. Different robotics cybersecurity issues, vulnerabilities, threats, and risks have been classified and discussed in (Yaacoub et al., 2021). The target of an attacker can be any component of the Confidentiality, Integrity, Availability (CIA) triad at the level of the robot's hardware, its firmware, or its communication channels (Yaacoub et al., 2021). Unlike the cybersecurity of information technology systems, robots add the additional factor of physical interaction with the environment. While taking control of a desktop computer or a server may result in loss of information, taking control of a robot may directly result in physical damage and endanger whatever or whoever is nearby. In this paper, we focus on applications where mobile robots are remotely controlled, i.e., where the control inputs and sensor measurements are transmitted over insecure communication channels.
To guarantee any component of the CIA triad between the robot and the networked controller, or to detect different classes of cyber-physical attacks, most of the proposed methods require sharing a secret key/seed. For example, the authors of (Noura et al., 2018) developed a physical-layer encryption algorithm for wireless machine-to-machine devices, in which sharing secret seeds is required for the implementation of the algorithm. On the other hand, the solution in (Noura et al., 2022) deals with the data integrity and source authentication problems, particularly for IoT devices. The proposed message authentication algorithm requires a secret seed/key to initialize the algorithm. Similarly, it is well understood in the CPS community that to detect intelligent coordinated networked attacks such as covert attacks (Smith, 2015), proactive detection actions must be taken in a coordinated manner on both sides of the communication channel (Ghaderi et al., 2020). For example, moving-target (Griffioen et al., 2020) and sensor coding (Miao et al., 2016) based detection schemes implement such an idea to prevent the existence of undetectable attacks, and both require, for coordination purposes, that a secret seed is pre-shared between the plant and the controller. An anomaly detection scheme, specifically targeting differential-drive robots, is developed in (Cersullo et al., 2022), where intelligent setpoint attacks are of interest. The proposed detector leverages two command governor modules and two pseudo-random number generators (each placed on one of the two sides of the network). It has been proved that such an architecture prevents the existence of undetectable setpoint attacks only if a shared seed between the two sides of the communication channel can be established. From the above examples, it is clear that the key-establishment problem in cyber-physical systems, including mobile robots, is relevant for enhancing the security of such systems.

### Background and Related Works

Traditionally, key agreement is achieved through the use of symmetric or public key cryptographic protocols (Menezes et al., 2018). For example, using elliptic curve cryptography, in (Jain et al., 2021), the authors proposed a mutual authentication and key agreement scheme between cloud-based robots (i.e., robots that access cloud resources) and cloud servers. However, such solutions might not always be usable for robotic systems. Public key protocols are computationally demanding and require a public key infrastructure (Menezes et al., 2018) and the support of a key revocation mechanism (e.g., see (Shi et al., 2021)). These requirements make public key protocols impractical for robots with limited computational capabilities (Yaacoub et al., 2021). On the other hand, symmetric key-based solutions assume the existence of a pre-shared key. However, the compromise of such long-term keys usually leads to compromising the security of the whole system. Alternative key-establishment solutions leverage the seminal concept of the wiretap channel introduced by Wyner in (Wyner, 1975). Such schemes are not based on traditional cryptographic mechanisms. Instead, they utilize the role of noise, which is a natural characteristic of any communication system, to achieve secure communications.
In particular, Wyner proved that if the communication channel between the sender and receiver is statistically better than the one from the sender to the eavesdropper, then it is possible to design an encoding mechanism to communicate with perfect secrecy. Over the years, such a concept has been leveraged to design different key-agreement protocols for CPSs, see, e.g., (Maurer, 1993; Ahlswede and Csiszar, 1993; Lara-Nino et al., 2021; Sutrala et al., 2021; Zhang et al., 2017; Rawat et al., 2017) and references therein. In (Maurer, 1993; Ahlswede and Csiszar, 1993), a key-agreement protocol based on public discussion is proposed. In (Sutrala et al., 2021), by considering 5G-enabled industrial CPSs, a three-factor user-authenticated key agreement protocol is developed; in (Zhang et al., 2017), by using ambient wireless signals, a cross-layer key establishment model for wireless devices in CPSs is designed to allow devices to extract master keys at the physical layer. In (Rawat et al., 2017), by exploiting an information-theoretic approach, the outage probability for secrecy rate in multiple-input multiple-output (MIMO) systems for CPSs is investigated. While all the above solutions are developed for CPSs, none of them takes advantage of the closed-loop dynamics of the underlying physical system to design the key agreement protocol. A first attempt to design a key agreement scheme leveraging the physical properties of control systems can be found in (Li et al., 2011), where the authors exploited common information about the plant's state to establish a key between the sensor and the controller. However, the authors only consider the case where the eavesdropper cannot observe the plant's state. More recently, in (Lucia and Youssef, 2020, 2022), control theoretical approaches have been proposed to design key-agreement schemes leveraging the asymmetry in the CPS model knowledge available to the defender and adversary.

### Contribution

Existing control-theoretical solutions targeting generic CPSs (Lucia and Youssef, 2020, 2022) are developed under the assumption of linear plant dynamics (which is not the case for mobile robots) and have never been tested on a real testbed. Consequently, in a nutshell, this work presents the following theoretical and practical contributions:

* It extends the key-establishment solution in (Lucia and Youssef, 2022) to deal with the non-linear dynamics of mobile robots.
* It experimentally validates, using a remotely maneuvered Khepera IV 2 mobile robot, the performance and the capacity of the proposed control theoretical key-agreement scheme.

Footnote 2: [http://www.k-team.com/khepera-iv](http://www.k-team.com/khepera-iv)

### Notation and Paper Organization

The set of real numbers and the set of real-valued column vectors of dimension \(n_{r}>0\) are denoted by \(\mathds{R}\) and \(\mathds{R}^{n_{r}}\), respectively. \(M\in\mathds{R}^{n_{r}\times n_{c}}\) denotes a real-valued matrix of size \(n_{r}\times n_{c}\). Moreover, \(I_{n_{1}}\in\mathds{R}^{n_{1}\times n_{1}}\), \(0_{n_{0}}\in\mathds{R}^{n_{0}\times n_{0}}\), and \(1_{n_{1}}\in\mathds{R}^{n_{1}}\) denote the identity matrix, zero matrix, and all-ones column vector, respectively. The sets of non-negative integers and positive integers are denoted by \(\mathds{Z}_{+}\) and \(\mathds{Z}_{>0}\). The transpose and inverse of a matrix \(M\) are denoted by \(M^{T}\) and \(M^{-1}\), respectively.
Given a random variable \(v\in\mathds{R}^{n_{r}}\), \(v\sim\mathcal{N}(\mu_{n_{r}},\,\Sigma_{n_{r}})\) indicates a random variable normally distributed with mean \(\mu_{n_{r}}\in\mathds{R}^{n_{r}}\) and covariance matrix \(\Sigma_{n_{r}}>0\in\mathds{R}^{n_{r}\times n_{r}}\). Given an event \(E\), the probability of occurrence of such an event is denoted by \(P(E)\). Given a binary string \(s\in\{0,1\}^{n_{s}}\), \(s[i]\) denotes the \(i\)-th bit of \(s\). The rest of the paper is organized as follows. In Section 2, first, the robot model and the adversary are presented; then, the considered key-establishment problem is stated. In Section 3, the proposed protocol for the key agreement is described. Experimental results obtained using a Khepera IV differential-drive robot are presented in Section 4. Finally, Section 5 concludes the paper with some final remarks.

## 2 Preliminaries and Problem Formulation

**Definition 1**: Given three positive integers \(n_{c}\in\mathds{Z}_{>0},k_{c}\in\mathds{Z}_{>0},d_{c}\in\mathds{Z}_{>0}\), a linear Error Correcting Code (ECC) defines a linear transformation of a binary string \(s\in\{0,1\}^{k_{c}}\) into a subspace \(\mathcal{C}\subseteq\{0,1\}^{n_{c}}\) of cardinality \(2^{k_{c}}\) such that

* \(\forall(c_{1},c_{2})\in\mathcal{C},c_{1}\neq c_{2}\), the Hamming distance \(d_{H}(c_{1},c_{2})\geq d_{c}\);
* the maximum number of errors that can be corrected is \(\frac{d_{c}-1}{2}\).

In what follows, we consider a scenario where a mobile robot is manoeuvred by a networked controller and the network infrastructure is vulnerable to eavesdropping attacks.

### Robot Model

Among the different existing categories of mobile robots, wheeled mobile robots are very common for ground vehicles, and they find application in different domains such as surveillance and warehouse automation. Moreover, among the nonholonomic configurations, the differential-drive structure, characterized by two rear independently-driven wheels and one or more front castor wheels for body support, is often adopted in industry (Martins et al., 2017). A schematic of a differential-drive robot is shown in Fig. 1(a). The pose of a differential-drive robot is described by the planar coordinates \((p^{x},p^{y})\) of its center of mass and orientation \(\theta\) (see Fig. 1(a)). By resorting to the forward Euler discretization method and a sampling time \(T>0\), the discrete-time kinematic model of the differential-drive is given by (De Luca et al., 2001):

\[\begin{split} p^{x}(k+1)&=p^{x}(k)+\frac{Tr}{2}\cos\theta(k)(\omega_{r}(k)+\omega_{l}(k))+\zeta^{p^{x}}(k)\\ p^{y}(k+1)&=p^{y}(k)+\frac{Tr}{2}\sin\theta(k)(\omega_{r}(k)+\omega_{l}(k))+\zeta^{p^{y}}(k)\\ \theta(k+1)&=\theta(k)+\frac{Tr}{D}(\omega_{r}(k)-\omega_{l}(k))+\zeta^{\theta}(k)\end{split} \tag{1}\]

where \(r>0\) is the radius of the wheels, \(D>0\) the rear axle length, and \(u^{D}=[\omega_{r},\,\omega_{l}]^{T}\in\mathds{R}^{2}\) the control input vector, which consists of the angular velocities of the right and left wheel, respectively. \(\zeta(k)=[\zeta^{p^{x}}(k),\zeta^{p^{y}}(k),\zeta^{\theta}(k)]^{T}\sim\mathcal{N}(0,\mathcal{W})\) is the process noise with \(\mathcal{W}\in\mathds{R}^{3\times 3}\). Let \(x(k)=[p^{x}(k),p^{y}(k),\theta(k)]^{T}\in\mathds{R}^{3}\) denote the robot's state vector.
It is assumed that \(x(k)\) can be estimated leveraging the measurement vector \(y(k)\in\mathds{R}^{n_{p}}\), \(n_{p}>0\), obtained via odometric calculations and/or exteroceptive (e.g., sonar, laser) sensors (D'Alfonso et al., 2015), i.e.,

\[y(k)=h(x(k))+\xi(k) \tag{2}\]

where \(h(x(k))\) denotes the nonlinear output equation, and \(\xi(k)\sim\mathcal{N}(0,\mathcal{V})\), \(\mathcal{V}\in\mathds{R}^{n_{p}\times n_{p}}\), the measurement noise, uncorrelated with \(\zeta(k)\). By denoting with \(v(k)\) and \(\omega(k)\) the linear and angular velocities of the center of mass of the robot, it is possible to apply to (1) the transformation

\[\left[\begin{array}{c}v(k)\\ \omega(k)\end{array}\right]=H\left[\begin{array}{c}\omega_{r}(k)\\ \omega_{l}(k)\end{array}\right],\quad H:=\left[\begin{array}{cc}\frac{r}{2}&\frac{r}{2}\\ \frac{r}{D}&\frac{-r}{D}\end{array}\right] \tag{3}\]

and describe the robot behaviour by means of the following unicycle model (see Fig. 1(b)):

\[\begin{split} p^{x}(k+1)&=p^{x}(k)+Tv(k)\cos\theta(k)+\zeta^{p^{x}}(k)\\ p^{y}(k+1)&=p^{y}(k)+Tv(k)\sin\theta(k)+\zeta^{p^{y}}(k)\\ \theta(k+1)&=\theta(k)+T\omega(k)+\zeta^{\theta}(k)\end{split} \tag{4}\]

where \(u^{U}(k)=[v(k),\omega(k)]^{T}\in\mathds{R}^{2}\) is the control input vector of the unicycle.

### Adversary Model

We assume a passive adversary capable of eavesdropping on the control inputs and sensor measurements transmitted between the plant and the networked controller, see Eve in Fig. 2. We also assume that the adversary is aware that the robot is a differential-drive robot but might not have exact knowledge of all the robot's parameters (e.g., \(T,r,D,\mathcal{W}\)) and the robot's measurement function (e.g., \(h(\cdot)\) and \(\mathcal{V}\)). Therefore, we assume that the adversary has the following model:

\[\begin{split} p_{a}^{x}(k+1)&=p_{a}^{x}(k)+\frac{T_{a}r_{a}}{2}\cos\theta_{a}(k)(\omega_{r}(k)+\omega_{l}(k))+\zeta_{a}^{p^{x}}(k)\\ p_{a}^{y}(k+1)&=p_{a}^{y}(k)+\frac{T_{a}r_{a}}{2}\sin\theta_{a}(k)(\omega_{r}(k)+\omega_{l}(k))+\zeta_{a}^{p^{y}}(k)\\ \theta_{a}(k+1)&=\theta_{a}(k)+\frac{T_{a}r_{a}}{D_{a}}(\omega_{r}(k)-\omega_{l}(k))+\zeta_{a}^{\theta}(k)\\ y_{a}(k)&=h_{a}(x_{a}(k))+\xi_{a}(k)\end{split} \tag{5}\]

where \(\zeta_{a}=[\zeta_{a}^{p^{x}}(k),\,\zeta_{a}^{p^{y}}(k),\zeta_{a}^{\theta}(k)]^{T}\sim\mathcal{N}(0,\mathcal{W}_{a})\), \(\xi_{a}(k)\sim\mathcal{N}(0,\mathcal{V}_{a})\), and \((T_{a},r_{a},D_{a},h_{a}(\cdot),\mathcal{W}_{a},\mathcal{V}_{a})\) are the adversary's estimates of the robot's model (1)-(2).

**Assumption 1**.: Let \(\mathcal{M}=\{T,r,D,\mathcal{W},h(\cdot),\mathcal{V}\}\) and \(\mathcal{M}_{a}=\{T_{a},r_{a},D_{a},\mathcal{W}_{a},h_{a}(\cdot),\mathcal{V}_{a}\}\) be the robot's model knowledge available to the controller's designer and to the adversary, respectively. Then,

\[\mathcal{M}\neq\mathcal{M}_{a} \tag{6}\]

**Remark 1**.: The model discrepancy (6) might arise for different reasons. First, the adversary might not be aware of the robot construction parameters \(r,D\) or the output function \(h(\cdot)\). Instead, the attacker might only be able to estimate them using identification techniques or by inspection (e.g., via cameras). Second, while the defender can estimate \(\mathcal{W},\mathcal{V}\) by performing offline experiments, see, e.g., (Antonelli and Chiaverini, 2007; D'Alfonso et al., 2015), the eavesdropper can only perform an online identification procedure relying on the robot's online operations, which might be unsuitable for system identification purposes.
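As a side illustration, models (1) and (3) can be simulated with a minimal Python sketch like the one below; the values of \(T\), \(r\), \(D\), and the process-noise covariance are the Khepera IV parameters reported in Section 4, while the wheel rates are assumptions chosen only to show the interface.

```python
# Minimal sketch of the discrete-time differential-drive kinematics (1) and
# the wheel-to-unicycle transformation (3). T, r, D, W use the Khepera IV
# values reported in Section 4; the example wheel rates are assumed.
import numpy as np

T, r, D = 0.2, 0.021, 0.1047   # sampling time [s], wheel radius [m], axle length [m]
W = 1e-2 * np.eye(3)           # process-noise covariance

def step(x, omega_r, omega_l, rng):
    """One step of model (1); x = [p_x, p_y, theta]."""
    px, py, th = x
    zeta = rng.multivariate_normal(np.zeros(3), W)
    return np.array([
        px + 0.5 * T * r * np.cos(th) * (omega_r + omega_l) + zeta[0],
        py + 0.5 * T * r * np.sin(th) * (omega_r + omega_l) + zeta[1],
        th + (T * r / D) * (omega_r - omega_l) + zeta[2],
    ])

# Transformation (3): wheel angular rates -> unicycle inputs [v, omega].
H = np.array([[r / 2.0,  r / 2.0],
              [r / D,   -r / D]])
v, omega = H @ np.array([10.0, 8.0])   # example wheel rates [rad/s] (assumed)

x = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(100):                   # drive forward while slowly turning left
    x = step(x, 10.0, 8.0, rng)
```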
Figure 1: Differential-drive and unicycle models.

### Problem Formulation

The key-agreement problem considered here can be stated as follows.

**Problem 1**: _Consider the robot and adversary models (1)-(6). Without resorting to traditional cryptographic schemes, design a key agreement protocol between the robot and the networked controller such that the keys of length \(n>0\) identified by the controller (\(\mathcal{K}_{c}\in\{0,1\}^{n}\)), robot (\(\mathcal{K}_{r}\in\{0,1\}^{n}\)) and attacker (\(\mathcal{K}_{a}\in\{0,1\}^{n}\)) are such that_

\[P\{\mathcal{K}_{c}\ =\mathcal{K}_{r}\}\approx 1\ \text{and}\ P\{\mathcal{K}_{c}\ \neq\mathcal{K}_{a}\}\approx 1 \tag{7}\]

## 3 Key Agreement Protocol

As proved in (Lucia and Youssef, 2020), the asymmetry (6) in the plant model knowledge is sufficient to ensure the existence of a Wyner wiretap-like channel in networked cyber-physical systems. This is leveraged here to design an encoding mechanism for the considered key-exchange problem. In particular, the proposed key-agreement protocol is developed under the following assumptions.

**Assumption 2**: _The available sensor measurements are sufficiently rich to allow the existence of an Unknown Input Observer (UIO) capable of simultaneously estimating \(x(k)\) and \(u^{D}(k)\) from the available measurement vector \(y(k)\). By denoting with \(\hat{x}(k)\) and \(\hat{u}^{D}(k)\) the estimated vectors, the UIO is abstractly modeled as the following recursive function_

\[[\hat{u}^{D}(k-1),\hat{x}(k)]=UIO(u^{D}(k),\hat{x}(k-1),y(k),\mathcal{M}) \tag{8}\]

_where \((\hat{u}^{D}(k-1),\hat{x}(k))\) and \((\hat{u}^{D}(k-2),\hat{x}(k-1))\) are the available estimations at time steps \(k\) and \(k-1\), respectively. Moreover, the eavesdropper is able to run the same UIO as in (8) with \(\mathcal{M}_{a}\) instead of \(\mathcal{M}\)._

In the sequel, we assume that the robot is equipped with a tracking controller which provides the control vector \(u^{U}(k),\forall\,k\), i.e.,

\[u^{U}(k)=\left[\begin{array}{c}v(k)\\ \omega(k)\end{array}\right]=f_{c}(x(k),x_{r}(k),\dot{x}_{r}(k),\ddot{x}_{r}(k)) \tag{9}\]

where \(f_{c}(\cdot,\cdot,\cdot,\cdot)\) denotes a generic controller and \(x_{r}(k)\in\mathds{R}^{3},\dot{x}_{r}(k)\in\mathds{R}^{3},\ddot{x}_{r}(k)\in\mathds{R}^{3}\) are the reference state, velocity, and acceleration vectors, respectively (De Luca et al., 2001). By referring to the networked control system architecture illustrated in Fig. 2, the idea behind the proposed key-agreement protocol can be described in four points:

_(P1)_ - The controller computes \(u^{U}(k)\) as in (9). Then, it generates two perturbed control inputs, namely \(u^{U}_{0}(k)\in\mathds{R}^{2}\) and \(u^{U}_{1}(k)\in\mathds{R}^{2}\), by adding and subtracting a small bias vector \(\Delta\in\mathds{R}^{2}\) to \(u^{U}(k)\), i.e.,

\[u^{U}_{0}(k)=u^{U}(k)+\Delta,\quad u^{U}_{1}(k)=u^{U}(k)-\Delta \tag{10}\]

where \(\Delta=[\Delta_{v},\Delta_{\omega}]^{T}\), \(\Delta_{v}\geq 0\), \(\Delta_{\omega}\geq 0\), and such that \(\Delta_{v}+\Delta_{\omega}>0\) (i.e., at least one between \(\Delta_{v}\) and \(\Delta_{\omega}\) must be strictly greater than zero). Finally, the differential-drive control inputs are computed as \(u^{D}_{0}(k)=H^{-1}u^{U}_{0}(k)\) and \(u^{D}_{1}(k)=H^{-1}u^{U}_{1}(k)\), see (3), and the pair \((u^{D}_{0}(k),u^{D}_{1}(k))\) is sent to the robot.

_(P2)_ - Once the robot receives \((u^{D}_{0}(k),u^{D}_{1}(k))\), its CPU unit is in charge of deciding which one of the two control inputs should be used.
To this end, it generates a random bit \(b(k)\in\{0,1\}\) and sends \(u^{D}_{b(k)}(k)\) to the actuators. Note that the bit \(b(k)\) and, consequently, the control signal applied to the robot (\(u^{D}_{b(k)}(k)\)) are unknown to the networked controller and to the eavesdropper. At each iteration, the robot appends \(b(k)\) to the local key \(\mathcal{K}_{r}\).

_(P3)_ - When the networked controller receives \(y(k)\), it can run the UIO (8) and obtain the estimated pair \((\hat{x}(k),\hat{u}^{D}(k-1))\). Moreover, since the pair \((u^{D}_{0}(k-1),u^{D}_{1}(k-1))\) is also known, the controller can estimate the random bit \(b(k-1)\) (used by the robot) as

\[\hat{b}(k-1)=\begin{cases}0&\text{if }d_{0}<d_{1}\\ 1&\text{if }d_{1}<d_{0}\end{cases} \tag{11}\]

where \(d_{0}\) and \(d_{1}\) are the distances between the estimated control input \(\hat{u}^{D}(k-1)\) and \((u^{D}_{0}(k-1),u^{D}_{1}(k-1))\), i.e.,

\[\begin{split} d_{0}(k-1)&=\|\hat{u}^{D}(k-1)-u^{D}_{0}(k-1)\|_{2},\\ d_{1}(k-1)&=\|\hat{u}^{D}(k-1)-u^{D}_{1}(k-1)\|_{2}\end{split} \tag{12}\]

At each iteration, the networked controller appends \(\hat{b}(k)\) to the local key \(\mathcal{K}_{c}\).

_(P4)_ - The adversary can run the UIO (8) with \(\mathcal{M}_{a}\) instead of \(\mathcal{M}\) and obtain a local estimation, namely \(\hat{b}_{a}(k-1)\), of \(b(k-1)\), to append to its local key \(\mathcal{K}_{a}\). However, given the model discrepancy (6), the covariance of the unknown input estimation error for the attacker is expected to be larger than the one obtained by the networked controller (Lucia and Youssef, 2022). Consequently, for a proper choice of \(\Delta\), it is expected that \(P\{\mathcal{K}_{c}\ =\mathcal{K}_{r}\}\approx 1\) and \(P\{\mathcal{K}_{c}\ \neq\mathcal{K}_{a}\}\approx 1\).

Note that the above described UIO-based decoding scheme might not be robust against possible model mismatches and/or process and measurement noises. To make the protocol more robust, we enhance its decoding operations by means of an Error Correcting Code (ECC) scheme and a feedback acknowledgment signal, namely \(ack\), which is sent by the controller along with the pair of control inputs. Assuming, for the sake of simplicity and clarity, a linear ECC, the ECC and the \(ack\) feedback signal are used as follows (refer to Definition 1 for the used notation and terminology):

Figure 2: Control architecture for the proposed key agreement protocol.

* The robot splits a randomly generated local key \(\mathcal{K}\) into a sequence of substrings \(s_{i}\). Each \(s_{i}\) is encoded into a codeword \(c_{i}\). Each bit of \(c_{i}\), namely \(c_{i}[j]\), is sequentially used to decide \(b(k)\) in _(P2)_, i.e., \(b(k)=c_{i}[j]\).
* The controller estimates \(\hat{b}(k)\) as in _(P3)_ and collects these estimates to obtain an estimate \(\hat{c}_{i}\) of the codeword \(c_{i}\). Then, the Hamming distance \(d_{\hat{c}_{i}}\) is evaluated: \[d_{\hat{c}_{i}}=\min_{c\in\mathcal{C}}d_{H}(c,\hat{c}_{i})\] (13) If \(d_{\hat{c}_{i}}\) is much smaller than the number of correctable errors, then the codeword is accepted, the binary string \(\hat{s}_{i}\) (associated to \(\hat{c}_{i}\) via the ECC) is appended to \(\mathcal{K}_{c}\), and a positive \(ack_{i}=1\) is sent. Otherwise, the codeword is discarded and \(ack_{i}=0\) is sent.
* The robot, for every received \(ack_{i}=1\), appends \(s_{i}\) to \(\mathcal{K}_{r}\).

The complete key-agreement protocol is summarized in Algorithm 1.
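Before Algorithm 1, the following is a minimal Python sketch of steps _(P1)_ and _(P3)_, i.e., Eqs. (10)-(12), written in the unicycle input coordinates for brevity (the \(H^{-1}\) map of (3) is omitted); the UIO estimate `u_hat` is taken as given, since the observer itself is reported in Algorithm 2 below, and the numeric example values are assumptions.

```python
# Minimal sketch of steps (P1) and (P3), Eqs. (10)-(12). The UIO-estimated
# input u_hat is assumed to be available from (8); the observer itself is
# not implemented here.
import numpy as np

def perturbed_inputs(u, delta):
    """Step (P1), Eq. (10): the two biased inputs sent to the robot."""
    return u + delta, u - delta

def estimate_bit(u_hat, u0, u1):
    """Step (P3), Eqs. (11)-(12): recover the robot's random bit by
    picking the candidate input closest to the UIO estimate."""
    d0 = np.linalg.norm(u_hat - u0)
    d1 = np.linalg.norm(u_hat - u1)
    return 0 if d0 < d1 else 1

# Example with a bias only on the linear velocity, as in the experiments:
u = np.array([0.10, 0.05])               # nominal [v, omega] (assumed values)
delta = np.array([0.04, 0.0])            # Delta_v > 0, Delta_omega = 0
u0, u1 = perturbed_inputs(u, delta)
b_hat = estimate_bit(u0 + 0.005, u0, u1)  # a noisy estimate near u0 -> bit 0
```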
```
/* Robot */
Initialization: Generate K, set K_r = ∅, and split K into sub-strings s_i ∈ {0,1}^{k_c}.
Sequentially encode each s_i into a codeword c_i ∈ C ⊆ {0,1}^{n_c}.
At each time step k:
  - Sequentially use each bit c_i[j] of c_i to pick b(k) = c_i[j] and apply u_{b(k)}^D(k) to the robot.
  - When all n_c bits of c_i have been used, the robot receives ack ∈ {0,1} from the controller:
      if ack == 1 then s_i is appended to K_r
      else s_i is discarded
      end if

/* Controller */
Initialization: Set K_c = ∅.
At each time step k:
  - The pair (û^D(k-1), x̂(k)) and b̂(k-1) are estimated using (8) and (11), respectively.
  - b̂(k-1) is appended to the estimated codeword ĉ_i.
  - When n_c bits of ĉ_i have been estimated, the distance d_{ĉ_i} is computed using (13):
      if d_{ĉ_i} << (d_c - 1)/2 then
        the codeword ĉ_i is considered valid;
        ŝ_i is decoded from ĉ_i and appended to K_c; send ack = 1
      else
        the codeword ĉ_i is considered invalid and discarded; send ack = 0
      end if
  - Compute (u_0^D(k), u_1^D(k)) as in (10) and send it.
```
**Algorithm 1** Proposed Key Agreement Protocol

_Remark 2_.: The bias \(\Delta\) in (10) and the ECC parameters \((n_{c},k_{c},d_{c})\) are design parameters that can be tuned to achieve \(P\{\mathcal{K}_{c}\,=\mathcal{K}_{r}\}\approx 1\). Moreover, to ensure the correctness of the exchanged key, the controller and the robot can always publicly verify its correctness by exchanging the hash values associated to \(\mathcal{K}_{c}\) and \(\mathcal{K}_{r}\). Moreover, to eliminate the partial key knowledge gained by the adversary, the controller and the robot can also enhance the security of the exchanged key by means of standard privacy amplification procedures, see, e.g., (Van Assche, 2006; Bennett et al., 1995).

## 4 Experimental Results

In this section, the effectiveness of the proposed key-agreement protocol is verified by means of the experimental setup shown in Fig. 3. The setup consists of:

* A laptop where a tracking controller is implemented in Matlab.
* A Khepera IV differential-drive robot.
* A Bluetooth 4.0 communication channel between the robot and the laptop for the two-way exchange of data, i.e., control inputs and sensor measurements.

### Khepera IV robot

The Khepera-IV robot, produced by K-Team, is a differential-drive robot whose discrete-time kinematic model is as in (1), where \(r=0.021\,[m]\), \(D=0.1047\,[m]\), \(\mathcal{W}=10^{-2}I_{3}\), and the maximum angular velocity of the wheels is \(38\,[rad/s]\). On the other hand, the used measurement vector \(y(k)\) consists of the wheel encoder measurements that, via odometric calculations, allow an estimation of the entire state of the robot (De Luca et al., 2001). Consequently, the output equation (2) is modeled as \(y(k)=x(k)+\xi(k)\) with \(\xi(k)\) a Gaussian noise with covariance matrix \(\mathcal{V}=10^{-4}I_{3}\). In the performed experiments, the robot's processing unit is equipped with a server that receives and sends, via Bluetooth, the control inputs and sensor measurements. The used sampling time is \(T_{s}=0.2\,[s]\).
### UIO, Tracking Controller, and Reference Trajectory

Figure 3: Experimental setup.

_UIO:_ The unicycle model (4) under the control law (10) can be rewritten (for compactness) as

\[x(k+1)=f(x(k),u^{U}(k)+\Delta_{u})+\zeta(k), \tag{14}\]

with \(\Delta_{u}\) the unknown bias of value \(\pm\Delta\). Then, the extended Kalman filter with unknown input estimation algorithm proposed in (Guo, 2018, Appendix A) has been used to implement the UIO module (8). For completeness, the UIO operations, adapted to the considered setup, are reported in Algorithm 2, where \(P_{0}^{x}=0_{3\times 3}\), \(\hat{x}_{0}=[0,0,0]^{T}\), and \(A_{k}\), \(B_{k}\), \(G_{k}\) are the matrices characterizing the linearization \(x(k+1)=A_{k}x(k)+B_{k}u^{U}(k)+G_{k}\Delta_{u}(k)\) of (14) along the state and input trajectories, i.e.,

\[A_{k}\triangleq\left.\frac{\partial f}{\partial x}\right|_{(\hat{x}_{k|k},u^{U}(k))},\;\;B_{k}\triangleq\left.\frac{\partial f}{\partial u}\right|_{(\hat{x}_{k|k},u^{U}(k))},\;\;G_{k}\triangleq\left.\frac{\partial f}{\partial\Delta_{u}}\right|_{(\hat{x}_{k|k},u^{U}(k))} \tag{15}\]

Consequently, \(\hat{u}^{D}(k-1)=H^{-1}(u^{U}(k-1)+\hat{\Delta}_{u}(k-1))\).

```
Input: u(k-1), x̂_{k-1}, y(k)
Output: x̂_k, Δ̂_u(k-1)

/* Input Estimation */
P̃_{k-1} = A_{k-1} P^x_{k-1} (A_{k-1})^T + W
R̃*_k = P̃_{k-1} + V
Ξ_k = (G_{k-1})^T (R̃*_k)^{-1}
M_k = (Ξ_k G_{k-1})^{-1} Ξ_k
Δ̂_u(k-1) = M_k (y(k) - f(x̂_{k-1}, u(k-1)))
P^{Δ_u}_{k-1} = M_k R̃*_k (M_k)^T

/* State Prediction */
x̂_{k|k-1} = f(x̂_{k-1}, u(k-1) + Δ̂_u(k-1))
Φ_k = (I - G_{k-1} M_k)
Ã_{k-1} = Φ_k A_{k-1}
Q̃_{k-1} = Φ_k Q_{k-1} (Φ_k)^T + G_{k-1} M_k R_k (M_k)^T (G_{k-1})^T
P^x_{k|k-1} = Ã_{k-1} P^x_{k-1} (Ã_{k-1})^T + Q̃_{k-1}

/* State Estimation */
Γ_k = G_{k-1} M_k
R̃_k = P^x_{k|k-1} + R_k + Γ_k R_k + R_k (Γ_k)^T
L_k = (P^x_{k|k-1} + R_k (M_k)^T (G_{k-1})^T) (R̃_k)^{-1}
x̂_k = x̂_{k|k-1} + L_k (y(k) - h(x̂_{k|k-1}))
Ψ_k = I - L_k
P^x_k = Ψ_k P^x_{k|k-1} (Ψ_k)^T + L_k R_k (L_k)^T - Ψ_k G_{k-1} M_k R_k (L_k)^T - L_k R_k (M_k)^T (G_{k-1})^T (Ψ_k)^T
```
**Algorithm 2** Non-Linear Unknown Input Observer

_Tracking controller:_ The robot is controlled using the nonlinear controller based on dynamic feedback linearization described in (De Luca et al., 2001, Eq. 5.18). By denoting the reference trajectory and its first and second derivatives along the \(p^{x}\) and \(p^{y}\) axes as \((p_{r}^{x},p_{r}^{y})\), \((\dot{p}_{r}^{x},\dot{p}_{r}^{y})\), and \((\ddot{p}_{r}^{x},\ddot{p}_{r}^{y})\), the control law is

\[\begin{array}{l}v(k)=\ddot{p}_{r}^{x}(k)+k_{p}^{x}(p_{r}^{x}(k)-p^{x}(k))+k_{d}^{x}(\dot{p}_{r}^{x}(k)-\dot{p}^{x}(k))\\ \omega(k)=\ddot{p}_{r}^{y}(k)+k_{p}^{y}(p_{r}^{y}(k)-p^{y}(k))+k_{d}^{y}(\dot{p}_{r}^{y}(k)-\dot{p}^{y}(k))\end{array} \tag{16}\]

In the performed experiments, the controller has been implemented in Matlab using \(k_{p}^{x}=k_{p}^{y}=1.10\) and \(k_{d}^{x}=k_{d}^{y}=0.80\).

_Reference Trajectory:_ The reference signal is the square-shaped trajectory shown in Fig. 6.
The square's vertices are \(\{(0,0),(1,0),(1,1),(0,1)\}\), and the timing laws for \((p_{r}^{x},p_{r}^{y})\), \((\dot{p}_{r}^{x},\dot{p}_{r}^{y})\), and \((\ddot{p}_{r}^{x},\ddot{p}_{r}^{y})\) have been obtained using the built-in Matlab function _cubicpolytraj_, which has been configured to travel each side of the square in \(17\,[s]\). In the performed experiments, the square trajectory is repeated three consecutive times.

### Perturbed control inputs and ECC configuration

_Perturbed control inputs:_ The pair \((u_{0}^{U}(k),u_{1}^{U}(k))\) has been obtained by adding a small perturbation only to the linear velocity command \(v(k)\) computed as in (16), i.e., \(\Delta_{v}>0\) and \(\Delta_{\omega}=0\), see (10).

_ECC configuration:_ A simple repetition code has been used to implement the ECC. Therefore, the string \(s_{i}\) consists of a single bit of \(\mathcal{K}\) (i.e., \(k_{c}=1\)) and the codewords \(c_{i}\) are vectors repeating \(s_{i}\)\(n_{c}\) times. In the performed experiments, we set \(n_{c}=3\), and a codeword is accepted only if the number of decoding errors is zero, i.e., \(d_{\hat{c}_{i}}=0\).

### Results

The proposed key-agreement protocol (Algorithm 1) has been evaluated for 10 equally spaced values of \(\Delta_{v}\in[0.02,0.45]\). For each \(\Delta_{v}\), the experiment has been repeated 10 times with different randomly generated keys \(\mathcal{K}\) of length 345 bits. The obtained results are shown in Figs. 4-7, where the boxplots show the median, minimum, and maximum values for each point. Fig. 4 shows the percentages of accepted codewords and correctly decoded/agreed bits. The number of accepted codewords (red boxplot) increases with \(\Delta_{v}\), which implies that the capacity of the key agreement protocol improves with the magnitude of the state shift \(\Delta_{v}\). Moreover, for \(\Delta_{v}\geq 0.035\) all the accepted bits (blue boxplot) are also correct. The latter is justified by the fact that by increasing \(\Delta_{v}\), the distance between \(u_{0}^{D}\) and \(u_{1}^{D}\) increases up to a point where the estimation errors caused by the process and measurement noise become negligible.
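For illustration, the repetition-code logic used in the experiments (\(k_{c}=1\), \(n_{c}=3\), and acceptance only with zero decoding errors) can be sketched in a few lines of Python; the function names are illustrative and not part of the actual Matlab implementation.

```python
# Minimal sketch of the repetition-code ECC used in the experiments
# (k_c = 1, n_c = 3): each key bit is repeated n_c times, and a received
# codeword is accepted only if all n_c estimated bits agree, i.e., zero
# decoding errors, mirroring the acceptance rule stated above.
N_C = 3

def encode(bit):
    """Repeat the single key bit s_i to form the codeword c_i."""
    return [bit] * N_C

def decode(received):
    """Return (bit, accept): accept only with zero decoding errors."""
    if all(b == received[0] for b in received):
        return received[0], True   # send ack = 1, append s_i to the key
    return None, False             # send ack = 0, discard the codeword
```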
2305.16063
Individuality in Swarm Robots with the Case Study of Kilobots: Noise, Bug, or Feature?
Inter-individual differences are studied in natural systems, such as fish, bees, and humans, as they contribute to the complexity of both individual and collective behaviors. However, individuality in artificial systems, such as robotic swarms, is undervalued or even overlooked. Agent-specific deviations from the norm in swarm robotics are usually understood as mere noise that can be minimized, for example, by calibration. We observe that robots have consistent deviations and argue that awareness and knowledge of these can be exploited to serve a task. We measure heterogeneity in robot swarms caused by individual differences in how robots act, sense, and oscillate. Our use case is Kilobots and we provide example behaviors where the performance of robots varies depending on individual differences. We show a non-intuitive example of phototaxis with Kilobots where the non-calibrated Kilobots show better performance than the calibrated, supposedly "ideal" one. We measure the inter-individual variations for heterogeneity in sensing and oscillation, too. We briefly discuss how these variations can enhance the complexity of collective behaviors. We suggest that by recognizing and exploring this new perspective on individuality, and hence diversity, in robotic swarms, we can gain a deeper understanding of these systems and potentially unlock new possibilities for their design and implementation of applications.
Mohsen Raoufi, Pawel Romanczuk, Heiko Hamann
2023-05-25T13:53:13Z
http://arxiv.org/abs/2305.16063v1
# Individuality in Swarm Robots with the Case Study of Kilobots: Noise, Bug, or Feature?

###### Abstract

Inter-individual differences are studied in natural systems, such as fish, bees, and humans, as they contribute to the complexity of both individual and collective behaviors. However, individuality in artificial systems, such as robotic swarms, is undervalued or even overlooked. Agent-specific deviations from the norm in swarm robotics are usually understood as mere noise that can be minimized, for example, by calibration. We observe that robots have consistent deviations and argue that awareness and knowledge of these can be exploited to serve a task. We measure heterogeneity in robot swarms caused by individual differences in how robots act, sense, and oscillate. Our use case is Kilobots and we provide example behaviors where the performance of robots varies depending on individual differences. We show a non-intuitive example of phototaxis with Kilobots where the non-calibrated Kilobots show better performance than the calibrated, supposedly "ideal" one. We measure the inter-individual variations for heterogeneity in sensing and oscillation, too. We briefly discuss how these variations can enhance the complexity of collective behaviors. We suggest that by recognizing and exploring this new perspective on individuality, and hence diversity, in robotic swarms, we can gain a deeper understanding of these systems and potentially unlock new possibilities for their design and implementation of applications.

\({}^{1}\)Science of Intelligence, Research Cluster of Excellence, 10587 Berlin, Germany \({}^{2}\)Institute for Theoretical Biology, Department of Biology, Humboldt Universitat zu Berlin, Berlin, Germany \({}^{3}\)Department of Electrical Engineering and Computer Science, Technical University of Berlin, Berlin, Germany \({}^{4}\)Department of Computer and Information Science, University of Konstanz, Konstanz, Germany

[email protected]

## Introduction

While in artificial swarms, such as swarm robotics, heterogeneity in software and hardware has only recently been appreciated, the concept is widely recognized in studies of natural (complex) systems, such as fish schools, animal groups, and humans. We first give examples of individuality in natural systems and then describe how it is viewed in examples of artificial systems.

### Diversity and complexity in natural systems

Diversity plays a significant role in the complexity of collective systems. The interplay of diversity and complexity in collectives is relevant in a variety of disciplines, such as physics, biology, economics, social science, and neuroscience, indicating a possible generality of the subject. According to Page (2010), three different types of diversity are distinguished: "_variation within_ a type, differences _across_ types, differences _between_ communities." We focus on the first type of diversity, aka inter-individual variation, in this paper. Nature across many systems increases and maintains diversity among individuals (Ravary et al., 2007). These variations are not limited to physiological or morphological differences but include behavioral and opinion diversities as well (del Mar Delgado et al., 2018). From a number of studies, we know that the complexity of system behaviors can stem from diversity (Page, 2010). Fish are a well-studied species in this regard. For example, Nakajima et al. (2007) studied fish and the development of differences in their left-side body muscles compared to those of the right side.
They report that the "righty" fish are more likely to be hooked on the right side of their mouth. Similarly, Liu et al. (2009) studied the asymmetric development of motor functions in fish. Laskowski et al. (2022) and Ehlman et al. (2022) also studied the development and emergence of individuality for twin female clonal fish. Seeley (2010) studied decision-making in bees, where the diversity of many individuals searching for a solution increases the chance of finding new options for nesting. Other studies focused on humans, rats, other social animals, and insects (Rivalan et al., 2011; Jeanson and Weidenmuller, 2014; Lonsdorf and Merz, 2017; Jeanson, 2019; Ward, 2019; Simmatis et al., 2020). The effect of variations on the performance of collective systems largely depends on the task. For example, in collective decision-making, some level of variation decreases the collective bias, but more variation does not necessarily improve its accuracy (Raoufi et al., 2021).

### Heterogeneity in Artificial Systems

Different from natural systems, the inter-individual variations in artificial systems are often overlooked. As mentioned above, we study the first type of diversity (Page, 2010). However, other types have recently received attention, for example, diversity in the composition of the population, that is, having different robotic platforms (species) within a collective (Prorok et al., 2017; Dorigo et al., 2020). In this paper, we focus on within-platform inter-individual differences, the so-called quasi-homogeneity (Hamann, 2018). We narrow the focus even further by excluding controllable or programmable variations (software heterogeneity), for example, robots with different control software, specialized in different tasks as by Dorigo et al. (2021). Rather, we explore the intrinsic variations that come naturally with the embodiment of robots and are an inseparable part of these systems. In most studies, the system behavior that emerges from the agent-agent and agent-environment interactions is already complex, so that assuming a homogeneous system is sufficient (Camazine et al., 2020). The simplifying assumption of homogeneity in artificial systems, and in particular swarm robotics, improves tractability. We divide such assumptions into two main groups: noise and error. For the first group, individuality is seen as agents deviating from the collective _norm_. To deal with this matter in the modeling, one increases the variation of the noise to the extent that it covers the inter-individual variation, resulting in an increase of (aleatoric) uncertainty in the model (Valdenegro-Toro and Mori, 2022), although it is in fact due to (epistemic) uncertainties of the system that are simply unknown to the observer. We highlight the possibility of extracting information from this "noise" that can be exploited to help us predict the behavior of the system more accurately. We use the example of heading bias for real Kilobots (Rubenstein et al., 2012) to elaborate on the concept. We argue that individual robots show persistent non-zero biases whose time correlation is infinitely large, which makes the noise assumption questionable. Another engineering solution following from the noise assumption is the attempt to calibrate robot sensors and motors. Although it reduces variations, the effect is only temporary and deteriorates over time; that is, calibrated robots eventually get decalibrated and deviate from the norm again.
We ask: what is the acceptable extent to which an engineer should be concerned about the decalibration of robots? For the example of the heading bias of Kilobots in an optimization task, we show that deviations from the ideal robot do not necessarily result in a performance decrease, but rather, counter-intuitively, enhance it in certain cases. The second approach is the regulation of deviations using control feedback (e.g., Wang et al., 2016). By interpreting deviations from the _desired_ behavior as an "error", which is meant to be regulated by the control system, the robot constantly tries to modulate its natural deviations and to minimize the error. This requires a feedback signal to form a closed loop (Meindl et al., 2021). However, in minimal swarm robots with simple, noisy perception and stochastic actuators, the feedback solution is either expensive or impossible. We ask: should we treat individuality as noise, or a bug, and hence try to solve it? Or is it rather a feature that the individual (or the collective) can exploit? Nature has shown a great ability to increase diversity and to find ways to take advantage of it. Given that most swarm robotic systems are bio-inspired, it seems even mildly ironic to ignore or even _fix_ this feature. In the remainder of the paper, we introduce heading bias to the model of a moving agent in 2D space and accordingly modify the Kilobot simulator in ARGoS (Pinciroli et al., 2012). Then, we report our results on the heterogeneity of heading bias for Kilobots and how it develops over time. In the third section, we investigate the effect of heterogeneity in phototaxis and random walk. We compare the results of three different parameters for phototaxis and discuss how learning and evolutionary algorithms are influenced by individuality. Lastly, we measure, report, and discuss variations in sensing and internal frequency as other aspects of individuality. We also mention how complexities in collective decision-making and synchronization stem from individuality, which gives insight into future work and, together with the previous sections, sets the stage for further investigation of individuality in collectives.

Figure 1: Trajectories of robots moving on a straight line (b-d are from simulation; e, f are real robot experiments). a) The color map of heading bias, showing the left- and right-biased robots in red and blue, respectively (used only in figures c, d). b) The red line is the trajectory of 100 ideal, identical robots without noise. The black lines are different realizations of the trajectories of identical robots with Gaussian noise. c) The noiseless trajectories of heterogeneous robots with a uniform heading bias distribution. d) The trajectories of heterogeneous robots with noise. e) Initial results of 4 Kilobots moving on a supposedly straight line for 4 independent repetitions starting at (0,0). Each color corresponds to a distinct robot. f) Trajectories of one robot (with ID number: 33784) moving forward for 8 different experiment trials starting at (0,0).

## Heterogeneity in Motion

In this section, we study how robots with heterogeneous motor abilities perform tasks differently. First, we model the motion of a differential-wheeled robot and describe the effect of heading bias on its motion in simulation. Second, we report the data we analyzed from real Kilobot experiments, where robots are supposed to walk in a straight line.

### Model

Variations in actuation abilities among agents lead to different movement dynamics.
We model a robot moving with speed \(|v|\) in a 2-dimensional \((x,y)\) space with a heading angle of \(\theta\) using these equations of motion:

\[\left[\begin{array}{c}\dot{x}\\ \dot{y}\end{array}\right]=\left[\begin{array}{c}v_{x}\\ v_{y}\end{array}\right]=\left|v\right|\left[\begin{array}{c}\cos(\theta)\\ \sin(\theta)\end{array}\right],\quad\dot{\theta}=\omega \tag{1}\]

For a differential-wheeled robot, the component of the velocity perpendicular to the heading is zero. The remaining components are the linear and rotational velocities (\(v\) and \(\omega\), respectively). Assuming noisy actuation, we map the nominal angular rates of the right (\(m_{\text{R}}\)) and left (\(m_{\text{L}}\)) motors to the robot's linear and rotational velocities using the corresponding coefficients (\(c_{v}\) and \(c_{\omega}\)) and add a noise term:

\[v=c_{v}(m_{\text{R}}+m_{\text{L}})+\eta_{v},\qquad\omega=c_{\omega}(m_{\text{R}}-m_{\text{L}})+\eta_{\omega}, \tag{2}\]

where \(\eta_{v}\) and \(\eta_{\omega}\) are the linear and rotational Gaussian noise, respectively. In an ideal case, the _desired_ and actual motor velocities are the same. However, as is well known in robotics, in a real-world setting the zero-error assumption can only be achieved by constantly regulating the error. In an open-loop system without any feedback, it is inevitable that the desired and the actual signals drift from each other. Moreover, even for identical robots that are mass-produced, each motor of each robot might deviate from its nominal properties over time, leading to actuation heterogeneity. Furthermore, differences in mass, inertia, unbalanced distribution of mass, or friction coefficients add to the variation in the dynamics of motion. In this paper, we do not aim to explain the sources of these uncertainties and the differences between the motors causing the problem, which might be cumbersome or difficult to measure. Instead, we focus on a higher abstraction at the level of the equation of motion (Eq. 1), which is universal across systems, either artificial or natural. Considering all the aforementioned factors causing the non-ideal motion of robots, we have the following equation of motion for each individual robot \(i\):

\[\left[\begin{array}{c}\dot{x}^{i}\\ \dot{y}^{i}\end{array}\right]=\left(\left|v^{i}\right|+\eta_{v}\right)\left[\begin{array}{c}\cos(\theta^{i})\\ \sin(\theta^{i})\end{array}\right],\quad\dot{\theta}^{i}=\omega^{i}+\eta_{\omega} \tag{3}\]

Studies have shown that inter-individual variation in speed (\(|v^{i}|\)) among agents leads to complex collective motion (Peruani and Aranson, 2018; Klamser et al., 2021). For the case of Kilobots, Pinciroli et al. (2018) reported "strong inter-individual variations" for linear speed and measured the variance of the speed distribution (or equivalently \(\eta_{v}\)) for _calibrated_ Kilobots. We focus on the rotational motion and the heterogeneity in the heading bias of Kilobots.

### Heterogeneity in Heading Bias

Here, we provide simple measurements of individuality in robots to prepare our more sophisticated study of its impact on behavior. To measure the heterogeneity in heading bias, we only consider simple straight-forward motion as the ideal motion, where the desired rotational velocity is zero (\(\omega^{i}_{\text{des}}=0\)). We conducted experiments with real Kilobots and in simulation using the ARGoS simulator. We program robots to move in a straight line and log their position.
For the real robot experiments, we record videos and post-process the video frames using an object detection algorithm from the OpenCV library (Bradski, 2000). For the simulation, we use the Kilobot extension of ARGoS (Pinciroli et al., 2018) and modify it by adding an explicit heading bias to the code1.

Footnote 1: [https://github.com/mohsen-raoufi/Kilobots-Individuality-ALife-23](https://github.com/mohsen-raoufi/Kilobots-Individuality-ALife-23)

If we choose to reduce the heading bias heterogeneity to mere noise, we assume the following stochastic differential equation for the turning rate of each individual \(i\): \[\dot{\theta}^{i}=\eta_{\omega},\qquad\eta_{\omega}\sim\mathcal{N}(\mu,\,\sigma^{2}). \tag{4}\] Notice that the statistical properties of \(\eta_{\omega}(\mu,\,\sigma)\) carry no index \(i\), as they are meant as a population-wide, one-fits-all model. To show the trajectories from this type of model, we modified the simulator by adding Gaussian noise \(\mathcal{N}\) to the nominal speed of each motor (\(m_{\text{R}},m_{\text{L}}\), Eq. 2). The result of such a model is a correlated random walk (Fig. 1-b) that does not qualitatively cover all trajectories we observe in real robot experiments (Fig. 1-e). The assumption of a one-fits-all noise model results in a mismatch between model and reality. Fitting the data of real robots to this model, we get a joint (ensemble) distribution for heading bias with mean close to zero (\(\mu\approx 0\)) and relatively large variance (see the far-right, rotated histogram in Fig. 2). This is similar to the distribution of speeds (\(|v|\)) reported by Pinciroli et al. (2018). With this model, we get a high variance (seemingly aleatoric uncertainty) that is in fact reducible, but only if we consider the individuality of robots, as we do next.

Instead, if we allow each robot its individual (Gaussian) noise model, we get: \[\dot{\theta}^{i}=\eta^{i}_{\omega},\qquad\eta^{i}_{\omega}\sim\mathcal{N}(\mu^{i},\,(\sigma^{i})^{2}). \tag{5}\] The dimension added to the parameter space (the raised index \(i\)) enables us to model the individuality of each robot, which leads to lower aleatoric uncertainty. We show the distribution of heading bias for each robot in Fig. 2. Here, the data for robots with a tendency to turn left are shown in red (blue for right). The data confirm our reasoning: robots have persistent, non-zero (\(\mu^{i}\neq 0\)) heading biases that are individual-specific. The variation of the heading bias across different experiments differs from robot to robot; that is, some robots are more consistent in their heading bias than others (differences across \(\sigma^{i}\)). Nonetheless, the consistency of the heading bias of each robot suggests a strong inter-agent variation among Kilobots. Furthermore, these intra-individual variations are on average smaller than the ensemble variance (\(\sigma^{i}<\sigma\)).

Figure 2: Mean heading bias (over time) for each experiment grouped by robot ID, sorted by mean heading bias, showing the individuality of heading bias for Kilobots. The histogram on the right is the ensemble distribution obtained if we remove the dimension of individuality.

To simulate the heading bias, we add a deterministic off-balancing term to the left and right motor speeds in Eq. 2: \[\tilde{m}^{i}_{\text{R}}=m^{i}_{\text{R}}+\delta^{i},\] \[\tilde{m}^{i}_{\text{L}}=m^{i}_{\text{L}}-\delta^{i}. \tag{6}\] This non-zero rotational velocity generates circular trajectories. We illustrate the results of the simulations as a proof of concept in Fig.
1-d, c, with and without noise, respectively. We obtain trajectories in simulation with a curvature similar to that of the real robot experiments, which was not possible even by increasing the variance of the zero-mean noise in Eq. 4. Each line shows the trajectory of a specific robot with a unique heading bias, color-coded by the spectrum shown in Fig. 1-a. Extreme heading biases result in circles of small radii that resemble non-calibrated Kilobots, similar to what we observe in the real robot experiments of Fig. 1-e (the orange curves).

### Development of Individuality over Time

The development of individuality in natural systems has been linked to a variety of environmental, social, and behavioral factors. For artificial systems, the source of such developing hardware individualities can be traced to the aging of mechanical components, such as fatigue; to major disturbances, such as damage; or simply to a change in the energy source, to name but a handful of causes. For the heading bias of Kilobots in particular, we observe that over the course of the experiments, robots that are initially well-calibrated gradually lose their calibration and, with it, their ability to go straight. This decalibration process is another reason why individuality emerges in synthetic systems and why calibration is not a lasting solution. Different platforms most likely lose their calibration on different timescales: for Kilobots, we observe that the heading bias of a single robot changes slightly over the course of different repetitions, whereas for a more sophisticated robot it may take longer. The results of an experiment with 8 repetitions are shown in Fig. 1-f. The robot moved on a rather straight line in the initial experiment (green line), whereas in the later experiments the line started to bend toward the robot's left side (yellow line). This is not meant to be a full study of the concept but to indicate the real-world effects.

## Example Scenarios with Motion

To elaborate further on how individuality in motor abilities, and in particular heading bias, impacts the performance of robots, we pick two example behaviors: phototaxis (as exploitation) and random walk (as exploration). Phototaxis is a spatial, sample-based optimization behavior that maximizes the objective reward for the robot, which, in this case, is a light intensity distributed in a convex shape. The second example is designed for robots to perform a random walk while gathering information from the environment. This example shows how different heading biases change the internal state of the robots, namely the diversity or confidence of the information gathered during exploration.

Figure 3: Phototaxis with real Kilobots. a) A snapshot of one robot (same as in c) doing the deterministic phototaxis (\(P_{\text{R}}=1.0\)) around the center of the source, with the decaying red trace of its trajectory obtained by post-processing the video. Without loss of generality of the results or the algorithm, and for the sake of visualization, we inverted the light distribution so that the objective for the robot is to descend on the light distribution. b-d) Trajectories of 3 robots with different heading biases, each showing 3 separate repetitions from the same initial point (red plus marker), doing phototaxis to reach the center of the distribution (blue star marker). The points of the trajectory are shown in deeper red over time.
### Phototaxis for Real Kilobots

Phototaxis is an example behavior showing how simple organisms approach the center of an attractive light source (Schmickl et al., 2010; Baltieri and Buckley, 2017). The algorithm is simple enough to be implemented on minimal robots such as Kilobots. We added a light conductor on top of the light sensor of the Kilobots, as in our previous work (Raoufi et al., 2023), so that individual robots can perform point-wise, sample-based phototaxis in space. Our real robot experiments show that, despite the simplicity of the algorithm and the heterogeneity in motion, Kilobots can locate the center of the light source and exploit the reward. Our phototaxis algorithm is different from the random search explained in (Pelkonen, 2018) and the collective phototaxis in (Holland, 2019). Our greedier algorithm boils down to the following procedure: if the intensity of the light sample gets closer to the objective intensity, the robot keeps going forward for a predetermined time duration; otherwise, it turns. The robot stops if it gets close enough (defined by a threshold) to the objective intensity. The turning direction is determined by a parameter \(P_{\text{R}}\), which is the probability of turning to the right (with \(P_{\text{L}}=1-P_{\text{R}}\)). To study how this parameter affects the performance of Kilobots, we consider three different configurations:

* (asymmetric) deterministic turn to the right (\(P_{\text{R}}=1.0\)),
* symmetric stochastic turn to the left and right (\(P_{\text{R}}=0.5\)),
* and asymmetric stochastic turn to the left and right (\(P_{\text{R}}=0.25\)).

Our experiments with real robots (see Fig. 3) for the first algorithm suggest that robots differ in their performance in approaching the source center. With \(P_{\text{R}}=1.0\), a robot with a left heading bias (Fig. 3-b) has a lower performance compared to robots with either negligible (Fig. 3-c) or right heading biases (Fig. 3-d). In some cases, too strongly left-biased robots failed to get closer to the center and left the area of interest.

### Phototaxis for Kilobots in Simulation

To study the effect of heading bias on the performance of phototaxis for Kilobots, we conduct experiments in ARGoS with the modified simulator explained above. We test 100 simulated robots with heading biases uniformly distributed in the range \([-0.04,0.04]\). Each robot executes the phototaxis algorithm for 100 independent Monte Carlo simulations (\(N_{\text{MC}}=100\)). We distribute 100 (\(=N_{\text{MC}}\)) initial points and heading directions once for all robots, in order to make the simulations comparable. Each experiment lasts 200 seconds, with a time step of \(0.1\) s. We studied different simulation configurations with and without actuation noise (as in Fig. 1-c, d). We illustrate the trajectories of 3 robots with different heading biases in Fig. 4. For the rest of the paper, we discuss the results of phototaxis with noise. As the performance metric (a cost), we consider the distance of the robot to the center of the source, averaged over the last 100 time steps. We illustrate the results for each trial as a point in Fig. 5. As expected, the robots vary greatly in their performance. This variation in performance would otherwise be ignored when assuming homogeneous robots. A key finding is that assumed "perfect" robots without bias are, on average, outperformed by "non-calibrated" robots (see Fig. 5-b, heading bias of \(\pm 0.023\)).
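As a sketch of the greedy sampling policy described above (this is not the deployed Kilobot firmware; intensity units and the stopping threshold are illustrative):

```python
import random

def phototaxis_step(sample, best, p_right, goal, stop_eps=5.0):
    """One decision of the greedy phototaxis policy: keep moving forward
    while samples get closer to the objective intensity, otherwise turn,
    with the turn direction drawn according to P_R."""
    if abs(sample - goal) < stop_eps:
        return "stop", best
    if abs(sample - goal) < abs(best - goal):   # improved: got closer
        return "forward", sample                # forward for a fixed duration
    direction = "turn_right" if random.random() < p_right else "turn_left"
    return direction, best
```

Sweeping `p_right` over 1.0, 0.5, and 0.25 reproduces the three configurations listed above.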
This relates to our observation with real robots in Fig. 3, where the biased robot achieves a higher reward. To elaborate further on the optimality of non-zero-bias robots, let us assume an evolutionary optimization algorithm that modifies the configuration of a robot (its heading bias) over generations for a given fixed phototaxis parameter, e.g., \(P_{\text{R}}=0.5\). The fact that the stable optimal heading biases are located at non-zero values would cause the evolutionary algorithm to incline toward more biased configurations and select them more often over generations. The attraction points depend on where the evolutionary optimization starts. For each of \(P_{\text{R}}=0.25,0.5\) there are two separate local optima, one with a positive and the other with a negative heading bias. This might give some insight into the development of diversity and heterogeneity in complex systems. It confirms that calling individuality a _bug_ or a flaw (with negative impacts) is not always justified. There are also other scenarios where being biased causes harm in an asymmetric manner; for example, a right-lateralized fish has an asymmetric chance of being hooked on one side of the body compared to the other (Nakajima et al., 2007).

In addition, we compare the performance of different algorithms. Each algorithm favors a specific range of heading biases. For the deterministic algorithm (Fig. 5-c), where robots always turn to the right (\(P_{\text{R}}=1.0\)), the algorithm favors right-biased robots more than the others. In comparison, for the algorithm with a higher chance of turning to the left (\(P_{\text{R}}=0.25\), Fig. 5-a), left-biased robots achieve higher rewards (lower cost). To highlight the effect of heterogeneity in optimization tasks, we imagine a learning problem where robots are supposed to learn the optimal value of \(P_{\text{R}}\). Given the results provided here, it is predictable that robots with non-identical heading biases converge to different optimal parameters. A left-biased robot would pick a lower \(P_{\text{R}}\) compared to a right-biased robot. In that light, we argue that tuning one parameter for all robots by optimizing only the performance of a single robot with a specific feature (e.g., a non-biased robot) might not be the best practice.

Figure 4: Simulated Kilobots doing phototaxis toward the light source (blue star) at the center (0,0) for \(P_{\text{R}}=1.0\). a-c) Trajectories of robots without noise for negative, zero, and positive heading-biased robots, respectively. d) The trajectory of a robot with noise and zero heading bias.

Another important point is the extent of acceptable "un-calibratedness", that is, the range of heading bias within which robots perform reasonably well (\(R_{\text{acc}}\)). To quantify \(R_{\text{acc}}\), we define a threshold (\(\delta_{\text{acc}}\)) on the performance, below which the criterion is satisfied. We show the acceptable range for \(\delta_{\text{acc}}=0.75\) m in Fig. 5-a-c with the green horizontal line. Another related metric is \(N_{\text{acc}}\), which counts the number of experiments (dots) performed below the threshold. We illustrate the performance metrics versus thresholds in the inset plots. We also compare the three algorithms in terms of the acceptable range (and number) versus the threshold in Fig. 6-a. With the highest threshold, all the dots have acceptable performance (\(N_{\text{acc}}=10\)k). The ranking of the most efficient algorithms changes as the threshold decreases.
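A minimal sketch of how these two acceptance metrics could be computed from the per-trial results (bias and cost arrays as in Fig. 5); the exact aggregation behind the figures is not specified in the text, so this is one plausible reading:

```python
import numpy as np

def acceptance_metrics(bias, cost, delta_acc):
    """N_acc: number of trials with cost below the threshold.
    R_acc: width of the heading-bias range whose mean cost (over the
    Monte Carlo trials of each bias value) stays below the threshold."""
    bias, cost = np.asarray(bias), np.asarray(cost)
    n_acc = int((cost < delta_acc).sum())
    uniq = np.unique(bias)
    mean_cost = np.array([cost[bias == b].mean() for b in uniq])
    accepted = uniq[mean_cost < delta_acc]
    r_acc = float(accepted.max() - accepted.min()) if accepted.size else 0.0
    return r_acc, n_acc
```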
Specifically, we focus on the range \([0,1]\) m (Fig. 6-b). The most efficient algorithm by this metric depends significantly on where we set the threshold. Apart from providing a performance comparison among algorithms, the acceptable range offers some degree of freedom from calibration. From an engineering point of view, not having to calibrate swarm robots would reduce the effort, energy, and cost required to maintain such large-scale systems.

If we ignore individuality, we get a joint distribution of performance for all robots (see Fig. 5-d). Following this simplified interpretation, we may draw conclusions that are either inaccurate or not generally valid. For example, the mean performance (denoted by the green line) suggests a higher performance for \(P_{\text{R}}=0.5\). However, this may not hold true for all robots. Also, the heavy upper tail for \(P_{\text{R}}=1.0\) cannot be explained unless we look through the second dimension, which is individuality. This figure is an example of the information that is neglected when ignoring the individuality dimension.

Figure 5: Performance of robots with different heading biases doing phototaxis with different parameters in simulation. a-c) Each dot represents the performance of a robot in one simulation trial and is color-coded based on its heading bias. The black line shows the mean value over 100 Monte Carlo repetitions. The inset plots show \(R_{\text{acc}}\) (red) and \(N_{\text{acc}}\) (blue) vs. the threshold \(\delta_{\text{acc}}\). d) The ensemble distribution of all robots together (the light pink violin plot) obtained by removing the individuality dimension. The black box plot shows the quantiles, and the green line is the mean of all data points, each shown by a purple dot.

Figure 6: a) Acceptable range \(R_{\text{acc}}\) (solid lines) and the number of acceptable experiments \(N_{\text{acc}}\) (dashed lines) for the three algorithms versus the acceptance threshold \(\delta_{\text{acc}}\). b) The same plot with the range of \(\delta_{\text{acc}}\) limited to [0,1].

### Random Walk

Another example scenario for single robots with heterogeneous heading biases is an exploration task, during which robots measure samples from the environment and gather information. Exploration plays an important role in decision-making, whether in individual or collective scenarios. Furthermore, exploration correlates with the internal states of the robots, for example, their estimate of the environmental observable (Ebert et al., 2020; Pfister and Hamann, 2022; Raoufi et al., 2023). In this section, we use the modified ARGoS simulator, with its ability to capture the heterogeneity of heading biases of Kilobots, in a bounded environment (see Fig. 7-a). The task for each robot is to perform a random walk and cover as much area as possible during its movement. To keep the performance metric general, we consider only the non-overlapping area coverage, regardless of the environmental distribution (Fig. 7-b). The area coverage measures the richness of the information that a robot gathers during exploration. We conducted 100 independent Monte Carlo simulations for each of the 200 robots with different heading biases and illustrate the performance of the robots in Fig. 7-c. The results suggest that the variation in the robots' performance is significant. In this case, the ideal robot achieved the highest performance, while the biased ones gathered less diverse information. The richer information caused by higher area coverage leads to higher confidence.
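A minimal sketch of such a grid-based, non-overlapping coverage measure (the paper does not specify its discretization, so the cell count here is illustrative):

```python
import numpy as np

def area_coverage(traj, arena=1.0, cells=50):
    """Fraction of grid cells of a bounded square arena (side `arena`,
    centered at the origin) visited at least once by the trajectory."""
    idx = ((np.asarray(traj) + arena / 2.0) / arena * cells).astype(int)
    idx = np.clip(idx, 0, cells - 1)
    visited = np.zeros((cells, cells), dtype=bool)
    visited[idx[:, 0], idx[:, 1]] = True
    return float(visited.mean())
```

Averaging this measure over Monte Carlo runs for each heading bias yields a curve analogous to Fig. 7-c.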
Consequently, robots with different heading biases will have heterogeneous certainty about the information they gathered during exploration. To highlight the importance of heterogeneity in confidence, we consider the case of collective estimation, where robots gather information from the environment and aggregate it to achieve a consensus, for example, on the mean value of the light distribution, as in Ebert et al. (2020) and Raoufi et al. (2023). We divide aggregation methods into two main categories according to whether or not they take the confidence of the information into the equation. The latter are usually referred to as naive methods, for example, the DeGroot model for social learning (Golub and Jackson, 2010). A big family of examples of the former are inference methods, such as Bayesian approaches (Hazla et al., 2021; Chin et al., 2022; Pfister and Hamann, 2022). Despite the simplifying homogeneity assumption in naive methods, collective scenarios using these methods manifest complex dynamics. Nonetheless, the heterogeneity of confidence among agents opens a new dimension in the complexity of information processing in collectives.

## Other Aspects of Heterogeneity

Besides heterogeneity in motion, we observe variation in other aspects of individual behaviors. In this section, we briefly describe, measure, and report the heterogeneity in sensing and in the natural frequencies of Kilobots.

### Heterogeneity in Sensing

Kilobots are equipped with a sensor that measures the ambient light intensity. Converted to a digital signal, the sensor output is a scalar variable in the range [0, 1023]. Although the precision of the sensor is considerably high, we observed and measured inter-individual variation among robots. To quantify the variation, we designed the following experiment. We place different robots at exactly the same position and orientation directly below the lens of a projector. The projector is connected to a computer that controls the light intensity of a gray screen projected onto the robot's sensor. We change the light intensity of the screen by altering the so-called V-value of the gray color in HSV space. We sweep the V-value from zero to 1 (black to white) for four repetitions, each followed immediately by the next, making a sawtooth pattern (Fig. 8-a). At the same time, the robot measures the intensity and sends it to the computer via serial communication. Using the periodicity of the signals, we trim the data of the different experiments to a single period (see the red box in Fig. 8-a). We compare the corresponding patch across different robots (Fig. 8-b). The results confirm that the inter-individual variation in light sensitivity among Kilobots is persistent and not negligible. To perceive the surrounding world, a Kilobot relies on sensing through its single ambient light sensor. The difference in how agents perceive the state of the environment can result in different decisions. For example, in a simple threshold-based binary decision, the inter-individual variation will cause disagreements among agents, even if the threshold is the same for all robots (Fig. 8-c; a minimal sketch of this effect follows below).

### Heterogeneity in Natural Frequency

The last aspect of individuality we discuss in this paper is the natural frequency of oscillating robots, due to small differences in their internal clock frequencies. Natural frequency, in its general meaning, relates to a large family of periodic behaviors: the natural frequency of a pendulum, of structures such as bridges, of oscillators, or simply of clocks.
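Here is a minimal sketch of how heterogeneous sensor offsets translate into disagreement in a threshold-based binary decision; the offsets are illustrative stand-ins for the measured per-robot curves of Fig. 8-b:

```python
import numpy as np

def agreement_counts(offsets, true_intensity, thresholds):
    """For each decision threshold, count the robots whose (offset-shifted)
    reading exceeds it, i.e., the robots voting 'bright' (cf. Fig. 8-c)."""
    readings = true_intensity + np.asarray(offsets)
    return [int((readings > t).sum()) for t in thresholds]

# 12 robots with illustrative per-robot sensor offsets: for thresholds near
# the spread of readings, the same rule splits the group into opposing votes.
offsets = np.random.default_rng(2).normal(0.0, 15.0, size=12)
counts = agreement_counts(offsets, true_intensity=500,
                          thresholds=range(450, 551, 10))
```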
The microprocessor inside a robot, specifically the Kilobot, relies on its internal (or external) clock to perform its operations. These clocks have a nominal frequency with a minuscule tolerance. We investigate if, and to what extent, Kilobots are heterogeneous in their frequency. Following an experiment, we also show that the heterogeneity in the internal frequency contributes to asynchrony in collectives. To measure the heterogeneity in the natural frequency of Kilobots, we program 49 robots to change their LED color (alternating between red and blue) every 30 interrupt pulses of the internal clock. The algorithm is as follows. Each robot counts the number of pulses it receives from the internal clock, resembling the phase (\(\phi\)) of the robot. We also define two states for robots, associated with red and blue. If the phase is below 30, the robot shows its red LED; otherwise, blue. The counter (and the phase) resets once it surpasses 60. To an observer of the robot, the LED alternates between red and blue almost every second.

Figure 8: Kilobot measurement of light intensity. a) The sensor value versus sample. b) 12 robots with heterogeneous light sensitivity reading the same light intensity. The black line shows the median over the different robots. c) The number of agreeing robots at a specific light intensity for different decision-making thresholds.

Figure 7: Random walk for a single robot in a bounded environment. a) A snapshot of a Kilobot in ARGoS, b) an example of the area explored by a robot during exploration (yellow area), c) area coverage versus heading bias.

We place all the robots on a lattice (Fig. 9-a) and initiate their programs at exactly the same time by sending a broadcast message (minor individual deviations cannot be excluded, but the results indicate good initial synchrony). During the experiment, no messages are exchanged between robots. We recorded and post-processed the video of the robots and labeled each robot by detecting the color of its LED. We count the number of times each robot switches and illustrate the distribution in Fig. 9-b, where we observe persistent heterogeneity in the natural frequencies. In Fig. 9-c, we show the ratio of the population in either the red or the blue state, summing to one. At each time step, we count the ratio of robots with a blue LED and draw a vertical blue line with a height equal to that ratio. The rest of the population is shown by the upper red line. A fully blue or red line denotes a fully synchronous collective state, which is the case for the initial state of the collective. Over time, the collective deviates from synchrony and reaches a point where the population is divided into equal halves, for example at the 2000-th time step. Based on this result, we argue that asynchrony arises naturally in these systems, resulting in delays and more complex dynamics on the network. There are two common approaches to addressing asynchrony in distributed artificial systems: to leave the system as is and study how the delay affects the dynamics on the network (Seuret et al., 2008; Tsianos and Rabbat, 2012); or to push the system toward a synchronous collective state via the interaction of agents, which is a complex behavior (Kuramoto, 1984; Strogatz, 2004; Ceron et al., 2023), as done, for example, in sensor networks (Sundararaman et al., 2005; Degesys et al., 2007). We examined synchronization in Kilobots following a Kuramoto model (Kuramoto, 1984) and show the collective behavior for different coupling strengths in Fig.
9-d-f. Neither a weak (Fig. 9-d) nor a too-strong coupling strength (Fig. 9-f) results in a synchronous state. The optimal value for achieving synchrony lies somewhere between the two (Fig. 9-e).

## Conclusion

Inspired by studies on natural systems and the complex behavior caused by inter-individual variations, we attempt to shed some light on the concept of individuality in swarm robotics. We found that individuality in robotic systems in general, and in swarm robots in particular, is often overlooked. We argue that robots have agent-specific, persistent features that are characteristic parts of them. These natural differences are either assumed to be "noise" or error and hence provoke solutions like calibration (as an offline solution) or regulation using feedback control (as an online solution). We argue that there is useful information in the variations which can be exploited to make more accurate models and hence predictions. We also showed that robots develop individuality over the course of experiments, and thus calibration is not always a lasting solution. Moreover, regulating the errors comes at the price of a feedback signal, which is usually too costly for minimal swarm robots. Furthermore, dropping individuality as a dimension of the problem space leads to increased uncertainty in the model. We observe, report, and measure the heterogeneity in motion, sensing, and frequency of real Kilobots, and show that the robots have agent-specific, persistent, non-zero-mean biases. With an accurate model of the heterogeneity in heading bias, we scaled up our studies and showed how different robots vary in their performance. Our results provide evidence that calling inter-individual variations a bug or a flaw is not always justified: they show a counter-intuitive comparison of the perfect and biased robots, with the perfect robot being outperformed by biased ones in some tasks. Besides, this new perspective opens space for new insights to be gained from these complex systems. For future work, we aim to extend our case study beyond Kilobots and to investigate the effect of heterogeneity in collective tasks, for instance, how collectives with different levels of diversity compare in their performance. These tasks will not be limited to scenarios involving motion (e.g., collective motion); collective decision-making, perception, and synchronization are other examples of collective behavior whose complexity might stem from inter-individual variations.

Figure 9: a) 49 Kilobots on a grid are detected in either blue or red. b) Sorted distribution of the switching numbers for 4 different repetitions. The right-most histogram shows the ensemble distribution. c) The composition of the population in red and blue versus sample time, for a long experiment, showing that the collective becomes asynchronous over time. d-f) Kilobots synchronizing with low, medium, and high coupling strength, resulting in asynchronous, synchronous, and super-stable behaviors.

## Acknowledgment

This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2002/1 "Science of Intelligence" - project number 390523135.
2309.01812
Into the Single Cell Multiverse: an End-to-End Dataset for Procedural Knowledge Extraction in Biomedical Texts
Many of the most commonly explored natural language processing (NLP) information extraction tasks can be thought of as evaluations of declarative knowledge, or fact-based information extraction. Procedural knowledge extraction, i.e., breaking down a described process into a series of steps, has received much less attention, perhaps in part due to the lack of structured datasets that capture the knowledge extraction process from end-to-end. To address this unmet need, we present FlaMB\'e (Flow annotations for Multiverse Biological entities), a collection of expert-curated datasets across a series of complementary tasks that capture procedural knowledge in biomedical texts. This dataset is inspired by the observation that one ubiquitous source of procedural knowledge that is described as unstructured text is within academic papers describing their methodology. The workflows annotated in FlaMB\'e are from texts in the burgeoning field of single cell research, a research area that has become notorious for the number of software tools and complexity of workflows used. Additionally, FlaMB\'e provides, to our knowledge, the largest manually curated named entity recognition (NER) and disambiguation (NED) datasets for tissue/cell type, a fundamental biological entity that is critical for knowledge extraction in the biomedical research domain. Beyond providing a valuable dataset to enable further development of NLP models for procedural knowledge extraction, automating the process of workflow mining also has important implications for advancing reproducibility in biomedical research.
Ruth Dannenfelser, Jeffrey Zhong, Ran Zhang, Vicky Yao
2023-09-04T21:02:36Z
http://arxiv.org/abs/2309.01812v1
# Into the Single Cell Multiverse: an End-to-End Dataset for Procedural Knowledge Extraction in Biomedical Texts

###### Abstract

Many of the most commonly explored natural language processing (NLP) information extraction tasks can be thought of as evaluations of declarative knowledge, or fact-based information extraction. Procedural knowledge extraction, i.e., breaking down a described process into a series of steps, has received much less attention, perhaps in part due to the lack of structured datasets that capture the knowledge extraction process from end-to-end. To address this unmet need, we present FlaMBe (Flow annotations for Multiverse Biological entities), a collection of expert-curated datasets across a series of complementary tasks that capture procedural knowledge in biomedical texts. This dataset is inspired by the observation that one ubiquitous source of procedural knowledge that is described as unstructured text is within academic papers describing their methodology. The workflows annotated in FlaMBe are from texts in the burgeoning field of single cell research, a research area that has become notorious for the number of software tools and complexity of workflows used. Additionally, FlaMBe provides, to our knowledge, the largest manually curated named entity recognition (NER) and disambiguation (NED) datasets for tissue/cell type, a fundamental biological entity that is critical for knowledge extraction in the biomedical research domain. Beyond providing a valuable dataset to enable further development of NLP models for procedural knowledge extraction, automating the process of workflow mining also has important implications for advancing reproducibility in biomedical research.

## 1 Introduction

The recent onslaught of pre-trained language models has spurred on tremendous advances in a range of natural language processing (NLP) applications, including named entity recognition (NER), named entity disambiguation (NED), sentiment analysis, and relation extraction [1; 2; 3; 4; 5]. These applications mostly fall under the umbrella of tasks that aim to extract _declarative knowledge_, sometimes also referred to as "knowing that," since these tasks focus on matters of factual knowledge (e.g., _knowing that_ "neuron" is a cell type) [6; 7]. Declarative knowledge is often contrasted with _procedural knowledge_, or "knowing how" (e.g., _knowing how_ to conduct an experiment) [6; 7]. Early AI researchers raised the importance of developing representations of procedural knowledge, given that performing plans or procedures is a fundamental way in which humans navigate the world [8]. However, compared with declarative knowledge extraction, there remains a vast gap in the development and application of machine learning methods towards procedural knowledge tasks [9]. Recently, there has begun to be a renewed interest in using machine learning to model procedural knowledge, especially knowledge extraction from text using NLP. These efforts have mostly focused on cooking and other common household tasks [10; 11], business processes [12], and technical manuals or manufacturing [13]. The specific applications that have garnered interest seem to have been naturally motivated by either the emergence of valuable datasets (e.g., online recipes for cooking, WikiHow for various how-to tasks) or economic gain through business process optimization.
Interestingly, one of the main ways scientists and engineers communicate their findings--through academic papers--is a prime source of unstructured text describing "know-how," yet few studies explore extracting procedural knowledge from scientific literature. This is the case even though there is an abundance of open access scientific literature that is frequently used for many standard declarative knowledge extraction studies. We posit that there are 3 main reasons that procedural knowledge extraction from scientific literature is not currently widely studied:

1. Though most research papers will describe procedures, i.e., methods, they are typically not written with as much structure as a recipe or technical manual, and are thus not as easy to model "off the shelf." In fact, methods sections are often organized by thematic categories and do not necessarily represent the "temporal ordering" in which the individual steps were done.1 It is also often the case that the results sections need to be read together with the methods sections to reconstitute how various tools were used.

2. There can be varying degrees of ambiguity in a scientific manuscript when systematically describing a workflow. The same method or software tool can be used at several time points throughout a paper, but in different contexts and for different purposes. For example, principal component analysis (PCA) can be used for dimensionality reduction, feature selection, or visualization. Failure to account for context may lead to a workflow that appears to simply have a chain of PCAs. In addition, multiple parallel workflows can be described in a single paper. For example, a single paper can consider multiple datasets, each of which is processed differently, before they are analyzed jointly.

3. Unlike writing down recipes or household tasks, annotating the workflow used in a scientific paper is challenging without domain expertise, thus resulting in a bottleneck for developing structured datasets.

Footnote 1: Note that here, temporal ordering is used loosely, as we are simply referring to the workflow ordering that a reader can deduce from the manuscript. It is of course common that scientific manuscripts present their main results in a different order than originally conducted. That said, we expect the internal ordering of tasks within each major result to typically be a good reflection of what was actually performed.

Motivated by these observations, we introduce FlaMBe (Flow annotations for Multiverse Biological entities). FlaMBe is a collection of structured annotations in biomedical research papers, with a particular focus on computational analysis pipelines in single cell research. While scientists have long been interested in studying single cells [14; 15], it was with the introduction of high-throughput single cell sequencing technologies around 2010 [16] that this area exploded in activity, not only in applications of this experimental technique to various biomedical problems, but also in the development of computational tools and software to analyze the resulting data.
Recent efforts to wrangle the space of analysis tools have resulted in specialized databases such as scRNA-tools [17], which currently tracks over 1,500 software tools across over 30 analysis tasks.2 Interestingly, the majority of tools catalogued by scRNA-tools are used for more than one analysis task, and one of the most commonly used tools, Seurat [18], is associated with as many as 10 categories of tasks, further highlighting the importance of considering context.

Footnote 2: The terminology scRNA-tools uses for these analysis tasks is “categories,” since they are focused on grouping tools by their applications. We simplify the terminology here to make clear that each tool can have multiple category tags.

In FlaMBe, we develop a structured representation of the procedural knowledge represented in scientific literature by considering (1) the _targets_ of the study, which in the case of single cell research are the tissues and/or cell types that are assayed; (2) the _tools_ applied in the study, as well as the analysis task or _context_ in which they are being used; and (3) the _workflow_ between tools and analysis tasks, e.g., when PCA is used for dimensionality reduction before the results are clustered using DBSCAN. Part of the motivation in structuring FlaMBe in this manner is that we can break down the more complex, unstructured goal of procedural knowledge extraction into existing, more manageable declarative knowledge extraction tasks. For example, the identification of targets and tools in text reduces to NER and NED tasks. Overall, we present 55 full text papers, including over 420,000 tokens, annotated for relevant entities and relations from the PubMed Central Open Access Subset by domain experts (computational biologists). To improve coverage over a more diverse set of journals and entities, we also provide tissue/cell type annotations in 1,195 paper abstracts mined from PubMed, covering over 270,000 tokens. The entire dataset provides entity annotations as well as disambiguation, where entities are linked to identifiers in relevant knowledge bases. To our knowledge, FlaMBe is the largest NER and NED dataset for tissues/cell types. Furthermore, we also provide annotations for software tools and computational methods, capturing 28 unique contexts in which the tools are used for single cell research and nearly 400 workflow relations between (tool, context) pairs. An example visualization of the flow between contexts is shown in Fig. 1. FlaMBe is available for exploration and download at [https://github.com/ylaboratory/flambe](https://github.com/ylaboratory/flambe). We illustrate some example use cases for FlaMBe here, but the richness of this dataset has many more potential downstream applications in machine learning as well as computational biology and the wider biomedical field. In general, the complexity of working with single cell data and its capacity for a variety of different workflows, together with its important biomedical applications, is ultimately what led us to choose the area of single cell research for FlaMBe. However, we have also proposed a systematic framework to distill procedural knowledge into a structured dataset in a manner that considers some of the unique challenges of scientific literature. It is our hope that FlaMBe provides a useful foundation for future "science-know-how" modeling and datasets.
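To make this representation concrete, here is a minimal sketch, using hypothetical Python types rather than FlaMBe's released file formats, of the (target, tool, context, workflow) structure described above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sample:
    assay: str    # e.g., "scRNA-seq"
    target: str   # disambiguated tissue/cell type, e.g., an NCI Thesaurus ID

@dataclass(frozen=True)
class Step:
    tool: str     # disambiguated tool or unspecified method name
    context: str  # the analysis task the tool is used for

@dataclass(frozen=True)
class WorkflowEdge:
    sample: Sample
    src: Step     # src's output feeds into dst
    dst: Step

# One edge of a paper's workflow, mirroring the PCA -> DBSCAN example above.
edge = WorkflowEdge(
    sample=Sample("scRNA-seq", "NCIT:C12434"),   # identifier for illustration
    src=Step("PCA", "dimensionality reduction"),
    dst=Step("DBSCAN", "clustering"),
)
```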
## 2 Related Work

FlaMBe is designed to represent a collection of complementary tasks that together form the basis of a structured representation that captures procedural knowledge in biomedical texts. Here, we discuss related datasets and research efforts.

**Biomedical NLP** Systematic evaluations of language models in a variety of different benchmarking efforts have revealed that for specialized domains like biomedicine, language models developed using domain-specific text (e.g., scientific literature) often outperform general-domain language models (e.g., trained on Wikipedia, news articles, webpages, etc.) on domain-specific tasks [19; 20; 21; 22]. Furthermore, it seems that mixed-domain pretraining can sometimes hurt more than help, suggesting that transfer learning is at times unsuccessful due to how different general-domain text is from biomedical text [20].

Figure 1: **Example overview of the workflow of tool contexts (analysis tasks).** Summary figure of workflows from different tool contexts captured by FlaMBe. The direction of an edge represents which analysis task was completed prior to the output being sent to the following task. The weight of an edge represents the number of papers in which it is mentioned. Node size corresponds to degree, i.e., the number of papers that mentioned the corresponding analysis task.

In general, there has been a demonstrated need for both domain-specific pretrained language models as well as domain-specific datasets for method benchmarking. In the biomedical domain, language models are often trained on a mix of abstracts from PubMed and full text articles from PubMed Central [19; 20; 21; 23], at times also with additional scientific text such as medical records [24]. A variety of biomedical NLP benchmarking datasets have also been developed [20; 24; 25], but often individual tasks can be fragmented. Very recently, large-scale efforts like BigBio [26] have systematically organized comprehensive public collections of biomedical NLP datasets. BigBio's curation revealed that the largest represented task within biomedical NLP is, unsurprisingly, NER, as there are a variety of biological entities that are often of interest for text mining (e.g., diseases, gene names, chemical compounds, anatomy/tissue/cell type). Of particular relevance to our work here are previous dataset curation efforts for tissue/cell type [27; 28; 29; 30]. However, not only are these datasets smaller in terms of total annotations in comparison with FlaMBe, but furthermore, none provide NED. Disambiguating these terms and linking them to a systematic knowledge base provides more utility for the biomedical community and also enables incorporation of information from the associated knowledge base for improved knowledge extraction. The other entity that has recently begun to be considered for NER in biomedical literature is software. As the field of computational biology grows and, accordingly, the number of software tools and computational methods, systematic identification and analysis of tool usage has become more relevant. Large-scale curation efforts for NER and NED here include bioNerDS [31], SoftCite [32], and SoMeSci [33]. These previous datasets have differing limitations. Both bioNerDS and SoftCite only consider articles published before 2011 in their datasets.3 SoMeSci, meanwhile, curates articles as recent as 2020. However, SoMeSci's main endpoint is a knowledge graph and thus does not provide its annotations in an easily usable format.
Both SoftCite and SoMeSci have been used as training data to automatically identify software mentions across millions of scientific articles [34; 35], though the resulting automatically annotated datasets differ greatly. Finally, all previous datasets focus solely on software. Because one of the key goals of FlaMBe is to extract data processing and analysis workflows, we also wanted to expand annotations to computational methods that are often referred to in scientific papers without necessarily a specific associated software (e.g., PCA, SVM).

Footnote 3: bioNerDS further restricts its annotations to only two journals, _BMC Bioinformatics_ and _Genome Biology_.

**Procedural knowledge extraction** Recent efforts in procedural knowledge extraction have been spurred on by the increasing availability of naturally arising procedural knowledge-related data sources. In fact, the widespread availability of online recipes has given rise to the new research area of "food computing." [10] Other areas where there is active research in procedural knowledge extraction include household tasks based on mining data sources such as WikiHow, Instructables, and eHow [11], technical manuals [13], and business processes [12]. There have also been some limited attempts to examine scientific literature as an application area. Song et al. propose representing procedural knowledge as (target, action, method) triplets based on MEDLINE abstracts [36], and Halioui et al. consider using process-oriented case-based reasoning to extract workflows from papers mentioning phylogenetic analyses from PubMed [37]. Interestingly, these two pieces of work fall on two ends of the spectrum in terms of the complexity of the representations they propose. In addition to the limitations of modeling an entire workflow from only an abstract, Song et al.'s proposed representation is also unable to take into account when tools are applied in different contexts. Meanwhile, Halioui et al.'s representation is somewhat arduous, and their contribution is mostly focused on a rule-based workflow extraction framework rather than the assembly of a dataset that can be used by other methods. Neither Song et al.'s nor Halioui et al.'s datasets are accessible.4

Footnote 4: Halioui et al. provide a link to their data and framework implementation in their paper, but the link is no longer active.

## 3 Dataset Collection Methodology and Overview

Annotations for NER, NED, and other knowledge extraction tasks were curated by domain experts in computational biology for a series of 55 biomedical full-text papers and 1,195 abstracts, indexed on PubMed Central (PMC) and PubMed, respectively. We chose to include both full text and abstracts in FlaMBe to have a breadth of unique tokens as well as the depth needed to extract meaningful biological workflows.

### Collection Methodology

**Abstract corpus** The abstract corpus was hand-curated for tissue and cell type terms across 20 high-impact biomedical journals (full list in Supplementary Materials). To ensure that no single journal was overrepresented due to publication quantity, we set the number of sampled abstracts per journal to 60. Furthermore, we only sampled from recently published works between 2016 and 2021, as advances in technology have made it possible to study cell types in addition to bulk tissue, and we want to capture the new diversity of cell types in our annotations. All abstracts were downloaded using PubMed utils.
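The interface of the PubMed utilities used above is not shown here; as a stand-in, this is a minimal sketch of an equivalent abstract download using Biopython's Entrez module (the journal name and query fields are illustrative):

```python
from Bio import Entrez

Entrez.email = "[email protected]"  # NCBI requires a contact address

# Illustrative query mirroring the abstract corpus collection: recent
# articles from one journal, later sampled down to 60 abstracts per journal.
search = Entrez.esearch(
    db="pubmed",
    term='"Nature Medicine"[Journal] AND 2016:2021[dp]',
    retmax=200,
)
ids = Entrez.read(search)["IdList"]
records = Entrez.read(Entrez.efetch(db="pubmed", id=ids, retmode="xml"))
```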
To enable evaluation of interannotator agreement (Supplementary Materials), each of 3 annotators was assigned 400 abstracts (60 from each unique journal), with 240 overlapping abstracts evenly distributed across journals.

**Full text corpus** Because of the focus on single cell research, we used PubMed utils to query PMC for 3 general article types ("Classical Article," "Clinical Study," and "Journal Article") using the following keywords (also allowing dashes as connectors): "scRNAseq," "single cell RNAseq," "single cell RNA sequencing," "single cell transcriptomics," "single cell transcriptome." Full text articles were downloaded directly via the PMC FTP and parsed using PubMed Parser [38]. Out of the 55 total full-text articles annotated by 2 annotators, 10 papers were annotated by both to evaluate interannotator agreement (Supplementary Materials).

### Annotation Types

Tissue, cell type, tool, and method spans were annotated using the Prodigy software tool developed by Explosion AI for easy tracking of token-level tags. Due to the more limited presence of tools and methods, and therefore of tool context and workflow, in abstracts, these annotations were only completed in the full text corpus. Tissue and cell type were annotated in both the abstract and full text corpora.

**Tissue and cell type** To determine what qualifies as a tissue or cell type label, we use the terms in the NCI Thesaurus,5 a comprehensive biomedical ontology for describing human samples with cross-references to many other biomedical ontologies, as a guide. We focus on annotating useful sample descriptors that capture what biological entity is being studied, and we try to tag the most specific term possible (e.g., "left ventricle" vs. "ventricle"). The full set of annotation rules given to each annotator can be found in the Supplementary Materials.

Footnote 5: [https://ncithesaurus.nci.nih.gov/ncitbrowser/](https://ncithesaurus.nci.nih.gov/ncitbrowser/)

A tissue or cell type in the text may be more specific than a term in the ontology, or it may not exactly match any term or its given synonyms. In these cases, we manually disambiguated the tag back to its nearest term in the ontology. In all other cases, we programmatically mapped exact matches and synonyms back to NCIT identifiers. Additionally, in some cases, to express the specificity found in the text, we used two terms from the ontology in the disambiguation (e.g., "adipose stem cell" is mapped to the two NCIT terms "adipose" and "stem cell").

**Tools and methods** Unlike tissues and cell types, which have standardized ontologies, there is no concrete vocabulary for annotating tools and methods in biomedical research. We have done our best to define two concrete categories of methods: those where an important computational transformation of the data has taken place but can be performed by more than one package (e.g., K-means clustering or PCA), and those that reference a specific tool or package. We label these respective types as unspecified method ("UNS_METHOD") or tool ("TOOL"). Furthermore, we aimed to identify computational methods applied to data, as distinct from sequencing technologies and their related protocols (e.g., those performed on machines that physically handle a biological sample), and we only annotate tools and methods starting from the initial processing of data coming off sequencing machines.
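As an illustration of the resulting token-level annotations, here is a hypothetical sentence in the IOB scheme; the tissue/cell type tag names and the NCIt identifier are shown for illustration only, while the released CoNLL files carry the curated tags and mappings:

```python
# (token, IOB tag, disambiguation) triples for one hypothetical sentence.
tokens = [
    ("We",          "O",           None),
    ("clustered",   "O",           None),
    ("cardiac",     "B-cell_type", "NCIT:C12371"),  # illustrative NCIt ID
    ("fibroblasts", "I-cell_type", "NCIT:C12371"),
    ("with",        "O",           None),
    ("Seurat",      "B-TOOL",      "Seurat"),
    (".",           "O",           None),
]
```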
**Tool context** In addition to annotating tool, method, tissue, and cell type terms in the full text, we also provide a set of tool "contexts," i.e., the analysis tasks for which they are used to process or augment data. This is important, as a single tool may have multiple functions or reasons for being applied (Fig. 3 shows an example paper where Seurat is used in 4 different contexts). For the sake of exploring the single cell multiverse, we restricted the set of modes to important functions in processing a wide variety of sequencing data. A single mention of a tool in the text can have one or more modes assigned to it based on its surrounding context. The full vocabulary for modes can be found in the Supplementary Materials.

**Workflow** At the paper level, we aim to extract the various workflows applied to samples, where a sample is defined as an assay (e.g., scRNA-seq, ChIP-seq, BS-seq, etc.) paired with a sample descriptor such as tissue/cell type. Once the unique set of samples per paper is identified, we link them with tool and mode pairs from the text. Next, we annotate the flow by tabulating all edge pairs, where a pair of tools with their corresponding modes is applied to a given sample. In cases where an unspecified important transformation took place, such as an 'UNS_METHOD', we use "unspecified_mode" as a placeholder. In this way, we can reconstruct and model multiple workflows in a paper when more than one sample type is used.

### Dataset description and statistics

**Token level tags** All token-level tags, such as those for tissue and cell type and tool and method annotations, are released as IOB and CoNLL files. The CoNLL files contain disambiguated annotations, with the tissue and cell type tags mapped semi-manually back to NCI Thesaurus identifiers and tools disambiguated back to a standardized name. Additional description files are also provided: one for tissues and cell types, which maps NCI Thesaurus IDs to names, and one for tool and method annotations, linking annotations to relevant references, GitHub repositories, or project links. Together, the full-text tag files span 55 papers and 429,373 tokens with 405 disambiguated (776 before disambiguation) tissue and cell type terms, 217 disambiguated tools, and 48 unique general methods. The abstract-only tag files span 1,195 papers with 272,771 tokens annotated and 286 disambiguated tissue and cell type terms.

**Tool context annotations** Mode annotations for the various tools are provided in the tool and method CoNLL files. Each mode is manually assigned using the surrounding sentence context.

**Workflow annotations** There is no predefined standard format for paper-level knowledge extraction annotations, so we split them into the following 3 files for easy parsing: a sample description and identification file, containing a listing of unique sample assay and tissue/cell type pairs; a tools-applied file linking samples with the corresponding tool-mode combinations; and a tool sequence file that ties pairs of tool-mode combinations together with sample identifiers. These files cover 8 unique assays across 28 tool modes, capturing 390 tool-tool steps. There are, on average, 10 workflow steps for each of the 38 papers with a defined workflow.

## 4 FlaMBe Use Cases

The diverse collection of annotations in FlaMBe enables several different use cases. We explore 3 example use cases of NER, tool context prediction, and workflow visualization before discussing other potential downstream applications.
**Use case 1: named entity recognition** We illustrate how the IOB and CoNLL files can be used to train BERT models to predict tissue and cell type mentions in biomedical abstracts. Using the full-text data as training and our abstract annotations as the held-out set for evaluation, we fine-tuned some of the most popular BERT models on HuggingFace (Table 1) for NER prediction. All models perform reasonably well, with PubMedBERT [20] having the best F1 for the cell type and tissue type identification tasks. In general, the domain-specific pretrained language models do tend to perform better than the general-domain models, especially when it comes to recall. We also aim to demonstrate the utility of our annotations by comparing them with the only other easily obtainable software annotation dataset, Softcite [32], a resource that provides annotations of software mentions in full-text research publications in the life sciences and economics. Here, we partition FlaMBe's full-text tool annotations into two sets, holding out 11 randomly chosen papers for evaluation. We use the remaining 44 papers from FlaMBe and the entirety of Softcite for training. Both datasets were used to train PubMedBERT, one of the consistent performers in tissue/cell type prediction (Table 2). Despite being a smaller set of annotations, FlaMBe outperforms Softcite, especially when it comes to identifying the full name of a tool (e.g., "Search Tool for the Retrieval of Interacting Genes/Proteins," more commonly known as "STRING"). This observation seems to be supported when we examine the predictive performance broken down by tag type--the largest performance difference between a model trained on Softcite and one trained on FlaMBe is on the 'I-Tool' token (see Supplement). We hypothesize that the fact that biomedical tools often have long, multi-word names (and corresponding acronyms) may play a role in this large difference. Of course, we note that in this comparison FlaMBe has the advantage of using the same annotation criteria in both the training and test sets; nevertheless, we believe it still illustrates the importance and utility of FlaMBe's biomedicine-specific tool annotations.

**Use case 2: tool context prediction** As a proof of concept, we also used FlaMBe's tool context annotations and trained a PubMedBERT model to predict a tool's context given the sentence in which it is mentioned, akin to sentiment classification. We assembled a small set of training (191 sentences over 28 papers) and test (45 sentences over 8 papers) data, limiting ourselves to sentences containing a mention of at least one of the top 5 most mentioned tools, _Seurat_, _Cell Ranger_, _t-SNE_, _Monocle_, and _STAR_, each of which can be applied in multiple contexts. We then trained PubMedBERT models to predict the context of each sentence in a one-vs-rest framework, for contexts that are well represented in the test and training datasets: _Alignment_, _Marker Genes_, and _Clustering_. Each of the classifiers performed well, with the alignment (AUC = 0.954) and marker gene (AUC = 0.953) contexts being more distinguishable and clustering (AUC = 0.810) being the most difficult. Given this promising performance on a test case, we anticipate that more sophisticated methods will be able to achieve consistently strong performance with our annotations.
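A minimal sketch of the fine-tuning setup for use case 1, assuming the IOB/CoNLL files have already been parsed into word and tag sequences; the label set, checkpoint name, and alignment choices are illustrative rather than the exact configuration used:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-tissue", "I-tissue", "B-cell_type", "I-cell_type"]  # illustrative
ckpt = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForTokenClassification.from_pretrained(ckpt, num_labels=len(labels))

def encode(words, tags):
    """Tokenize one pre-split sentence and align word-level IOB tags to
    wordpieces; special tokens and continuation pieces get label -100,
    which the token-classification loss ignores."""
    enc = tokenizer(words, is_split_into_words=True, truncation=True)
    lab, prev = [], None
    for wid in enc.word_ids():
        lab.append(-100 if wid is None or wid == prev else labels.index(tags[wid]))
        prev = wid
    enc["labels"] = lab
    return enc

# Encoded sentences can then be wrapped in a Dataset and passed to
# transformers.Trainer for fine-tuning and evaluation on the abstracts.
```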
**Use case 3: visualization and exploration of different scientific workflows.** Different workflows can be extracted from FlaMBe's annotations at different levels of specificity, either by highlighting the different tools used in a paper (Fig. 2A) or the different tool contexts in a paper (Fig. 2B). These can also be combined to extract more exact methodology (Fig. 3). Benchmarking papers or work introducing a new tool have to compare with previous work, and so create interesting workflows, as a small set of sample types is processed with slight variations through different levels of an entire pipeline depending on a paper's objective (Fig. 2). Meanwhile, papers that seek to solve a biological problem often have a more defined flow, with fewer tools from sample to one or more endpoints (Fig. 3). By extracting these workflows, we can not only classify the type of paper (e.g., benchmarking, new method, or biological insight) and analyze them on an individual level, but can also look at the global set of workflows for a large set of papers (Fig. 1). Thus, FlaMBe has important downstream potential for extracting knowledge at multiple levels.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Cell Type} & \multicolumn{3}{c}{Tissue} \\ \cline{2-7} & Precision & Recall & F1 & Precision & Recall & F1 \\ \hline BERT-base [39] & 0.740 & 0.823 & 0.779 & 0.813 & 0.811 & 0.812 \\ ELECTRA [40] & 0.775 & 0.802 & 0.788 & 0.819 & 0.841 & 0.830 \\ BioBERT [21] & 0.725 & 0.806 & 0.763 & **0.848** & 0.859 & 0.854 \\ BlueBERT [24] & 0.699 & 0.838 & 0.762 & 0.818 & 0.851 & 0.834 \\ BioELECTRA [23] & 0.719 & **0.855** & 0.781 & 0.751 & **0.894** & 0.816 \\ PubMedBERT [20] & **0.795** & 0.832 & **0.813** & 0.844 & 0.868 & **0.856** \\ \hline \hline \end{tabular} \end{table} Table 1: **Predictive performance (P/R/F1 scores) of various language models on abstract tissue/cell type annotations.** Language models were fine-tuned on a combination of full text and abstracts and evaluated on a mixture of both text types for cell type and tissue annotations. Best performers are highlighted in bold.

\begin{table} \begin{tabular}{l c c c} \hline \hline & Precision & Recall & F1 \\ \hline Softcite [32] & 0.397 & 0.528 & 0.453 \\ FlaMBe & **0.779** & **0.909** & **0.839** \\ \hline \hline \end{tabular} \end{table} Table 2: **Predictive performance (P/R/F1 scores) of PubMedBERT on tool annotations when using Softcite or FlaMBe (excluding papers used for evaluation) as training standard.** Tool annotations from 11 full text papers were held out from FlaMBe as an evaluation standard. PubMedBERT was fine-tuned on either the entirety of Softcite annotations or the smaller FlaMBe training standard.

### Potential downstream applications

There are many other interesting downstream applications that FlaMBe can be used to study. In addition to the advances in developing systematic methods for procedural knowledge extraction, we want to highlight the scientific value of improved modeling here. Specifically, structured representations would potentially allow for improved computational method recommendation depending on the goals of a particular study, as well as highlight gaps and areas of need for new computational method development. 
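To illustrate how Sankey views like Figs. 2-3 can be built from the annotations, here is a small sketch using `plotly`; the `edges` list of (tool, context) step pairs is a toy assumption standing in for edges extracted from one paper's workflow files.

```python
import plotly.graph_objects as go

# Toy workflow: two (tool, context) steps from a hypothetical paper.
edges = [(("STAR", "alignment"), ("Seurat", "clustering")),
         (("Seurat", "clustering"), ("Seurat", "marker genes"))]

nodes = sorted({n for src, dst in edges for n in (src, dst)})
index = {n: i for i, n in enumerate(nodes)}
fig = go.Figure(go.Sankey(
    node=dict(label=[f"{tool} ({ctx})" for tool, ctx in nodes]),
    link=dict(source=[index[s] for s, _ in edges],
              target=[index[t] for _, t in edges],
              value=[1] * len(edges)),  # one unit of flow per step
))
fig.show()
```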
Importantly, one of the natural concerns that has been raised in psychology, and more recently in machine learning, is that having ever more complex computational workflows can spawn _multiverses_. The multiverse represents the set of parallel universes where slightly different paths (e.g., methods or analysis steps) are taken towards the same goal. Multiverse analyses are undertaken to see how reliable results and conclusions are in light of these implicit decisions [43; 44]. We believe one of the most exciting downstream applications of FlaMBe is systematic multiverse analyses of the complex workflows undertaken in biomedical research, towards the ultimate goal of improving transparency and reproducibility of research claims.

Figure 2: **Sankey visualizations of (A) tool and (B) context workflows from an example paper [41].** Visualizations here focus on one entity at a time, either the computational tools being used throughout the paper (vertical bars in A) or the context (vertical bars in B) in which they are being used.

Figure 3: **Sankey visualizations of the joint tool-context workflow from an example paper [42].** The visualization here depicts the workflow of (tool, context) pairs (vertical bars), where context is denoted within the parentheses.

## 5 Limitations and Future Work

One of the current limitations of FlaMBe is that though the number of entity-level annotations is high, there are relatively fewer examples of the more complex annotation types of tool context and workflow. We plan to address this through larger annotation efforts that will further expand these categories. Because FlaMBe has also proposed a systematic, structured representation that can be used as input to existing language models, these future efforts can be aided by computational predictions that can guide manual curation efforts. In these follow-up efforts, we foresee that the NER-related annotations will be easiest to automate, followed by NED, with the tool context and workflow predictions being more challenging. Any automated annotations will be reviewed by expert curators before release of an updated dataset. We do not foresee negative societal impacts, though incorrect workflows could potentially be misleading for downstream research, and thus we would encourage thorough evaluation of all predictions. With FlaMBe, we have broken down the more complex, abstract procedural knowledge extraction problem into more structured declarative knowledge tasks that the community is already well-equipped to tackle. Intriguingly, cognitive psychology research has pointed towards the fact that in humans, procedural and declarative knowledge are intertwined, but can sometimes be learned independently of one another [45]. Thus, there may also be benefit to using different, more "procedural" representations for learning. In some sense, one ML area that has tried to learn and mimic human procedural knowledge is reinforcement learning. A good example of this is with "script knowledge" [46] and generally text-based games [47], which have used a game approach to improve modeling at the intersection of language understanding and complex decision-making. Reinforcement learning has also found some early success in reasoning over large-scale knowledge graphs. Procedural knowledge extraction from academic texts could potentially also benefit from this type of framework. 
One of the unique aspects of FlaMBe is that though we have developed structured representations, they can also tie together (e.g., we have annotated individual edges that can be viewed jointly as a graph). The disambiguated terms also tie in with existing knowledge bases that can be incorporated into knowledge graph research. It will be interesting to see whether new methods can be developed that take advantage of the joint representation and learn more than the sum of the parts. ## 6 Conclusion In conclusion, we have developed FlaMBe, a collection of datasets that together form structured representations of procedural knowledge captured in the scientific literature. The dataset provides annotations for 1,195 paper abstracts and 55 full-text papers, spanning nearly 700,000 tokens. In addition to providing the largest NER and NED dataset for tissue and cell type, we also provide annotations for computational tools and methods, as well as the analysis task a tool is used for. Finally, we also annotate computational workflows within papers that can potentially be used in many downstream applications. Our dataset and associated code are accessible at [https://github.com/ylaboratory/flambe](https://github.com/ylaboratory/flambe).
2308.07192
gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling
A large catalogue size is one of the central challenges in training recommendation models: a large number of items makes it memory- and computationally inefficient to compute scores for all items during training, forcing these models to deploy negative sampling. However, negative sampling increases the proportion of positive interactions in the training data, and therefore models trained with negative sampling tend to overestimate the probabilities of positive interactions, a phenomenon we call overconfidence. While the absolute values of the predicted scores or probabilities are not important for the ranking of retrieved recommendations, overconfident models may fail to estimate nuanced differences in the top-ranked items, resulting in degraded performance. In this paper, we show that overconfidence explains why the popular SASRec model underperforms when compared to BERT4Rec. This is contrary to the BERT4Rec authors' explanation that the difference in performance is due to the bi-directional attention mechanism. To mitigate overconfidence, we propose a novel Generalised Binary Cross-Entropy Loss function (gBCE) and theoretically prove that it can mitigate overconfidence. We further propose the gSASRec model, an improvement over SASRec that deploys an increased number of negatives and the gBCE loss. We show through detailed experiments on three datasets that gSASRec does not exhibit the overconfidence problem. As a result, gSASRec can outperform BERT4Rec (e.g. +9.47% NDCG on the MovieLens-1M dataset), while requiring less training time (e.g. -73% training time on MovieLens-1M). Moreover, in contrast to BERT4Rec, gSASRec is suitable for large datasets that contain more than 1 million items.
Aleksandr Petrov, Craig Macdonald
2023-08-14T14:56:40Z
http://arxiv.org/abs/2308.07192v1
# gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling ###### Abstract. A large catalogue size is one of the central challenges in training recommendation models: a large number of items makes it memory- and computationally inefficient to compute scores for all items during training, forcing these models to deploy negative sampling. However, negative sampling increases the proportion of positive interactions in the training data, and therefore models trained with negative sampling tend to overestimate the probabilities of positive interactions - a phenomenon we call _overconfidence_. While the absolute values of the predicted scores/probabilities are not important for the ranking of retrieved recommendations, overconfident models may fail to estimate nuanced differences in the top-ranked items, resulting in degraded performance. In this paper, we show that overconfidence explains why the popular SASRec model underperforms when compared to BERT4Rec. This is contrary to the BERT4Rec authors' explanation that the difference in performance is due to the bi-directional attention mechanism. To mitigate overconfidence, we propose a novel Generalised Binary Cross-Entropy Loss function (gBCE) and theoretically prove that it can mitigate overconfidence. We further propose the gSASRec model, an improvement over SASRec that deploys an increased number of negatives and the gBCE loss. We show through detailed experiments on three datasets that gSASRec does not exhibit the overconfidence problem. As a result, gSASRec can outperform BERT4Rec (e.g. +9.47% NDCG on the MovieLens-1M dataset), while requiring less training time (e.g. -73% training time on MovieLens-1M). Moreover, in contrast to BERT4Rec, gSASRec is suitable for large datasets that contain more than 1 million items. Footnote 1: [https://earthweb.com/how-many-videos-are-on-youtube/](https://earthweb.com/how-many-videos-are-on-youtube/) Overconfidence is also problematic for training with Binary Cross-Entropy loss: if an item \(i\) with high predicted probability \(p_{i}\) is sampled as a negative, \(\log(1-p_{i})\) calculated by the loss function tends to \(-\)infinity, causing numerical overflows and unstable training. Overall, we argue that overconfidence hinders model effectiveness and makes model training hard. Although overconfidence is a general problem applicable to _all_ recommender systems trained with negative sampling, in this paper, we focus specifically on _sequential_ recommender systems, for which negative sampling is especially important, due to the large GPU memory requirement discussed above. Indeed, as we show in this paper, the use of negative sampling leads to overconfidence in the popular SASRec (Srivastava et al., 2015) sequential recommendation model. Existing solutions that can address overconfidence induced by negative sampling in recommender systems (e.g. (Srivastava et al., 2015; Wang et al., 2016)) are hard to adapt to deep learning-based sequential recommender models (see also Section 2.1). Hence, the overconfidence issue present in negatively-sampled sequential recommendation models remains largely unsolved. Indeed, the state-of-the-art BERT4Rec (Wang et al., 2017) model does not use negative sampling and, therefore, cannot be applied to datasets with large catalogues.2 Footnote 2: By BERT4Rec, we refer to the model architecture, the training task and the loss function. 
As we show in Section 6.2.1, while it is possible to train BERT4Rec’s architecture while using negative sampling, doing so negatively impacts the model’s effectiveness. Hence, to address the overconfidence issue in the sequential recommendation, we introduce a novel Generalised Binary Cross-Entropy loss (gBCE) - a generalisation of BCE loss using a generalised logistic sigmoid function (Zhou et al., 2017; Wang et al., 2016). We further propose the Generalised SASRec model (gSASRec) - an enhanced version of SASRec (Srivastava et al., 2015) trained with more negative samples and gBCE. Theoretically, we prove that gSASRec can avoid overconfidence even when trained with negative sampling (see Theorem 5.1). Our theoretical analysis aligns with an empirical evaluation of gSASRec on three datasets (Steam, MovieLens-1M, and Gowalla), demonstrating the benefits of having more negatives and the gBCE loss during training. On smaller datasets (Steam and MovieLens-1M), the combination of these improvements significantly outperforms BERT4Rec's performance on MovieLens-1M (+9.47% NDCG@10) and achieves comparable results on Steam (-1.46% NDCG@10, not significant), while requiring much less time to converge. Additionally, gBCE shows benefits when used with BERT4Rec trained with negative samples (+7.2% NDCG@10 compared with BCE Loss on MovieLens-1M with 4 negatives). On the Gowalla dataset, where BERT4Rec training is infeasible due to large catalogue size (Wang et al., 2016; Wang et al., 2016), we obtain substantial improvements over the regular SASRec model (+47% NDCG@10, statistically significant). Although this paper focuses on sequential recommendation, our proposed methods and theory could be applicable to other research areas, such as recommender systems (beyond sequential recommendation), search systems, or natural language processing. In short, our contributions can be summarised as follows: (i) we define overconfidence through a probabilistic interpretation of sequential recommendation; (ii) we show (theoretically and empirically) that SASRec is prone to overconfidence due to its negative sampling; (iii) we propose gBCE loss and theoretically prove that it can mitigate the overconfidence problem; (iv) we use gBCE to train gSASRec and show that it exhibits better (on MovieLens-1M) or similar (on Steam) effectiveness to BERT4Rec, while both requiring less training time, and also being suitable for training on large datasets. The rest of this paper is as follows: Section 2 provides an overview of related work; Section 3 formalises sequential recommendation and the typically used loss functions; we describe the problem of overconfidence in Section 4; in Section 5 we introduce gBCE and theoretically analyse its properties, before defining gSASRec; Section 6 experimentally analyses the impact of negative sampling in SASRec, BERT4Rec and gSASRec; Section 7 provides concluding remarks. ## 2. Related Work In this section we discuss existing work related to negative sampling in recommender systems. We review existing approaches for traditional (Matrix Factorisation-based) recommender systems in Section 2.1 and discuss why they are hard to apply for sequential recommendation. We then discuss training objectives and positive sampling strategies in Section 2.2 and show that this is an orthogonal research direction to negative sampling. Section 2.3 positions our work viz. the orthogonal direction of contrastive learning. 
Finally, in Section 2.4, we discuss how similar problems are solved in language models and why these solutions are not applicable to recommendations. ### Negative Sampling Heuristics: Hard Negatives, Informative Samples, Popularity Sampling One of the first attempts to train recommender systems with negative sampling was Bayesian Personalised Rank (BPR) (Srivastava et al., 2015). The authors of BPR observed that models tend to predict scores close to exactly one for positive items in the training data (a form of overconfidence) and proposed to sample one negative item for each positive item and optimise the relative order of these items, instead of the absolute probability of each item to be positive. However, as Rendle (the first author of BPR) has recently shown (Srivastava et al., 2015), BPR optimises the Area Under Curve (AUC) metric, which is not top-heavy and is therefore not most effective for a ranking task. Hence, several improvements over BPR, such as WAARP (Srivastava et al., 2015), LambdaRank (Chen et al., 2016), LambdaFM (Wang et al., 2016), and adaptive item sampling (Srivastava et al., 2015) have since been proposed to make negatively-sampled recommender models more suitable for top-heavy ranking tasks. These approaches usually try to mine the most informative (or _hard_) negative samples that erroneously have high scores and therefore are ranked high. Unfortunately, these approaches mostly rely on iterative or sorting-based sampling techniques that are not well-suited for neural network-based approaches used by sequential recommendation models: neural models are usually trained on GPUs, which allow efficient parallelised computing, but perform poorly with such iterative methods. Indeed, Chen et al. (Chen et al., 2016) recently proposed an iterative sampling procedure for sequential recommendation, but only experimented with smaller datasets (<30k items) where state-of-the-art results can be achieved without sampling at all (see also Section 6.2.1). Instead, sequential recommenders typically rely on simple heuristics such as uniform random sampling (used by Caser (Caser, 2016) and SASRec (Srivastava et al., 2015)) or do not use negative sampling at all (e.g. BERT4Rec (Wang et al., 2016)). Pellegrini et al. (2017) recently proposed to sample negatives according to their popularity and showed this to be beneficial when the evaluation metrics are also popularity-sampled. Our initial experiments have shown that popularity-based sampling is indeed beneficial with popularity-based evaluation metrics, but not with the full (unsampled) metrics. However, several recent publications (Beng et al., 2017; Chen et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019) recommend against using sampled metrics, and therefore we avoid popularity sampling in this paper. Another heuristic that is popular for search tasks is _in-batch_ sampling (Li et al., 2018, Ch. 5) (e.g. used by GRU4Rec (Li et al., 2019)). According to (Li et al., 2019), in-batch sampling is equivalent to popularity-based negative sampling, and hence we avoid it for the same reason stated above. Indeed, we focus on uniform sampling - as used by many sequential recommender systems - and design a solution that helps to counter the overconfidence of such models caused by uniform sampling. ### Training Objectives A _training objective_ is the task that the model learns to solve during the course of training. 
Some of the most popular alternative training objectives for sequential recommendation models include: _sequence continuation_, where the model learns to predict one or several next items in the sequence (used by Caser (Caser, 1977)); _sequence shifting_, where the model learns to shift the input sequence by one element to the left (used by SASRec (Kumar et al., 2017) and NextItNet (Nakumar et al., 2017)); _item masking_ (used by BERT4Rec (Wang et al., 2017)); and _recency-based sampling_, where the target items are selected probabilistically with a higher chance of selecting recent items (used by SASRec-RSS (Li et al., 2019; Li et al., 2019)). Each of these training objectives requires negative interactions in order to train the model to distinguish them from the positive ones. Therefore, the negative sampling strategy can be seen as orthogonal to the training objective. Hence, in this paper, we only focus on negative sampling, using the classic SASRec model (Kumar et al., 2017), with its sequence shifting training task, as our "backbone model". ### Contrastive Learning In this section, we briefly discuss contrastive learning methods, which have recently been shown to be effective in sequential recommendation (Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019); the main goal of this discussion is to highlight the orthogonality of these methods to our research. Contrastive learning methods augment the main training objective with an auxiliary contrastive objective to help the model learn more generic sequence representations. The idea is to generate several versions of the same sequence (e.g. crop, reverse, add noise, etc.) and add an auxiliary loss function that ensures that two versions of the same sequence have similar latent representations, while representations of different sequences are located far away from each other in the latent space. This allows the model to learn more robust representations of sequences and generalise better to new sequences. However, these contrastive models still require regular training objectives and loss functions and, therefore, also require negative sampling when the catalogue size is large. Hence, contrastive learning is an orthogonal direction, and an auxiliary contrastive loss can be used with the methods described in this paper. However, in Section 6.2.5, we demonstrate that gSASRec can achieve quality comparable with the best contrastive methods even without auxiliary training objectives and loss functions. ### Large Vocabularies in Language Models In Natural Language Processing, the problem analogous to a large catalogue size is known as the _large vocabulary bottleneck_. Indeed, according to Heaps' Law (Heap, 2015), the number of different words in a text corpus grows with the size of the corpus, reaching hundreds of billions of words in recent corpora (Chen et al., 2017), and making computing scores over all possible words in a corpus problematic. A typical solution employed by modern deep learning language models is to use _Word Pieces_ (Wandel et al., 2017), which split infrequent words into (more frequent) sub-word groups of characters. This allows the use of a vocabulary of relatively small size (e.g. \(\sim\)30,000 tokens in BERT (Chen et al., 2017)) whilst being capable of modelling millions of words by the contextualisation of the embedded word-piece representations. 
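As a quick illustration of the word-piece idea (our example, not code from any of the discussed papers), a standard BERT tokenizer covers rare words with a roughly 30k vocabulary by splitting them into frequent sub-word pieces:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.vocab_size)          # 30522 word-piece tokens
# A rare domain word decomposes into frequent pieces, e.g. something
# like ['pseudo', '##time'] (the exact split depends on the vocabulary).
print(tokenizer.tokenize("pseudotime"))
```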
While decomposing item ids into sub-items can be used to reduce the item vocabulary of a recommender (Kumar et al., 2017), the decomposition requires a more complex two-stage learning process to assign sub-items. Other techniques have also been proposed to reduce the vocabulary size by pruning some tokens. For example, some classification models remove non-discriminating words (Chen et al., 2017; Li et al., 2019), which in the context of recommender systems means removing popular items (e.g. if a movie was watched by most of the users, it is not-discriminating). However, removing popular items is a bad idea as users are prone to interact with popular items and recommending popular items is a strong baseline (Li et al., 2019). Perhaps the most related work to ours is the Sampled Softmax loss (Kumar et al., 2017), which proposes a mechanism to approximate the value of a Softmax function using a small number of negatives. However, Softmax loss is known to be prone to overconfidence (Sohn et al., 2018). Indeed, Sampled Softmax loss has recently been shown to incorrectly estimate the magnitudes of the scores in the case of recommender systems (Sohn et al., 2018). Our experiments with Sampled Softmax loss are aligned with these findings. We discuss Sampled Softmax loss in detail in Section 3.3 and experimentally evaluate it in Section 6.2.4. In summary, among the related work, there is no solution to the overconfidence problem in sequential recommender systems. Hence, we aim to close this gap and design a solution for this overconfidence that is suitable for sequential models. In the next section, we cover the necessary required preliminaries and then in Section 5, we show that the problem can be solved with the help of Generalised Binary Cross-Entropy loss. ## 3. Sequential Recommendation & Loss Functions In the following, Section 3.1 describes the SASRec and BERT4Rec sequential recommendation models, which form the backbone of this paper. In Section 3.2, we more formally set the sequential recommendation task as a probabilistic problem and in Section 3.3 discuss loss functions used for training sequential models. ### SASRec and BERT4Rec Transformer (Sohn et al., 2018)-based models have recently outperformed other models in Sequential Recommendation (Kumar et al., 2017; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). Two of the most popular Transformer-based recommender models are BERT4rec (Wang et al., 2017) and SASRec (Kumar et al., 2017). The key differences between the models include different attention mechanism (bi-directional vs. unidirectional), different training objective (Item Masking vs. Shifted Sequence), different loss functions (Softmax loss vs. BCE loss), and importantly, different negative sampling strategies (BERT4Rec does not use sampling, whereas SASRec samples 1 negative per positive). BERT4Rec was published one year later compared to SASRec, and in the original publication (Sun et al., 2017), Sun et al. demonstrated the superiority of BERT4Rec over SASRec. Petrov and Macdonald confirmed this superiority in a recent replicability study (Sant et al., 2019), and observed that, when fully-converged, BERT4Rec still exhibits state-of-the-art performance, outperforming many later models. Sun et al. (Sun et al., 2017) attributed BERT4Rec's high effectiveness to its bi-directional attention mechanism. 
Contrary to that, our theoretical analysis and experiments show that the effectiveness gap should instead be attributed to the model overconfidence caused by the negative sampling used by SASRec (see Section 6.2.1). Indeed, when controlled for negative sampling, these models perform similarly (e.g. SASRec also exhibits state-of-the-art performance when trained without negative sampling). Unfortunately, as we argue in Section 1, the large size of the item catalogue in many real-world systems means that using negative sampling in the training of such systems is unavoidable, and therefore these systems cannot use models that do not use sampling, such as BERT4Rec. Our goal hence is to improve SASRec's performance (by addressing overconfidence) while retaining the negative sampling, which is needed for large-scale systems. We now discuss a probabilistic view of sequential recommendation, which we use for improving SASRec in Section 5. ### Probabilistic View of Sequential Recommendation The goal of a sequential recommender system is to predict the next item in a sequence of user-item interactions. Formally, given a sequence of user-item interactions \(u=\{i_{0},i_{1},i_{2},...,i_{n}\}\), where \(i_{k}\in I\), the goal of the model is to predict the next user interaction \(i_{n+1}\). Sequential recommendation is usually cast as a _ranking problem_, so _predict_ means to rank items in the catalogue according to their estimated probability of appearing next in the sequence. We denote this (_prior_) probability distribution over all items appearing next in the sequence after \(u\) as \(P(i|u)\). \(P(i|u)\) is not directly observable: the training data only contains the user's actual interactions and does not contain information about the probabilities of any alternative items not interacted with. We refer to the prior as \(P(i)\) for simplicity. Learning to estimate the prior distribution \(P(i)\) is a hard task because the model does not have access to it, even during training. Instead, the model learns to estimate these probabilities, i.e. \(\hat{p}=\{\hat{p}_{1},\hat{p}_{2},...,\hat{p}_{|I|}\}\), by using a posterior distribution \(y(i)=\mathbb{I}[i=i^{+}]\), where \(i^{+}\in I\) is a positive interaction selected according to the training objective (as discussed in Section 2.2). \(y(i)\) is measured _after_ the user selected the item, so it always equals 1 for the positive item \(i^{+}\) and equals 0 for all other items. Note that to rank items, models do not have to compute the modelled probabilities \(\hat{p}\) explicitly. Instead, models frequently compute item scores \(s=\{s_{1},s_{2},...,s_{|I|}\}\) and assume that if item \(i\) is scored higher than item \(j\) (\(s_{i}>s_{j}\)) then item \(i\) is more likely to appear next in the sequence than item \(j\) (\(\hat{p}_{i}>\hat{p}_{j}\)). Whether or not it is possible to recover the modelled item probabilities \(\hat{p}=\{\hat{p}_{1},\hat{p}_{2},...,\hat{p}_{|I|}\}\) from the scores \(s\) depends on the loss function used for model training. We say that a loss function \(\mathcal{L}\) _directly models probabilities_ \(\hat{p}\) if there exists a function \(f\), which converts scores to probabilities (\(\hat{p_{i}}=f(s_{i})\)), such that when the model is trained with \(\mathcal{L}\), \(\hat{p}\) approximates the prior distribution \(P\) (e.g. a model trained with \(\mathcal{L}\) minimises the KL divergence between \(P\) and \(\hat{p}\)). In the next section, we discuss the loss functions used by sequential models that directly model probabilities. 
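As a small numeric illustration of this definition (not tied to any specific model), two common choices of the score-to-probability function \(f\) preserve the ranking induced by the scores, but only the softmax choice constrains the resulting probabilities to sum to 1:

```python
import numpy as np

s = np.array([4.0, 2.5, 1.0, -3.0])       # scores for a 4-item catalogue

p_sigmoid = 1.0 / (1.0 + np.exp(-s))      # f used with pointwise BCE
p_softmax = np.exp(s) / np.exp(s).sum()   # f used with Softmax loss

# Both transformations are monotone, so the item ranking is unchanged.
assert (np.argsort(-p_sigmoid) == np.argsort(-s)).all()
assert (np.argsort(-p_softmax) == np.argsort(-s)).all()

print(p_sigmoid.sum())  # ~2.68: independent probabilities, sum > 1
print(p_softmax.sum())  # 1.0: a proper distribution over items
```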
### BCE Loss and Softmax Loss Two popular loss functions that directly model probabilities are _Binary Cross-Entropy (BCE)_ (used by Caser (Caser, 2017) and SASRec (Kalal et al., 2017)) and _Softmax_ loss (used by BERT4Rec (Sun et al., 2017) and ALBERT4Rec (Sant et al., 2019)). Binary Cross-Entropy is a _pointwise_ loss, which treats the ranking problem as a set of independent binary classification problems. It models the probability with the help of the _logistic sigmoid function_ \(\sigma(s)\): \[\hat{p_{i}}=\sigma(s_{i})=\frac{1}{1+e^{-s_{i}}} \tag{1}\] The value of BCE loss is then computed as: \[\mathcal{L}_{\text{BCE}}=-\frac{1}{|I|}\sum_{i\in I}\left[y(i)\log(\hat{p_{i}})+(1-y(i))\log(1-\hat{p_{i}})\right] \tag{2}\] BCE minimises the KL divergence (Kalal et al., 2017, Ch. 5) between the posterior and the modelled distributions, \(D_{KL}(y(i)||\hat{p_{i}})\), where each of the probability distributions is treated as a distribution with two outcomes (i.e. interaction/no interaction). BCE considers each probability independently, so their sum does not have to add up to 1. Indeed, as we show in Section 5, when BCE is used with negative sampling, the model learns to predict probabilities close to 1 for the most highly-ranked items. In contrast, Softmax loss treats the ranking problem as a multi-class classification problem, thereby considering the probability distribution across all items, obtained by using a \(\text{softmax}(\cdot)\) operation: \[\hat{p_{i}}=\text{softmax}(s_{i})=\frac{e^{s_{i}}}{\sum_{j\in I}e^{s_{j}}} \tag{3}\] The value of Softmax loss is then computed as: \[\mathcal{L}_{softmax}=-\sum_{i\in I}y(i)\log(\hat{p_{i}})=-\log(\text{softmax}(s_{i^{+}})) \tag{4}\] Softmax loss minimises the KL divergence (Kal et al., 2017, Ch. 5) between the posterior and modelled distributions \(D_{KL}(y||\hat{p})\), where \(y\) and \(\hat{p}\) are each multi-class probability distributions. In contrast to BCE, the item probabilities \(\hat{p_{i}}\) modelled by Softmax loss add up to 1, meaning that overconfidence is less prevalent (however, it is still known to overestimate the probabilities of the top-ranked items (Kal et al., 2017)). Unfortunately, the \(\text{softmax}(\cdot)\) operation used by Softmax loss requires access to _all_ item scores to compute the probabilities (which makes it more of a _listwise_ loss), whereas if the model is trained with negative sampling, the scores are only computed for the _sampled_ items. This makes Softmax loss unsuitable for training with negative sampling. In particular, this means that BERT4Rec, which uses the Softmax loss, cannot be trained with sampled negatives (without changing the loss function). To use Softmax loss with sampled negatives, Jean et al. (Jean et al., 2019) proposed _Sampled Softmax Loss (SSM)_. SSM approximates the probability \(\hat{p_{i}}\) from Equation (3) using a subset \(I_{k}^{-}\subset I^{-}\) of \(k\) negatives: \[\hat{p_{i}}=\mathrm{SSM}(s_{i},I_{k}^{-})=\frac{e^{s_{i}}}{e^{s_{i^{+}}}+\sum_{j\in I_{k}^{-}}e^{s_{j}}} \tag{5}\] This approximation is then used to derive the loss: \[\mathcal{L}_{SSM}=-\sum_{i\in\{I_{k}^{-}\cup i^{+}\}}y(i)\log(\hat{p_{i}})=-\log(\mathrm{SSM}(s_{i^{+}},I_{k}^{-})) \tag{6}\] The estimated probability value computed with Sampled Softmax is higher than the probability estimated using the full Softmax, as the denominator in Equation (3) is larger than the denominator in Equation (5). However, if all high-scored items are included in the sample \(I_{k}^{-}\), the approximation becomes close. To achieve this, Jean et al. 
originally proposed a heuristic approach specific to textual data (they segmented texts into chunks of related text, where each chunk had only a limited vocabulary size). In the context of sequential recommender systems, some prior works (Zhou et al., 2017; Zhou et al., 2018) used variations of SSM loss with more straightforward sampling strategies, such as popularity-based or uniform sampling. In this paper, we focus on the simplest scenario of uniform sampling, and therefore in our experiments, we use Sampled Softmax loss with uniform sampling. Note that Sampled Softmax Loss normalises probabilities differently compared to the full Softmax loss, and therefore the Sampled Softmax loss and the full Softmax loss are different loss functions. Indeed, as Sampled Softmax uses only a sample of items in the denominator of Equation (5), the estimated probability of the positive item \(\hat{p}_{i^{+}}\) is an overestimation of the actual probability, a form of overconfidence. Indeed, as mentioned above, Sampled Softmax loss fails to estimate probabilities accurately for recommender systems (Srivastava et al., 2017). Nevertheless, as variations of Sampled Softmax have been used in sequential recommendation (Zhou et al., 2017; Zhou et al., 2018), we use Sampled Softmax loss as a baseline in our experiments (see Section 6.2.4). In contrast, it is possible to calculate BCE loss over a set of sampled negatives \(I_{k}^{-}\) without modifying the loss itself (except for a normalisation constant, which does not depend on the item score and therefore can be omitted), as follows: \[\mathcal{L}_{\mathrm{BCE}}=-\frac{1}{|I_{k}^{-}|+1}\left(\log(\sigma(s_{i^{+}}))+\sum_{i\in I_{k}^{-}}\log(1-\sigma(s_{i}))\right) \tag{7}\] Using BCE with sampled negatives is a popular approach, applied by models such as SASRec (Kirkpatrick et al., 2017) (which uses 1 negative per positive) and Caser (Caser, 2017) (which uses 3 negatives). Unfortunately, negative sampling used with Binary Cross-Entropy leads to model overconfidence, which we discuss in the next section. ## 4. Model Overconfidence We say that a model is _overconfident_ in its predictions if its predicted probabilities \(\hat{p_{i}}\) for highly-scored items are much larger than the prior probabilities \(P(i)\), i.e., \(\hat{p_{i}}\gg P(i)\). In general, the magnitudes of the relevance estimates are rank-invariant, i.e. they do not affect the ordering of items, and hence they are rarely considered important when formulating a ranking model. In contrast, overconfidence is problematic only for the loss functions used to train the models, particularly when they directly model the interaction probability. Indeed, for some loss functions (such as pairwise BPR (Zhou et al., 2017) or listwise LambdaRank (Beng et al., 2017)), only the difference between the scores of paired items (\(s_{i}-s_{j}\)) is important, and therefore we cannot define overconfidence for these losses. However, these losses usually require algorithms that iteratively select "informative" negative samples, which are hard to apply with deep learning methods (see also Section 2.1). As discussed in Section 3.2, the prior probability distribution \(P(i)\) cannot be directly observed, and therefore overconfidence may be hard to detect. However, in some cases, overconfidence may be obvious. For example, Figure 1 shows the predicted probabilities of four different models for a sample user in the MovieLens-1M dataset. 
As can be seen from the figure, SASRec's predicted probabilities for items at positions 1..25 are almost indistinguishable from 1. This is a clear sign of overconfidence: only one of these items can be the correct prediction, and therefore we expect the _sum_ of the probabilities to be approximately equal to 1, not each individual probability. In fact, in this figure, the sum of all probabilities predicted by SASRec equals 338.03. In contrast, for BERT4Rec, the sum of probabilities equals exactly 1 (as the probabilities are computed using Softmax) and for our gSASRec (see Section 5.4) it is equal to 1.06. From the figure, we also see that a SASRec model trained with Sampled Softmax loss is also prone to overconfidence (the sum of all probabilities equals 152.3). Overconfidence for highly-ranked items is problematic: the model does not learn to distinguish these items from each other (all their predicted probabilities are approximately equal) and instead focuses on distinguishing top items from the bottom ones. The lack of focus on the top items contradicts our goal: we want the correct order of highly-ranked items and are not interested in score differences beyond a certain cutoff. Moreover, overconfidence is specifically problematic for BCE loss: if an item with high probability \(\hat{p_{i}}\approx 1\) is selected as a negative (the chances of such an event are high when there are many high-scored items), \(\log(1-\hat{p_{i}})\) computed by the loss function tends to \(-\infty\), causing numerical overflow problems and training instability. Next, we introduce gBCE loss, apply it for a theoretical analysis of BCE's overconfidence, and show how gBCE mitigates the overconfidence problem. Figure 1. Predicted probability at different ranks for user 963 in MovieLens-1M. SASRec-SSM is a SASRec model trained with Sampled Softmax loss with 16 negatives. ## 5. Generalised Binary Cross Entropy and its Properties In this section we design gBCE and theoretically show that it can mitigate the overconfidence problem. In Section 5.1 we introduce gBCE and analyse its properties; in Section 5.2 we show that gBCE may be replaced with regular BCE loss with transformed positive scores, which may be more convenient in practice; in Section 5.3 we show how to reparametrise gBCE to make it independent from the chosen sampling rate; finally, in Section 5.4 we introduce gSASRec - an improved version of SASRec, which uses gBCE. ### Generalised Binary Cross Entropy We now introduce the _Generalised Binary Cross Entropy (gBCE)_ loss, which we use to analyse and mitigate the overconfidence induced by negative sampling. We define gBCE, parameterised by \(\beta\), as: \[\mathcal{L}_{\text{gBCE}}^{\beta}=-\frac{1}{|I_{k}^{-}|+1}\left(\log(\sigma^{\beta}(s_{i^{+}}))+\sum_{i\in I_{k}^{-}}\log(1-\sigma(s_{i}))\right). \tag{8}\] gBCE differs from regular BCE loss in that it uses the _generalised logistic sigmoid function_ (Kang and Zhang, 2017; Zhang et al., 2018) for the positive sample (the sigmoid raised to the power of \(\beta\)). The power parameter \(\beta\geq 0\) controls the shape of the generalised sigmoid. For example, when \(\beta\approx 0\), the output of the generalised sigmoid becomes closer to \(1\) for all input scores. On the other hand, when \(\beta=1\), BCE and gBCE are equal: \[\mathcal{L}_{\text{gBCE}}^{1}=\mathcal{L}_{\text{BCE}} \tag{9}\] Similarly to BCE, gBCE is also a pointwise loss, and it considers the probability of interaction as a sigmoid transformation of the model score (Equation (1)). 
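The following sketch (our illustration, not the authors' released code) implements Equation (8) for a single positive and \(k\) sampled negatives, using \(\log\sigma^{\beta}(s)=\beta\log\sigma(s)\) and \(\log(1-\sigma(s))=\log\sigma(-s)\) for numerical stability:

```python
import torch
import torch.nn.functional as F

def gbce_loss(pos_score: torch.Tensor, neg_scores: torch.Tensor,
              beta: float) -> torch.Tensor:
    # Equation (8): only the positive term is raised to the power beta.
    pos_term = beta * F.logsigmoid(pos_score)    # log(sigma^beta(s+))
    neg_terms = F.logsigmoid(-neg_scores).sum()  # sum of log(1 - sigma(s-))
    return -(pos_term + neg_terms) / (neg_scores.numel() + 1)

pos, negs = torch.tensor(3.0), torch.randn(256)
print(gbce_loss(pos, negs, beta=1.0))  # beta = 1 recovers plain BCE (Eq. (9))
print(gbce_loss(pos, negs, beta=0.1))  # beta < 1 shrinks the positive term,
                                       # counteracting overconfidence
```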
We now show the exact form of the relation between the prior probability \(P(i)\) (which we desire to estimate, as we discuss in Section 3.2) and the modelled probabilities \(\hat{p}_{i}=\sigma(s_{i})\) learned by a model trained with gBCE. **Theorem 5.1**.: _For every user in the dataset, let \(P(i)\) be the prior probability distribution of the user interacting with item \(i\in I\), \(s=\{s_{1},...,s_{|I|}\}\) be the scores predicted by the model, \(i^{+}\) be a positive sample selected by the user, \(I_{k}^{-}=\{i_{1}^{-},i_{2}^{-},...,i_{k}^{-}\}\) be \(k\) randomly (uniformly, with replacement) sampled negatives, and \(\alpha=\frac{k}{|I^{-}|}\) be the negative sampling rate. Then a recommender model, trained on a sufficiently large number of training samples using gradient descent and the \(\mathcal{L}_{\text{gBCE}}^{\beta}\) loss, will converge to predict a score distribution \(s\), so that_ \[\sigma(s_{i})=\frac{\beta P(i)}{\alpha-\alpha P(i)+\beta P(i)};\forall i\in I \tag{10}\] Proof.: With a sufficiently large number of training samples, gradient descent converges to minimise the expectation of the loss function (Kang and Zhang, 2017, Ch. 4) (assuming the expectation has no local minima). Therefore, the predicted score distribution converges to the minimum of the expectation \(\mathbb{E}\left[\mathcal{L}_{\text{gBCE}}^{\beta}\right]\): \[s=\operatorname*{arg\,min}_{s}\mathbb{E}\left[\mathcal{L}_{\text{gBCE}}^{\beta}\right] \tag{11}\] Hence, our goal is to show that Theorem 5.1 is true if and only if the expectation \(\mathbb{E}\left[\mathcal{L}_{\text{gBCE}}^{\beta}\right]\) is minimised. To show that, we first rewrite the definition of \(\mathcal{L}_{\text{gBCE}}^{\beta}\) (Equation (8)) as a sum of contributions from each individual item in \(I\): \[\mathcal{L}_{\text{gBCE}}^{\beta}=\frac{1}{|I_{k}^{-}|+1}\sum_{i\in I}\mathcal{L}_{i} \tag{12}\] where the contribution of each item, \(\mathcal{L}_{i}\), is defined as follows: \[\mathcal{L}_{i}=-(\mathbb{I}[i=i^{+}]\log(\sigma^{\beta}(s_{i}))+\sum_{j=1}^{k}\mathbb{I}[i=i_{j}^{-}]\log(1-\sigma(s_{i}))) \tag{13}\] The probability of an item being selected as a positive is defined by the prior distribution: \[P(\mathbb{I}[i=i^{+}])=P(i) \tag{14}\] whereas the probability of an item being selected as the \(j^{th}\) negative is equal to the product of the probability of the item being negative and the negative sampling probability. If we apply uniform sampling with replacement for identifying negatives, then the sampling probability is always equal to \(\frac{1}{|I^{-}|}\), so overall, the probability of selecting an item \(i\) as the \(j^{th}\) negative can be written as: \[P(\mathbb{I}[i=i_{j}^{-}])=\frac{1}{|I^{-}|}(1-P(i)) \tag{15}\] We can now calculate the expectation of each individual loss contribution \(\mathbb{E}[\mathcal{L}_{i}]\): \[\mathbb{E}[\mathcal{L}_{i}]=-(P(\mathbb{I}[i=i^{+}])\log(\sigma^{\beta}(s_{i}))+\sum_{j=1}^{k}P(\mathbb{I}[i=i_{j}^{-}])\log(1-\sigma(s_{i}))) \tag{16}\] (by the definition of expectation) \[=-(P(i)\log(\sigma^{\beta}(s_{i}))+\sum_{j=1}^{k}\frac{1}{|I^{-}|}(1-P(i))\log(1-\sigma(s_{i})))\] (substituting Equations (14) and (15)) \[=-(\beta P(i)\log(\sigma(s_{i}))+\alpha(1-P(i))\log(1-\sigma(s_{i}))) \tag{17}\] (using \(\alpha=\frac{k}{|I^{-}|}\) and \(\log(\sigma^{\beta}(s_{i}))=\beta\log(\sigma(s_{i}))\)). Setting the derivative of Equation (17) with respect to \(s_{i}\) to zero (using \(\frac{d}{ds_{i}}\sigma(s_{i})=\sigma(s_{i})(1-\sigma(s_{i}))\)) shows that \(\mathbb{E}[\mathcal{L}_{i}]\) is minimised exactly when \[\sigma(s_{i})=\frac{\beta P(i)}{\alpha-\alpha P(i)+\beta P(i)} \tag{18}\] which coincides with Equation (10). Finally, by the linearity of expectation, \[\mathbb{E}\left[\mathcal{L}_{\text{gBCE}}^{\beta}\right]=\frac{1}{|I_{k}^{-}|+1}\sum_{i\in I}\mathbb{E}[\mathcal{L}_{i}] \tag{19}\] According to Equation (19), the expectation \(\mathbb{E}\left[\mathcal{L}_{\text{gBCE}}^{\beta}\right]\) is minimised when, for each \(i\in I\), the individual contributions \(\mathbb{E}[\mathcal{L}_{i}]\) are minimised, i.e. when Equation (18) is true for each \(i\in I\). 
We now use Theorem 5.1 to analyse properties of both regular and generalised Binary Cross-Entropy losses. First, we show that it is possible to train a model to estimate a prior distribution \(P(i)\) exactly using gBCE loss. **Corollary 5.1.1**.: _If a model is trained using negative sampling with sampling rate \(\alpha\leq 1\) and gBCE loss \(\mathcal{L}_{\text{gBCE}}^{\beta}\) with \(\beta=\alpha\), then the model converges to predict probabilities calibrated with the prior distribution:_ \[\sigma(s_{i})=P(i) \tag{20}\] Proof.: We can obtain Equation (20) by substituting \(\beta=\alpha\) in Equation (10). We now use Theorem 5.1 to analyse properties of regular Binary Cross-Entropy loss. **Corollary 5.1.2**.: _If a model is trained with BCE loss \(\mathcal{L}_{\text{BCE}}\) and negative sampling, with sampling rate \(\alpha\), then it converges to predict scores \(s_{i}\) so that_ \[\sigma(s_{i})=\frac{P(i)}{\alpha-\alpha P(i)+P(i)} \tag{21}\] Proof.: According to Equation (9), \(\mathcal{L}_{\text{BCE}}\) is equal to \(\mathcal{L}_{\text{gBCE}}^{\beta}\) with \(\beta=1\). Substituting \(\beta=1\) into Equation (10) we obtain Equation (21). We can now show that SASRec learns an overconfident score distribution: **Corollary 5.1.3**.: _The SASRec model with \(\mathcal{L}_{\text{BCE}}\) and one negative per positive converges to yield scores \(s_{i}\), such that:_ \[\sigma(s_{i})=\frac{P(i)|I|-P(i)}{P(i)|I|-2P(i)+1} \tag{22}\] Proof.: SASRec uses one negative per positive, meaning that its sampling rate is equal to: \[\alpha=\frac{1}{|I|-1} \tag{23}\] Substituting Equation (23) into Equation (21), we get Equation (22). Corollary 5.1.3 explains why SASRec tends to predict very high probabilities for top-ranked items: when an item has a higher-than-average probability of being selected \((P(i)\gg\frac{1}{|I|})\), the term \(P(i)|I|\) dominates both the numerator and denominator of Equation (22), meaning that the predicted probability \(\sigma(s_{i})\) will be very close to 1. ### Relation between BCE and gBCE In Section 5.1 we showed that gBCE is equal to regular BCE loss when the power parameter \(\beta\) is set to 1. We now show that these two loss functions have a deeper relation, which allows using well-optimised versions of BCE from deep learning frameworks instead of gBCE. **Theorem 5.2**.: _Let \(s^{+}\) be the predicted score for a positive item and \(s^{-}=\{s_{i_{1}^{-}},s_{i_{2}^{-}},...,s_{i_{k}^{-}}\}\) be the predicted scores for the sampled negative items. Then_ \[\mathcal{L}_{\text{gBCE}}^{\beta}(s^{+},s^{-})=\mathcal{L}_{\text{BCE}}(\gamma(s^{+}),s^{-}) \tag{24}\] _where_ \[\gamma(s^{+})=\log\left(\frac{1}{\sigma^{-\beta}(s^{+})-1}\right) \tag{25}\] Proof.: According to the definition of the logistic sigmoid function (Equation (1)), \[\sigma(\gamma(s^{+})) =\frac{1}{e^{-\gamma(s^{+})}+1}\] \[=\frac{1}{e^{-\log\left(\frac{1}{\sigma^{-\beta}(s^{+})-1}\right)}+1}\] \[\quad\text{(Substituting $-\gamma(s^{+})$ with its definition (Eq. 
(25)))}\] \[=\frac{1}{e^{\log\left(\sigma^{-\beta}(s^{+})-1\right)}+1}\] \[\quad\text{(Using properties of the $\log(\cdot)$ function)}\] \[=\frac{1}{\sigma^{-\beta}(s^{+})-1+1}\] \[\quad\text{(The exponent and the logarithm cancel each other out)}\] \[=\frac{1}{\sigma^{-\beta}(s^{+})}=\sigma^{\beta}(s^{+}) \tag{26}\] Substituting \(\sigma^{\beta}(s^{+})=\sigma(\gamma(s^{+}))\) into the definition of \(\mathcal{L}_{\text{gBCE}}^{\beta}\) (Equation (8)) and taking into account the definition of \(\mathcal{L}_{\text{BCE}}\) (Equation (7)) we get the desired equality: \[\mathcal{L}_{\text{gBCE}}^{\beta}(s^{+},s^{-}) =-\frac{1}{|I_{k}^{-}|+1}\left(\log(\sigma^{\beta}(s^{+}))+\sum_{i\in I_{k}^{-}}\log(1-\sigma(s_{i}))\right)\] \[=-\frac{1}{|I_{k}^{-}|+1}\left(\log(\sigma(\gamma(s^{+})))+\sum_{i\in I_{k}^{-}}\log(1-\sigma(s_{i}))\right)\] \[=\mathcal{L}_{\text{BCE}}(\gamma(s^{+}),s^{-})\] In practice, Theorem 5.2 allows us to transform the predicted positive scores by using Equation (25) and then train the model using the regular BCE loss, instead of using gBCE directly. This is actually preferable because many machine learning frameworks have efficient and numerically stable implementations of standard loss functions such as BCE loss. Indeed, in our implementation, we also rely on the Equation (25) score transformation and regular BCE loss instead of using gBCE directly. ### Calibration Parameter \(t\) As shown in Section 5.1, setting the power parameter \(\beta=1\) in gBCE recovers the regular BCE loss, whereas setting \(\beta\) equal to the sampling rate \(\alpha\) results in learning a fully calibrated distribution. This means that reasonable values of the \(\beta\) parameter lie in the interval \([\alpha..1]\). In practice, we found working with this interval inconvenient: we usually do not control the \(\alpha\) parameter directly and instead infer it from the number of negatives and the size of the dataset. Similarly, the possible values of \(\beta\) depend on these variables as well. To make the interval of possible values independent from \(\alpha\), we control the power parameter \(\beta\) indirectly with the help of a _calibration parameter_ \(t\), which adjusts \(\beta\) as follows: \[\beta=\alpha\left(t\left(1-\frac{1}{\alpha}\right)+\frac{1}{\alpha}\right) \tag{27}\] This substitution makes model configuration simpler: we select \(t\) in the interval \([0..1]\), where \(t=0\) (\(\beta=1\)) corresponds to regular BCE loss, and \(t=1\) (\(\beta=\alpha\)) corresponds to the fully calibrated version of gBCE, which drives the model to estimate the prior \(P(i)\) exactly (according to Corollary 5.1.1). ### gSASRec _gSASRec_ (generalised SASRec) is a version of the SASRec model with an increased number of negatives, trained with gBCE loss. Compared with SASRec, gSASRec has two extra hyperparameters: (i) the number of negative samples per positive \(k\in[1..|I^{-}|]\), and (ii) the parameter \(t\in[0..1]\), which indirectly controls the power parameter \(\beta\) in gBCE according to Equation (27). In particular, when \(k=1\) and \(t=0\), gSASRec is the original SASRec model, as SASRec uses 1 negative per positive and gBCE becomes BCE when \(t=0\). While our primary focus is on the SASRec model, it is possible to apply gBCE with other models; as an example, we also use it with BERT4Rec (see Section 6.2.4). In the next section we empirically evaluate gSASRec and show that its generalisations over SASRec are indeed beneficial and allow it to match BERT4Rec's performance, while retaining negative sampling. 
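Putting Sections 5.2-5.4 together, here is a sketch (ours, with illustrative numbers) of the practical recipe: pick \(t\), derive \(\beta\) via Equation (27), transform the positive score with \(\gamma(\cdot)\) from Equation (25), and reuse a framework-optimised BCE loss:

```python
import torch
import torch.nn.functional as F

def beta_from_t(t: float, alpha: float) -> float:
    # Equation (27): t = 0 gives beta = 1 (plain BCE);
    # t = 1 gives beta = alpha (fully calibrated gBCE).
    return alpha * (t * (1.0 - 1.0 / alpha) + 1.0 / alpha)

def gamma(pos_score: torch.Tensor, beta: float) -> torch.Tensor:
    # Equation (25): sigma(gamma(s+)) = sigma(s+) ** beta (Theorem 5.2).
    return torch.log(1.0 / (torch.sigmoid(pos_score) ** (-beta) - 1.0))

k, num_items = 256, 1_000_000          # illustrative catalogue size
alpha = k / (num_items - 1)            # negative sampling rate
beta = beta_from_t(t=0.75, alpha=alpha)

pos_score, neg_scores = torch.tensor(3.0), torch.randn(k)
logits = torch.cat([gamma(pos_score, beta).view(1), neg_scores])
targets = torch.cat([torch.ones(1), torch.zeros(k)])
# The standard, numerically optimised BCE now computes gBCE (Eq. (24)).
print(F.binary_cross_entropy_with_logits(logits, targets))
```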
## 6. Experiments

We design our experiments to answer the following research questions: **RQ1**: How does negative sampling affect BERT4Rec's performance gains over SASRec? **RQ2**: What is the effect of gBCE on predicted item probabilities? **RQ3**: What is the effect of negative sampling rate and parameter \(t\) on the performance of gSASRec? **RQ4**: How does gBCE loss affect the performance of SASRec and BERT4Rec models trained with negative sampling? **RQ5**: How does gSASRec perform in comparison to state-of-the-art sequential recommendation models? ### Experimental Setup #### 6.1.1. Datasets We experiment with three datasets: MovieLens-1M (Hu et al., 2017), Steam (Sandam et al., 2018) and Gowalla (Gowalla, 2018). There are known limitations with MovieLens-1M (Sandam et al., 2018; Sandam et al., 2018): it is a movie ratings dataset, and users may not rate items in the same order as they watch them, so the task, in this case, may be described as recommending movies to rate (and not to watch) next. However, it remains one of the most popular benchmarks for evaluating sequential recommender systems (Han et al., 2017; Sandam et al., 2018; Sandam et al., 2018; Sandam et al., 2018; Sandam et al., 2018), and more importantly, researchers use it consistently without additional preprocessing (the dataset is already preprocessed by its authors). This consistency allows us to compare results reported across different papers, and therefore we find experimenting with this dataset important. To stay consistent with previous research (Sandam et al., 2018; Sandam et al., 2018), we use the preprocessed versions of the MovieLens-1M and Steam datasets provided in the BERT4Rec repository3 and do not apply any additional preprocessing. These datasets have relatively small numbers of items and are therefore suitable for training unsampled models such as BERT4Rec. Footnote 3: [https://github.com/FeiSun/BERT4Rec/tree/master/data](https://github.com/FeiSun/BERT4Rec/tree/master/data) Footnote 4: All code for this paper is available at [https://github.com/asash/gsasrec](https://github.com/asash/gsasrec) Footnote 5: Recall that SASRec uses BCE as a loss function - we do not test pairwise and listwise loss functions because, as mentioned in Section 2.1, they are expensive to apply on GPUs, and (e.g.) LambdaRank (Sandam et al., 2018) does not improve SASRec (Sandam et al., 2018; Sandam et al., 2018). As a demonstration that gSASRec is suitable for larger datasets, we also use the Gowalla dataset, which is known to be problematic for BERT4Rec (Sandam et al., 2018; Sandam et al., 2018). For this dataset, and following common practice (Han et al., 2017; Sandam et al., 2018; Sandam et al., 2018; Sandam et al., 2018; Sandam et al., 2018), we remove users with fewer than 5 interactions. Table 1 lists the salient characteristics of all three datasets. We split the data using the standard _leave-one-out_ approach, where we leave the last interaction of each user for the test dataset. Additionally, for each dataset, we randomly selected 512 users - for these users, we select their second-last interaction and include it in a validation dataset, which we use for hyperparameter tuning as well as to control model early stopping4. 
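A minimal sketch of this leave-one-out protocol, assuming `interactions` maps each user to a chronologically ordered item sequence (the data structure is our assumption):

```python
import random

def leave_one_out_split(interactions, num_val_users=512, seed=31337):
    train, val, test = {}, {}, {}
    val_users = set(random.Random(seed).sample(sorted(interactions),
                                               num_val_users))
    for user, seq in interactions.items():
        test[user] = seq[-1]             # last interaction held out for test
        if user in val_users:
            val[user] = seq[-2]          # second-last used for tuning
            train[user] = seq[:-2]
        else:
            train[user] = seq[:-1]
    return train, val, test
```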
#### 6.1.2. Metrics Until recently, a somewhat common approach was to evaluate recommender systems on sampled metrics computed using only a small number of items, but it has been shown that this leads to incorrect evaluation results in general; we therefore report metrics computed over the full, unsampled item catalogue. Our SASRec implementation is based on the original code6, whereas our implementation of BERT4Rec is based on the more efficient implementation7 from our recent reproducibility paper (Kumar et al., 2019). To ensure that the models are fully trained, we use an early stopping mechanism to stop training if NDCG@10 measured on the validation dataset has not improved for 200 epochs. Footnote 6: [https://github.com/kang205/SASRec/](https://github.com/kang205/SASRec/) Footnote 7: [https://github.com/asash/bert4rec_repro](https://github.com/asash/bert4rec_repro) ### Results #### 6.2.1. RQ1 _How does negative sampling affect BERT4Rec's performance gains over SASRec._ To answer our first research question, we train both BERT4Rec and SASRec on the Steam and MovieLens-1M datasets using the sampling strategies that were originally used in these models: (i) one negative per positive and BCE loss (as in SASRec) and (ii) all negatives per positive and Softmax loss (as in BERT4Rec).8 We use the original training objectives for both architectures: item masking in BERT4Rec and sequence shifting in SASRec; we also retain the architecture differences between the models (i.e. we keep the uni-directional attention in SASRec and the bi-directional attention from BERT4Rec). The results of our comparison are summarised in Table 2. The magnitudes of the SASRec and BERT4Rec results are aligned with those reported in (Kumar et al., 2019). As can be seen from the table, in all four cases, changing the sampling strategy from the one used by SASRec to the one used in BERT4Rec significantly improves effectiveness. For example, SASRec's NDCG@10 on MovieLens-1M is improved from 0.131 to 0.169 (+29.0%) by removing negative sampling and applying Softmax loss. BERT4Rec achieves a larger improvement of NDCG@10 on Steam (0.0513 \(\rightarrow\) 0.0746: +45.4%) when changing the sampling strategy from 1 negative to all negatives. In contrast, the effect of changing the architecture is moderate (e.g. statistically indistinguishable in 2 out of 4 cases), and frequently negative (3 cases out of four, 1 significant). Footnote 8: In this RQ, our goal is to better understand BERT4Rec’s gains over SASRec, so we only experiment with their original loss functions and sampling strategies; we apply other loss functions, such as Sampled Softmax, and more negative samples in Section 6.2.4. In answer to RQ1, we conclude that the absence of negative sampling plays the key role in BERT4Rec's success over SASRec, whereas the effect of applying BERT4Rec's bi-directional attention architecture is only moderate and frequently negative. Therefore, the performance gains of BERT4Rec over SASRec can be attributed to the absence of negative sampling and the Softmax loss, and not to its architecture and training objective. This is contrary to the explanations of the original BERT4Rec authors in (Kumar et al., 2019), who attributed its superiority to its bi-directional attention mechanism (on the same datasets). We now analyse how gBCE changes the distribution of predicted probabilities. 
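For reference, a sketch of the unsampled NDCG@K used throughout these experiments: with a single held-out item per user, the ideal DCG is 1, so NDCG reduces to a rank discount computed over the full catalogue:

```python
import numpy as np

def ndcg_at_k(scores: np.ndarray, target_item: int, k: int = 10) -> float:
    top_k = np.argsort(-scores)[:k]   # rank over *all* items, no sampling
    hits = np.where(top_k == target_item)[0]
    # One relevant item: DCG = 1/log2(rank + 2) and the ideal DCG is 1.
    return 0.0 if hits.size == 0 else 1.0 / np.log2(hits[0] + 2)

scores = np.random.randn(1_000_000)   # one score per catalogue item
print(ndcg_at_k(scores, target_item=42))
```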
We now analyse how gBCE changes the distribution of predicted probabilities.

#### 6.2.2. RQ2. _Effect of gBCE on predicted interaction probabilities._

To analyse the effects of gBCE on predicted probabilities, we train three models: a regular SASRec model and two configurations of gSASRec: the first with 128 negatives and \(t=0.5\), and the second with 256 negatives and \(t=1.0\). Our goal is to compare the prior probabilities \(P(i)\) with the probabilities predicted by the model \(\hat{p}_{i}\). As we discuss in Section 3.2, \(P(i)\) is unknown, so direct measurement of such a relation is hard. Hence, as a substitute for \(P(i)\), we use the popular mean Precision@K metric, which according to Cormack et al. (Cormack et al., 2017) can be seen as a measurement of the conditional probability of an item being relevant, given that its rank is less than K. We compare this metric with the average predicted probability of items retrieved at rank less than K. We perform this comparison for cutoffs K in the range [1..100]. Figure 2 displays the comparison results for the MovieLens-1M and Steam datasets, illustrating the expected theoretical relationship between Precision@K and Predicted Probability@K based on Theorem 5.1. Figure 2b shows that the theoretical prediction from Theorem 5.1 closely matches the observed relationship between Precision@K and Predicted Probability@K in the Steam dataset. In the MovieLens-1M dataset (Figure 2a), a slight discrepancy appears between the theoretical prediction and the observed relationship, likely because the smaller number of users in the dataset does not meet the requirement of Theorem 5.1 for an adequate amount of training samples. Despite these small discrepancies, the relation follows the trends expected from our theoretical analysis. In particular, Figure 2 shows that, as expected from Corollary 5.1.3, SASRec is indeed prone to overconfidence and on average predicts probabilities very close to 1 for all ranks less than 100. In contrast, the probabilities predicted by gSASRec are considerably less than 1. For example, for MovieLens-1M, gSASRec trained with 128 negatives and \(t=0.5\) on average predicts probability 0.57 at K=1, while the version with 256 negatives and \(t=1.0\) predicts probability 0.13 at the same cutoff. Together, this analysis shows that gSASRec trained with gBCE successfully mitigates the overconfidence problem of SASRec. Furthermore, from the figure we also see that when parameter \(t\) is set to 1, the mean predicted probability is well-calibrated with mean precision at all rank cutoffs (particularly on the Steam dataset). This is well-aligned with Corollary 5.1.1, which states that setting parameter \(\beta\) in gBCE equal to the sampling rate (i.e. setting parameter \(t=1\)) results in learning fully calibrated probabilities. Overall, in answer to RQ2, we conclude that gBCE successfully mitigates the overconfidence problem, in a manner that is well-aligned with our theoretical analysis. We next turn to the impact of gBCE on effectiveness.

#### 6.2.3. RQ3. _Effect of negative sampling rate and parameter \(t\) on the performance of gSASRec._

In comparison to SASRec, gSASRec has two additional hyperparameters: the number of negative samples and the parameter \(t\), which adjusts probability calibration. To explore the impact of these parameters on performance, we conduct a grid search, selecting the negative sample count from [1, 4, 16, 64, 256] and the calibration parameter \(t\) from [0, 0.25, 0.5, 0.75, 1.0].
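gBCE itself is defined earlier in the paper; purely for illustration, a PyTorch-style sketch is given below. We assume \(\beta\) interpolates linearly between 1 (at \(t=0\), recovering plain BCE) and the sampling rate \(\alpha\) (at \(t=1\), the fully calibrated setting of Corollary 5.1.1), and use \(\log\sigma^{\beta}(s)=\beta\log\sigma(s)\) for numerical stability:

```python
import torch
import torch.nn.functional as F

def gbce_loss(pos_scores, neg_scores, num_items, t=0.75):
    """pos_scores: (batch,) logits of positives; neg_scores: (batch, k) logits of
    uniformly sampled negatives. beta = 1 + t * (alpha - 1) is our assumed
    interpolation between plain BCE (t=0) and full calibration (t=1)."""
    k = neg_scores.shape[-1]
    alpha = k / (num_items - 1)                      # negative sampling rate
    beta = 1.0 + t * (alpha - 1.0)
    pos_term = -beta * F.logsigmoid(pos_scores)      # -log sigma(s+)^beta
    neg_term = -F.logsigmoid(-neg_scores).sum(-1)    # -sum log(1 - sigma(s-))
    return (pos_term + neg_term).mean()
```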
\begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Dataset} & Negative sampling & 1 negative & No negative & **Negative** \\ & and loss function\(\rightarrow\) & per positive; & sampling: & **sampling** \\ & Architecture\(\downarrow\) & BCE Loss & Softmax Loss & **and loss** \\ & & (as SASRec) & (as BERT4Rec) & **effect** \\ \hline \multirow{2}{*}{ML-1M} & SASRec & 0.131 & 0.169 & +29.0\%\({}^{*}\) \\ & BERT4Rec & 0.123 & 0.161 & +30.8\%\({}^{*}\) \\ & **Architecture effect** & **-6.1\%** & **-4.7\%** & \\ \hline \multirow{2}{*}{Steam} & SASRec & 0.0581 & 0.0721 & +24.1\%\({}^{*}\) \\ & BERT4Rec & 0.0513 & 0.0746 & +45.4\%\({}^{*}\) \\ & **Architecture effect** & **-11.7\%\({}^{*}\)** & **+3.4\%\({}^{*}\)** & \\ \hline \hline \end{tabular} \end{table} Table 2. Effects of model architecture and negative sampling on NDCG@10, for the MovieLens-1M (ML-1M) and Steam datasets. \({}^{*}\) denotes a significant change (\(p<0.05\)) in NDCG@10 caused by negative sampling (comparing horizontally) or model architecture (comparing vertically).

Figure 3 portrays our grid search on MovieLens-1M (on other datasets we observed a similar pattern and omit their figures for brevity). From the figure, we observe that, as expected from the theoretical analysis, both \(t\) and the number of negatives have a positive effect on model effectiveness. For example, when the number of negatives is set to 1, varying \(t\) from 0 to 1 increases NDCG@10 from 0.126 to 0.158 (+25%, significant, compared to SASRec, which is also gSASRec with 1 negative and calibration \(t=0\)). Interestingly, the result of gSASRec with 1 negative and \(t=1\) is similar to what BERT4Rec achieves with all negatives (0.158 vs. 0.161: -1.86%, not significant). We also observe that when the number of negatives is higher, setting a high value of \(t\) is less important. For example, when the model is trained with 256 negatives (a 7.49% sampling rate), the model achieves high effectiveness with all values of \(t\). This is also not surprising: by design, more negative samples and higher values of \(t\) should have a similar effect in gBCE. During our experiments, we also observed that setting parameter \(t\) very close to 1 increased the training time of the model. Keeping this in mind, in practical applications we recommend setting \(t\) between 0.75 and 0.9, and the number of negatives between 128 and 256; this combination works well on all datasets, converging to results that are close to the best observed without increasing training time. This answers RQ3.

#### 6.2.4. RQ4. _Effect of gBCE loss on negatively sampled SASRec and BERT4Rec._

To investigate the effect of gBCE on SASRec and BERT4Rec models with negative sampling, we train the models with the number of negative samples selected from [1, 4, 16, 64, 256] and the loss function selected from [BCE, gBCE, Sampled Softmax Loss (SSM)] on the MovieLens-1M dataset. In this experiment, we use a fully-calibrated version of gBCE (\(t=1.0\)). Figure 4 summarises the results of the experiment. As we can see from the figure, gBCE performs better than both BCE and Sampled Softmax loss when the number of negatives is small.
For example, for BERT4Rec trained with 4 negatives, gBCE has a higher Recall@1 (0.059) than both BCE (0.055; -5.8% compared with gBCE) and Sampled Softmax (0.046; -20%), and has the highest NDCG@10 of 0.154, while BCE has an NDCG@10 of 0.150 (-2.6% compared with gBCE) and Sampled Softmax has an NDCG@10 of 0.134 (-12.9%). In the case of SASRec, the difference is even larger when the number of negatives is small (recall that SASRec trained with gBCE is also gSASRec). For example, with 16 negatives, gBCE achieves a Recall@1 of 0.0769, BCE achieves 0.0673 (-12.5%), and Sampled Softmax achieves 0.0635 (-17.5%). We hypothesise that gBCE affects SASRec more than BERT4Rec due to their training objectives. SASRec predicts the next item in a sequence, while BERT4Rec predicts randomly masked items. Consequently, altering SASRec's loss function directly impacts its performance in next-item prediction. In contrast, changing BERT4Rec's loss function only affects the masking task, which is less directly related to the next-item prediction task [24, 256]. On the other hand, when more negatives are sampled, gBCE becomes less beneficial. For example, with 256 negatives, all three loss functions achieve similar NDCG@10 (0.1674 gBCE; 0.1703 (+1.7%) BCE; and 0.1660 (-0.8%) Sampled Softmax). This is an expected result because 256 negatives represent a significant proportion of all negatives (7.5%), and overconfidence becomes less of an issue for BCE and Sampled Softmax. In conclusion, for RQ4, gBCE outperforms BCE and Sampled Softmax in SASRec and BERT4Rec with few negatives; the improvement is larger in SASRec. However, with many negatives, traditional loss functions like BCE and Sampled Softmax work well unaltered, but high sampling rates are impractical due to memory and computational constraints.

Figure 3. gSASRec: Effect of varying the number of negatives and calibration parameter \(t\) on NDCG@10, MovieLens-1M. \({}^{*}\) denotes a significant improvement over SASRec (\(pvalue<0.05\), Bonferroni multiple test correction).

Figure 2. Relation between the Mean Precision@K metric and the Mean Predicted Probability@K for cutoffs K in the [1..100] range. The figure also includes the theoretical prediction for the relation according to Theorem 5.1.

#### 6.2.5. RQ5. _gSASRec performance in comparison to state-of-the-art sequential recommendation models._

To answer our last research question, we compare gSASRec with the baseline models. We also add to the comparison a version of SASRec trained with full Softmax (without sampling) because, as we discuss in RQ1, it exhibits SOTA performance; however, we omit non-standard versions of BERT4Rec and SASRec trained with BCE or Sampled Softmax because, as we report in RQ4, they are not beneficial compared to gBCE. We also exclude BERT4Rec with gBCE from our analysis because, as per RQ4, gSASRec achieves superior results when measured by Recall@1 and similar results when evaluated by NDCG@10. After tuning hyperparameters on the validation set, we report the results of gSASRec with 128 negatives and \(t=0.9\) for Steam and Gowalla, and with 256 negatives and \(t=0.75\) for MovieLens-1M. Table 3 summarises the results of our evaluation. The table shows that gSASRec achieved the best or the second best result on all datasets according to all metrics. Indeed, on the smaller datasets (MovieLens and Steam), where we were able to train BERT4Rec and SASRec without sampling, gSASRec performs similarly to the best unsampled model
(e.g. +4.1% NDCG@10 on MovieLens-1M compared to SASRec-Softmax, not significant, or -1.74% compared to BERT4Rec on Steam, significant). Interestingly, on MovieLens-1M, both SASRec-Softmax (our version of SASRec trained without negative sampling) and gSASRec significantly improve Recall@1, suggesting that at least in some cases SASRec's unidirectional architecture may be beneficial. This also echoes our observations while analysing RQ1. Crucially, gSASRec always significantly outperforms the regular SASRec model (+34% NDCG@10 on MovieLens-1M, +26% on Steam, +47% on Gowalla). The result on Gowalla is particularly important, as it demonstrates that gSASRec is suitable for datasets with more than 1 million items, and it improves SASRec's results by a large margin on this large dataset. From Table 3 we also see that all versions of SASRec (including gSASRec) require less training time than BERT4Rec. For example, on MovieLens, gSASRec is 73.2% faster to train compared to BERT4Rec (23 minutes vs. 85 minutes) and, on Steam, gSASRec is 90.9% faster (58 minutes vs. 642 minutes). However, we also see that gSASRec requires more training time than SASRec (e.g. 58 vs. 32 minutes on Steam); we explain this by the fact that more accurate probability estimation with gBCE requires more training epochs to converge (238 epochs vs. 170 epochs in our experiment). Finally, for MovieLens-1M, we compare the results achieved by gSASRec with those of the most recently proposed models in the literature which report the best results, namely ALBERT4Rec (Han et al., 2017) (an effective model similar to BERT4Rec, but based on ALBERT (Li et al., 2017)), and two contrastive models: DuoRec (Li et al., 2017) and CBiT (Li et al., 2017). All papers in our selection use the same data-splitting strategy and unsampled metrics, so the results are comparable. Table 4 summarises this comparison. As we can see from the table, all these publications report Recall@10 close to 0.3, which is similar to what we obtain with gSASRec. However, only gSASRec achieves an NDCG@10 above 0.17. Furthermore, as observed from Figure 3, this result is not a one-off occurrence but a consistent outcome when the model is trained with 256 negatives, making it unlikely to be a statistical fluctuation. This is likely due to its better focus on highly-ranked items, as gBCE is specifically designed to mitigate overconfidence in highly-scored items. Overall, our experiments show that gSASRec performs on par with SOTA models, retaining the negative sampling required for use on big catalogues and converging faster than BERT4Rec.

## 7. Conclusions

In this paper, we studied the impact of negative sampling on sequential recommender systems. We showed (theoretically and empirically) that negative sampling coupled with Binary Cross-Entropy loss (a popular combination used by many sequential models) leads to a shifted score distribution, called overconfidence. We showed that overconfidence is the only reason why SASRec underperforms compared to the state-of-the-art BERT4Rec. Indeed, when we control for negative sampling, the two models perform similarly. We proposed a solution to the overconfidence problem in the form of gBCE and theoretically proved that it can mitigate overconfidence. We further proposed gSASRec, which uses gBCE, and experimentally showed that it can significantly outperform the best unsampled models (e.g. +9.47% NDCG@10 on the MovieLens-1M dataset compared to BERT4Rec) while requiring less training time
(e.g. -90.9% on the Steam dataset compared to BERT4Rec), while also being suitable for large-scale datasets. We also showed that gBCE may be beneficial for BERT4Rec if it is trained with negative sampling (e.g. +7.2% compared to BCE when trained with 4 negatives). The theory and methods presented in this paper could be applied not just to sequential recommendation models but also to other types of recommendation, as well as to NLP or search tasks; we leave these directions to future work.

Figure 4. Performance of the SASRec and BERT4Rec architectures, trained on the MovieLens-1M dataset with a variable number of negatives and various loss functions. BCE is the classic Binary Cross-Entropy loss, gBCE the Generalised Binary Cross-Entropy (t=1.0), and SSM the Sampled Softmax loss with uniform sampling.
2308.00825
Strong-coupling phases of trions and excitons in electron-hole bilayers at commensurate densities
We introduce density imbalanced electron-hole bilayers at a commensurate 2 : 1 density ratio as a platform for realizing novel phases involving electrons, excitons and trions. Three length scales are identified which characterize the interplay between kinetic energy, intralayer repulsion, and interlayer attraction. By a combination of theoretical analysis and numerical calculation, we find a variety of strong-coupling phases in different parameter regions, including quantum crystals of electrons, excitons, and trions. We also propose an "excitonic supersolid" phase that features electron crystallization and exciton superfluidity simultaneously. The material realization and experimental signature of these phases are discussed in the context of semiconductor transition metal dichalcogenide bilayers.
David D. Dai, Liang Fu
2023-08-01T20:24:00Z
http://arxiv.org/abs/2308.00825v3
# Strong-coupling phases of trions and excitons in electron-hole bilayers at commensurate densities ###### Abstract We introduce density imbalanced electron-hole bilayers at a commensurate \(2:1\) density ratio as a platform for realizing novel phases involving electrons, excitons and trions. Three length scales are identified which characterize the interplay between kinetic energy, intralayer repulsion, and interlayer attraction. By a combination of theoretical analysis and numerical calculation, we find a variety of strong-coupling phases in different parameter regions, including quantum crystals of electrons, excitons, and trions. We also propose an "excitonic supersolid" phase that features electron crystallization and exciton superfluidity simultaneously. The material realization and experimental signature of these phases are discussed in the context of semiconductor transition metal dichalcogenide bilayers. ## I Introduction Recently, semiconductor transition metal dichalcogenide (TMD) heterostructures have proven to be an ideal platform for exploring quantum phases of matter in a controlled, tunable, and reproducible way. An extraordinarily rich variety of quantum states have been predicted and observed, including Mott-Hubbard and charge-transfer insulators [1; 2; 3; 4; 5; 6], Wigner crystals [3; 7; 8; 9; 10; 11; 12; 13; 14], itinerant ferromagnets [15; 16; 17; 18; 19; 20], interfacial ferroelectrics [21], heavy Fermi liquids [22; 23; 24], and spin-polaron liquids with pseudogap and magnetization plateau [25; 16; 26], as well as quantum spin Hall states [27; 28] and quantum anomalous Hall states [29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. It is remarkable that all these electronic phases realized in a single material system are rooted in one common ground--the two-dimensional electron (or hole) gas in monolayer TMDs. Here the large effective mass favors strong interaction effects, and the presence of moire bands in TMD heterostructures further enriches the physics, leading to much of the observed phenomena. In addition to moire physics, TMD heterostructures provide a material realization of electron-hole (\(e\)-\(h\)) bilayer systems featuring electrons and holes on spatially separated layers, whose densities can be independently tuned by top and bottom gate voltages [39]. Owing to the coexistence of positively and negatively charged particles, both repulsion between like charges and attraction between opposite charges are present, enabling new phases of matter. When the densities of electrons and holes are equal, interlayer excitons with intrinsic out-of-plane dipole moments form, which may support high-temperature exciton superfluidity in an electrical insulator. Recently, thermodynamic evidence of excitonic insulator ground states has been observed in TMD bilayer \(\mathrm{WSe_{2}/MoSe_{2}}\) with \(\mathrm{WSe_{2}}\) as the hole layer and \(\mathrm{MoSe_{2}}\) as the electron layer separated by insulating hBN layers [40; 41]. Moreover, density imbalanced electron-hole bilayers have attracted increasing interest owing to the interaction between charge carriers and dipolar excitons [42]. In this work, we study the strong-coupling phases of an imbalanced electron-hole bilayer at commensurate electron and hole densities \(n_{e}/n_{h}=2\). Our study is motivated by the following considerations. 
First, because there is a net charge density \(n=n_{e}-n_{h}\neq 0\), the Coulomb interaction between charged particles is crucial in the low density regime, favoring strong-coupling phases with crystalline order. This should be contrasted with balanced electron-hole bilayers, where charge-neutral excitons condense into a superfluid at low densities because their mutual dipole-dipole interaction is parametrically weaker than the quantum kinetic energy. Second, the particular choice of the electron-to-hole density ratio \(n_{e}/n_{h}=2\) is motivated by the prospect of three-body bound states known as trions, which are charge-\(e\) composite particles made of two electrons on the same layer bound to a hole on the other layer. Through both theoretical analysis and numerical calculation, we find a number of ordered phases driven by strong interactions. These include bilayer electron-hole Wigner crystals, "composite crystals" of coexisting electrons and excitons, as well as exotic quantum phases without classical counterparts. In particular, we find a quantum supersolid made of electrons and excitons as well as a quantum crystal of trions in the low density regime. Experimental signatures of various phases are also discussed.

Before presenting our main results, it is instructive to first consider the three-body problem of two electrons and one hole, which reside on spatially separated layers (\(z=d\) and \(0\)) and mutually interact through Coulomb forces. Classically, the minimum energy configuration is an electron and a dipole that are far apart due to their residual repulsion. This dipole (equivalent to a classical exciton) consists of an electron and a hole sitting directly on top of each other, separated only by the layer distance \(d\). For comparison, consider a "trion" charge cluster with the two electrons at \((\pm\mathbf{r},d)\) and the hole at \((\mathbf{0},0)\). When \(r=1/\sqrt{2^{4/3}-1}\approx 0.811d\), the net force acting on each particle is zero. However, this force-balanced configuration has energy \(-0.937/d\), which is higher than the energy \(-1/d\) of a dipole plus an electron far away. Additionally, this classical "trion" has an unstable normal mode, where the hole moves towards one of the electrons along the \(\mathbf{r}\) direction, making the classical "trion" a saddle point of the energy instead of a local minimum.

Our above analysis of the classical three-body problem demonstrates that quantum mechanics is crucial for the formation of a trion bound state. As shown by numerical studies, the trion is the ground state of two electrons and one hole in the bilayer when \(a_{B}/d>0.065\), assuming equal electron and hole masses [43], where \(a_{B}=\frac{4\pi\epsilon\hbar^{2}}{me^{2}}\) is the Bohr radius (which vanishes in the classical limit \(\hbar\to 0\) or \(m\rightarrow\infty\)). We can understand the trion's quantum origin heuristically. Start with the electron and the dipolar exciton separated by a large distance \(r\gg d\). The electron exerts a repulsive force on the exciton \(F\approx d^{2}/r^{4}\), leading to a repulsive potential energy \(U\propto d^{2}/r^{3}\). On the other hand, the in-plane electric field \(\propto 1/r^{2}\) polarizes the exciton and lowers its energy through the second-order Stark effect by an amount \(\delta E_{s}\propto-a_{B}^{3}/r^{4}\) to zeroth order in \(d\).
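As an aside, the force-balanced radius and energy of the classical "trion" quoted above can be verified numerically in a few lines (a minimal script in units where \(d=1\) and \(e^{2}/4\pi\epsilon=1\); demonstrating the unstable mode would require the full Hessian of the two-dimensional configuration energy, which we omit):

```python
import numpy as np
from scipy.optimize import brentq

def energy(r):
    # Electrons at (+-r, 1), hole at the origin: one e-e repulsion at distance 2r
    # and two e-h attractions at distance sqrt(r^2 + 1).
    return 1.0 / (2.0 * r) - 2.0 / np.sqrt(r**2 + 1.0)

def dE_dr(r):
    return -1.0 / (2.0 * r**2) + 2.0 * r / (r**2 + 1.0)**1.5

r_star = brentq(dE_dr, 0.1, 2.0)                   # force balance on each electron
print(r_star, 1.0 / np.sqrt(2**(4 / 3) - 1.0))     # both ~0.811
print(energy(r_star))                              # ~ -0.937, above -1 (dipole + far electron)
```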
Comparing these two scalings, the attraction due to the quantum Stark effect dominates the direct electron-exciton repulsion over a range of distances below a crossover length \(r_{c}\sim a_{B}^{3}/d^{2}\). For sufficiently small \(d\), this extended attraction supports a bound state of the electron and the exciton, leading to trion formation.

At finite charge density, an additional length scale appears: the average inter-particle distance \(a\equiv 1/\sqrt{\pi n}\), with \(n=n_{h}\) for \(n_{e}/n_{h}=2\). With the presence of three length scales--the inter-particle distance \(a\), the layer distance \(d\), and the Bohr radius \(a_{B}\)--we expect that the competition between intralayer repulsion, interlayer electron-hole attraction, and quantum kinetic energy leads to a rich phase diagram for the electron-hole bilayer, which we explore below by a combination of analytical and numerical methods. The Hamiltonian for the bilayer, assuming equal electron and hole effective masses \(m\) and \(1/r\) Coulomb interactions, is: \[\begin{split} H&=\sum_{a=e,h}\sum_{s=\uparrow,\downarrow}\int\mathrm{d}^{2}\mathbf{r}\bigg{[}\psi_{s}^{a\dagger}(\mathbf{r})\left(\frac{-\nabla^{2}}{2}\right)\psi_{s}^{a}(\mathbf{r})\bigg{]}\\ &+\frac{1}{2}\int\mathrm{d}^{2}\mathbf{r}\int\mathrm{d}^{2}\mathbf{r}^{\prime}\bigg{[}\sum_{a=e,h}\frac{n^{a}(\mathbf{r})n^{a}(\mathbf{r}^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}\bigg{]}\\ &-\int\mathrm{d}^{2}\mathbf{r}\int\mathrm{d}^{2}\mathbf{r}^{\prime}\bigg{[}\frac{n^{e}(\mathbf{r})n^{h}(\mathbf{r}^{\prime})}{\sqrt{|\mathbf{r}-\mathbf{r}^{\prime}|^{2}+d^{2}}}\bigg{]}\end{split} \tag{1}\] where \(\psi^{e,h}(\mathbf{r})\) denotes the electron or hole operator, \(n^{a}(\mathbf{r})=\sum_{s}\psi_{s}^{a\dagger}(\mathbf{r})\psi_{s}^{a}(\mathbf{r})\) is the density operator, and \(s\) denotes the spin. We have already divided all quantities by their appropriate atomic units, i.e. lengths by the Bohr radius \(a_{B}=\frac{4\pi\epsilon\hbar^{2}}{me^{2}}\) (we use this definition throughout) and energies by the Hartree energy \(E_{h}=\frac{\hbar^{2}}{ma_{B}^{2}}\). Unless stated otherwise, we assume for simplicity that the electron and hole masses are equal, noting that our main findings are qualitatively correct for a range of mass ratios.

## II Classical regime

We first consider the classical limit defined by taking \(a_{B}\to 0\) while keeping \(d\) and \(a\) fixed, or equivalently, \(a/a_{B}\rightarrow\infty\) and \(d/a_{B}\rightarrow\infty\) for a fixed \(d/a\). In the classical limit, the phase diagram is characterized solely by the dimensionless ratio \(d/a\). Both limits \(d/a\gg 1\) and \(d/a\ll 1\) can be understood analytically. When \(d/a\gg 1\), the interlayer coupling is negligible and a triangular lattice Wigner crystal is formed independently in each layer. In the opposite regime \(d/a\ll 1\), every hole pairs with an electron at the shortest possible distance \(d\) to form a small out-of-plane dipole, leaving an equal number of excess electrons. These dipoles have a weak repulsion \(1/r-1/\sqrt{r^{2}+d^{2}}\propto d^{2}/r^{3}\) with the electrons, while electrons interact with each other through a strong Coulomb repulsion \(1/r\). Therefore, the electrons arrange themselves essentially independently of the dipoles to form a triangular lattice Wigner crystal. Once the electrons crystallize, the dipoles crystallize in half of the voids between the electrons to minimize the residual repulsion.
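The quoted \(d^{2}/r^{3}\) scaling of this residual dipole-electron repulsion is easy to check symbolically (a minimal sympy sketch):

```python
import sympy as sp

r, d = sp.symbols("r d", positive=True)
residual = 1 / r - 1 / sp.sqrt(r**2 + d**2)   # electron-dipole repulsion
print(sp.series(residual, d, 0, 6))           # d**2/(2*r**3) - 3*d**4/(8*r**5) + O(d**6)
```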
The result is a "composite crystal" of electrons and dipoles, taking the form of a honeycomb lattice with one sublattice occupied by electrons and the other by dipoles.

To determine the classical ground state as a function of the charge density, we minimized the classical electrostatic energy. To handle the Coulomb interaction's long-ranged tail, we used Ewald summation (details are provided in the supplementary material). For the experimentally relevant parameter range \(a/d>1\), we find two distinct types of composite crystals. For \(a/d>5.42\), the numerical results confirm our expectation of a honeycomb composite crystal in the low-density limit. Interestingly, for \(a/d<5.42\), we found a "checkerboard" composite crystal, which consists of two interpenetrating square lattices of electrons and dipoles. This is likely due to the increased electron-dipole repulsion at smaller \(a/d\). Energy minimization calculations with randomly initialized configurations generally converged to a mixture of honeycomb and checkerboard structures alongside various defects, with the former (latter) more predominant at large (small) \(a/d\). We did not observe the formation of classical trions in any of our randomly initialized runs, which is consistent with the classical trion's instability. Once these two composite crystals were identified, we directly calculated their electrostatic energy as a function of \(a/d\) and determined the phase transition point, which is shown in Fig. 1.

Figure 1: The classical phase diagram of the electron-hole bilayer at \(n_{e}/n_{h}=2\) for \(a/d>1\). The top panel depicts the checkerboard composite crystal (left) and the honeycomb composite crystal (right). The bottom panel shows the energy and energy difference of the two composite crystals, with a dashed line indicating the phase transition.

We now analyze the effect of quantum fluctuations around the classical composite crystals. At sufficiently small \(a_{B}\), the leading quantum effect is the zero-point motion of charges in the classical ground state. The root-mean-square displacement of charges increases with \(a_{B}\), and when it becomes comparable to the lattice constant \(a\), quantum melting of the crystal takes place (Lindemann criterion). Since our composite crystals have three charges per unit cell, there are a total of six phonon modes, including both acoustic and optical branches. Notably, the optical phonons are associated with the relative vibration of charges within the unit cell, which is absent in the canonical electron Wigner crystal. When the layer distance \(d\) is small compared to the inter-particle distance \(a\), the repulsion between dipoles and electrons in the composite crystal is much weaker than the repulsion between electrons. In this case, we expect that the dipole's center of mass has the largest zero-point displacement, denoted as \(\xi_{d}\). This is indeed confirmed by our direct calculation of the optical phonon frequencies at zero wavevector. Two types of optical phonons are present: the low-frequency one corresponds to the displacement of the dipole's center of mass relative to the electron, while the high-frequency one is associated with the internal structure of the dipole. At small \(d/a\), the low-frequency optical phonon softens with \(\omega_{d}\propto\sqrt{d^{2}/ma^{5}}\), hence the zero-point displacement increases and is given by: \[\xi_{d}\sim\sqrt{\frac{\hbar}{m\omega_{d}}}\propto a\left(\frac{a\,a_{B}}{d^{2}}\right)^{1/4}.\tag{2}\]
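The scaling in Eq. (2) follows by restoring units in the soft-phonon estimate: the dipole-electron potential \(\propto e^{2}d^{2}/4\pi\epsilon r^{3}\) gives a curvature \(\sim e^{2}d^{2}/4\pi\epsilon a^{5}\), so that
\[\omega_{d}\sim\sqrt{\frac{e^{2}d^{2}}{4\pi\epsilon\,ma^{5}}},\qquad \xi_{d}^{2}=\frac{\hbar}{m\omega_{d}}=\sqrt{\frac{4\pi\epsilon\hbar^{2}}{me^{2}}\cdot\frac{a^{5}}{d^{2}}}=\sqrt{\frac{a_{B}\,a^{5}}{d^{2}}}\;\Rightarrow\;\xi_{d}=a\left(\frac{a\,a_{B}}{d^{2}}\right)^{1/4}.\]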
In the classical limit \(a_{B}/a\to 0,a_{B}/d\to 0\) with \(a/d\) fixed, \(\xi_{d}\ll a\) ensures the stability of the classical composite crystal against quantum fluctuations. For small \(d/a\), Eq. (2) implies that quantum melting of the dipole (= interlayer exciton) lattice in the composite crystal occurs first, when \(a_{B}\) reaches the order of \(d^{2}/a\). On the other hand, the stronger electron-electron repulsion renders the electron lattice stable against quantum fluctuations until \(a_{B}\) reaches the order of the inter-particle distance \(a\), as in the case of the canonical Wigner crystal. This naturally raises the question of what the ground state is at intermediate \(a_{B}\) between \(d^{2}/a\) and \(a\), where both quantum effects and electrostatic interactions play crucial roles.

Figure 2: Soft and hard optical phonon frequencies at the center of the Brillouin zone as a function of \(a/d\) for the classical composite crystals. The low-frequency branches correspond to the displacement of the dipole's center of mass relative to the electron, while the high-frequency branches are associated with oscillations in the dipole's internal structure.

## III Dilute quantum regime

Of particular interest is the low-density limit defined by \(a\rightarrow\infty\) while keeping \(d\) and \(a_{B}\) fixed, especially if \(d\) and \(a_{B}\) are of the same order. With \(a\rightarrow\infty\), \(a_{B}\) necessarily lies between \(d^{2}/a\) and \(a\). Therefore, we expect partial melting of the composite crystal due to the quantum motion of the exciton's center of mass. The ground state in this low density limit depends crucially on the ratio of the Bohr radius \(a_{B}\) and the layer distance \(d\). For \(d<d_{c}\approx 15.38a_{B}\) (assuming equal electron and hole masses), it is known that the lowest-energy state of two electrons and one hole in a bilayer system is a trion bound state with the two electrons in a spin-singlet [43]. (Note that the spin-triplet trion is absent for \(m_{e}=m_{h}\).) The binding energy of a trion is on the order of \(E_{h}\) at \(d=0\) and decreases as \(d\) approaches the critical \(d_{c}\) at which the trion unbinds. At a given \(d<d_{c}\) and in the dilute limit \(a\rightarrow\infty\), the hierarchy of energy scales is necessarily as follows: the trion binding energy is much larger than the characteristic Coulomb energy \(\propto 1/a\), which is much larger than the kinetic energy \(\propto 1/a^{2}\). Therefore, we conclude that for \(d<d_{c}\), the ground state of electron-hole bilayers in the low density limit is a triangular lattice Wigner crystal of spin-singlet trions. The physically realistic parameters for the TMD bilayer \(\text{WSe}_{2}/\text{MoSe}_{2}\) are electron effective mass \(m_{e}(\text{MoSe}_{2})=0.8m_{0}\) (\(m_{0}\) is the bare mass), hole effective mass \(m_{h}(\text{WSe}_{2})=0.4m_{0}\), dielectric constant \(\epsilon=4.7\epsilon_{0}\), and interlayer spacing \(d=3\text{ nm}\). With these numbers, the trion binding energy predicted by numerical studies is on the order of 15 K [43]. Notably, trions can crystallize at significantly higher densities than electrons. Because the trion mass is three times the electron mass, the effective Bohr radius is three times smaller, hence the critical density for crystal melting according to the Lindemann criterion is nine times larger.
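Explicitly, the melting density is set by a fixed critical \(r_{s}=a/a_{B}\), so
\[n_{c}\propto\frac{1}{a_{c}^{2}}\propto\frac{1}{a_{B}^{2}},\qquad a_{B}^{\rm trion}=\frac{4\pi\epsilon\hbar^{2}}{(3m)e^{2}}=\frac{a_{B}}{3}\;\Rightarrow\;\frac{n_{c}^{\rm trion}}{n_{c}^{\,e}}=\left(\frac{a_{B}}{a_{B}/3}\right)^{2}=9.\]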
Another distinctive feature of our trion Wigner crystal is that the electron layer has zero total spin and a large spin gap on the order of \(E_{h}\) due to the trion binding energy, whereas the spins of localized holes interact with each other through a weak exchange interaction which goes to zero in the low density limit. Therefore, a small magnetic field can fully polarize the holes while leaving the spin-singlet electrons intact. This sharp contrast between the spin response of the electron and hole layers is an indication of spin-singlet trion formation.

Finally, we discuss the regime \(d>d_{c}\). Here, the trion is not stable at the three-particle level, i.e. the electron and the exciton unbind. On the other hand, as we showed earlier, the exciton lattice in the composite crystal state is necessarily unstable against quantum melting in the low density limit, whereas the electron lattice is stable. Based on these considerations, we propose that for \(d>d_{c}\), the ground state at sufficiently low density is an exciton superfluid that permeates through the electron crystal. This state is remarkable as it simultaneously exhibits crystallization and superfluidity. For this reason, we call this state an "excitonic supersolid", a quantum electron solid in which interstitial excitons Bose condense.

## IV Hartree-Fock calculations

Guided by our theoretical results in the low-density regimes, we perform a further numerical study to strengthen our prediction of singlet trion crystals. Specifically, we apply the Hartree-Fock method to \(N_{e}=72\) electrons and \(N_{h}=36\) holes in a periodic rectangle. Our system consists of four fermion species including spin \(s=\uparrow,\downarrow\) and electron/hole degrees of freedom, and our ansatz for the HF ground state is a product of Slater determinants, one for each fermion species. No restrictions other than orthonormality are applied to the Hartree-Fock orbitals \(\phi_{n,s}(\mathbf{r})\). As discussed previously, the coupling between hole spins is weak and plays an unimportant role, so we assume that the holes are fully spin-polarized, reducing the effective number of species to three. While our ansatz is capable of describing the crystalline phases, it cannot capture coherences between electrons and holes and therefore cannot describe the proposed excitonic supersolid, which we defer to a future study. Instead of the usual self-consistent field method, we use conjugate gradients (any other unconstrained minimization algorithm would also work) to directly minimize an analytic continuation of the Hartree-Fock energy functional onto the space of non-orthonormal orbitals [44]. Although formally equivalent to the self-consistent field method, this approach is immune to instabilities such as charge sloshing and is guaranteed to converge from any starting point. Both of these features are convenient for exploratory calculations with unknown ground states. We supply a detailed derivation of our working equations in the supplementary materials. Although our predictions are general, we choose the experimentally relevant parameters \(m_{e}/m_{h}=1/4\), interlayer spacing \(d=0.5a_{B,h}\), where \(a_{B,h}=\frac{4\pi\epsilon\hbar^{2}}{m_{h}e^{2}}\) is the hole Bohr radius, and hole density parameter \(r_{s}=7\) to provide a concrete example of our results.
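To illustrate the orthonormality-free minimization described above, here is a one-dimensional, non-interacting toy of our own (not the bilayer code): it shows only how the overlap matrix \(S=\Phi^{\dagger}\Phi\) makes the energy well-defined for non-orthonormal orbitals, so that unconstrained conjugate gradients can be applied:

```python
import numpy as np
from scipy.optimize import minimize

# Toy: n_orb fermions in a 1D harmonic well; exact total energy ~ 0.5 + 1.5 + 2.5.
L, n_grid, n_orb = 12.0, 240, 3
x = np.linspace(-L / 2, L / 2, n_grid)
dx = x[1] - x[0]
lap = (np.diag(np.ones(n_grid - 1), -1) + np.diag(np.ones(n_grid - 1), 1)
       - 2.0 * np.eye(n_grid)) / dx**2
h = -0.5 * lap + np.diag(0.5 * x**2)          # one-body Hamiltonian (finite differences)

def energy(phi_flat):
    phi = phi_flat.reshape(n_grid, n_orb)     # orbitals need NOT be orthonormal
    S = phi.T @ phi * dx                      # overlap matrix
    H = phi.T @ (h @ phi) * dx                # one-body matrix elements
    return np.trace(np.linalg.solve(S, H))    # E = tr(S^{-1} H)

def grad(phi_flat):
    phi = phi_flat.reshape(n_grid, n_orb)
    S_inv = np.linalg.inv(phi.T @ phi * dx)
    H = phi.T @ (h @ phi) * dx
    return (2 * dx * (h @ phi @ S_inv - phi @ S_inv @ H @ S_inv)).ravel()

res = minimize(energy, np.random.default_rng(0).standard_normal(n_grid * n_orb),
               jac=grad, method="CG")
print(res.fun, np.linalg.eigvalsh(h)[:n_orb].sum())  # should agree closely
```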
We find that with equal electron spin populations \(n_{e,\uparrow}=36,n_{e,\downarrow}=36\), randomly initialized HF calculations converge naturally to a spin-singlet state with equivalent electron \(\uparrow\) and \(\downarrow\) orbitals (apart from small imperfections). Additionally, the electrons are concentrated on top of the holes, which are arranged on a triangular lattice (with a few defects). The electron orbitals also spread out more than the hole orbitals due to the electrons' repulsion and smaller effective mass. Motivated by these randomly initialized calculations, we perform HF calculations starting directly from singlet trions placed on a triangular lattice. These calculations yield an energy per particle of \(-0.04300\), which is slightly lower than the best outcomes from the randomly initialized runs. We also find that the electron HF energy spectrum has a gap of \(0.06082\), and the hole HF energy spectrum has a gap of \(0.21220\) (all energies are in terms of the hole Hartree energy).

Figure 3: The total (spin \(\uparrow\) plus \(\downarrow\)) electron density is shown in the left panel, and the hole density is shown in the right panel; the coloring scheme is consistent for both densities and is shown in the colorbar in units of \(1/a_{B}^{2}\) for \(a_{B}=\frac{4\pi\epsilon\hbar^{2}}{m_{h}e^{2}}\). The calculation parameters are \(m_{e}=1/4\), \(m_{h}=1\), \(d=0.5a_{B}\), hole \(r_{s}=7\), \(n_{h}=36\), and equal electron spin populations.

The existence of a gap is consistent with the stability of singlet trions. For comparison, randomly initialized HF calculations in the fully spin-polarized sector yield energies per particle in excess of \(-0.02\), significantly higher than that of the spin-unpolarized sector. While the hole HF spectrum is still gapped, the electron HF spectrum appears gapless in the fully spin-polarized sector. Therefore, our calculations clearly show that the ground state of the electron-hole bilayer is an insulating quantum crystal of spin-singlet trions for these parameters. Additionally, applying an external magnetic field can melt the singlet trion crystal into a conducting state. It should be noted that the Hartree-Fock method generally overestimates the tendency for interactions to break symmetries. For example, Hartree-Fock underestimates the critical \(r_{s}\) at which the uniform electron gas forms a Wigner crystal [45; 46]. For our electron-hole bilayer, it is important to combine an HF study with more advanced numerical methods in order to accurately determine the parameter regions for interaction-induced phases, including the trion crystal, the composite crystals, and the putative excitonic supersolid. We leave a comprehensive numerical study of the electron-hole bilayer at commensurate \(2:1\) density ratio to future work.

## V Conclusion

Our work introduces the electron-hole bilayer at commensurate 2:1 density ratio as a platform for realizing novel strong-coupling phases. We identify two dimensionless ratios that govern the phase diagram: the ratio of the average interparticle spacing to the Bohr radius \(a/a_{B}\), and the ratio of the interlayer spacing to the Bohr radius \(d/a_{B}\). By a combination of theoretical analysis and numerical calculation, we find a number of ordered phases, shown schematically in Fig. 4.
In the classical regime \(a/a_{B}\gg 1,d/a_{B}\gg 1\), we identify three crystalline phases: decoupled crystals in each layer for small \(a/d\), a checkerboard "composite crystal" comprised of electrons and bound electron-hole dipoles for intermediate \(a/d\), and a honeycomb composite crystal for large \(a/d\). Focusing on the dilute regime \(a\gg d,a_{B}\), as \(d\) is decreased, we propose that large zero-point fluctuations of the excitons partially melt the composite crystal into an "excitonic supersolid", where the unbound electrons remain crystalline but the excitons condense into a superfluid. As \(d\) is decreased further to the order of the Bohr radius \(a_{B}\), we show that the second-order Stark effect mediates an effective attractive force between electrons and excitons. For sufficiently small \(d\), this attraction binds excitons to electrons to form spin-singlet trions, which subsequently crystallize into a trion crystal. Notably, the trion crystal is stable throughout the low charge density regime with a lattice constant varying continuously with the charge density \(n=n_{e}-n_{h}\), provided that the electron-to-hole density ratio is maintained at the commensurate value \(n_{e}/n_{h}=2\). Thus, the trion crystal is charge-compressible with \(\frac{\partial n}{\partial\mu}\neq 0\), where \(\mu\) is the charge chemical potential, but has an energy gap to adding excitons, i.e., it is exciton-incompressible. This should be contrasted with the excitonic insulator at charge neutrality \(n=0\), which is charge-incompressible and exciton-compressible. Remarkably, recent capacitance and optical experiments on WSe\({}_{2}\)/MoSe\({}_{2}\) have shown that the charge and exciton compressibility can be measured independently by varying the top and bottom gate voltages concurrently with \(\Delta V_{B}=\pm\Delta V_{T}\). Moreover, the formation of trions is accompanied by a reduction of the spin susceptibility in the electron (majority carrier) layer, which can be detected by magnetic circular dichroism. Finally, it is noted that the binding energy of trions can be further increased by applying a magnetic field, which we leave to future study. We hope our theoretical work stimulates experimental study of TMD electron-hole bilayers at commensurate electron-hole density ratio.

## Acknowledgements

We are grateful to Trithep Devakul and Aidan Reddy for helpful discussions. We thank Kin Fai Mak, Jie Shan and Feng Wang for their interest and feedback. This work was supported by the Simons Investigator Award from the Simons Foundation. DDD was supported by the Undergraduate Research Opportunities Program at MIT. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing high performance computing resources.

Figure 4: Schematic phase diagram for the electron-hole bilayer at commensurate densities \(n_{e}/n_{h}=2\), with analytically known boundaries marked in red and limiting phases labelled.
2306.16533
ICSVR: Investigating Compositional and Syntactic Understanding in Video Retrieval Models
Video retrieval (VR) involves retrieving the ground truth video from the video database given a text caption or vice-versa. The two important components of compositionality: objects & attributes and actions are joined using correct syntax to form a proper text query. These components (objects & attributes, actions and syntax) each play an important role to help distinguish among videos and retrieve the correct ground truth video. However, it is unclear what is the effect of these components on the video retrieval performance. We therefore, conduct a systematic study to evaluate the compositional and syntactic understanding of video retrieval models on standard benchmarks such as MSRVTT, MSVD and DIDEMO. The study is performed on two categories of video retrieval models: (i) which are pre-trained on video-text pairs and fine-tuned on downstream video retrieval datasets (Eg. Frozen-in-Time, Violet, MCQ etc.) (ii) which adapt pre-trained image-text representations like CLIP for video retrieval (Eg. CLIP4Clip, XCLIP, CLIP2Video etc.). Our experiments reveal that actions and syntax play a minor role compared to objects & attributes in video understanding. Moreover, video retrieval models that use pre-trained image-text representations (CLIP) have better syntactic and compositional understanding as compared to models pre-trained on video-text data. The code is available at https://github.com/IntelLabs/multimodal_cognitive_ai/tree/main/ICSVR
Avinash Madasu, Vasudev Lal
2023-06-28T20:06:36Z
http://arxiv.org/abs/2306.16533v3
# ICSVR: Investigating Compositional and Semantic Understanding in Video Retrieval Models

###### Abstract

Video retrieval (VR) involves retrieving the ground truth video from the video database given a text caption, or vice-versa. The two important components of compositionality, objects & attributes and actions, are joined using correct semantics to form a proper text query. These components (objects & attributes, actions and semantics) each play an important role in helping distinguish among videos and retrieve the correct ground truth video. However, it is unclear what the effect of these components on video retrieval performance is. We therefore conduct a systematic study to evaluate the compositional and semantic understanding of video retrieval models on standard benchmarks such as MSRVTT, MSVD and DIDEMO. The study is performed on two categories of video retrieval models: (i) those which are pre-trained on video-text pairs and fine-tuned on downstream video retrieval datasets (e.g. Frozen-in-Time, Violet, MCQ etc.) and (ii) those which adapt pre-trained image-text representations like CLIP for video retrieval (e.g. CLIP4Clip, XCLIP, CLIP2Video etc.). Our experiments reveal that actions and semantics play a minor role compared to objects & attributes in video understanding. Moreover, video retrieval models that use pre-trained image-text representations (CLIP) have better semantic and compositional understanding as compared to models pre-trained on video-text data.

## 1 Introduction

Video retrieval (VR) is the task of retrieving videos for a given text caption or, given a video, retrieving the corresponding text caption. This involves understanding important details such as objects & attributes (e.g. two women and a man, red shirt guy) and actions (e.g. playing, standing, talking etc.) in the text caption and the video. In vision this is referred to as compositional reasoning [7, 19, 32, 31, 26], i.e. representing the image or video requires the understanding of the primitive concepts that compose them. In recent years, new benchmarks [6, 29, 41, 25] have been proposed to measure the compositional capabilities of foundational image models. The compositionality in these models is measured by creating new text captions from the original text captions using word ordering [51], word substitutions [42], negative pairs [22], and image-text mismatch [48]. When compared to images, measuring compositionality is a lot harder in videos. There are multiple reasons for this: First, videos are made up of time-series image frames with multiple objects & attributes and actions, unlike images. Therefore, methods like creating negative pairs, mismatching pairs etc. used for evaluating compositionality in image-language models have very limited scope. Second, even though tasks based on video question answering (VQA) [6, 25] have been proposed to measure compositionality, recent studies [4, 10, 14, 24, 45] have shown that these datasets exhibit single frame bias. Most of the previous works [22, 51, 42] focus on understanding the compositionality of image-text models. This mainly involves experimenting with objects & attributes in the text captions and retrieving the images. However, actions play a crucial role when retrieving videos using text captions. Another important aspect which is often overlooked in previous studies is the semantics.
For example, consider the query _"a guy wearing a red shirt drives a car while talking"_: the objects & attributes are guy, red shirt and car; the actions are wearing, driving and talking; and the rest of the words (a, while) form the semantics of the text caption. Video retrieval models can comprehend such queries because of the accurate semantics and compositionality (objects & attributes and actions). Now consider the following scenarios of text captions in which (i) objects & attributes are missing (_"a wearing a drives a while talking"_), (ii) actions are missing (_"a guy a red shirt a car while"_) and (iii) semantics are missing (_"guy wearing red shirt drives car talking"_). This raises an important question: **What is the effect of each of these scenarios on the video retrieval performance?** To address this question, we propose a detailed study to evaluate the semantic and compositional understanding of video retrieval models. For this study we create a comprehensive test bed to evaluate state-of-the-art video retrieval models for compositionality and semantics. We base this investigation along three axes: objects & attributes, actions and semantics. We propose a set of 10 tasks for these categories: four tasks to evaluate the knowledge of VR models for objects & attributes (§3.1.1), three tasks for testing action understanding (§3.1.2) and, finally, three tasks for semantic capabilities (§3.2). Table 1 describes these tasks with an example. We perform a comprehensive evaluation on 12 state-of-the-art video retrieval models belonging to two categories (§4.1): The first category of models, such as Frozen-in-Time (FiT) [3], MCQ [16] etc., are pre-trained on large scale video datasets and fine-tuned for video retrieval. The second category uses pretrained image features like CLIP for video retrieval, namely CLIP4Clip [35], CLIP2Video [11] etc. These models are tested on three standard video retrieval benchmarks (§4.2): MSRVTT [55], MSVD [5] and DiDeMo [2]. Our experiments (§5.1) reveal that objects & attributes are the most crucial to video retrieval, followed by actions and semantics. Among video retrieval models, CLIP based models have a better compositional and semantic understanding when compared with pretrained video models. We further perform detailed studies to fully judge how retrieval models perceive each of the components. We find (§5.2) that video retrieval models have a poor understanding of the relationship between objects and their attributes. However, they are extremely sensitive to incorrect object references in the captions. Our studies on action understanding (§5.3) disclose that models have a poor sense of action negation, and replacing actions with incorrect ones leads to only a slight decrease in video retrieval performance. Finally, we discover (§5.4) that models retain strong performance even without the right semantic word order. Our contributions are as follows:

* Ours is the first work to comprehensively investigate the compositional and semantic understanding of video retrieval models, and we propose a set of 10 tasks for this study.
* We perform this analysis on a broad range of 12 state-of-the-art models and generalize the findings to the video retrieval task.
* We establish that video retrieval models exhibit distinct and contrasting behaviours for interpreting various elements in the text captions.
This is mainly due to two reasons: (i) the adaptation of transformer based models to vision tasks like image classification [9, 20, 34], and (ii) the availability of large scale video-text datasets like HowTo100M [40], WebVid-2M [3] and YT180M [57]. Frozen-in-Time [3] is a dual-stream transformer model pre-trained on the WebVid-2M and Conceptual Captions-3M [47] datasets and fine-tuned for downstream video retrieval. A novel prompt based pre-training task [30] is proposed to effectively align visual and text features during large scale video-text pre-training. A new pre-training approach, Masked Visual-token Modeling (MVM) [12], is presented to better model the temporal dependencies among videos for video retrieval. To incorporate the rich semantic features of the videos, a novel pretext task, Multiple Choice Questions (MCQ), is put forward in which the model is trained to answer questions about the video. In a parallel direction, image features pre-trained on large amounts of image-text data have been adopted for the task of video retrieval. CLIP4Clip [35] is an end-to-end trainable video retrieval model based on the CLIP [44] architecture, in which frame features are extracted using the CLIP image encoder and the temporal modelling is performed using a transformer encoder. A two-stage framework, CLIP2Video [11], is proposed to enhance interaction among video features and video-text features for video retrieval. Madasu et al. [37] used off-the-shelf multi-lingual data to enhance the performance of video retrieval. None of these video retrieval models has previously been tested for semantic and compositional understanding. To the best of our knowledge, ours is the first work to comprehensively explore the semantic and compositional understanding of video retrieval models.

### Semantics

Transformer based language models [8, 21, 56] have achieved state-of-the-art results on most natural language understanding tasks [53, 54]. Hence, there has been a growing interest in exploring the semantic capabilities of these models [43, 46, 52, 58, 15]. More recently, a body of works [1, 23, 38, 49] investigated word order and its effect on semantic understanding in pre-trained language models. Since all video retrieval models use pre-trained language models for encoding text captions, we build upon those works and investigate their semantic understanding.

### Compositionality

Although vision-language models pretrained on large amounts of data have achieved state-of-the-art results, there has been a growing interest in understanding the workings of these models [6, 25, 29, 50, 25]. These works mainly probe the compositional knowledge of these models by proposing new benchmarks. The Winoground [51] dataset was introduced, in which a pair of text captions contain the same set of words but pertain to different images. The models are then tested for image and caption matching. Another benchmark, CREPE [14], was put forward to evaluate two aspects of compositionality: systematicity and productivity. This benchmark contains unseen compounds and atoms in the test split to evaluate the models' generalization. Parcalabescu et al. [42] proposed the VALSE dataset to measure the visio-linguistic capabilities of pretrained vision and language models. AGQA-Decomp [14] is a new benchmark to measure compositional consistency for the task of Video Question Answering. All these works proposed new benchmarks for compositional reasoning in image-language models.
Contrary to these, our work focuses on measuring the compositionality of video retrieval models using the standard datasets and doesn't require a new benchmark. Moreover, our experiments are evaluated on 12 models, significantly more than in these works.

## 3 Compositional and Semantic Understanding

In this section, we first define semantics and compositionality and subsequently establish the evaluation protocol for semantic and compositional understanding in video retrieval models. For this evaluation, we augment the existing text captions and create new datasets that assess their semantic and compositional understanding. We explain this protocol using an example test caption \(Q\), _"a guy wearing a red shirt drives a car while talking"_, from the MSRVTT dataset. Table 1 summarizes the different augmentation methods used for the proposed study.

### Compositionality in videos

A video is composed of multiple objects & attributes interacting with each other in a similar or different fashion. To retrieve a video, the corresponding text caption is passed as an input to the video retrieval model. This text caption typically consists of objects & attributes and interactions (actions) unique to that particular video. The video retrieval model parses the input caption and computes matching scores with all the videos. Finally, the video with the highest matching score is the predicted ground truth video. Therefore, a video retrieval model should be able to understand each of the objects & attributes and actions present in the caption. This is called compositionality in the visual world. To evaluate the compositional understanding of video retrieval models, we mainly focus on their ability to parse objects & attributes and actions. Next, we discuss the evaluation protocol to measure compositionality in VR models.

#### 3.1.1 Object & Attribute knowledge

**Object & Attribute removal (\(Q_{objattrrem}\)):** In this setup, we remove all the objects & attributes in the original caption \(Q\), and the resulting caption is _"wearing a drives a while talking"_. Here guy, red shirt and car are the objects & attributes.

**Object shift (\(Q_{objshift}\)):** To test the VR models' ability to relate objects with their attributes, we shift the places of objects in the captions. The modified caption is _"a shirt wearing a red car drives a guy while talking"_.

**Object replacement (\(Q_{objrep}\)):** We evaluate the VR models' sensitivity to objects by randomly replacing the objects with entirely different objects. The replaced caption is _"a surf wearing a red mars drives a channel while talking"_.

**Object partial (\(Q_{objpartial}\)):** In this setup, the VR models are given access to just 50% of the objects in the caption. This is to understand if the models take any shortcuts while retrieving videos. E.g.: _"a wearing a red drives a car while talking"_.

Next, we introduce the tasks for evaluating action knowledge in VR models.

#### 3.1.2 Action knowledge

Another important compositional component in the captions is action. To test the action knowledge of video retrieval models, we propose the following caption augmentations:

**Action removal (\(Q_{actrem}\)):** The actions present in the original captions are eliminated. The modified caption is _"a guy a red shirt a car while"_, as the actions wearing, drives and talking are removed. This is to understand the influence of actions on the video retrieval performance.
**Action negation (\(Q_{actneg}\)):** A negation is added to all the actions in a caption, resulting in the new caption _"a guy not wearing a red shirt not drives a car while not talking"_. This tests the VR models' ability to comprehend negation in captions.

**Action replacement (\(Q_{actrep}\)):** In this setup, the actions are randomly replaced with a different set of actions. The replaced actions are neither antonyms nor synonyms. This checks whether the models truly recognize the meaning of the action words. Next, we present the evaluation protocol for the semantic understanding of VR models.

### Semantic understanding

In the previous section we elucidated the components of compositional reasoning in videos, namely objects & attributes and actions. These components are bound together by semantics, thereby forming a meaningful caption. Consider part of the example described previously, "a guy wearing a red shirt drives a car": if the words "car" and "guy" are interchanged, the resulting caption is "a car wearing a red shirt drives a guy", which is not meaningful. Consequently, semantics also plays a crucial role in video retrieval performance, alongside compositionality. We therefore put forward the following evaluation protocol to measure semantic understanding in video retrieval models.

**Semantics removal (\(Q_{semrem}\)):** Our first experiment focuses on the effect of semantics on VR models. We modify the caption by keeping just the objects & attributes and actions, eliminating any semantic words among them. The resulting caption is _"guy wearing red shirt drives car talking"_.

**Word order shuffle (\(Q_{shuf}\)):** In this setup, all the words in the caption are shuffled. This destroys both compositional and semantic order and tests the order sensitivity of VR models.

**Word order reverse (\(Q_{rev}\)):** In this setup, we keep all the words but present them in reverse order. This evaluates the positional knowledge of video retrieval models. Next, we present the experimental setup for quantifying compositional and semantic understanding.

## 4 Experiments

In this section, we explain the video retrieval models and datasets used for the proposed analysis.

### Models

We experiment with two categories of video retrieval models. The first category of models is pretrained on large-scale video-text datasets like WebVid-2.5M [3] and YT-Temporal-180M [57] and fine-tuned on downstream video retrieval datasets. These include Frozen-in-Time (FiT) [3], MCQ [16], MILES [17], VIOLET [12] and MVM [13]. The second category involves models that adapt pretrained image-text features such as CLIP [44] to the task of video retrieval. This category comprises seven architectures, namely TS2NET [33], CLIP4Clip [35], CLIP2Video [11], XCLIP [36], XPOOL [18], EMCL [27] and DiCoSA [28].

### Datasets

We perform the evaluation on three video retrieval datasets: MSRVTT [55], MSVD [5] and DiDeMo [2]. MSRVTT has 10000 videos, and each video has multiple captions, totalling 200K. We report the results on the MSRVTT-9k split (9000 videos for training and 1000 for testing). MSVD consists of 1970 videos and 80K captions; the training split has 1300 videos and the test split has 670 videos. The captions in these datasets are single sentences. DiDeMo is made up of 10K videos and 40K descriptions. Following [35], we concatenate all the sentences and evaluate paragraph-to-video retrieval.

### Implementation

We use spaCy1 to identify the part of speech of every word in the caption. We consider nouns, adverbs and adjectives as objects & attributes, verbs as actions, and the rest of the parts of speech as semantics. We use the exact setup of the state-of-the-art video retrieval models and measure performance on all the augmented datasets.

Footnote 1: [https://spacy.io/usage/linguistic-features](https://spacy.io/usage/linguistic-features)
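To make the augmentation procedure concrete, the sketch below shows how several of the perturbations in Table 1 can be generated from spaCy part-of-speech tags. This is a minimal illustration of our own (the function names are ours, and a different spaCy model may tag some words differently), not a definitive implementation.

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a POS tagger

OBJ_ATTR = {"NOUN", "ADJ", "ADV"}   # objects & attributes, as defined above
ACTIONS = {"VERB"}                  # actions

def remove_pos(caption: str, tags: set) -> str:
    """Drop every token whose coarse POS tag is in `tags` (Q_objattrrem / Q_actrem)."""
    return " ".join(t.text for t in nlp(caption) if t.pos_ not in tags)

def negate_actions(caption: str) -> str:
    """Insert 'not' before every verb (Q_actneg)."""
    out = []
    for t in nlp(caption):
        if t.pos_ in ACTIONS:
            out.append("not")
        out.append(t.text)
    return " ".join(out)

def shift_objects(caption: str) -> str:
    """Rotate the noun tokens among their positions (Q_objshift)."""
    doc = nlp(caption)
    words = [t.text for t in doc]
    idx = [i for i, t in enumerate(doc) if t.pos_ == "NOUN"]
    for dst, src in zip(idx, idx[1:] + idx[:1]):  # guy <- shirt <- car <- guy
        words[dst] = doc[src].text
    return " ".join(words)

def shuffle_words(caption: str, seed: int = 0) -> str:
    """Randomly permute all words (Q_shuf); reversing the list instead gives Q_rev."""
    words = caption.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

caption = "a guy wearing a red shirt drives a car while talking"
print(remove_pos(caption, OBJ_ATTR))  # ~ "a wearing a drives a while talking"
print(remove_pos(caption, ACTIONS))   # ~ "a guy a red shirt a car while"
print(negate_actions(caption))        # ~ "a guy not wearing a red shirt not drives ..."
print(shift_objects(caption))         # ~ "a shirt wearing a red car drives a guy ..."
print(shuffle_words(caption))         # word order destroyed
```

The remaining perturbations (object replacement and partial object masking) follow the same pattern, operating on the token spans that the tagger marks as objects & attributes or actions.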
## 5 Results and Discussion

### 5.1 Objects & Attributes vs Actions vs Semantics: Do all of them matter?

Our aim is to analyze the importance of the three components that make up a text query for retrieving videos: objects & attributes, actions and semantics. Hence, we test the video retrieval models with text captions that are missing objects & attributes (\(Q_{objattrrem}\)), actions (\(Q_{actrem}\)) and semantics (\(Q_{semrem}\)). Tables 2, 3 and 4 show the results on the MSRVTT, MSVD and DiDeMo datasets respectively.

\begin{table} \begin{tabular}{c c c} Notation & Caption type & Example \\ \hline \(Q\) & Original caption & a guy wearing a red shirt drives a car while talking \\ \hline \(Q_{objattrrem}\) & Object \& Attribute removal & a wearing a drives a while talking \\ \(Q_{objshift}\) & Object shift & a shirt wearing a red car drives a guy while talking \\ \(Q_{objrep}\) & Object replacement & a surf wearing a red mars drives a channel while talking \\ \(Q_{objpartial}\) & Object partial & a wearing a red drives a car while talking \\ \hline \(Q_{actrem}\) & Action removal & a guy a red shirt a car while \\ \(Q_{actneg}\) & Action negation & a guy not wearing a red shirt not drives a car while not talking \\ \(Q_{actrep}\) & Action replacement & a guy removing a red shirt flying a car while sleeping \\ \hline \(Q_{semrem}\) & Semantics removal & guy wearing red shirt drives car talking \\ \(Q_{shuf}\) & Word order shuffle & talking red shirt drives while car a guy a wearing a \\ \(Q_{rev}\) & Word order reverse & talking while car a drives shirt red a wearing guy a \\ \end{tabular} \end{table} Table 1: The table shows the types of perturbations applied to the text captions. The example text caption is taken from the MSRVTT [55] dataset. Red color denotes the change from the original text caption.

It is evident from the tables that there is a drop in video retrieval performance when models are tested with text captions that lack actions (\(Q_{actrem}\)). The drop is more pronounced among CLIP-based models than among pretrained video models. This shows that actions play a role in retrieving the correct videos. However, the performance drop is not as large as one might expect. Videos are time series of image frames that can share the same attributes; in those scenarios, actions help differentiate between videos. We see this effect in the contrast between pretrained video models, whose R@1 is lower, and CLIP-based models, whose R@1 is higher. When performance is low, actions do not play a significant role in video retrieval, and videos can be retrieved without them in the text caption. On the contrary, when the R@1 score is higher, we see a notable decline. This is due to the robust video representations of CLIP-based models compared to pretrained video models: CLIP-based models encode video representations accurately, but when the differentiating factor among videos, i.e. the actions, is missing from the text captions, incorrect videos are retrieved. On datasets with short captions, like MSRVTT and MSVD, we notice a significant drop in performance compared with DiDeMo, which is a paragraph (\(>1\) sentence) dataset. This is because the text captions in DiDeMo contain detailed descriptions of the videos, and hence missing actions did not lead to a drop in performance comparable to MSRVTT and MSVD. It demonstrates that actions are not essential in paragraph-to-video retrieval.
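For reference, the R@1 metric reported in these tables is the fraction of text queries whose ground-truth video receives the highest matching score. A minimal sketch of the computation is given below, under the standard assumption that query \(i\) is paired with video \(i\); this is our own illustration and is not tied to any particular model's codebase.

```python
import numpy as np

def recall_at_k(sim: np.ndarray, k: int = 1) -> float:
    """sim[i, j] is the matching score between text query i and video j;
    the ground-truth video for query i is assumed to be video i."""
    n = sim.shape[0]
    # number of videos that outscore the ground truth = rank of the ground truth
    ranks = (sim > sim[np.arange(n), np.arange(n)][:, None]).sum(axis=1)
    return float((ranks < k).mean())

# toy example: 4 queries x 4 videos
sim = np.array([[0.9, 0.1, 0.3, 0.2],
                [0.2, 0.8, 0.1, 0.4],
                [0.5, 0.6, 0.4, 0.1],   # ground truth outscored twice -> miss at k=1
                [0.1, 0.2, 0.3, 0.7]])
print(recall_at_k(sim, k=1))  # 0.75
```

The same similarity matrix, transposed, yields the video-to-text scores reported in the right-hand halves of the tables.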
Next, we analyze the performance of video retrieval models tested with text captions without semantics (\(Q_{semrem}\)). From the tables, it is clear that there is a reduction in R@1 without the semantics in the text captions. This validates that semantics is necessary for retrieving the correct ground-truth videos.

\begin{table} \begin{tabular}{c c c c c c c c c c} & & \multicolumn{4}{c}{Text-to-Video Retrieval} & \multicolumn{4}{c}{Video-to-Text Retrieval} \\ \hline Type & Model & Q & \(Q_{actrem}\) & \(Q_{objattrrem}\) & \(Q_{semrem}\) & Q & \(Q_{actrem}\) & \(Q_{objattrrem}\) & \(Q_{semrem}\) \\ \hline \multirow{5}{*}{Pretrained video} & FiT [3] & 26.1 & 22.8 & 5.2 & 20 & 27.9 & 23.7 & 5.8 & 25.7 \\ & MCQ [16] & 26 & 21.9 & 4.1 & 20.1 & 19.4 & 15.7 & 3.7 & 18.6 \\ & MILES [17] & 26 & 21.3 & 3.3 & 19.9 & 17.5 & 15.2 & 2.9 & 17.1 \\ & VIOLET [12] & 35.6 & 29.5 & 0.1 & 25 & - & - & - & - \\ & MVM [13] & 36.3 & 31 & 8.7 & 33.7 & - & - & - & - \\ \hline \multirow{7}{*}{CLIP [44]} & TS2NET [33] & 36 & 30.6 & 6 & 29.3 & 25.4 & 21.2 & 4.3 & 41.4 \\ & CLIP4Clip [35] & 43.4 & 37 & 9.7 & 35.3 & 43.6 & 39 & 10.3 & 39.7 \\ & CLIP2Video [11] & 46 & 38.8 & 8.4 & 35.3 & 43 & 38 & 10 & 40.8 \\ & XCLIP [36] & 46.1 & 39.8 & 10.5 & 35.6 & 45.4 & 40.2 & 11 & 42.2 \\ & XPOOL [18] & 46.9 & 39.5 & 7.6 & 36.4 & 44.4 & 39.6 & 11.1 & 42 \\ & EMCL [27] & 47.8 & 40.8 & 8.2 & 37.4 & 46.2 & 39.5 & 11.6 & 42.8 \\ & DiCoSA [28] & 47.9 & 41.3 & 9.1 & 38.3 & 45.9 & 41.2 & 13.4 & 43.1 \\ \hline \end{tabular} \end{table} Table 2: The table shows the results on the MSRVTT [55] dataset in both text-to-video and video-to-text retrieval settings. \(Q\) denotes the performance (R@1 score) on the original unchanged dataset. \(Q_{actrem}\), \(Q_{objattrrem}\) and \(Q_{semrem}\) are the R@1 scores on datasets that exclude actions, objects & attributes, and semantics respectively.

\begin{table} \begin{tabular}{c c c c c c c c c c} & & \multicolumn{4}{c}{Text-to-Video Retrieval} & \multicolumn{4}{c}{Video-to-Text Retrieval} \\ \hline Type & Model & Q & \(Q_{actrem}\) & \(Q_{objattrrem}\) & \(Q_{semrem}\) & Q & \(Q_{actrem}\) & \(Q_{objattrrem}\) & \(Q_{semrem}\) \\ \hline \multirow{5}{*}{Pretrained video} & FiT [3] & 36 & 32.7 & 6.9 & 34.9 & 36.1 & 31 & 7.9 & 34.9 \\ & MCQ [16] & 43.6 & 36.4 & 9 & 42.4 & 40.3 & 33.7 & 9.9 & 39 \\ & MILES [17] & 44 & 39 & 8.1 & 43.9 & 43.7 & 37.3 & 9.6 & 41.5 \\ & VIOLET [12] & 48.3 & 40.6 & 10.8 & 45.8 & - & - & - & - \\ & MVM [13] & 49.6 & 41.5 & 10.5 & 45.6 & - & - & - & - \\ \hline \multirow{5}{*}{CLIP [44]} & TS2NET [33] & 52.8 & 38.5 & 11.8 & 49.4 & 51.2 & 37.1 & 10.7 & 48.6 \\ & CLIP4Clip [35] & 54.5 & 42.1 & 11.9 & 51.9 & 51.8 & 38.5 & 10.9 & 50.7 \\ & CLIP2Video [11] & 55.8 & 41.6 & 11.8 & 50.6 & 53.6 & 40.5 & 12.5 & 51.6 \\ & XCLIP [36] & 54 & 39.7 & 12.4 & 49.7 & 54.9 & 42.6 & 13.3 & 48.6 \\ & XPOOL [18] & 56.1 & 47 & 11.9 & 53.9 & 56.6 & 48 & 12.4 & 53.3 \\ \end{tabular} \end{table} Table 3: The table shows the results on the MSVD [5] dataset in both text-to-video and video-to-text retrieval settings. \(Q\) denotes the performance (R@1 score) on the original unchanged dataset. \(Q_{actrem}\), \(Q_{objattrrem}\) and \(Q_{semrem}\) are the R@1 scores on datasets that exclude actions, objects & attributes, and semantics respectively.
For MSRVTT, we observe that models tested without semantics underperform those tested without actions, with an average difference in performance of 2%. The reverse is true for the MSVD and DiDeMo datasets, where there is a large difference of 9%. In addition, we notice that CLIP-based models are more sensitive to semantics than pretrained video models. Finally, we evaluate the video retrieval models on text captions in the absence of objects & attributes (\(Q_{objattrrem}\)). As seen from the results, these models perform poorly (a drop of roughly 20 R@1 points or more), which underscores the significance of objects & attributes. We also notice that \(Q_{objattrrem}\) trails \(Q_{actrem}\) and \(Q_{semrem}\) by a huge margin. This difference is more striking among CLIP-based models than among pretrained video models.

### 5.2 What role do Objects & Attributes play in video retrieval?

In the previous section (§5.1), our experimental findings suggested that objects & attributes are the most important component of text captions for retrieving videos. To investigate further, we perform additional detailed studies of their importance.

\begin{table} \begin{tabular}{c c c c c c c c c c} & & \multicolumn{4}{c}{Text-to-Video Retrieval} & \multicolumn{4}{c}{Video-to-Text Retrieval} \\ \hline Type & Model & Q & \(Q_{actrem}\) & \(Q_{objattrrem}\) & \(Q_{semrem}\) & Q & \(Q_{actrem}\) & \(Q_{objattrrem}\) & \(Q_{semrem}\) \\ \hline \multirow{5}{*}{Pretrained video} & FiT [3] & 29.2 & 28 & 4.4 & 27.4 & 28.2 & 27.1 & 5.6 & 27 \\ & MCQ [16] & 24.6 & 22.3 & 5.6 & 22.8 & 23.8 & 21.2 & 5.2 & 21.4 \\ & MILES [17] & 28 & 24.4 & 3.9 & 24 & 22.6 & 22.4 & 4.7 & 22.2 \\ & VIOLET [12] & 24.8 & 23.9 & 4.5 & 26 & - & - & - & - \\ & MVM [13] & 24.8 & 23.9 & 4.5 & 26 & - & - & - & - \\ \hline \multirow{5}{*}{CLIP [44]} & CLIP4Clip [35] & 42.6 & 25.4 & 6.7 & 37.7 & 41.4 & 19 & 7.9 & 38 \\ & XCLIP [36] & 43.2 & 39.8 & 8.7 & 41.5 & 45.6 & 41.2 & 10.3 & 40.2 \\ & XPOOL [18] & 43.7 & 40.3 & 8.4 & 40.1 & 43.7 & 40.4 & 8.9 & 39.5 \\ & EMCL [27] & 46.3 & 40.2 & 6.9 & 41.7 & 44.8 & 42.1 & 9.3 & 42.3 \\ & DiCoSA [28] & 45.4 & 41.5 & 7.9 & 41.1 & 45.1 & 41.8 & 9.5 & 41.2 \\ \hline \hline \end{tabular} \end{table} Table 4: The table shows the results on the DiDeMo [2] dataset in both text-to-video and video-to-text retrieval settings. \(Q\) denotes the performance (R@1 score) on the original unchanged dataset. \(Q_{actrem}\), \(Q_{objattrrem}\) and \(Q_{semrem}\) are the R@1 scores on datasets that exclude actions, objects & attributes, and semantics respectively.

Figure 1: We perform ablation studies on the role of objects & attributes in video retrieval. The video retrieval models are evaluated on three tasks, namely Object shift (\(Q_{objshift}\)), Object replacement (\(Q_{objrep}\)) and Object partial (\(Q_{objpartial}\)). Results show that swapping objects has a minor effect on performance, followed by masking 50% of objects; the highest drop is seen when the objects are randomly replaced. These ablation studies are performed on the MSRVTT [55] and MSVD [5] datasets.

In captions there can be multiple objects & attributes, and every object-attribute pair is distinct from the others.
Any slight modification in the pairs can totally change their correspondence, and thereby the ground-truth video; hence, video retrieval models should be able to account for these changes. We perform a test in which we interchange the positions of objects while keeping the rest of the caption the same (\(Q_{objshift}\)). In the second study, we randomly replace objects in the caption (\(Q_{objrep}\)) and evaluate the models on the modified captions. The final ablation involves keeping just half of the objects in the captions (\(Q_{objpartial}\)). This is to assess whether VR models adopt any shortcuts and still retrieve correct videos without the critical information.

Figure 1 demonstrates the results of these studies. As shown in the figure, there is a slight deterioration in video retrieval performance when there is an object shift in the caption. The drop is a meagre 5.5% for MSRVTT and 3.6% in the case of the MSVD dataset. This demonstrates that VR models do not fully understand the relationship between an object and its attribute. On the other hand, if the objects are randomly replaced with different, unrelated objects (\(Q_{objrep}\)), there is a massive degradation in the R@1 score; in fact, the performance is quite similar to that of models tested on captions without objects & attributes. These results show that video retrieval models are extremely sensitive to the alteration of objects. Figure 1 also shows that there is a noticeable fall in performance when the VR models have access to just 50% of the objects in the captions: the R@1 score lags by 30% on MSRVTT and 22% on MSVD. This reinforces the aforementioned extreme sensitivity of the retrieval models to objects. Furthermore, random object replacement performs far worse than partial objects in the captions. This highlights that a factual object description, even if only 50% complete, is much more valuable than access to the entire caption with incorrect objects.

### 5.3 Do VR models pay attention to actions?

We demonstrated in Section 5.1 that actions play a role in video retrieval. This raises an important question: _how much attention do VR models pay to actions in the captions?_ To investigate this, we perform ablation studies on the action understanding of VR models. We replace each action word with its negation (\(Q_{actneg}\)) and test the performance of VR models on the newly formed captions. In parallel, the actions in the captions are randomly replaced with different actions (\(Q_{actrep}\)) and the VR models are evaluated on the altered captions.

In Figure 2, we provide the results of VR models tested on captions with negated (\(Q_{actneg}\)) and replaced (\(Q_{actrep}\)) actions. From the figure, it is evident that action negation (\(Q_{actneg}\)) achieves results comparable to \(Q\), and that there is only a slight drop in performance in the case of action replacement (\(Q_{actrep}\)). Most of the actions in these datasets are expressed in a positive sense, but this will not always be the case.
For a fine-grained description of videos, the actions of static objects can be communicated in negated form, so video retrieval models are naturally expected to understand negation in captions. However, we notice that action negation yields performance similar to the original captions, which demonstrates that VR models lack the ability to register the negation of actions. Next, we randomly replace the actions with different actions and test the attention of VR models. In an ideal scenario, the performance of these models should drop drastically, as the replaced actions do not correspond to those in the ground-truth videos. Nevertheless, we see that the R@1 score for action replacement (\(Q_{actrep}\)) is only slightly lower than for the original caption \(Q\); in fact, the average drop in R@1 is only 6.8% on MSRVTT and 7.5% on MSVD. Hence, even though actions are important in video retrieval, VR models rely on other influential information, such as objects & attributes, to retrieve the ground-truth videos.

### 5.4 Does word order of text captions matter?

In Figures 3(a) and 3(b), we present the findings of the word order evaluation. First, we observe that models tested on datasets without word order perform worse than on the original dataset. The R@1 score is reduced on average by 6.3% and 9.1% on shuffled (\(Q_{shuf}\)) and reversed (\(Q_{rev}\)) MSRVTT captions respectively. Similarly, the performance drops by 5.5% on shuffled and 5% on reversed DiDeMo captions. Additionally, the R@1 decrease is more pronounced on reversed captions than on shuffled ones. This is surprising, as the object-action order is preserved in reversed captions, in contrast with shuffled ones. This shows that the models adopt a bag-of-words approach to the semantic understanding of captions and that the positioning of the object-action order does not matter. A possible explanation for this behaviour is that all the video retrieval models use pretrained language models as their text encoders. Recent studies [39, 49] have shown that distributional information is preserved even when syntactic word order is disturbed, and LMs leverage it for hierarchical text understanding. Surprisingly, video retrieval models manifest the same behaviour in caption understanding.

## 6 Conclusion

In this work, we proposed a comprehensive investigation of the compositional and semantic understanding of video retrieval models. For this study, we put forward 10 different tasks to evaluate the models' reasoning about objects & attributes, actions and semantics when retrieving videos. We experiment with a wide range of 12 state-of-the-art video retrieval models and 3 standard benchmarks. We show that video retrieval performance is heavily impacted by objects & attributes and only lightly by semantics. Furthermore, our results also reveal that word order matters less for video retrieval models. These results shed important light on the inner workings of video retrieval models. We believe future work can utilize these findings to design composition-aware video retrieval models.
2310.15904
Do Stochastic Parrots have Feelings Too? Improving Neural Detection of Synthetic Text via Emotion Recognition
Recent developments in generative AI have shone a spotlight on high-performance synthetic text generation technologies. The now wide availability and ease of use of such models highlights the urgent need to provide equally powerful technologies capable of identifying synthetic text. With this in mind, we draw inspiration from psychological studies which suggest that people can be driven by emotion and encode emotion in the text they compose. We hypothesize that pretrained language models (PLMs) have an affective deficit because they lack such an emotional driver when generating text and consequently may generate synthetic text which has affective incoherence i.e. lacking the kind of emotional coherence present in human-authored text. We subsequently develop an emotionally aware detector by fine-tuning a PLM on emotion. Experiment results indicate that our emotionally-aware detector achieves improvements across a range of synthetic text generators, various sized models, datasets, and domains. Finally, we compare our emotionally-aware synthetic text detector to ChatGPT in the task of identification of its own output and show substantial gains, reinforcing the potential of emotion as a signal to identify synthetic text. Code, models, and datasets are available at https://github.com/alanagiasi/emoPLMsynth
Alan Cowap, Yvette Graham, Jennifer Foster
2023-10-24T15:07:35Z
http://arxiv.org/abs/2310.15904v1
# Do Stochastic Parrots have Feelings Too?

###### Abstract

Recent developments in generative AI have shone a spotlight on high-performance synthetic text generation technologies. The now wide availability and ease of use of such models highlights the urgent need to provide equally powerful technologies capable of identifying synthetic text. With this in mind, we draw inspiration from psychological studies which suggest that people can be driven by emotion and encode emotion in the text they compose. We hypothesize that pretrained language models (PLMs) have an _affective deficit_ because they lack such an emotional driver when generating text and consequently may generate synthetic text which has _affective incoherence_, i.e. lacking the kind of emotional coherence present in human-authored text. We subsequently develop an emotionally aware detector by fine-tuning a PLM on emotion. Experiment results indicate that our emotionally-aware detector achieves improvements across a range of synthetic text generators, various sized models, datasets, and domains. Finally, we compare our emotionally-aware synthetic text detector to ChatGPT in the task of identification of its own output and show substantial gains, reinforcing the potential of emotion as a signal to identify synthetic text. Code, models, and datasets are available at [https://github.com/alanagiasi/emoPLMsynth](https://github.com/alanagiasi/emoPLMsynth)

## 1 Introduction

Modern PLMs can surpass human-level baselines across several tasks in general language understanding (Wang et al., 2018, 2019) and can produce synthetic text that can exceed human-level quality, such as synthetic propaganda thought to be more plausible than human-written propaganda (Zellers et al., 2019). PLMs have been used to generate disinformation (Zellers et al., 2019; Brown et al., 2020), left- or right-biased news (Gupta et al., 2020), fake comments (Weiss, 2019), fake reviews (Adelani et al., 2019), and plagiarism (Gao et al., 2022), and can generate synthetic text at scale, across domains, and across languages. The increasingly high quality of synthetic text from larger and larger PLMs brings with it an increasing risk of negative impact due to potential misuse.

In this work, we focus on the task of synthetic text detection. Due to the potentially profound consequences of global synthetic disinformation, we focus mainly, but not exclusively, on the detection of synthetic text in the news domain.1 Synthetic news has already been published on one highly reputable media website, only later to be withdrawn and apologies issued for the "breach of trust" (Crowley, 2023a, b).

Footnote 1: The news domain is recognised as having high emotional content (Strapparava and Mihalcea, 2007; Bostan et al., 2020).

Current approaches to synthetic text detection tend to focus on learning artefacts from the output distribution of PLMs (Gehrmann et al., 2019; Pillutla et al., 2021; Mitchell et al., 2023), e.g. increased perplexity caused by nucleus sampling (Zellers et al., 2019). However, PLM distributions are dependent on training data and numerous hyperparameter choices, including model architecture and sampling strategy. This gives rise to a combinatorial explosion of possible distributions and makes the task of synthetic text detection very difficult. Furthermore, it is not unexpected that performance decreases when classifying out-of-distribution instances, and there is a growing field of work investigating this shortcoming (Yang et al., 2023).
In this work, we consider not only the PLM output distribution, but also the other side of the synthetic text detection coin - human factors. We present a novel approach to the task of synthetic text detection which aims to exploit any difference between the expression of emotion in human-authored and synthetic text. Neural word representations can have difficulty with emotion words, and PLM sampling strategies are stochastic rather than driven by emotion - we use the term _affective deficit_ to refer to these shortcomings. Thus, the resulting synthetic text can express emotion in an incoherent way, and we introduce the term _affective incoherence_ to refer to this type of limitation. To be clear, we do not contend that synthetic text is devoid of emotion, rather that the emotional content of synthetic text may be affectively incoherent, and that this affective incoherence stems from the underlying affective deficit of the PLM.

To demonstrate the affective deficit that we believe to be characteristic of text produced by PLMs, we provide the following simple example of human- versus machine-authored text, with positive emotion words highlighted in orange and negative emotion words in pink. One shows the coherent emotion expected of human-authored text, while the other demonstrates affective incoherence (see footnote 2 to reveal which was the synthetic/human-authored text).

Footnote 2: (1) is human-authored while (2) is synthetic text. Both are from the _NEWSsynth_ dataset (see §4.2).

1. _Roberts chuckled when asked if he was happy to be on the other team now when Puig's name comes up. "Yeah, I am happy," he said, smiling._
2. _I'm really happy for him. Over the course of those three seasons, the 25-year-old has gone from rolling to poor to worse and old._

In this simple example, we have demonstrated one kind of affective incoherence present in synthetic text, but we suspect that fine-tuning an emotionally-aware PLM could detect additional and more complex emotional patterns that might go undetected by humans. We hypothesise that the _affective deficit_ of PLMs could result in synthetic text which is _affectively incoherent_, which could be useful in distinguishing it from human text. We use a transfer learning (Pan and Yang, 2010) method to train an "emotionally-aware" detector model. By fine-tuning a PLM first on emotion classification and then on our target task of synthetic text detection, we demonstrate improvements across a range of synthetic text generators, various sized models, datasets and domains. Furthermore, our emotionally-aware detector proves to be more accurate at distinguishing between human and ChatGPT text than (zero-shot) ChatGPT itself. Finally, we create two new datasets: _NEWSsynth_, a dataset of 20k human and synthetic news articles, and _ChatGPT100_, a testset of 100 human and ChatGPT texts on a range of topics. We make all code, models and datasets publicly available to aid future research.3

Footnote 3: [https://github.com/alanagiasi/emoPLMsynth](https://github.com/alanagiasi/emoPLMsynth)

## 2 Related Work

People are relatively poor at detecting synthetic text, and have been shown to score just above random chance (Gehrmann et al., 2019; Uchendu et al., 2021). Hybrid systems, such as GLTR (Gehrmann et al., 2019), use automation to provide information to aid human classification, highlighting a text sequence using colours to represent its likeness to the output distribution of a PLM such as GPT-2 (Radford et al., 2019).
Gehrmann et al. (2019) reported an increase in detection accuracy of approximately 18% (from 54% to 72%) using GLTR, while Uchendu et al. (2021) report an F1 score of 46% using GLTR with a heuristic based on an analysis of human text. Both human and hybrid approaches involve human decisions, which can be slow, expensive, susceptible to bias, and inconsistent.

Automatic detection produces the best results for synthetic text detection. This usually involves training PLMs to detect other PLMs, but zero-shot detection methods also exist, e.g. DetectGPT (Mitchell et al., 2023). Potentially the best supervised detector, BERT, can detect synthetic text from 19 different generators with a mean F1 of 87.99%, compared to 56.81% for hybrid approaches, with humans worst of all at 53.58% (Uchendu et al., 2021). The performance of SOTA detectors can, however, be inconsistent and unpredictable due to several factors specific to both the detector and the generator, including model size and architecture, training data and domain thereof, sampling strategy, hyperparameter selection, and sentence length. As mentioned above, Uchendu et al. (2021) showed the best of these models (BERT) achieves a mean F1 of 87.99% on 19 different synthetic text generators. However, the mean score hides the wide range (\(\approx\)53 points) of F1 scores, from as low as 47.01% to 99.97%, across distinct synthetic text generators. This volatility may be due in part to the detector simply learning artefacts of the generator distribution. Consequently, the task of synthetic text detection is somewhat of an arms race, with detectors playing catch-up, forced to learn ever-changing distributions due to the numerous factors that can potentially change those distributions.

Existing approaches to synthetic text detection exploit properties of synthetic text. Synthetic text can be incoherent and degrade as the length of generated text increases (Holtzman et al., 2020), its perplexity increases with increasing length, unlike human text (Zellers et al., 2019), and PLMs are susceptible to sampling bias, induction bias, and exposure bias (Ranzato et al., 2016). For example, exposure bias can contribute to brittle text which is repetitive, incoherent, or even contains hallucinations (Arora et al., 2022). Synthetic text can have an inconsistent factual structure, such as mentioning irrelevant entities (Zhong et al., 2020). Perhaps unsurprisingly, synthetic text detection is less difficult with longer excerpts of generated text, for both humans and machines (Ippolito et al., 2020).

One aspect of writing that has not, up to now, been a focus of synthetic text detection efforts is the expression of emotion. The problem of encoding emotion was first identified in neural NLP with static embeddings such as word2vec (Mikolov et al., 2013; Wang et al., 2020). Static word embeddings have difficulty distinguishing antonyms from synonyms (Santus et al., 2014). This deficit is present in embeddings for words which represent opposing emotions, e.g. joy-sadness (Seyeditabari and Zadrozny, 2017). Furthermore, words representing opposing emotions can have closer embeddings relative to words representing similar emotions (Agrawal et al., 2018). There have been various approaches to addressing this affective deficit in embeddings, such as transfer learning from sentiment analysis (Kratzwald et al., 2018), an additional training phase using an emotional lexicon and a psychological model of emotions (Seyeditabari et al., 2019), and combining separately-learned semantic and sentiment embedding spaces (Wang et al., 2020).
Addressing potential affective deficits of PLMs is also the goal of work aiming to make dialogue systems more empathetic. For example, Huang et al. (2018) force dialogue generation to express emotion based on the emotion detected in an utterance, while Rashkin et al. (2019) follow a similar approach with a transformer architecture to make the system more empathetic. In contrast, Wang et al. (2020) report that human text can display consistency in emotional content, whereby similar emotions tend to occur adjacent to each other while dissimilar emotions seldom do.4

Footnote 4: For a comprehensive survey of sentiment control in synthetic text see Lorandi and Belz (2023), and for studies of emotion in human writing, see Brand (1985, 1987, 1991); Bohn-Gettler and Rapp (2014); Knaller (2017).

Past work in synthetic text detection has focused on the properties of synthetic text generators and is yet to take advantage of the factors that potentially influence human-authored text, such as the emotions humans express in the text they write. Our work exploits this PLM affective deficit to improve synthetic text detection.

## 3 Equipping PLMs with Emotional Intelligence

Our method is illustrated in Figure 1. The process works as follows:

1. PLMsynth: In the leftmost column of Figure 1, human articles and synthetic articles are used to fine-tune a PLM to discriminate between the two kinds of text. This is indicated by the blue nodes in the PLM illustration.
2. emoPLM: In the middle column of Figure 1, a second dataset annotated with Ekman's 6 emotions (Ekman, 1992, 1999, 2016) is used to fine-tune a PLM on the task of emotion classification. This makes our model emotionally-aware, as indicated by the red nodes in the PLM illustration.
3. emoPLMsynth: The multi-class (6-head) classification layer from emoPLM is removed and replaced with a binary classification layer. The emotionally-aware PLM is then fine-tuned on the task of discriminating between human and synthetic articles. The PLM is still emotionally-aware while also being able to detect synthetic text - as indicated by the red and blue nodes respectively in the PLM illustration.

We conduct experiments using various PLM sizes, architectures, datasets, and domains for synthetic text generation and detection.

## 4 News Domain Experiments

### Generator and Detector Models

To generate synthetic text, we use the Grover causal PLM (GPT-2 architecture) pretrained on 32M news articles from the RealNews dataset (Zellers et al., 2019). We choose BERT (Devlin et al., 2019) as our main detector model since it is freely available and performs well in several tasks including sequence classification. A baseline BERT model (we call this BERTsynth) is fine-tuned on the task of synthetic text detection, while our proposed model is the same BERT model first fine-tuned on emotion classification (we call this intermediate model emoBERT) before further fine-tuning for synthetic text detection. This final proposed model is referred to as emoBERTsynth.

### Datasets

We create and release _NEWSsynth_, a dataset containing 10k human and 10k synthetic news articles. The 10k human-authored news articles were taken from the RealNews-Test dataset (Zellers et al., 2019) and used as prompts to \(\mathsf{Grover}_{\mathsf{base}}\) to generate a corresponding 10k synthetic articles. The prompt includes the news article, headline, date, author, web domain etc. as described by Zellers et al. (2019).
The dataset was split 10k-2k-8k for train, validation, and test respectively, the same ratio used by Zellers et al. (2019), with 50:50 human:synthetic text in each split; see Appendix B.3 for details. An investigation of the length of human vs synthetic text is provided in Appendix E.

In a second experiment, we also use the full RealNews-Test dataset itself, which comprises the same 10k human news articles used in _NEWSsynth_ and 10k synthetic articles generated by \(\mathsf{Grover}_{\mathsf{mega}}\). The use of synthetic text generated by \(\mathsf{Grover}_{\mathsf{mega}}\) instead of \(\mathsf{Grover}_{\mathsf{base}}\) allows comparison of BERTsynth and emoBERTsynth on text generated by a larger generator model, and against results reported for other models on this dataset.

We use the GoodNewsEveryone dataset (Bostan et al., 2020) to train emoBERT. This dataset contains 5k news headlines, and was chosen since it is within the target domain (news) and language (English) and is annotated with categorical emotions. The 15 emotion labels from GoodNewsEveryone were reduced to 11 emotions using the mapping schema of Bostan and Klinger (2018), and further reduced to 6 emotions based on the Plutchik Wheel of Emotion (Plutchik, 1980, 2001) - see Table 1 and Figure 3 in Appendix A - resulting in 5k news headlines labelled with Ekman's 6 basic emotions, the most frequently used categorical emotion model in the psychology literature (Ekman, 1992, 1999, 2016).

### Training BERTsynth

We train BERTsynth, a \(\mathsf{BERT}_{\mathsf{base}}\)-cased model fine-tuned for synthetic text detection (using the _NEWSsynth_ or RealNews-Test dataset). Input sequence length was maintained at the BERT maximum of 512 tokens (\(\approx 384\) words). Five training runs were conducted. Each training run was 4 epochs - the most possible within GPU time constraints and similar to Zellers et al. (2019), who used 5 epochs.5 For each training run, a unique seed was used for model initialization, and a unique set of three seeds was used for the dataset shuffle - one seed each for the train, validation, and test splits. Furthermore, the HuggingFace library shuffles the training data between epochs. The reproducibility of the training and validation results using seeds was verified by conducting multiple runs of training and validation. Hyperparameter values are listed in Appendix C.

Footnote 5: After each epoch the model (checkpoint) was run against the validation set for Accuracy, and the checkpoint and Accuracy results were saved (in addition to F1, Precision and Recall). The checkpoint with the highest Accuracy score was then run on the Test set.

Figure 1: The emotionally-aware PLM (emoPLMsynth) takes advantage of its prior fine-tuning on emotion to improve performance on the task of synthetic text detection. In contrast, the standard PLM fine-tuned only on synthetic text detection (PLMsynth) has no training on emotion. Our experiments show the emotionally-aware PLM (emoPLMsynth) outperforms the standard PLM (PLMsynth) in multiple scenarios.

### Training emoBERT

We train emoBERT, a \(\mathsf{BERT}_{\mathsf{base}}\)-cased model fine-tuned on the single-label multi-class task of emotion classification using the GoodNewsEveryone dataset. Fine-tuning emoBERT followed a similar process to fine-tuning BERTsynth described in §4.3. This time, there were 5k examples and fine-tuning was for 10 epochs.
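To make the two-stage pipeline of Figure 1 concrete, the sketch below shows how emoBERT and emoBERTsynth can be produced with the HuggingFace Transformers library. It is a minimal illustration under stated assumptions - `emotion_dataset` and `newssynth_dataset` stand in for pre-tokenized datasets with integer labels, and the hyperparameters shown are not the exact training configuration (see Appendix C for that).

```python
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

def fine_tune(model, dataset, out_dir, epochs):
    """Run a standard HuggingFace training loop and save the checkpoint."""
    args = TrainingArguments(output_dir=out_dir, num_train_epochs=epochs)
    Trainer(model=model, args=args, train_dataset=dataset).train()
    model.save_pretrained(out_dir)

# Stage 1 (emoBERT): fine-tune BERT with a 6-way Ekman emotion head.
emo = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=6)
fine_tune(emo, emotion_dataset, "emoBERT", epochs=10)   # GoodNewsEveryone, 6 labels

# Stage 2 (emoBERTsynth): reload the emotionally-aware encoder, discard the
# 6-way head and attach a freshly initialised binary head, then fine-tune on
# human-vs-synthetic detection.
detector = AutoModelForSequenceClassification.from_pretrained(
    "emoBERT", num_labels=2, ignore_mismatched_sizes=True)
fine_tune(detector, newssynth_dataset, "emoBERTsynth", epochs=4)  # NEWSsynth train split
```

The `ignore_mismatched_sizes=True` flag is what realises the head swap of Figure 1: the encoder weights are reloaded, while the incompatible 6-output classification layer is replaced with a randomly initialised 2-output layer.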
Classification accuracy is not the end goal for emoBERT. Its purpose is to reduce the affective deficit of the PLM by modifying the representations of words conveying emotions, and to improve performance on the task of synthetic text detection via transfer learning. The mean F1\({}_{\mu}\) for emoBERT is 39.4% on the validation set - more than double mean chance (16.7%) and within the range of 31% to 98% reported for within-corpus emotion classification in UnifiedEmotion (Bostan and Klinger, 2018). See Appendix D for more details.

### Training emoBERTsynth

We train emoBERTsynth, an emoBERT model fine-tuned for synthetic text detection (using the _NEWSsynth_ or RealNews-Test dataset). The best emoBERT model (checkpoint) from each of the 5 training runs had its emotion classification head (6 outputs) replaced with a binary classification head (2 outputs) for human vs synthetic text classification; see Figure 1. Each model was then fine-tuned on the synthetic text detection task using the exact same process and set of random seeds (for dataset shuffling) as the 5 best models described in §4.3. This allowed a direct comparison between the 5 BERTsynth models (trained on synthetic text detection only) and the 5 emoBERTsynth models (fine-tuned on emotion classification followed by synthetic text detection).

### Results

The results in Figure 2 and Table 2 show the performance of BERTsynth and emoBERTsynth when fine-tuned on the _NEWSsynth_ dataset. The results support the hypothesis that emotion can help detect synthetic text. emoBERTsynth outperforms BERTsynth head-to-head for accuracy and F1 in all 5 runs. Looking at precision and recall, emoBERTsynth outperforms BERTsynth in precision in all 5 runs, while the opposite is the case for recall. It is worth comparing the relative difference between recall and precision for the emoBERTsynth and BERTsynth models in Table 2: emoBERTsynth has a difference between mean recall and mean precision of 4.76 (89.04 - 84.28), while the difference for BERTsynth is more than double that, at 10.81 (91.63 - 80.82).

\begin{table} \begin{tabular}{l l l} \hline \hline GoodNewsEveryone & & Ekman \\ \hline \hline disgust & \(\rightarrow\) & disgust (8\%) \\ fear & \(\rightarrow\) & fear (8\%) \\ sadness, guilt, shame & \(\rightarrow\) & sadness (14\%) \\ joy, trust, pride, love/like, positive anticipation/optimism & \(\rightarrow\) & happiness (17\%) \\ anger, annoyance, negative anticipation/pessimism & \(\rightarrow\) & anger (24\%) \\ negative surprise, positive surprise & \(\rightarrow\) & surprise (30\%) \\ \hline \hline \end{tabular} \end{table} Table 1: Emotion Mapping Schema: GoodNewsEveryone (15 emotions) to Ekman's 6 basic emotions. % shows the emotion label distribution in the dataset.

Figure 2: Test results for BERTsynth and emoBERTsynth on the _NEWSsynth_ dataset. emoBERTsynth is higher for Accuracy, Precision and F1, while BERTsynth is higher for Recall.

Thus, we suggest our emotionally-aware PLM, emoBERTsynth, is a better performing model than the standard PLM, BERTsynth, because it has a better balance between precision and recall. In Table 3 we compare BERTsynth and emoBERTsynth on the RealNews-Test dataset. Recall that this dataset contains synthetic articles generated by \(\text{Grover}_{\text{mega}}\) instead of the smaller \(\text{Grover}_{\text{base}}\). We also compare against the FastText, GPT-2 and BERT detector models reported by Zellers et al. (2019) on this dataset.
emoBERTsynth has the highest accuracy, outperforming BERTsynth by 1.4%, \(\text{BERT}_{\text{base}}\) by 9.03%, GPT-\(2_{\text{base}}\) by 10.03%, and FastText by 12.43%. These results support the hypothesis that emotion can improve synthetic text detection. There is a 7.63-point difference between our BERTsynth model and the BERT model reported by Zellers et al. (2019), despite both models being \(\text{BERT}_{\text{base}}\) and fine-tuned on the same dataset and splits. However, there are differences in how the models were treated before this fine-tuning, and there may be some hyperparameter differences in fine-tuning. We described in §4.3 how we fine-tune a randomly initialised BERT model to create BERTsynth. Zellers et al. (2019) reported that their BERT models were domain-adapted to news (by training on RealNews) at a length of 1024 WordPiece tokens. It is possible that this additional domain adaptation and extended input sequence length actually harmed the performance of the \(\text{BERT}_{\text{base}}\) model on the synthetic text detection task. The performance of synthetic text detectors can improve with length (Ippolito et al., 2020), and the longer input sequence length could help in this regard. However, the vast majority of human and synthetic news articles in RealNews-Test are shorter than 1024 tokens. Thus, they may not benefit from that extended input length, and the model may in fact be somewhat reliant on those later input tokens for prediction.

### Analysis

In this section, we perform a further set of experiments to aid in interpreting our main results.

#### 4.7.1 Length of Human vs Synthetic articles

We investigate whether PLMs simply learn something about the length of articles as a proxy for discrimination between human and synthetic text. An analysis of _NEWSsynth_ articles (train and validation splits) reveals no obvious correlation (Pearson \(r=0.20\)) between the number of words in a human article and the resulting synthetic article. 64% of human articles are longer than their corresponding synthetic article, while 34% of synthetic articles are longer.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Precision} & \multicolumn{2}{c}{Recall} & \multicolumn{2}{c}{F1} & \multicolumn{2}{c}{Accuracy} \\ Run & Bs & emoBs & Bs & emoBs & Bs & emoBs & Bs & emoBs \\ \hline 1 & 80.30 & 81.25 & 92.40 & 92.20 & 85.92 & 86.38 & 84.86 & 85.46 \\ 2 & 82.26 & 84.30 & 90.90 & 89.83 & 86.37 & 89.77 & 85.65 & 86.55 \\ 3 & 78.01 & 82.88 & 92.40 & 88.20 & 84.60 & 85.45 & 83.18 & 84.99 \\ 4 & 77.44 & 85.84 & 94.85 & 88.20 & 85.27 & 87.00 & 83.61 & 86.83 \\ 5 & 86.09 & 87.14 & 87.58 & 86.75 & 86.83 & 86.95 & 86.71 & 86.98 \\ Mean & 80.82 & **84.28** & **91.63** & 89.04 & 85.80 & **87.11** & 84.80 & **86.16** \\ Var. & (9.89) & (4.35) & (5.70) & (3.45) & (0.62) & (2.08) & (1.68) & (0.63) \\ \(\Delta\) & \multicolumn{2}{c}{+3.46} & \multicolumn{2}{c}{-2.59} & \multicolumn{2}{c}{+1.31} & \multicolumn{2}{c}{+1.36} \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of BERTsynth (Bs) and emoBERTsynth (emoBs) on the _NEWSsynth_ test set (variance is shown in brackets under the mean). emoBs outperforms Bs head-to-head in all 5 runs in Accuracy, F1, and Precision, while Bs outperforms emoBs head-to-head in all 5 runs in Recall.

\begin{table} \begin{tabular}{l l l} \hline \hline Size & Model & Acc. \\ \hline 11M & FastText & 63.80 \\ 124M & GPT-\(2_{\text{base}}\) & 66.20 \\ & BERT\({}_{\text{base}}\) & 67.20 \\ & BERTsynth & 74.83 \\ & emoBERTsynth & **76.23** \\ \hline \hline \end{tabular} \end{table} Table 3: emoBERTsynth outperforms other model architectures and sizes in detecting human and \(\text{Grover}_{\text{mega}}\) (1.5B) synthetic text from the RealNews-Test dataset. Detector model sizes include 11M and 124M parameters, and architectures include FastText, GPT-\(2_{\text{base}}\), and \(\text{BERT}_{\text{base}}\). The FastText, GPT-\(2_{\text{base}}\) and \(\text{BERT}_{\text{base}}\) results are reported by Zellers et al. (2019).
Human articles are longer overall, but have slightly shorter sentences than synthetic text, and human articles have more sentences per article - which accounts for their longer mean length. Similar observations were made for RealNews-Test by Bhat and Parthasarathy (2020). See Table 10 and Figs. 5 to 8 in Appendix E. Overall, these results point to neither article length nor sentence length as a reliable discriminator for synthetic text, suggesting that detector models are not simply learning length as a proxy for human vs synthetic text.

#### 4.7.2 Size of fine-tuning splits

The BERTsynth fine-tuning regime (§4.3) was repeated using all (20k) and half (10k) of _NEWSsynth_. In all 5 runs, the BERTsynth model trained on the larger 20k dataset performed better than the equivalent model trained on the smaller 10k dataset - see Table 4. There was a modest improvement in precision (+2.43%) with a much larger increase in recall (+11.78%). The results suggest that recall is most sensitive to the size of the training set. This is perhaps because the PLM is already exposed to human text during pretraining but not to synthetic text (_exposure bias_), so more exposure to synthetic text increases the model's ability to detect synthetic text correctly with fewer false negatives.

#### 4.7.3 Alternative forms of emoBERT

What is the effect of using different emotion datasets to fine-tune our emotionally-aware PLMs on the downstream task of synthetic text detection? We conduct experiments on emoBERTsynth by fine-tuning eight alternative emoBERT models:

* **GNE** involves fine-tuning using the GoodNewsEveryone dataset (§4.2) as in the main experiments;
* **GNE\({}_{\text{r}}\)** involves fine-tuning with a version of GNE with randomised labels. We do this to examine the extent to which the difference between BERTsynth and emoBERTsynth can be attributed to emotion or to the process of fine-tuning on an arbitrary classification task with the GNE data;
* **AT** involves fine-tuning with the AffectiveText dataset comprising 1.5k news headlines in English annotated with respect to Ekman's 6 emotions (Strapparava and Mihalcea, 2008);
* **GA** is GNE and AT combined;
* **SST-2** involves fine-tuning on the task of sentiment polarity classification using the SST-2 dataset of 68,221 movie reviews in English (Socher et al., 2013);
* **GAS** is GNE, AT, and SST-2 combined, with SST-2 positive sentiment mapped to joy and negative sentiment mapped to sadness;
* **S-GA** involves first fine-tuning on sentiment using SST-2 and then fine-tuning on emotion using GA. This experiment is inspired by Kratzwald et al.
(2018), who report that emotion classification can be improved by transfer learning from sentiment analysis;
* **GAS+-** is GAS but mapped to positive and negative sentiment.6

Footnote 6: Happiness was mapped to positive sentiment; sadness, fear, anger and disgust were mapped to negative sentiment; surprise was mapped to sentiment using a DistilBERT (base-uncased) (Sanh et al., 2020) sentiment classifier fine-tuned on the SST-2 dataset and available on HuggingFace: [https://huggingface.co/distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased). 14.05% of ‘surprise’ mapped to positive, while the remaining 85.95% mapped to negative sentiment.

The results (Table 5) reveal that the best-performing emoBERTsynth models are those fine-tuned using GNE or using GNE and AffectiveText combined (GA). The latter achieves the highest accuracy and the former the highest F1. We attribute the relatively poor performance of AffectiveText on its own to its small size, comprising only 1.5k headlines (split 625 + 125 for the training and dev splits respectively) compared to 5k for GNE and 68k for SST-2. Table 5 also shows that fine-tuning on GNE outperforms fine-tuning with randomised labels (GNE\({}_{\text{r}}\)). The 1.1-point drop in accuracy of GNE\({}_{\text{r}}\) compared to GNE suggests that the emotion classification task does play a role in the improved performance of emoBERTsynth versus BERTsynth.

The results in Table 5 suggest that fine-tuning on sentiment is not particularly helpful. The poor performance of GAS could be due to the crude mapping of negative sentiment to sadness (because it could be any one of 5 Ekman emotions), which results in a large dataset imbalance across emotion labels.

\begin{table} \begin{tabular}{r c c c c} \hline \hline Split & Prec. & Recall & F1 & Acc. \\ \hline 5-1-4k & 78.39 & 79.85 & 78.89 & 78.58 \\ Var. & (24.10) & (17.33) & (3.17) & (6.51) \\ 10-2-8k & 80.82 & 91.63 & 85.80 & 84.80 \\ Var. & (9.89) & (5.70) & (0.62) & (1.68) \\ \(\Delta\) & +2.43 & +11.78 & +6.91 & +6.22 \\ \hline \hline \end{tabular} \end{table} Table 4: BERTsynth metrics for different split sizes, using the _NEWSsynth_ dataset, averaged over 5 runs (with variance shown in brackets).
We create a custom dataset comprising human articles and ChatGPT synthetic text from multiple non-news domains, and use it to compare our BERTsynth and emoBERTsynth models against ChatGPT (in a zero-shot setting) on the task of detecting ChatGPT's own synthetic text.7 Footnote 7: We use ChatGPT-3.5 (Mar-14-2023 version) between dates 16-Mar-2023 and 24-Mar-2023. **ChatGPT100** We create and release _ChatGPT100_ - a dataset comprising human articles and synthetic articles generated by ChatGPT. Following Clark et al. (2021) who collected 50 human articles and generated 50 articles using GPT2 and GPT3, we also collect 50 human articles, and we then use ChatGPT to generate 50 synthetic ones. The human written articles are from 5 different domains: Science, Entertainment, Sport, Business, and Philosophy. We used reputable websites for the human text which was gathered manually, see Table 8 in Appendix B.3. The synthetic text was generated by providing ChatGPT with a prompt such as "_In less than 400 words, tell me about moral philosophy._" where human text on the same topic, moral philosophy in this case, had already been found online. The data generated by ChatGPT is semantically correct and was checked manually. Subject areas in which the authors are knowledgeable were chosen so that the correctness of the synthetic text could be checked. To be comparable with the detectors presented in our earlier experiments, the articles were limited to a maximum of 384 words (\(\approx\) 512 tokens) and truncated at a natural sentence boundary. The two articles were then made to be approximately the same length. \begin{table} \begin{tabular}{r r r r r} \hline \hline & Prec. & Rec. & F1 & Acc. \\ \hline BLOOMsynth & 81.90 & 85.95 & 83.79 & 83.40 \\ Viz. & (4.76) & (12.22) & (12.23) & (0.93) \\ emoBLOOMsynth & **85.98** & **88.02** & **86.90** & **86.75** \\ Viz. & (5.72) & (9.96) & (0.27) & (0.15) \\ \(\Delta\) & +4.08 & +2.07 & +3.11 & +3.35 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of BLOOMsynth and emoBLOOMsynth against the _NEWSsynth_ test set averaged over 5 runs (with variance in brackets). emoBLOOMsynth outperforms BLOOMsynth in Accuracy, F1, Recall, and Precision. \begin{table} \begin{tabular}{l r r r r} \hline \hline & Prec. & Rec. & F1 & Acc. \\ \hline GAS & 81.95 & 85.58 & 83.72 & 83.36 \\ S-GA & 82.60 & 87.80 & 85.12 & 84.65 \\ GAS+- & 82.41 & 88.30 & 85.25 & 84.73 \\ AT & **85.52** & 83.88 & 84.69 & 84.84 \\ SST-2 & 82.85 & 88.38 & 85.52 & 85.04 \\ GNEr & 82.44 & 89.93 & 86.02 & 85.39 \\ GNE & 83.84 & **90.40** & **87.00** & 86.49 \\ GA & 85.34 & 88.18 & 86.73 & **86.51** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation experiments, using different emotion datasets for fine-tuning emoBERT, comparing emoBERTsynth (eBs) detectors on the task of synthetic text detection on the _NEWSsynth_ dataset. GNE is the GoodNewsEveryone dataset which is used in the main experiments. GNE\({}_{\epsilon}\) is GNE with randomised labels. AT is AffectiveText. GA is GNE and AT combined. SST-2 is the SST-2 sentiment dataset. GAS is the combined GNE, AT, and SST-2 datasets. S-GA is first fine-tuned on sentiment using the SST-2 dataset, and then fine-tuned on emotion using the GNE and AT datasets, and finally fine-tuned on synthetic text detection. GAS+- is GAS but mapped to positive and negative sentiment. 
Having tested ChatGPT, we then tested our BERTsynth and emoBERTsynth models (the models fine-tuned on RealNews-Test from Table 3).

**Results** The results are shown in Table 7. The first thing to note is that no model performs particularly well. ChatGPT tends to misclassify its own synthetic text as human (hence the low recall score of 30%).8 BERTsynth and emoBERTsynth, on the other hand, tend to classify text as machine-written, and they both obtain 100% recall. We previously saw (§4.7.2) that recall is most sensitive to fine-tuning set size. The BERTsynth and emoBERTsynth models have been exposed to synthetic text during fine-tuning, whereas ChatGPT is performing the task zero-shot. This could explain some of the difference in recall between ChatGPT and the two fine-tuned models.

Footnote 8: ChatGPT's responses suggest it may use fact-checking as a proxy during synthetic text detection.

Finally, as with our experiments with Grover-generated text, emoBERTsynth outperforms BERTsynth on all metrics. The dataset is small, so we must be careful not to conclude too much from this result, but it does suggest that fine-tuning on emotion could be beneficial when detecting synthetic text from LLMs and more sophisticated generators, in non-news domains. This is in line with the results of our earlier experiments using variously sized PLMs (such as Grover, BERT, BLOOM) as generators and detectors in the news domain, and shows the potential of our approach with different generator models and in different domains.

## 6 Conclusion

We conducted experiments investigating the role that emotion recognition can play in the detection of synthetic text. An emotionally-aware PLM fine-tuned on emotion classification and subsequently trained on synthetic text detection (emoPLMsynth) outperformed a model with identical fine-tuning on synthetic text detection but without emotion training (PLMsynth). The results hold across different synthetic text generators, model sizes, datasets and domains. This work specifically demonstrates the benefits of considering emotion in the task of detecting synthetic text, contributes two new datasets (_NEWSsynth_ and _ChatGPT100_) and, more generally, hints at the potential benefits of considering human factors in NLP and Machine Learning.

Is it possible that some other proxy for synthetic text is at play? We ruled out some potential proxies related to article length in §4.7.1. In the ablation studies in §4.7.3, we showed that the emotion labels result in an improvement in performance compared to randomised labels for the same emotion dataset. Other potential proxies are nonsensical sentences, repetitive text, etc. However, we account for these by comparing our emotionally-aware PLMs (emoPLMsynth) against standard PLMs fine-tuned on synthetic text detection only (PLMsynth). Thus, any advantage or disadvantage of sentences without meaning (or any other factor) is also available to the non-emotionally-aware model against which we compare our emotionally-aware model. Future work will investigate further the _affective profile_ (i.e. emotional content and characteristics) of human and synthetic text, and attempt to determine whether there are measurable differences which may prove useful in the task of synthetic text detection.
## Limitations

The datasets used in this work (synthetic text datasets, emotion datasets, and sentiment dataset) are in English, and model performance in other languages may vary. We primarily focus on the news domain and, while performance in other domains may vary (Merchant et al., 2020), we include experiments in several non-news domains (§5). The emotion datasets are imbalanced across emotion labels, which can impact overall performance, and we conducted ablation experiments to find the best combination of emotion and sentiment datasets (§4.7.3). GoodNewsEveryone's 15 emotions were mapped to Ekman's 6 emotions Ekman (1992, 1999, 2016), factoring in Plutchik's wheel of emotion Plutchik (1980, 2001), but there is no firm agreement in the literature as to which is the 'correct' or 'best' emotion model Ekman (2016). The emotion models used in this work are the two most popular in the literature.

The maximum input sequence length of BERT is 512 tokens, and articles longer than this are truncated, which may negatively affect performance on the synthetic text detection task Ippolito et al. (2020). However, we also saw that increasing the input sequence length may actually contribute to poorer performance (§4.6).

## Ethical Considerations

We release multiple PLMs (emoBERTsynth, BERTsynth, emoBLOOMsynth and BLOOMsynth) which we refer to generically as emoPLMsynth and PLMsynth. emoPLMsynth and PLMsynth are BERT or BLOOM models with versions fine-tuned on _NEWSsynth_ or the RealNews-Test Zellers et al. (2019) datasets; emoPLMsynth is also fine-tuned on combinations of the GoodNewsEveryone Bostan et al. (2020), AffectiveText Strapparava and Mihalcea (2008), and SST-2 Socher et al. (2013) datasets.

We release _ChatGPT100_, a dataset comprising 100 English-language articles in various non-news domains. 50 articles are human-written, and 50 articles are generated by ChatGPT. The 100 articles have all been manually curated and do not contain toxic content. Furthermore, ChatGPT has a content filter which flags potentially harmful content.

We release _NEWSsynth_, a dataset comprising 40k English-language articles in the news domain.9 20k news articles are human-written (from RealNews-Test) and 20k are generated by Grover. Publishing synthetic text is a risk, but _NEWSsynth_ is clearly labelled as containing synthetic text. This is a similar precaution to the synthetic text from Grover, which has already been published and is publicly available Zellers et al. (2019). Footnote 9: We include 20k articles in addition to the 20k used in this work.

The potential harms, such as toxic synthetic text Gehman et al. (2020), of PLMs pretrained on web-crawled data have been the subject of much discussion Bender et al. (2021). Since emoPLMsynth and PLMsynth (and Grover) were pretrained and/or fine-tuned on web-crawled data, there is a possibility that they could produce inappropriate synthetic text; this also applies to the _NEWSsynth_ dataset.
We recognise these potential harms and, to mitigate them, include the caveat below with the released datasets (_NEWSsynth_ and _ChatGPT100_) and the released language models (emoPLMsynth, PLMsynth): Care must be taken when using these language models (emoPLMsynth and PLMsynth) and datasets (_NEWSsynth_ and _ChatGPT100_), as they may produce or contain ethically problematic content. Data scraped from the web may contain content which is ethically problematic, such as adult content, bias, toxicity, etc., and web-scraped data is used in pre-trained language models such as BERT, BLOOM and Grover. PLMsynth and emoPLMsynth are based on BERT or BLOOM PLMs, while _NEWSsynth_ was generated by Grover. Consequently, emoPLMsynth and PLMsynth could produce text which is ethically problematic, while _NEWSsynth_ may contain ethically problematic content. As a result, any use of the language models (emoPLMsynth, PLMsynth) or the datasets (_NEWSsynth_ or _ChatGPT100_) should employ appropriate checks and test regimes to handle potentially harmful content.

The intended use of the emoPLMsynth and PLMsynth models, and the _NEWSsynth_ and _ChatGPT100_ datasets, is for research purposes and beneficial downstream tasks such as identifying synthetic text in online news, reviews, comments, plagiarism cases, etc. Online platforms could use this identification to decide whether or not to publish such content, or where to surface it via recommender algorithms. This could help protect public confidence in online discourse.

Energy usage was reduced by training on smaller models and for a relatively small number of epochs where possible, by using random search rather than an exhaustive grid search, and by using freely available managed compute resources where possible.

## Acknowledgements

This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. We thank Rowan Zellers for his permission to use 20k human articles from RealNews-Test (Zellers et al., 2019) in the _NEWSsynth_ dataset which we release with this paper. We thank the reviewers for their valuable feedback which helped improve the paper.
2301.03827
Algorithms for the uniqueness of the longest common subsequence
Given several number sequences, determining the longest common subsequence is a classical problem in computer science. This problem has applications in bioinformatics, especially determining transposable genes. Nevertheless, related works only consider how to find one longest common subsequence. In this paper, we consider how to determine the uniqueness of the longest common subsequence. If there are multiple longest common subsequences, we also determine which number appears in all/some/none of the longest common subsequences. We focus on four scenarios: (1) linear sequences without duplicated numbers; (2) circular sequences without duplicated numbers; (3) linear sequences with duplicated numbers; (4) circular sequences with duplicated numbers. We develop corresponding algorithms and apply them to gene sequencing data.
Yue Wang
2023-01-10T07:47:30Z
http://arxiv.org/abs/2301.03827v4
# Algorithms for determining the uniqueness of the longest common subsequence

###### Abstract

Given several number sequences, determining the longest common subsequence is a classical problem in computer science. This problem has applications in bioinformatics, especially determining transposable genes. Nevertheless, related works only consider how to find one longest common subsequence. In this paper, we consider how to determine the uniqueness of the longest common subsequence. If there are multiple longest common subsequences, we also determine which number appears in all/some/none of the longest common subsequences. We focus on four scenarios: (1) linear sequences without duplicated numbers; (2) circular sequences without duplicated numbers; (3) linear sequences with duplicated numbers; (4) circular sequences with duplicated numbers. We develop corresponding algorithms and apply them to gene sequencing data.

**KEY WORDS:** longest common subsequence, algorithm, graph, transposable gene

## 1 Introduction

Given some number sequences, a common subsequence is a number sequence which appears in all these sequences (not necessarily consecutive). Determining the longest common subsequence (LCS) for some number sequences is a classical problem in computer science. The LCS is a common tool to evaluate the difference among different sequences. For example, the LCS can be applied to computational linguistics [39; 61; 60]. In biology, it is common to use the length of the LCS as a quantitative score for comparing DNA sequences [12; 26; 95]. The LCS has also been used to define ultraconserved elements [55] or remove incongruent markers in DNA sequences [16].

Various scenarios for the LCS problem have been studied. Here we list Scenarios A-E, where the first two are more commonly studied. For more works in these scenarios, readers may refer to more thorough reviews [5; 24; 87]. Scenario A considers two sequences with possibly repeated numbers, where the sequence length is \(n\). The goal is to find the LCS. If a number appears multiple times in a common subsequence, all appearances are counted when calculating the length of this common subsequence. This can be solved by dynamic programming with \(\mathcal{O}(n^{2})\) time complexity and \(\mathcal{O}(n)\) space complexity [23], but \(\mathcal{O}(n^{2-\epsilon})\) time complexity for any \(\epsilon>0\) is impossible [4]. It can also be solved with \(o(n)\) space complexity and \(\mathcal{O}(n^{3})\) time complexity [35]. In Scenario B, there are \(m\) sequences with possibly repeated numbers, and the sequence length is \(n\). The goal is to find the LCS. If a number appears multiple times in a common subsequence, all appearances are counted when calculating the length of this common subsequence. A standard dynamic programming algorithm has \(\mathcal{O}(n^{m})\) time complexity [7]. There have been other faster algorithms [67; 45; 27]. This scenario is equivalent to the maximum clique problem in graph theory, which is NP-hard [40], but has relatively fast exact and heuristic algorithms [30; 37; 74]. Scenario C considers two sequences with possibly repeated numbers, where the sequence length is \(n\). The goal is to find the LCS in which each number appears at most once. This scenario is NP-hard [1]. Scenario D is similar to Scenario B, but only considers common subsequences that contain or do not contain certain strings [69; 47]. In Scenario E, the sequences are arc-annotated, and the LCS should have the same arc annotation as in the original sequences [31].
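As a reference point for Scenario A, the standard dynamic program can be written in a few lines of Python; this is a generic textbook sketch (with the \(\mathcal{O}(n)\)-space row trick), not code from this paper.

```python
def lcs_length(a, b):
    """O(n^2)-time, O(n)-space dynamic program for Scenario A: the LCS
    length of two sequences, counting every copy of a repeated number."""
    prev = [0] * (len(b) + 1)          # DP row for the previous prefix of a
    for x in a:
        curr = [0]
        for j, y in enumerate(b, 1):
            # extend the diagonal on a match, else carry the best neighbour
            curr.append(prev[j - 1] + 1 if x == y else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

assert lcs_length([1, 2, 3, 2, 4], [2, 1, 2, 4, 3]) == 3   # e.g. (1, 2, 4)
```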
In this paper, the motivation for studying the LCS problem is to apply it to compare gene sequences. Assume we have some gene sequences from different individuals of the same species or different species. Some genes are relatively unstable, and they can change their relative locations in the gene sequence (transposable). An unstable gene might also be duplicated or deleted. Therefore, these gene sequences from different individuals are not identical. Then we can find the LCS, which is useful for measuring the stability of genes. Genes in the LCS should be more stable, and genes not in the LCS should be transposable.

Due to the motivation of comparing gene sequences, we consider four scenarios that are different from the previously studied LCS problems. These four scenarios are determined by two factors: whether the considered species has linear or circular gene sequences, and whether genes have multiple copies. When genes have multiple copies, we only consider common subsequences that consist of all or none of the copies of the same gene. Scenario 1 has linear sequences without duplicated genes; Scenario 2 has circular sequences without duplicated genes; Scenario 3 has linear sequences with duplicated genes; Scenario 4 has circular sequences with duplicated genes.

Most known methods only aim at finding one LCS. Since we are concerned with the stability of genes, the uniqueness of the LCS should be determined. When the LCS is not unique, we also need to classify whether a gene appears in all/some/none of the LCSs. A gene that appears in all the LCSs is highly stable; a gene that appears in some LCSs is moderately stable; a gene that appears in no LCS is unstable. Determining all LCSs is too time-consuming, since there might be exponentially many LCSs. For example, consider two sequences \((1,2,3,4,5,6,\ldots,2n-1,2n)\) and \((2,1,4,3,6,5,\ldots,2n,2n-1)\). Although the sequence length is \(2n\), and the LCS length is \(n\), the number of LCSs is \(2^{n}\).

To determine the relationship between genes and LCSs, we develop corresponding algorithms with polynomial time complexities for Scenarios 1, 2 (Algorithms 2, 4). To our knowledge, there are no other polynomial-time methods for determining whether genes appear in all LCSs. Scenarios 3, 4 only consider subsequences that consist of all or none of the copies of the same gene, and calculate the length by genes. Therefore, they are different from the classic Scenario B. We establish the equivalence of Scenario 3 with the maximum clique problem on graphs (Proposition 1). We prove that Scenario 4 lies between the maximum clique problem on graphs and the maximum clique problem on 3-uniform hypergraphs (Propositions 2, 3). Although circular sequences are commonly studied in the context of genomic rearrangements, they are rare in the LCS literature. Therefore, our Algorithm 3 that finds one LCS for Scenario 2 should also be novel. We test Algorithms 1, 2, 3, 4 on the gene sequences of different _Escherichia coli_ individuals and find some possible transposable genes. If we only need to find one LCS, then Scenario 1 is a special case of Scenario B, and our method (Algorithm 1) can be easily derived from standard algorithms. Scenarios 3, 4 are equivalent to maximum clique problems in graphs and hypergraphs, which are NP-hard. These properties are also similar to Scenario B.
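The exponential example above is easy to verify by brute force for small \(n\); the helper `is_subseq` below is written for this illustration (\(n=3\), sequence length 6).

```python
from itertools import combinations

def is_subseq(sub, seq):
    it = iter(seq)
    return all(x in it for x in sub)   # 'in' consumes the iterator

a = [1, 2, 3, 4, 5, 6]                 # (1, 2, ..., 2n) with n = 3
b = [2, 1, 4, 3, 6, 5]                 # adjacent pairs swapped
common = [c for r in range(len(a) + 1)
          for c in combinations(a, r) if is_subseq(c, b)]
best = max(len(c) for c in common)
print(best, sum(len(c) == best for c in common))   # 3 8, i.e. n and 2^n
```

Each adjacent pair contributes exactly one element to any LCS, and either element of the pair may be chosen, hence the \(2^{n}\) count.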
Although there have been numerous algorithms for the maximum clique problem [88], for the sake of completeness, we design fast heuristic algorithms (Algorithms 5, 6) and test them to find that they only fail in rare cases. We proposed the idea of using the LCS to find transposable genes and Algorithm 1 in a previous paper [32], where Algorithm 1 was applied to study the "core-gene-defined genome organizational framework" (the complement of transposable genes) in various bacteria, and it was found that for different species, the transposable gene distribution and developmental traits are correlated. This paper considers other situations (especially when the LCS is not unique), and can be regarded as a theoretical sequel of that previous paper. Algorithm 1 is contained in this paper for the sake of completeness. In sum, our main contributions are Algorithms 2, 3, 4 in Scenarios 1, 2 and Propositions 1, 2, 3 in Scenarios 3, 4.

We first introduce the background of transposable genes in Section 2. Then we describe the setup for the LCS problem we study in Section 3. In Sections 4-7, we transform the LCS problem into corresponding graph theory problems and design algorithms. We finish with some discussions in Section 8 and conclusions in Section 9. All the algorithms in this paper have been implemented in Python. For the code and data files, see [https://github.com/YueWangMathbio/Transposon](https://github.com/YueWangMathbio/Transposon).

## 2 Biological background of transposable genes

In this section, we review how gene sequences become different, and introduce the specific biological problem we want to study. We also explain how Scenarios 1-4 of the LCS problem are derived from the biological problem. The nucleotide sequence can be changed by various events, such as inversion, insertion, deletion, and duplication [28]. Such rearrangement events lead to the existence of transposons (also called transposable elements or jumping genes), which are DNA sequences that can change their relative positions within the genome. Transposons were first discovered in maize by Barbara McClintock [42]. There are various types of transposons: long terminal repeat (LTR) retrotransposons, Dictyostelium intermediate repeat sequence (DIRS)-like elements, Penelope-like elements (PLE), long interspersed elements (LINE), short interspersed elements (SINE), terminal inverted repeats (TIR), Helitrons, etc. [41]. Transposons are common in various species. For the human genome, the proportion of transposons is approximately 44%, although most transposons are inactive [43]. Transposons can participate in controlling gene expression [94], and they are related to several diseases, such as cancer [13], hemophilia [33], and porphyria [46]. Transposons can drive rapid phenotype variations, which cause complicated cell behaviors [92, 49, 48, 11, 29]. Transposons can be used to detect cancer drivers [50, 73] and potential therapies [2, 86], which can provide guidance for complicated treatment policy design [84, 85]. Transposons are also essential for the development of _Oxytricha trifallax_ [51], antibiotic resistance of bacteria [3], and the proliferation of various cells [54, 89, 14]. With the presence of transposons, the regulation between genes might be affected, which is a challenge for inferring the structures of gene regulatory networks [82, 76] and general transcriptome analysis [59, 93].
When transposons have been determined, we can use them to compare the genomes of different species, and such comparisons can be combined with other measurements between species, such as metrics on developmental trees [72]. Such comparisons can also be extended to different tissues to help with the prediction of tissue transplantation experiments [83]. Besides, for some species, cells at different positions have different gene expression patterns, which might be related to transposons [77, 78].

Many transposons are as short as \(10^{2}-10^{3}\) base pairs, shorter than a typical gene [53]. To determine such short transposons, one needs to analyze the original AGCT nucleotide sequences. There have been many algorithms developed to determine short transposons from nucleotide sequences, such as MELT (Mobile Element Locator Tool) [18], ERVcaller (Endogenous Retro-Virus caller) [10], and TEMP2 (Transposable Elements Movements Present 2) [91]. Different algorithms may only determine certain types of transposons. For more details, readers may refer to other papers [52, 20]. These algorithms use raw DNA sequencing data, which only contain imperfect information about the true DNA sequence, and the data quality depends on some factors that vary across different datasets [17]. Besides, they need a corresponding genome or reference transposon libraries.

There are gross DNA changes that involve many genes, also called genomic rearrangements [21]. Such rearrangements include inversion, transposition, fusion, and fission [8]. To determine such gross genomic rearrangements, one first needs to convert nucleotide sequences into gene sequences by annotation. For two different gene sequences, the general idea of determining rearrangements is to calculate the minimal number of operations required for transforming one sequence into the other [63]. This defines an editing distance between gene sequences, which can be used to compare the evolution distance between species and construct the phylogenetic tree [62]. There have been many algorithms developed to determine genomic rearrangements. They consider different scenarios: whether the gene sequence is linear or circular, whether genes have unique labels, and what operations can be taken. Kececioglu and Sankoff only consider inversion for linear sequences with unique gene labels [34]; Blanchette et al. consider inversion and transposition for circular sequences with unique gene labels [6]; Tesler considers inversion, transposition, fusion, and fission for linear and circular sequences with unique gene labels [63]; Terauds and Sumner study circular sequences with representation theory tools [62]; Bohnenkamper et al. consider linear and circular sequences with possibly duplicated labels [8]. There are also systematic pipelines for determining rearrangements from whole-genome assemblies [19, 44]. Nevertheless, these methods consider large-scale rearrangements, and minimize the number of operations to transform one gene sequence into the other, rather than identifying concrete genes that can change their locations. Besides, these methods only compare two gene sequences, not more. Their results depend on the set of possible operations, which is somewhat arbitrary.
In this paper, we consider a mesoscopic scenario between the genomic rearrangement situation and the short transposon situation: _Given accurately annotated gene sequences (not nucleotide sequences) from different individuals, determine individual genes (not short nucleotide segments or long gene strands) that can change their locations (transposable)._ This provides a qualitative description of the stability of genes, which can guide gene editing [68] and phylogenetics [32]. The proportion of fixed genes quantifies the robustness of the genome. We aim at minimizing the number of genes to move. When there are only two gene sequences, this is equivalent to calculating genomic rearrangements, where the only allowed operation is single-gene transposition. In the copy-paste (duplication) case and the deletion case, we can compare the numbers of copies of genes for different individuals to determine the transposable genes that have changed their copy numbers. In the inversion case, we can check the direction of genes to determine transposable genes that have changed their orientations [38]. In the cut-paste (insertion) case, the compositions of gene sequences are the same, but the orders of genes differ. It is not straightforward to uniquely determine which genes have changed their relative locations. In this case, we need to introduce the LCS problem.

## 3 Problem setup

Given raw DNA sequencing data, the first step is to transform them into gene sequences. This can be done with various genome annotation tools [58, 9]. For simplicity, we replace the gene names by numbers \(1,\ldots,n\). For some species, the DNA is a line [57]. We can represent this DNA as a linear gene sequence of distinct numbers that represent genes: \((1,2,3,4)\). If some genes change their transcriptional orientations, we can simply detect them and handle the remaining genes. A linear DNA naturally has a direction (from the 5' end to the 3' end); thus \((1,2,3,4)\) and \((4,3,2,1)\) are two different gene sequences. Consider two linear gene sequences from different individuals: \((1,2,3,4)\) and \((1,4,2,3)\). We can intuitively detect that gene 4 changes its relative position, and should be regarded as a transposable gene. However, changing the positions of genes \(2,3\) can also transform one sequence into the other. The reason that we think gene 4 (not genes \(2,3\)) changes its relative position is that the number of genes we need to move is smaller. Nevertheless, the number of genes that change their relative locations is difficult to determine. We can consider the complement of transposable genes, i.e., genes that do not change their relative positions. These fixed genes can be easily defined as the LCS of the given gene sequences. Here a common subsequence consists of some genes (not necessarily adjacent, different from a substring) that keep their relative orders in the original sequences. _Thus transposable genes are the complement of this LCS._ Notice that the LCS might not be unique. We classify genes by their relations with the LCS(s). The motivation of classifying transposable genes with respect to the intersection and union of LCSs is similar to defining essential variables with Markov boundaries in causal inference [81].

**Definition 1**.: _A gene is **proper-transposable** if it is not contained in any LCS. A gene is **non-transposable** if it is contained in every LCS.
A gene is **quasi-transposable** if it is contained in some but not all LCSs._

In the example of \((1,2,3,4)\) and \((1,4,2,3)\), the unique LCS is \((1,2,3)\). Thus 4 is proper-transposable, and \(1,2,3\) are non-transposable. In the following, we consider other scenarios, where the proper/quasi/non-transposable genes still follow Definition 1, but the definition of the LCS differs.

For some species, the DNA is a circle, not a line [66]. A circular DNA also has a natural direction (from the 5' end to the 3' end), and we use the clockwise direction to represent this natural direction. In the circular sequence scenario, a common subsequence is a circular sequence that can be obtained from each circular gene sequence by deleting some genes. See Fig. 1 for two circular gene sequences and their LCS. Notice that we can rotate each circular sequence for a better match.

Figure 1: Two circular gene sequences without duplicated genes and their LCS, corresponding to Scenario 2.

A gene might have multiple copies (duplicated) in a gene sequence [25]. Notice that a transposable gene is defined as a gene (a specific DNA sequence) that has the ability to change its position, not as a certain copy of a gene that changes its position. This means transposable genes should be defined for genes, not gene copies. Thus we should only consider common subsequences that consist of all or none of the copies of the same gene. When calculating the length of a common subsequence, we should count genes, not gene copies. Consider two linear sequences \((4,1,2,1,1,3,2,4,1,1)\) and \((4,1,2,3,1,1,2,1,1,4)\). If we consider any subsequences, the LCS is \((4,1,2,1,1,2,1,1)\); if we only consider subsequences that contain all or none of the copies of the same gene, but count the length by copies, the LCS is \((1,2,1,1,2,1,1)\); if we only consider subsequences that contain all or none of the copies of the same gene, and count the length by genes, the unique LCS is \((4,2,3,2,4)\), and gene \(1\) is proper-transposable.

When we consider circular gene sequences with duplicated genes, we should still only consider subsequences that consist of all or none of the copies of the same gene, and calculate the length by genes. Notice that circular sequences can be rotated. See Fig. 2 for two circular gene sequences with duplicated genes and their LCS.

Figure 2: Two circular gene sequences with duplicated genes and their LCS, corresponding to Scenario 4.

We have turned the problem of determining transposable genes into finding the LCS of several gene sequences. Depending on whether the gene sequences are linear or circular, and whether genes have multiple copies, the problem can be classified into four scenarios:

**Scenario 1**: Consider \(m\) linear sequences of genes \(1,\ldots,n\), where each gene has only one copy in each sequence. Determine the longest linear sequence that is a common subsequence of these \(m\) sequences.

**Scenario 2**: Consider \(m\) circular sequences of genes \(1,\ldots,n\), where each gene has only one copy in each sequence. Determine the longest circular sequence that is a common subsequence of these \(m\) sequences. Here circular sequences can be rotated.

**Scenario 3**: Consider \(m\) linear sequences of genes \(1,\ldots,n\), where each gene can have multiple copies in each sequence. Determine the longest linear sequence that is a common subsequence of these \(m\) sequences. Only consider subsequences that consist of all or none of the copies of the same gene, and calculate the length by genes.
**Scenario 4**: Consider \(m\) circular sequences of genes \(1,\ldots,n\), where each gene can have multiple copies in each sequence. Determine the longest circular sequence that is a common subsequence of these \(m\) sequences. Only consider subsequences that consist of all or none of the copies of the same gene, and calculate the length by genes. Here circular sequences can be rotated.

These four scenarios correspond to different algorithms, and will be discussed separately.

## 4 Linear sequences without duplicated genes

In Scenario 1, consider \(m\) linear gene sequences, where each sequence contains \(n\) genes \(1,\ldots,n\). Each gene has only one copy. For such permutations of \(1,\ldots,n\), we need to find the LCS.

### A graph representation of the problem

Brute-force searching that tests whether each subsequence appears in all sequences is not applicable, since the time complexity is exponential in \(n\). To develop a polynomial algorithm, we first design an auxiliary directed graph \(\mathcal{G}\).

**Definition 2**.: _For \(m\) linear sequences with \(n\) non-duplicated genes, the corresponding **auxiliary graph**\(\mathcal{G}\) is a directed graph, where each vertex is a gene \(g_{i}\), and there is a directed edge from \(g_{i}\) to \(g_{j}\) if and only if \(g_{i}\) appears before \(g_{j}\) in all \(m\) sequences._

A directed path \(g_{1}\to g_{2}\to g_{3}\rightarrow\cdots\to g_{4}\to g_{5}\) in \(\mathcal{G}\) corresponds to a common subsequence \((g_{1},g_{2},g_{3},\ldots,g_{4},g_{5})\) of the \(m\) sequences, and vice versa. We add \(0\) to the head of each sequence and \(n+1\) to the tail. Then the LCS must start at \(0\) and end at \(n+1\). _The problem of finding the LCS becomes finding the longest path from \(0\) to \(n+1\) in \(\mathcal{G}\)._ See Fig. 3 for an example of using the auxiliary graph to determine transposable genes.

Figure 3: The auxiliary graph \(\mathcal{G}\) of two sequences \(([0],1,2,3,4,[5])\) and \(([0],1,4,2,3,[5])\). The unique longest path (double arrows) from \(0\) to \(5\) is \(0\to 1\to 2\to 3\to 5\), meaning that the unique longest common sequence is \(([0],1,2,3,[5])\). Thus \(1,2,3\) are non-transposable, and \(4\) is proper-transposable.

This auxiliary graph \(\mathcal{G}\) has no directed loop (acyclic). If there exists a loop \(g_{1}\to g_{2}\to g_{3}\to\cdots\to g_{4}\to g_{1}\), then \(g_{1}\) is prior to \(g_{4}\) and \(g_{4}\) is prior to \(g_{1}\) in all sequences, a contradiction.

### Find the longest path

Determining the longest path between two vertices in a directed acyclic graph can be solved by a standard dynamic programming algorithm. For a vertex \(g_{i}\in\{0,1,\ldots,n\}\), consider the longest path from \(g_{i}\) to \(n+1\). Since there exists an edge \(g_{i}\to n+1\), and \(\mathcal{G}\) is acyclic, this longest path exists. If the longest path is not unique, assign one arbitrarily.

**Definition 3**.: _Define \(F_{+}(g_{i})\) to be the length of the longest path from \(g_{i}\) to \(n+1\) in \(\mathcal{G}\), and \(H_{+}(g_{i})\) to be the vertex next to \(g_{i}\) in this path._

\(F_{+}\) and \(H_{+}\) can be calculated recursively: For one gene \(g_{i}\), consider all genes \(g_{j}\) with an edge \(g_{i}\to g_{j}\) in \(\mathcal{G}\). The gene \(g_{j}\) with the largest \(F_{+}(g_{j})\) is assigned to be \(H_{+}(g_{i})\), and \(F_{+}(g_{i})=F_{+}(g_{j})+1\). If \(g_{l}\to n+1\) is the only edge that starts from gene \(g_{l}\), then \(F_{+}(g_{l})=1\), and \(H_{+}(g_{l})=n+1\). In other
words,

\[H_{+}(g_{i})=\underset{\{g_{j}\text{ with }g_{i}\to g_{j}\}}{\operatorname{argmax}}F_{+}(g_{j});\]
\[F_{+}(g_{i})=1+F_{+}[H_{+}(g_{i})].\]

Then \(0\to H_{+}(0)\to H_{+}^{2}(0)\to H_{+}^{3}(0)\to\cdots\to H_{+}^{f-1}(0)\to H_{+}^{f}(0)=n+1\), denoted by \(\mathcal{L}_{0}\), is a longest path in \(\mathcal{G}\). Here \(f=F_{+}(0)\), and \(H_{+}^{i}\) is the \(i\)th iteration of \(H_{+}\).

### Test the uniqueness of the longest path

To test whether quasi-transposable genes exist, we need to check the uniqueness of this longest path.

**Definition 4**.: _For \(g_{i}\in\{1,\ldots,n,n+1\}\), define \(F_{-}(g_{i})\) to be the length of the longest path from \(0\) to \(g_{i}\) in \(\mathcal{G}\), and \(H_{-}(g_{i})\) to be the vertex prior to \(g_{i}\) in this path._

\(F_{-}\) and \(H_{-}\) can be calculated similarly to \(F_{+}\) and \(H_{+}\). We can see that \(F_{+}(g_{i})+F_{-}(g_{i})\) is the length of

\[0=H_{-}^{F_{-}(g_{i})}(g_{i})\to H_{-}^{F_{-}(g_{i})-1}(g_{i})\to\cdots\to H_{-}(g_{i})\to g_{i}\]
\[\to H_{+}(g_{i})\to\cdots\to H_{+}^{F_{+}(g_{i})-1}(g_{i})\to H_{+}^{F_{+}(g_{i})}(g_{i})=n+1,\]

a longest path from \(0\) through \(g_{i}\) to \(n+1\). For \(g_{i}\notin\mathcal{L}_{0}\), if \(F_{+}(g_{i})+F_{-}(g_{i})<F_{+}(0)\), then \(g_{i}\) is proper-transposable; if \(F_{+}(g_{i})+F_{-}(g_{i})=F_{+}(0)\), then \(g_{i}\) is quasi-transposable. If every \(g_{i}\notin\mathcal{L}_{0}\) is proper-transposable, then the LCS is unique, and all genes in \(\mathcal{L}_{0}\) (excluding the auxiliary \(0\) and \(n+1\)) are non-transposable. The procedure of determining transposable genes stops here. Otherwise, the LCS is not unique, and we need to find quasi-transposable genes in \(\mathcal{L}_{0}\).

### Find quasi-transposable genes

When determining all quasi-transposable genes \(g_{1},\ldots,g_{k}\) not in \(\mathcal{L}_{0}\), as described above, we construct corresponding longest paths \(\mathcal{L}_{1},\ldots,\mathcal{L}_{k}\) from \(0\) to \(n+1\), where each \(\mathcal{L}_{i}\) passes through \(g_{i}\). We claim that a gene \(g_{j}\in\mathcal{L}_{0}\) is non-transposable if and only if \(g_{j}\) is contained in all \(\mathcal{L}_{1},\ldots,\mathcal{L}_{k}\). To prove this, we need the following lemma.

**Lemma 1**.: _In Scenario 1 of linear sequences without duplicated genes, each quasi-transposable gene \(g_{i}\) has a corresponding quasi-transposable gene \(g_{j}\), so that no LCS can contain both \(g_{i}\) and \(g_{j}\)._

If a gene \(g_{j}\in\mathcal{L}_{0}\) is non-transposable, then it is contained in all \(\mathcal{L}_{1},\ldots,\mathcal{L}_{k}\). If \(g_{j}\in\mathcal{L}_{0}\) is quasi-transposable, by Lemma 1, there is a quasi-transposable gene \(g_{l}\notin\mathcal{L}_{0}\) which is mutually exclusive with \(g_{j}\), in the sense that \(g_{l}\) and \(g_{j}\) cannot appear in the same LCS. The corresponding longest path \(\mathcal{L}_{l}\) contains \(g_{l}\), thus cannot contain \(g_{j}\). This proves our approach to determine the quasi-transposable genes in \(\mathcal{L}_{0}\).

Proof of Lemma 1.: Fix a quasi-transposable gene \(g_{i}\). It is contained in a longest path \(\mathcal{L}_{i}\), which contains all non-transposable genes. Thus for each non-transposable gene \(g^{*}\), there is an edge between \(g^{*}\) and \(g_{i}\) in \(\mathcal{G}\). Assume \(g_{i}\) has no such mutually exclusive quasi-transposable gene \(g_{j}\). Then there is an edge (direction unknown) in \(\mathcal{G}\) between \(g_{i}\) and each quasi-transposable gene \(g_{j}\).
Choose a longest path \(\mathcal{L}^{*}\) in \(\mathcal{G}\) that does not contain \(g_{i}\). Whether \(g_{j}\in\mathcal{L}^{*}\) is a non-transposable gene or a quasi-transposable gene, there is an edge between \(g_{j}\) and \(g_{i}\). Determine the first gene \(g_{k}\) in \(\mathcal{L}^{*}\) that has an edge \(g_{i}\to g_{k}\). Since there is an edge \(g_{i}\to n+1\), \(g_{k}\) exists. Since there is an edge \(0\to g_{i}\), \(g_{k}\neq 0\). Denote the previous gene of \(g_{k}\) in \(\mathcal{L}^{*}\) by \(g_{l}\); then \(g_{l}\) exists, and, since \(g_{k}\) is the first gene in \(\mathcal{L}^{*}\) with an edge \(g_{i}\to g_{k}\), the edge between \(g_{l}\) and \(g_{i}\) must be directed as \(g_{l}\to g_{i}\). Thus we construct a path \(0\to\cdots\to g_{l}\to g_{i}\to g_{k}\to\cdots\to n+1\), which is longer than the longest path, a contradiction. Thus \(g_{i}\) has a mutually exclusive quasi-transposable gene \(g_{j}\).

### Algorithms and complexities

We summarize the above method as Algorithms 1, 2. If we already know that the LCS is unique, then we just need to apply Algorithm 1, so that genes in \(\mathcal{L}_{0}\) are non-transposable, and genes not in \(\mathcal{L}_{0}\) are proper-transposable. We have reported Algorithm 1 previously [32, 70]. Algorithm 1 is kept here to make the story complete. Assume we have \(m\) sequences with length \(n\), and the length of the LCS is \(n-k\). The time complexities of Steps 2-5 in Algorithm 1 are \(\mathcal{O}(m)\), \(\mathcal{O}(mn^{2})\), \(\mathcal{O}(n)\), \(\mathcal{O}(n)\). The time complexities of Step 2 and Step 3 in Algorithm 2 are \(\mathcal{O}(k)\) and \(\mathcal{O}(kn)\). Since \(k\leq n\), the overall time complexity of determining transposable genes in Scenario 1 by Algorithms 1, 2 is \(\mathcal{O}(mn^{2})\). The space complexity is trivially \(\mathcal{O}(mn+n^{2})\).

1. **Input**: \(m\) linear sequences of genes \(1,\ldots,n\). No duplicated genes.
2. **Modify** the sequences: add \(0\) to the head and \(n+1\) to the tail of each sequence.
3. **Construct** the auxiliary graph \(\mathcal{G}\): vertices of \(\mathcal{G}\) are all the genes \(0,1,\ldots,n+1\) (including the auxiliary head and tail).
**For** each pair of genes \(g_{i},g_{j}\): **if** \(g_{i}\) is prior to \(g_{j}\) in all \(m\) sequences, **add** a directed edge \(g_{i}\to g_{j}\) in \(\mathcal{G}\). **End** of for.
4. **Calculate** \(F_{+}(\cdot)\) and \(H_{+}(\cdot)\) for each gene \(g_{i}\) in \(0,1,\ldots,n\) recursively; **calculate** \(F_{-}(\cdot)\) and \(H_{-}(\cdot)\) for each gene \(g_{i}\) in \(1,\ldots,n,n+1\) recursively:
\[H_{+}(g_{i})=\operatorname*{argmax}_{\{g_{j}\text{ with }g_{i}\to g_{j}\}}F_{+}(g_{j}),\qquad F_{+}(g_{i})=1+F_{+}[H_{+}(g_{i})];\]
\[H_{-}(g_{i})=\operatorname*{argmax}_{\{g_{j}\text{ with }g_{j}\to g_{i}\}}F_{-}(g_{j}),\qquad F_{-}(g_{i})=1+F_{-}[H_{-}(g_{i})].\]
% If the argmax is not unique, choose one randomly.
5. **Construct** a longest path \(\mathcal{L}_{0}\) from \(0\) to \(n+1\):
\[0\to H_{+}(0)\to H_{+}^{2}(0)\to H_{+}^{3}(0)\to\cdots\to H_{+}^{f-1}(0)\to H_{+}^{f}(0)=n+1.\]
% Here \(f=F_{+}(0)\), and \(H_{+}^{i}\) is the \(i\)th iteration of \(H_{+}\).
6. **Output**: \(F_{+}(\cdot),H_{+}(\cdot),F_{-}(\cdot),H_{-}(\cdot),\mathcal{L}_{0}\).

**Algorithm 1:** Detailed workflow of determining proper-transposable genes and quasi-transposable genes in Scenario 1, preparation stage.

1. **Input**: \(F_{+}(\cdot),H_{+}(\cdot),F_{-}(\cdot),H_{-}(\cdot),\mathcal{L}_{0}\) calculated from Algorithm 1. **Denote** all genes not in \(\mathcal{L}_{0}\) by \(g_{1},\ldots,g_{k}\).
2. **For** each gene \(g_{i}\) in \(g_{1},\ldots,g_{k}\): **if** \(F_{+}(g_{i})+F_{-}(g_{i})<F_{+}(0)\), **output** that \(g_{i}\) is a proper-transposable gene; **else output** that \(g_{i}\) is a quasi-transposable gene. **End** of for.
3. **If** all genes in \(g_{1},\ldots,g_{k}\) are proper-transposable: **output** that all genes in \(\mathcal{L}_{0}\) are non-transposable.
**Else**:
**For** each gene \(g_{i}\) in \(g_{1},\ldots,g_{k}\): use \(H_{+}(\cdot)\) and \(H_{-}(\cdot)\) to **construct** \(\mathcal{L}_{i}\), a longest path from \(0\) to \(n+1\) that passes \(g_{i}\). **End** of for.
**For** each gene \(g_{j}\) in \(\mathcal{L}_{0}\) (excluding the auxiliary \(0\) and \(n+1\)): **if** \(g_{j}\) is contained in all \(\mathcal{L}_{1},\ldots,\mathcal{L}_{k}\), **output** that \(g_{j}\) is non-transposable; **else output** that \(g_{j}\) is quasi-transposable. **End** of for.
**End** of if.
4. **Output**: whether each gene is proper/quasi/non-transposable.

**Algorithm 2:** Detailed workflow of determining proper-transposable genes and quasi-transposable genes in Scenario 1, output stage.

### Applications on experimental data

We test Algorithms 1, 2 on _Escherichia coli_ gene sequences. From the NCBI sequencing database, we obtain gene sequences of three individuals of _E. coli_ strain ST540 (GenBank CP007265.1, GenBank CP007390.1, GenBank CP007391.1) and three individuals of _E. coli_ strain ST2747 (GenBank CP007392.1, GenBank CP007393.1, GenBank CP007394.1). All three sequences of ST540 start with gene dnaA and end with gene rpmH. We can regard them as linear gene sequences. We remove genes that appear more than once in one sequence, and remove genes that do not appear in all three sequences. After applying Algorithms 1, 2 on these three sequences, there are 301 non-transposable genes, 4 quasi-transposable genes (hpaC, iraD, fbpC, psiB), and 263 proper-transposable genes. The reason for the large number of proper-transposable genes is that sequence CP007265.1 is significantly different from the other two. After removing it and applying Algorithms 1, 2 to the remaining two sequences (CP007390.1 and CP007391.1), there are 564 non-transposable genes and 4 quasi-transposable genes (hpaC, iraD, fbpC, psiB). Therefore, some of the genes hpaC, iraD, fbpC, psiB are likely to translocate.

All three sequences of ST2747 start with gene glnG and end with gene hemG. We can regard them as linear gene sequences. We remove genes that appear more than once in one sequence, and remove genes that do not appear in all three sequences. After applying Algorithms 1, 2 on these three sequences, all 573 genes are non-transposable.
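Since Algorithm 1 is also the subroutine behind Algorithms 3 and 4 below, a minimal executable sketch may be helpful. It assumes each input is a permutation of \(1,\ldots,n\) given as a Python list, computes \(F_{+}\) and \(H_{+}\) by memoised recursion on the auxiliary graph, and returns one LCS; the \(F_{-}\), \(H_{-}\) bookkeeping and the classification of Algorithm 2 are omitted.

```python
from functools import lru_cache

def linear_lcs(seqs):
    """Minimal sketch of Algorithm 1: one LCS of m permutations of the
    genes 1..n, via the longest 0 -> n+1 path in the auxiliary graph."""
    n = len(seqs[0])
    pos = [{g: i for i, g in enumerate(s)} for s in seqs]
    for p in pos:                       # auxiliary head 0 and tail n+1
        p[0], p[n + 1] = -1, n

    def edge(i, j):                     # g_i precedes g_j in every sequence
        return all(p[i] < p[j] for p in pos)

    @lru_cache(maxsize=None)
    def f_plus(i):
        """(length, successor) of a longest path from gene i to n+1."""
        if i == n + 1:
            return 0, None
        length, nxt = max((f_plus(j)[0], j)
                          for j in range(1, n + 2) if j != i and edge(i, j))
        return length + 1, nxt

    lcs, g = [], f_plus(0)[1]
    while g != n + 1:                   # follow the H_+ pointers from 0
        lcs.append(g)
        g = f_plus(g)[1]
    return lcs

print(linear_lcs([[1, 2, 3, 4], [1, 4, 2, 3]]))   # [1, 2, 3]; 4 is transposable
```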
## 5 Circular sequences without duplicated genes

In Scenario 2, consider \(m\) circular gene sequences, where each sequence contains \(n\) genes \(1,\ldots,n\). Each gene has only one copy in each sequence. For such circular permutations of \(1,\ldots,n\), we need to find the LCS. Assume the length of the LCS is \(n-k\).

### Find one LCS

We first randomly choose a gene \(g_{i}\). Cut all circular sequences at \(g_{i}\) and expand them to be linear sequences. For example, the circular sequences in Fig. 1 cut at 1 are correspondingly \((1,2,3,4,5,6)\) and \((1,2,6,4,5,3)\). Using Algorithm 1, we can find \(\mathcal{L}_{i}\) that begins with \(g_{i}\), which is one LCS of all expanded linear sequences. In the above example, the longest common linear subsequence starting from 1 is \((1,2,4,5)\). If \(g_{i}\) is a non-transposable gene or a quasi-transposable gene, then \(\mathcal{L}_{i}\) (glued back to a circle) is a longest common circular subsequence. If \(g_{i}\) is a proper-transposable gene, then \(\mathcal{L}_{i}\) is shorter than the longest common circular subsequence. In Fig. 1, gene 1 is non-transposable, and \((1,2,4,5)\) (glued) is the longest common circular subsequence.

In general, we do not know in advance whether \(\mathcal{L}_{i}\) (glued) is an LCS of the circular sequences, since we do not know the status of \(g_{i}\). If there is a longer common subsequence, it must contain genes that are not in \(\mathcal{L}_{i}\). Consider four variables \(\mathcal{L}\), \(g\), \(C\), and \(\mathcal{S}\), whose initial values are \(\mathcal{L}_{i}\), \(g_{i}\), the length of \(\mathcal{L}_{i}\), and the complement of \(\mathcal{L}_{i}\). These variables contain information on the longest common linear subsequence that we have found during this procedure. Choose a gene \(g_{j}\) in \(\mathcal{S}\), and cut all circular gene sequences at \(g_{j}\). Apply Algorithm 1 to find \(\mathcal{L}_{j}\), which is the longest among common subsequences that contain \(g_{j}\). If the length of \(\mathcal{L}_{j}\) is larger than \(C\), set \(\mathcal{L}\) to be \(\mathcal{L}_{j}\), set \(g\) to be \(g_{j}\), set \(C\) to be the length of \(\mathcal{L}_{j}\), and set \(\mathcal{S}\) to be the complement of \(\mathcal{L}_{j}\). Otherwise, keep \(\mathcal{L}\), \(g\), \(C\), and \(\mathcal{S}\) still. Choose another gene \(g_{l}\) in \(\mathcal{S}\) which has not been chosen before, and repeat this procedure. This procedure terminates when all genes in \(\mathcal{S}\) have been chosen and cut. Denote the final values of \(\mathcal{L}\), \(g\), \(C\), and \(\mathcal{S}\) by \(\mathcal{L}_{0}\), \(g_{0}\), \(C_{0}\), and \(\mathcal{S}_{0}\). Here \(\mathcal{S}_{0}\) is the complement of \(\mathcal{L}_{0}\).

During this procedure, if the current \(g\) is a proper-transposable gene, then \(\mathcal{S}\) contains a non-transposable gene or a quasi-transposable gene, which has not been chosen. Thus \(\mathcal{L}\), \(g\), \(C\), \(\mathcal{S}\) will be further updated. If the current \(g\) is a non-transposable gene or a quasi-transposable gene, then \(C\) has reached its maximum, and \(\mathcal{L}\), \(g\), \(C\), \(\mathcal{S}\) will not be further updated. This means \(\mathcal{L}_{0}\) is a longest common circular subsequence, and \(C_{0}\) is the length of the LCS, \(n-k\). Also, the total number of genes being chosen and cut is \(k+1\): all \(k\) genes in \(\mathcal{S}_{0}\) and \(g_{0}\) are chosen and cut. A gene \(g_{t}\) in \(\mathcal{L}_{0}\) (excluding \(g_{0}\)) is non-transposable or quasi-transposable, and cannot be chosen and cut. The reason is that it cannot be chosen before \(g_{0}\) is chosen (only proper-transposable genes can be chosen before \(g_{0}\) is chosen), and it cannot be chosen after \(g_{0}\) is chosen (\(g_{t}\notin\mathcal{S}_{0}\)).

### Determine quasi-transposable genes

For each gene \(g_{p}\in\mathcal{S}_{0}\), apply Algorithm 1 to calculate \(C_{p}\), the length of the LCS that contains \(g_{p}\). If \(C_{p}<C_{0}\), \(g_{p}\) is a proper-transposable gene. Otherwise, \(C_{p}=C_{0}\) means \(g_{p}\) is a quasi-transposable gene. We have now found all proper-transposable genes.
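Putting the cut-and-recut search and the \(C_{p}\) test above together, a compact sketch looks as follows; it reuses `linear_lcs` from the Section 4 sketch and assumes genes are labelled \(1,\ldots,n\). The further split of the genes in \(\mathcal{L}_{0}\) into quasi- and non-transposable (the second pass of Algorithm 4 below) is omitted.

```python
def circular_analysis(seqs):
    """Sketch of Algorithm 3 plus the C_p test: find a longest common
    circular subsequence by cut-and-recut, then classify leftover genes."""
    genes = list(seqs[0])

    def lcs_through(g):
        # cutting every circular sequence at g forces g to head any LCS
        cut = [s[s.index(g):] + s[:s.index(g)] for s in seqs]
        return linear_lcs(cut)

    best, tried = lcs_through(genes[0]), {genes[0]}
    todo = [g for g in genes if g not in best and g not in tried]
    while todo:                      # re-cut at genes outside the current best
        g = todo.pop()
        tried.add(g)
        cand = lcs_through(g)
        if len(cand) > len(best):
            best = cand
            todo = [h for h in genes if h not in best and h not in tried]
    # C_p test: a leftover gene is quasi-transposable iff an equally long
    # LCS passes through it, and proper-transposable otherwise
    quasi = [g for g in genes
             if g not in best and len(lcs_through(g)) == len(best)]
    proper = [g for g in genes if g not in best and g not in quasi]
    return best, proper, quasi

# Fig. 1's example: LCS (1, 2, 4, 5); genes 3 and 6 are proper-transposable.
print(circular_analysis([[1, 2, 3, 4, 5, 6], [1, 2, 6, 4, 5, 3]]))
```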
If all genes in \(\mathcal{S}_{0}\) are proper-transposable, then all genes in \(\mathcal{L}_{0}\) are non-transposable, and the procedure terminates. If \(\mathcal{S}_{0}\) contains quasi-transposable genes, then \(\mathcal{L}_{0}\) also has quasi-transposable genes. To determine quasi-transposable genes in \(\mathcal{L}_{0}\), we need the following lemma. **Lemma 2**.: _In Scenario 2, choose a quasi-transposable gene \(g_{p}\) and cut the circular sequences at \(g_{p}\) to obtain linear sequences. A proper-transposable gene for the circular sequences is also a proper-transposable gene for the linear sequences; a non-transposable gene for the circular sequences is also a non-transposable gene for the linear sequences._ Proof.: Consider an LCS \(\mathcal{L}_{p}\) for linear sequences cut at \(g_{p}\). Since \(g_{p}\) is a quasi-transposable gene, the length of \(\mathcal{L}_{p}\) is also \(n-k\), meaning that \(\mathcal{L}_{p}\) is also an LCS for circular sequences. Now, this lemma is proved by the definition of proper/quasi/non-transposable gene. If a gene \(g_{r}\) in \(\mathcal{L}_{0}\) is non-transposable for the circular sequences, then \(g_{r}\) is a non-transposable gene for linear sequences cut at each quasi-transposable gene \(g_{q}\in\mathcal{S}_{0}\). If a gene \(g_{s}\) in \(\mathcal{L}_{0}\) is quasi-transposable for the circular sequences, then there is a longest common circular subsequence \(\mathcal{L}_{t}\) that does not contain \(g_{s}\), meaning that \(\mathcal{L}_{t}\) contains a quasi-transposable gene \(g_{t}\) not in \(\mathcal{L}_{0}\). Then \(g_{s}\) is a proper/quasi-transposable gene for linear sequences cut at \(g_{t}\). Therefore, we can use the following method to determine quasi-transposable genes in \(\mathcal{L}_{0}\). For each quasi-transposable gene \(g_{q}\in\mathcal{S}_{0}\), cut at \(g_{q}\) and apply Algorithms 1,2 to determine if each gene in \(\mathcal{L}_{0}\) is proper/quasi/non-transposable for the linear gene sequences cut at \(g_{q}\). A gene \(g_{r}\in\mathcal{L}_{0}\) is non-transposable for the circular sequences if and only if it is non-transposable for linear sequences cut at any quasi-transposable gene \(g_{q}\in\mathcal{S}_{0}\). A gene \(g_{s}\in\mathcal{L}_{0}\) is quasi-transposable for the circular sequences if and only if it is proper/quasi-transposable for linear sequences cut at some quasi-transposable gene \(g_{q}\in\mathcal{S}_{0}\). When we have determined all quasi-transposable genes in \(\mathcal{S}_{0}\), it might be tempting to apply a simpler approach to determine quasi-transposable genes in \(\mathcal{L}_{0}\): For each quasi-transposable gene \(g_{q}\in\mathcal{S}_{0}\), cut at \(g_{q}\) and apply Algorithm 1 to find an LCS \(\mathcal{L}_{q}\). A gene in \(\mathcal{L}_{0}\) is non-transposable if and only if it appears in all such \(\mathcal{L}_{q}\). This approach is valid only if the following conjecture holds, which is similar to Lemma 1: **Conjecture 1**.: _In Scenario 2 of circular sequences without duplicated genes, each quasi-transposable gene \(g_{i}\) has a corresponding quasi-transposable gene \(g_{j}\), so that no LCS can contain both \(g_{i}\) and \(g_{j}\)._ However, Conjecture 1 does not hold. See Fig. 4 for a counterexample. All genes are quasi-transposable. Any two quasi-transposable genes are contained in an LCS (length 3). Thus the simplified approach above does not work. We summarize the above method as Algorithms 3,4. 
Figure 4: A counterexample with three circular sequences that fails Conjecture 1.

1. **Input**: \(m\) circular sequences of genes \(1,\ldots,n\), where each gene has only one copy in each sequence.
2. **Choose** a gene \(g_{i}\) randomly. **Cut** all circular sequences at \(g_{i}\) and expand them to be linear sequences. **Apply** Algorithm 1 to find \(\mathcal{L}_{i}\), an LCS of the expanded linear sequences. **Set** \(C\) to be the length of \(\mathcal{L}_{i}\), and **set** \(\mathcal{S}\) to be the complement of \(\mathcal{L}_{i}\).
3. **While** \(\mathcal{S}\) has a gene \(g_{j}\) that has not been chosen and cut:
**Cut** all circular sequences at \(g_{j}\) and apply Algorithm 1 to find \(\mathcal{L}_{j}\). **Denote** the length of \(\mathcal{L}_{j}\) by \(C_{j}\).
**If** \(C_{j}>C\): **update** \(C\) to be \(C_{j}\), and **update** \(\mathcal{S}\) to be the complement of \(\mathcal{L}_{j}\). **End** of if.
**End** of while. **Denote** the final \(C\) by \(C_{0}\), and **denote** the final \(\mathcal{S}\) by \(\mathcal{S}_{0}\).
4. **Output**: \(C_{0}\) and \(\mathcal{S}_{0}\).

**Algorithm 3:** Detailed workflow of determining proper-transposable genes and quasi-transposable genes in Scenario 2, preparation stage.

1. **Input**: \(m\) circular sequences of genes \(1,\ldots,n\), where each gene has only one copy in each sequence; \(C_{0}\) and \(\mathcal{S}_{0}\) calculated from Algorithm 3.
2. **For** each gene \(g_{l}\in\mathcal{S}_{0}\):
**Cut** all circular sequences at \(g_{l}\) and expand them to be linear sequences. **Apply** Algorithm 1 to find \(\mathcal{L}_{l}\), an LCS of the expanded linear sequences. **Denote** the length of \(\mathcal{L}_{l}\) by \(C_{l}\).
**If** \(C_{l}<C_{0}\): **output** that \(g_{l}\) is a proper-transposable gene.
**Else**: **output** that \(g_{l}\) is a quasi-transposable gene; **cut** all circular sequences at \(g_{l}\) and **apply** Algorithms 1, 2 to find all proper/quasi-transposable genes for the linear gene sequences starting at \(g_{l}\); **output** that genes not in \(\mathcal{S}_{0}\) but proper/quasi-transposable for such linear sequences are quasi-transposable for the circular sequences.
**End** of if.
**End** of for. **Output** that all other genes that have not been determined to be proper/quasi-transposable are non-transposable.
3. **Output**: whether each gene is proper/quasi/non-transposable.

**Algorithm 4:** Detailed workflow of determining proper-transposable genes and quasi-transposable genes in Scenario 2, output stage.

If we already know that the LCS is unique, then we just need to apply Algorithm 3, so that genes in \(\mathcal{S}_{0}\) are proper-transposable, and genes not in \(\mathcal{S}_{0}\) are non-transposable. Assume we have \(m\) sequences with length \(n\), and the length of the LCS is \(n-k\). The time complexities of Step 2 and Step 3 in Algorithm 3 are \(\mathcal{O}(mn^{2})\) and \(\mathcal{O}(kmn^{2})\). The time complexity of Step 2 in Algorithm 4 is \(\mathcal{O}(kmn^{2})\). The overall time complexity of determining transposable genes in Scenario 2 by Algorithms 3, 4 is \(\mathcal{O}(kmn^{2})\). The space complexity is trivially \(\mathcal{O}(mn+n^{2})\).

### Applications on experimental data

Similar to Subsection 4.6, we test Algorithms 3, 4 on _Escherichia coli_ gene sequences. From the NCBI sequencing database, we obtain gene sequences of three individuals of _E. coli_ strain ST540 (GenBank CP007265.1, GenBank CP007390.1, GenBank CP007391.1) and three individuals of _E. coli_ strain ST2747 (GenBank CP007392.1, GenBank CP007393.1, GenBank CP007394.1). We regard all three sequences of ST540 as circular gene sequences. We remove genes that appear more than once in one sequence, and remove genes that do not appear in all three sequences. After applying Algorithms 3, 4 on these three sequences, there are 389 non-transposable genes, 50 quasi-transposable genes, and 129 proper-transposable genes. The reason for the large number of proper-transposable genes is that sequence CP007265.1 is significantly different from the other two. After removing it and applying Algorithms 3, 4 to the remaining two sequences (CP007390.1 and CP007391.1), there are 564 non-transposable genes and 4 quasi-transposable genes (hpaC, iraD, fbpC, psiB). Therefore, some of the genes hpaC, iraD, fbpC, psiB are likely to translocate.

We regard all three sequences of ST2747 as circular gene sequences. We remove genes that appear more than once in one sequence, and remove genes that do not appear in all three sequences. After applying Algorithms 3, 4 on these three sequences, all 573 genes are non-transposable.

## 6 Linear sequences with duplicated genes

In Scenario 3, consider \(m\) linear gene sequences, where each sequence contains different numbers of copies of \(n\) genes \(1,\ldots,n\). We need to find the LCS. Here we only consider common subsequences that consist of all or none of the copies of the same gene, and the subsequence length is calculated by genes, not gene copies.

### A graph representation of the problem

Similar to Scenario 1, we construct an auxiliary graph \(\mathcal{G}\), where each vertex is a gene (not a copy of a gene). However, in this case, the auxiliary graph is undirected: There is an undirected edge between gene \(g_{i}\) and gene \(g_{j}\) if and only if all the copies of \(g_{i}\) and \(g_{j}\) keep their relative locations in all sequences. For example, consider two sequences \((1,2,3,2,3,4,5)\) and \((2,1,3,3,2,4,5)\). For gene pair \(1,3\), the corresponding restricted sequences are \((1,3,3)\) and \((1,3,3)\), meaning that there is an edge between \(1\) and \(3\). For gene pair \(1,2\), the corresponding restricted sequences are \((1,2,2)\) and \((2,1,2)\), meaning that there is no edge between \(1\) and \(2\). See Fig. 5 for the auxiliary graph in this case.

Figure 5: The auxiliary graph \(\mathcal{G}\) of two sequences \((1,2,3,2,3,4,5)\) and \((2,1,3,3,2,4,5)\). The unique largest complete subgraph is \(\{1,3,4,5\}\), meaning that the unique longest common sequence is \((1,3,3,4,5)\). Thus \(1,3,4,5\) are non-transposable genes, and \(2\) is a proper-transposable gene.

**Definition 5**.: _A subgraph of \(\mathcal{G}\) consists of some genes \(g_{1},\ldots,g_{l}\) and the edges between them. In a subgraph, if there is an edge between any two genes, this subgraph is called a complete subgraph (also called a clique)._

**Definition 6**.: _In graph \(\mathcal{G}\), the degree of a gene \(g\) is the number of edges linking \(g\). In a complete graph of \(p\) genes, where any two genes have an edge in between, each gene has degree \(p-1\)._

**Definition 7**.: _If all copies of genes \(g_{1},\ldots,g_{l}\) keep their relative locations in all linear sequences, we say that \(g_{1},\ldots,g_{l}\) form a common subsequence._
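This pairwise edge test is easy to make executable; the following one-line sketch (the function name is ours) reproduces the worked example above:

```python
def compatible(gi, gj, seqs):
    """Edge test for the Scenario-3 auxiliary graph: gi and gj are joined
    iff restricting every sequence to their copies gives the same tuple."""
    return len({tuple(g for g in s if g in (gi, gj)) for s in seqs}) == 1

seqs = [[1, 2, 3, 2, 3, 4, 5], [2, 1, 3, 3, 2, 4, 5]]
print(compatible(1, 3, seqs))   # True:  both restrictions are (1, 3, 3)
print(compatible(1, 2, seqs))   # False: (1, 2, 2) versus (2, 1, 2)
```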
The following Lemma 3 shows that there is a bijection between common subsequences and complete subgraphs in \(\mathcal{G}\). _The problem of determining the LCS now becomes determining the largest complete subgraph of \(\mathcal{G}\)._

**Lemma 3**.: _In Scenario 3, construct the auxiliary graph \(\mathcal{G}\) from gene sequences. If \(g_{1},\ldots,g_{k}\) form a complete subgraph in \(\mathcal{G}\), then \(g_{1},\ldots,g_{k}\) form a common subsequence, and vice versa._

Proof.: If \(g_{1},\ldots,g_{l}\) form a common subsequence, then there is an edge in \(\mathcal{G}\) between any two genes in \(g_{1},\ldots,g_{l}\), meaning that they form a complete subgraph. For the other direction, only consider copies of \(g_{1},\ldots,g_{k}\) in these sequences. If \(g_{1},\ldots,g_{k}\) do not form a common subsequence, find the first position where these restricted sequences differ. Assume \(g_{p}\) and \(g_{q}\) can both appear at this position. Then \(g_{p},g_{q}\) cannot form a common subsequence, and there is no edge between \(g_{p}\) and \(g_{q}\). We illustrate this proof with Fig. 5: For genes \(2,3,4\), the restricted sequences are \((2,3,2,3,4)\) and \((2,3,3,2,4)\). The third position differs, where \(2\) and \(3\) can both appear. Then the restricted sequences for genes \(2,3\), namely \((2,3,2,3)\) and \((2,3,3,2)\), cannot match, and there is no edge between \(2\) and \(3\).

### A heuristic algorithm

The above discussion shows that given gene sequences, we can construct an undirected graph \(\mathcal{G}\), so that there is a bijection between common subsequences and complete subgraphs. The inverse also holds: we can construct corresponding gene sequences for a given graph.

**Lemma 4**.: _Given an undirected graph \(\mathcal{G}\), we can construct two gene sequences, so that there is a bijection between common subsequences and complete subgraphs._

Proof.: Assume the graph has \(n\) genes. We start with two sequences \((1,2,\ldots,n)\) and \((1,2,\ldots,n)\). For each pair of genes \(g_{i},g_{j}\), if there is no edge between them in \(\mathcal{G}\), add \(g_{i},g_{j}\) to the end of the first sequence, and \(g_{j},g_{i}\) to the end of the second sequence. Then \(g_{i},g_{j}\) cannot both appear in a common subsequence, and this operation does not affect other gene pairs. For example, corresponding to Fig. 5, we start with \((1,2,3,4,5)\) and \((1,2,3,4,5)\). Since there is no edge between \(1,2\), we add them to obtain \((1,2,3,4,5,1,2)\) and \((1,2,3,4,5,2,1)\). Since there is no edge between \(2,3\), we add them to obtain \((1,2,3,4,5,1,2,2,3)\) and \((1,2,3,4,5,2,1,3,2)\). These two sequences correspond to Fig. 5.

Combining Lemma 3 and Lemma 4, we obtain the following result:

**Proposition 1**.: _Finding the longest common sequence in Scenario 3 is equivalent to the maximum clique problem, which is NP-hard._

Proof.: For an undirected graph, we can use Lemma 4 to construct corresponding sequences. If we have the solution of finding the longest common sequence in Scenario 3, then we can find the largest complete subgraph in additional polynomial time. For gene sequences in Scenario 3, we can construct the corresponding auxiliary graph. If we have the solution of finding the largest complete subgraph, then we can use Lemma 3 to find the longest common sequence in Scenario 3 in additional polynomial time. Therefore, finding the longest common sequence in Scenario 3 and finding the largest complete subgraph are equivalent. The problem of determining the largest complete subgraph is just the maximum clique problem, which is NP-hard [65]. Thus finding the longest common sequence in Scenario 3 is also NP-hard.
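Lemma 4's construction is equally short in code; the example from the proof is reproduced below (the function name and edge representation are ours):

```python
def graph_to_sequences(n, edges):
    """Lemma 4's construction: two linear sequences whose common
    subsequences (all-or-none copies, counted by genes) correspond to
    cliques of the given undirected graph on genes 1..n."""
    s1, s2 = list(range(1, n + 1)), list(range(1, n + 1))
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if (i, j) not in edges and (j, i) not in edges:
                s1 += [i, j]   # i before j in one sequence ...
                s2 += [j, i]   # ... j before i in the other: i, j clash
    return s1, s2

# Non-edges {1,2} and {2,3} give (1,2,3,4,5,1,2,2,3) and
# (1,2,3,4,5,2,1,3,2), matching the proof's example for Fig. 5.
print(graph_to_sequences(5, {(1, 3), (1, 4), (1, 5), (2, 4), (2, 5),
                             (3, 4), (3, 5), (4, 5)}))
```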
This means it is unlikely that one can design an algorithm that always correctly determines the LCS in polynomial time. We have transformed Scenario 3 into the maximum clique problem for a graph \(\mathcal{G}\). There have been various algorithms for the maximum clique problem [30, 37, 74], and readers may refer to a review for more details [88]. For completeness, we propose a simple idea: in the auxiliary graph \(\mathcal{G}\), repeatedly abandon the gene with the smallest degree (and also the edges linking this gene) until the remaining genes form a complete subgraph. See Algorithm 5 for the details of this greedy heuristic method. This algorithm is easy to understand and can provide some intuition. We do not claim that Algorithm 5 is comparable to other, more sophisticated algorithms.

1. **Input** \(m\) linear sequences of genes \(1,\ldots,n\), where each gene can have multiple copies
2. **Construct** the auxiliary graph \(\mathcal{G}\):
   Vertices of \(\mathcal{G}\) are all the genes \(1,\ldots,n\) (not their copies)
   **For** each pair of genes \(g_{i},g_{j}\)
     **If** all copies of \(g_{i}\) and \(g_{j}\) keep their relative locations in all \(m\) sequences
       **Add** an undirected edge between \(g_{i}\) and \(g_{j}\) in \(\mathcal{G}\)
     **End** of if
   **End** of for
   **Calculate** the degree of each gene in \(\mathcal{G}\)
3. **While** true
     **Find** a gene \(g_{i}\) with the smallest degree \(d_{i}\) in \(\mathcal{G}\)
     % If the minimal \(g_{i}\) is not unique, choose one randomly
     **If** \(d_{i}+1\) is smaller than the number of genes in \(\mathcal{G}\)
       **Delete** \(g_{i}\) and the edges linking \(g_{i}\) in \(\mathcal{G}\)
       **Update** the degrees of the other genes
     **Else**
       % The remaining genes form a complete subgraph
       **Break** the while loop
     **End** of if
   **End** of while
   % The final \(\mathcal{G}\) is a complete subgraph of the original \(\mathcal{G}\), and it is likely to be the largest one
4. **Output** genes in the final \(\mathcal{G}\) are non-transposable, and genes not in the final \(\mathcal{G}\) are transposable

**Algorithm 5**: A heuristic method for detecting transposable genes in Scenario 3.

We test Algorithm 5 on random graphs. Construct a random graph with \(n\) genes, where any two genes have probability \(0.5\) of having an edge between them. Use brute-force search to find the maximum clique, and compare its size with the result of Algorithm 5. For each \(n\leq 15\), we repeat this \(10000\) times, and every time Algorithm 5 returns the correct result. Therefore, for small random graphs, the \(95\%\) credible interval for the success rate of Algorithm 5 is \([0.9997,1]\). We can claim that Algorithm 5 is a good heuristic algorithm that fails with a very small probability. Since finding the true maximum clique requires exponentially slow brute-force search, we do not test on very large graphs.

Nevertheless, Algorithm 5 does not always produce the correct result. See Fig. 6 for a counterexample. Here genes \(1,2,3,4,5,6\) have degree \(4\), while genes \(7,8,9,10\) have degree \(3\). When applying Algorithm 5, genes \(7,8,9,10\) are abandoned first, and the final result has only three genes, such as \(1,3,5\). However, the largest complete subgraph is \(\{7,8,9,10\}\). Besides, Algorithm 5 can only determine one (possibly longest) common subsequence. Thus we cannot determine the existence of quasi-transposable genes.
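For illustration, here is the peeling step of Algorithm 5 as a minimal Python sketch (our rendering of the pseudocode above, reusing the `auxiliary_graph` helper from the earlier sketch):

```python
def greedy_clique(genes, edges):
    """Step 3 of Algorithm 5: repeatedly drop a gene of smallest degree
    until the remaining genes are pairwise connected."""
    remaining = set(genes)
    while True:
        degree = {g: sum(frozenset((g, h)) in edges
                         for h in remaining if h != g)
                  for g in remaining}
        g_min = min(remaining, key=degree.get)  # ties broken arbitrarily
        if degree[g_min] + 1 < len(remaining):
            remaining.discard(g_min)
        else:
            return remaining  # remaining genes form a complete subgraph

genes, edges = auxiliary_graph([(1, 2, 3, 2, 3, 4, 5), (2, 1, 3, 3, 2, 4, 5)])
stable = greedy_clique(genes, edges)
print(stable)               # {1, 3, 4, 5}: reported non-transposable
print(set(genes) - stable)  # {2}: reported transposable
```

On the Fig. 5 example the heuristic happens to find the true maximum clique; the Fig. 6 sequences above are exactly a case where this min-degree rule fails. Recomputing all degrees in each pass keeps the sketch short; the bookkeeping described next updates degrees incrementally instead.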
Assume we have \(m\) sequences with \(n\) genes. In general, the copy number of a gene is small, and we can assume the length of each sequence is \(\mathcal{O}(n)\). The time complexities of Step 2 and Step 3 in Algorithm 5 are \(\mathcal{O}(mn^{2})\) and \(\mathcal{O}(n^{2})\), and the overall time complexity is \(\mathcal{O}(mn^{2})\). The space complexity is trivially \(\mathcal{O}(mn+n^{2})\).

## 7 Circular sequences with duplicated genes

In Scenario 4, consider \(m\) circular gene sequences, where each sequence contains different numbers of copies of \(n\) genes \(1,\ldots,n\). We need to find the LCS. Here we only consider common subsequences that consist of all or none of the copies of the same gene, and the subsequence length is counted in genes, not gene copies. We shall prove that finding the LCS in Scenario 4 is no easier than in Scenario 3. Thus Scenario 4 is also NP-hard.

**Proposition 2**.: _Finding the LCS in Scenario 4 is NP-hard._

Figure 6: The auxiliary graph \(\mathcal{G}\) of linear sequences \((7,8,9,10,1,1,2,3,3,4,5,5,6)\) and \((1,2,1,3,4,3,5,6,5,7,8,9,10)\). This counterexample fails Algorithm 5.

Proof.: From Proposition 1, Scenario 3 is NP-hard, meaning that any NP problem can be reduced to Scenario 3 in polynomial time. We just need to prove that Scenario 3 can be reduced to Scenario 4 in polynomial time. Given \(m\) linear sequences with \(n\) genes in Scenario 3, add genes \(n+1,\ldots,2n+1\) to the end of each sequence, and glue each linear sequence into a circular sequence. The LCS for these circular sequences has the following properties: (1) it contains all genes \(n+1,\ldots,2n+1\); (2) after cutting at \(n+1\) and removing genes \(n+1,\ldots,2n+1\), the remaining linear sequence is the LCS in Scenario 3.

(1) The LCS has length at least \(n+1\), since the \(n+1\) added genes \(n+1,\ldots,2n+1\) form a common subsequence. As the original genes \(1,\ldots,n\) can contribute at most \(n\) genes, at least one gene in \(n+1,\ldots,2n+1\) is included, say \(n+1\). Since gene \(n+1\) is aligned in all sequences, \(n+2,\ldots,2n+1\) are also aligned, meaning that they are also in the LCS.

(2) After cutting and removing \(n+1,\ldots,2n+1\), the remaining linear sequence is a common subsequence in Scenario 3. If there were a longer common subsequence, then that subsequence together with \(n+1,\ldots,2n+1\) would be a longer common subsequence in Scenario 4, a contradiction. Therefore, if we can find the LCS for these circular sequences, then we can find the LCS for the linear sequences in polynomial time.

Similar to Scenario 3, to find the LCS in Scenario 4, we want to reduce it to a maximum clique problem. However, Lemma 3 does not hold in Scenario 4. For example, we can consider a circular sequence \((1,2,3)\) and its mirror image. These two sequences are different, but any two genes form a common subsequence. However, inspired by Lemma 3, we have the following conjecture, although we do not know whether it is correct.

**Conjecture 2**.: _In Scenario 4, if any three genes \(g_{i},g_{j},g_{l}\) in \(g_{1},\ldots,g_{k}\) form a common subsequence, then \(g_{1},\ldots,g_{k}\) form a common subsequence._

To solve Scenario 4, construct a \(3\)-uniform hypergraph \(\mathcal{G}\) as follows [15]: vertices are genes \(1,\ldots,n\); there is a \(3\)-hyperedge (undirected) that links genes \(g_{i},g_{j},g_{k}\) if and only if they form a common subsequence.

**Proposition 3**.: _If Conjecture 2 holds, then finding the longest common subsequence in Scenario 4 can be reduced to the maximum clique problem for \(3\)-uniform hypergraphs._

Proof.: If \(g_{1},\ldots,g_{k}\) form a common subsequence, then any three genes \(g_{i},g_{j},g_{l}\) share a \(3\)-hyperedge, and \(g_{1},\ldots,g_{k}\) form a complete subgraph.
If \(g_{1},\ldots,g_{k}\) form a complete subgraph, then any three genes \(g_{i},g_{j},g_{l}\) form a common subsequence. By Conjecture 2, this means \(g_{1},\ldots,g_{k}\) form a common subsequence. Therefore, there is a bijection between common subsequences and complete subgraphs. If we can solve the maximum clique problem for \(3\)-uniform hypergraphs, then the maximum clique corresponds to the LCS.

We have reduced Scenario 4 to the maximum clique problem for \(3\)-uniform hypergraphs, which is also NP-hard [88]. There have been some algorithms for the maximum clique problem for \(3\)-uniform hypergraphs [64, 56]. For completeness, we propose a simple idea: repeatedly delete the gene that has the smallest degree, until we have a complete subgraph in which any three genes have a \(3\)-hyperedge that links them. We summarize this greedy heuristic method as Algorithm 6. This algorithm is easy to understand and can provide some intuition. We do not claim that Algorithm 6 is comparable to other, more sophisticated algorithms.

1. **Input** \(m\) circular sequences of genes \(1,\ldots,n\), where each gene can have multiple copies
2. **Construct** the auxiliary graph \(\mathcal{G}\):
   Vertices of \(\mathcal{G}\) are all the genes \(1,\ldots,n\) (not their copies)
   **For** each gene triple \(g_{i},g_{j},g_{k}\)
     **If** all copies of \(g_{i},g_{j},g_{k}\) keep their relative locations in all \(m\) sequences
       **Add** a 3-hyperedge that links \(g_{i},g_{j},g_{k}\) in \(\mathcal{G}\)
     **End** of if
   **End** of for
3. **While** there exist three genes that do not share a 3-hyperedge
     **Calculate** the degree of each gene in \(\mathcal{G}\)
     **Delete** the gene with the smallest degree and the 3-hyperedges that link this gene
     % If there are multiple genes with the smallest degree, delete one randomly
   **End** of while
   % After this while loop, any three remaining genes form a common subsequence
   % If Conjecture 2 holds, the remaining genes form a common subsequence
4. **Output** remaining genes are non-transposable, and the other genes are transposable

**Algorithm 6**: A heuristic method for detecting transposable genes in Scenario 4.

We test Algorithm 6 on random graphs. Construct a random graph with \(n\) genes, where any two genes have probability \(0.5\) of having an edge between them. Use brute-force search to find the maximum clique, and compare its size with the result of Algorithm 6. For each \(n\leq 15\), we repeat this \(10000\) times, and every time Algorithm 6 returns the correct result. Therefore, for small random graphs, the \(95\%\) credible interval for the success rate of Algorithm 6 is \([0.9997,1]\). We can claim that Algorithm 6 is a good heuristic algorithm that fails with a very small probability. Since finding the true maximum clique requires exponentially slow brute-force search, we do not test on very large graphs.

Nevertheless, Algorithm 6 does not always produce the correct result. See Fig. 7 for a counterexample. Here each gene in \(1,2,3,4,5,6\) has degree \(4\), while each gene in \(7,8,9,10\) has degree \(3\). When applying Algorithm 6, genes \(7,8,9,10\) are deleted first, and the final result has only three genes, such as \((1,3,5)\). However, the LCS \((7,8,9,10)\) has four genes.

Figure 7: Four circular sequences. The LCS is \((7,8,9,10)\). This counterexample fails Algorithm 6.

Assume we have \(m\) sequences with \(n\) genes. In general, the copy number of a gene is small, and we can assume the length of each sequence is \(\mathcal{O}(n)\). The time complexities of Step 2 and Step 3 in Algorithm 6 are \(\mathcal{O}(mn^{3})\) and \(\mathcal{O}(n^{3})\), and the overall time complexity is \(\mathcal{O}(mn^{3})\). The space complexity is trivially \(\mathcal{O}(mn+n^{3})\).

## 8 Discussion

A gene \(g_{i}\) might be missing in some sequences. Since such a \(g_{i}\) is not in any LCS, it should be a proper-transposable gene. This gene can be directly removed before applying the corresponding algorithms.

We can adopt a stricter definition of transposable genes to exclude a gene that only changes its relative position in a few (no more than \(l\), where \(l\) is small enough) sequences. Then we should consider the longest sequence that is a common subsequence of at least \(m-l\) of the sequences. We can run the corresponding algorithm for every choice of \(m-l\) sequences. Thus the total time complexity will be multiplied by a factor of \(m^{l}\).

In Scenario 1 and Scenario 2 (linear/circular sequences without duplicated genes), if each sequence has \(n\) genes, and the LCS has length \(n-k\), then there are at most \(k\) proper-transposable genes. Regarding quasi-transposable genes, inspired by Lemma 1, we have the following conjecture.

**Conjecture 3**.: _Consider \(m\) linear/circular sequences with \(n\) genes without multiple copies. Assume the length of the LCS is \(n-k\), and there are \(l\) proper-transposable genes. Then the number of quasi-transposable genes is no larger than \(2(k-l)\)._

When \(l+2(k-l)\leq n\), in both linear and circular scenarios, we can find examples with \(2(k-l)\) quasi-transposable genes.

Given some different gene sequences, determining the LCS is similar to calculating the minimal operations needed to transform them into the same form (ancestor). In reality, this process might not be minimal, and we need certain stochastic models to describe this process [90, 80, 79]. From the viewpoint of evolutionary biology, it is also interesting to study the change of the LCS length with differential equation models [75]. We prove that certain LCS problems can be solved in polynomial time. The other side is to prove that lower time complexities are impossible [71].

## 9 Conclusion

In this paper, we study the LCS problem and design Algorithms 1-6 for different scenarios. Specifically, we consider the case where the LCS is not unique, and determine whether each gene appears in all/some/none of the LCSs. These algorithms are applied to gene sequences to determine the stability of genes. To apply these algorithms, one needs to apply genomic annotation tools to transform raw DNA sequencing data into gene sequences, and replace gene names by numbers. These algorithms have at most \(\mathcal{O}(mn^{3})\) time complexity, where \(m\) is the number of sequences, and \(n\) is the number of genes. Thus they can run in a reasonable time for most applications. We prove that the latter two scenarios are NP-hard (Propositions 1, 2), and propose two unresolved problems (Conjectures 2, 3) in discrete mathematics.

We start with gene sequences and determine translocated genes. Therefore, short transposons (possibly shorter than a gene) cannot be detected. Besides, we do not determine specific genomic rearrangement events. We aim at determining which genes are able to translocate (i.e., are less stable).
Specifically, we study how many LCSs contain a certain gene, as a measure of its "stability". This mesoscopic viewpoint can be intriguing for understanding changes in the genome. The results in this paper are not limited to Scenarios 1-4. They can be applied to other bioinformatics situations, or even other fields that need discrete mathematics tools, such as text processing, compiler optimization, data analysis, and image analysis [22]. Besides, the algorithms in this paper might be able to detect non-syntenic regions [36]. There are some possible future directions: (1) prove Conjectures 2, 3; (2) extend Proposition 3 to find more efficient solutions to Scenario 4; (3) determine whether genes appear in all LCSs in other similar scenarios.

## Acknowledgments

The author would like to thank Zhongkai Zhao for helping with the design of Algorithm 1. The author would like to thank Lucas Böttcher and the anonymous reviewers for providing helpful comments.
2306.06720
Cavity optomechanical detection of persistent currents and solitons in a bosonic ring condensate
We present numerical simulations of the cavity optomechanical detection of persistent currents and bright solitons in an atomic Bose-Einstein condensate confined in a ring trap. This work describes a novel technique that measures condensate rotation in situ, in real-time, and with minimal destruction, in contrast to currently used methods, all of which destroy the condensate completely. For weakly repulsive inter-atomic interactions, the analysis of persistent currents extends our previous few-mode treatment of the condensate [P. Kumar et al. Phys. Rev. Lett. 127, 113601 (2021)] to a stochastic Gross-Pitaevskii simulation. For weakly attractive atomic interactions, we present the first analysis of optomechanical detection of matter-wave soliton motion. We provide optical cavity transmission spectra containing signatures of the condensate rotation, sensitivity as a function of the system response frequency, and atomic density profiles quantifying the effect of the measurement backaction on the condensate. We treat the atoms at a mean-field level and the optical field classically, account for damping and noise in both degrees of freedom, and investigate the linear as well as nonlinear response of the configuration. Our results are consequential for the characterization of rotating matter waves in studies of atomtronics, superfluid hydrodynamics, and matter-wave soliton interferometry.
Nalinikanta Pradhan, Pardeep Kumar, Rina Kanamoto, Tarak Nath Dey, M. Bhattacharya, Pankaj Kumar Mishra
2023-06-11T16:45:35Z
http://arxiv.org/abs/2306.06720v1
# Cavity optomechanical detection of persistent currents and solitons in a bosonic ring condensate

###### Abstract

We present numerical simulations of the cavity optomechanical detection of persistent currents and bright solitons in an atomic Bose-Einstein condensate confined in a ring trap. This work describes a novel technique that measures condensate rotation _in situ_, in real-time, and with minimal destruction, in contrast to currently used methods, all of which destroy the condensate completely. For weakly repulsive inter-atomic interactions, the analysis of persistent currents extends our previous few-mode treatment of the condensate [P. Kumar _et al_. Phys. Rev. Lett. **127**, 113601 (2021)] to a stochastic Gross-Pitaevskii simulation. For weakly attractive atomic interactions, we present the first analysis of optomechanical detection of matter-wave soliton motion. We provide optical cavity transmission spectra containing signatures of the condensate rotation, sensitivity as a function of the system response frequency, and atomic density profiles quantifying the effect of the measurement backaction on the condensate. We treat the atoms at a mean-field level and the optical field classically, account for damping and noise in both degrees of freedom, and investigate the linear as well as nonlinear response of the configuration. Our results are consequential for the characterization of rotating matter waves in studies of atomtronics, superfluid hydrodynamics, and matter-wave soliton interferometry.

## I Introduction

An atomic Bose-Einstein Condensate (BEC) confined in a ring potential exhibits superflow, _i.e._, transport without dissipation [1; 2; 3]. It is therefore a natural platform for studying superfluid hydrodynamical phenomena such as quantized persistent currents [4], phase slips [5; 6], excitations [7], two-component rotation [8; 9], hysteresis [10; 11], and shock waves [12]; a versatile enabler for applications such as matter-wave interferometry [13; 14], atomtronic circuits [15; 16; 17; 18], and gyroscopy [19; 20]; and a convenient simulator of topological excitations [21; 22; 23], early universe cosmology [24], and time crystals [25]. Inspired by the experimental activity in the field, a large number of theoretical proposals have been put forward, based on the BEC-in-a-ring system, characterizing plane-wave to soliton transitions [26], self-trapping [27], simulated Hawking radiation [28], the Berry phase [29], qubits for computation [30], critical velocities [31; 32], superflow decay [33; 34; 35; 36], phonons [37], rotating lattices [38], rotation sensing [39], gauge fields [40], matter-wave interference [41], double-ring geometries [42; 43; 44], etc.

In all these studies, knowledge of the condensate rotation is an important consideration. At present, all demonstrated methods of detecting such rotation in ring BECs are destructive of the condensate [45]. Due to issues related to optical resolution, the methods typically also require time-of-flight expansion of the atoms, making _in situ_ measurements difficult. A theoretical proposal exists based on atom counting for a minimally destructive measurement of the condensate rotation [46]. Recently, our group suggested a method for detecting condensate rotation in real time, _in situ_ and with minimal destruction to the condensate [47]. This method proposed to use the techniques of cavity optomechanics, a discipline that addresses the coupling of mechanical motion to electromagnetic fields confined in resonators [48].
Probably the best-known optomechanical device in existence is the Laser Interferometer Gravitational-Wave Observatory (LIGO), which detected the gravitational waves predicted by Einstein's theory of general relativity [49], an accomplishment recognized by a Nobel prize. Cavity optomechanics is now a mature field that is capable of supporting the sensitive detection of any physical variable that actuates the mechanical motion coupling to the electromagnetic fields in the cavity. Thus, cavity optomechanical principles have been employed to construct accelerometers [50], magnetometers [51], thermometers [52], mass [53] and force [54] sensors, etc. In our previous proposal, which considered a rotating BEC in a cavity, it was shown that the resulting sensitivity of BEC rotation measurement was three orders of magnitude better than demonstrated hitherto [47]. This conclusion regarding the detection of a persistent current was based on a few-mode approximation for the condensate.

In the present work, we consider a BEC confined in a ring trap and interacting with an optical cavity mode carrying orbital angular momentum (OAM) [47]. This may be regarded as the rotational analog of a BEC with a linear degree of mechanical freedom combined with a standing wave optical cavity lattice in an optomechanical context [55]. For weak repulsive atomic interactions, we extend the previous two-mode characterization of the condensate to a mean field, _i.e._, Gross-Pitaevskii, treatment. Our method allows us to confirm the basic results of the two-mode treatment regarding the rotation detection of a persistent current, to investigate the modifications resulting from taking the full condensate dynamics into account, and to quantify the effect of measurement backaction on the condensate. It also allows us to consider the detection of a superposition of persistent current states in the condensate. We also investigate, for the first time, the case of weak attractive atomic interactions [56] - which results in a bright-soliton ground state in the ring condensate - in the optomechanical context. Such solitons are of great interest, e.g., to rotation sensing and matter-wave interferometry [57; 58; 59; 60; 61; 62; 63]. However, a soliton is not amenable to a few-mode optomechanical treatment, due to the large number of matter-wave OAM states contributing to the condensate dynamics. Our numerical simulations make this case tractable, extracting, as in the case of the persistent currents, cavity transmission spectra with signatures of soliton rotation, the sensitivity of the measurement as a function of system response frequency, and atomic density profiles showing the effect of the measurement on the condensate. In all simulations, the matter is treated at the mean-field level, light is treated classically, and noise arising from both optical as well as matter-wave fields is taken into account.

This paper is organized as follows. In Section II, the theoretical model and details of the numerical simulation are presented. In Sections III.1 and III.2 we provide the dynamics, OAM content, optical spectra, measurement sensitivity, and condensate density fidelity for the persistent current and bright soliton detection, respectively. The conclusions are presented in Section IV.

## II Theoretical model and details of numerical simulation

In this section, we present the theoretical model for the configuration of interest, shown in Fig. 1, _i.e._, a BEC confined in a one-dimensional ring trap coupled to a cavity using Laguerre-Gauss beams [47].
The dynamical equations governing the system are given by [64; 65; 66] \[(i-\Gamma)\frac{d\psi}{d\tau}=\biggl[-\frac{d^{2}}{d\phi^{2}}+\frac{U_{0}}{\omega_{\beta}}|\alpha(\tau)|^{2}\cos^{2}\left(l\phi\right)-\mu+2\pi\frac{\chi}{N}|\psi|^{2}\biggr]\psi+\xi(\phi,\tau), \tag{1}\] and \[i\frac{d\alpha}{d\tau}=\biggl\{-\biggl[\Delta_{c}-U_{0}\langle\cos^{2}\left(l\phi\right)\rangle_{\tau}+i\frac{\gamma_{0}}{2}\biggr]\alpha+i\eta\biggr\}\omega_{\beta}^{-1}+i\sqrt{\gamma_{0}}\,\omega_{\beta}^{-1}\alpha_{in}(\tau). \tag{2}\]

In the above, Eq. (1) is the stochastic Gross-Pitaevskii equation, where \(\psi\equiv\psi(\phi,\tau)\) represents the microscopic wave function of the condensate with \(\phi\) the angular variable along the ring, and \(\tau\) the scaled time, to be defined below. The wave function obeys, at any time, the normalization condition \[\int_{0}^{2\pi}|\psi(\phi,\tau)|^{2}d\phi=N,\] where \(N\) is the number of atoms in the condensate. In order to obtain the dimensionless Eq. (1), energy and time have been scaled using the quantities \[\hbar\omega_{\beta}=\frac{\hbar^{2}}{2mR^{2}}\text{ and }\tau=\omega_{\beta}t, \tag{3}\] respectively, where \(m\) is the atomic mass and \(R\) is the radius of the ring-shaped trap.

The first term inside the square bracket on the right-hand side of Eq. (1) stands for the kinetic energy of the atoms due to their rotational motion. The second term in the bracket represents the optical lattice potential created with a superposition of two Laguerre-Gauss beams having orbital angular momenta \(\pm l\hbar\) respectively, with \(U_{0}=g_{0}^{2}/\Delta_{a}\), where \(g_{0}\) is the single photon-single atom coupling and \(\Delta_{a}\) is the detuning of the driving laser from the atomic resonance. The third term in the bracket corresponds to the chemical potential \(\mu\) of the condensate, which is corrected by \(\Delta\mu\) at each time step (\(\Delta\tau\)) as [67] \[\Delta\mu=(\Delta\tau)^{-1}\ln\biggl[\int\lvert\psi(\phi,\tau)\rvert^{2}d\phi/\int\lvert\psi(\phi,\tau+\Delta\tau)\rvert^{2}d\phi\biggr],\] to conserve the normalization of the condensate in the presence of the dissipation \[\Gamma=\frac{\omega_{m}}{\omega_{\beta}}, \tag{4}\] set by the lifetime \(\omega_{m}^{-1}\) of the persistent currents [1; 47]. The fourth term inside the bracket represents the scaled atomic interaction \[\chi=\frac{gN}{2\pi\hbar\omega_{\beta}}. \tag{5}\] Here \[g=\frac{2\hbar\omega_{\rho}a_{s}}{R}, \tag{6}\] with \(a_{s}\) the \(s\)-wave atomic scattering length and \(\omega_{\rho}\) the harmonic trap frequency along the radial direction [47]. Thermal noise \(\xi\), with zero mean and correlations provided below, has been added to the condensate in accordance with fluctuation-dissipation theory [68].

Figure 1: A schematic setup for the BEC with winding number \(L_{p}\) rotating in a ring trap around the axis of the Fabry-Perot cavity. The red beam represents the Laguerre-Gauss modes with the orbital angular momentum of \(\pm l\hbar\) used to probe the BEC rotation. The output signal \(a_{out}\) is the field transmitted from the cavity.

The dynamics of the complex intracavity coherent field amplitude \(\alpha\) is described by Eq. (2).
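As a concrete numerical illustration of the scalings in Eqs. (3)-(6), the short sketch below computes \(\omega_{\beta}\), \(g\), and \(\chi\); the values of \(R\), \(\omega_{\rho}\), \(a_{s}\), and \(N\) are placeholders for a \({}^{23}\)Na ring condensate, chosen by us for illustration and not taken from the paper.

```python
import numpy as np

hbar = 1.054571817e-34           # J s
amu = 1.66053906660e-27          # kg
a0 = 5.29177210903e-11           # m (Bohr radius)

# Placeholder parameters, illustrative only
m = 23 * amu                     # 23Na atomic mass
R = 12e-6                        # ring radius (m)
omega_rho = 2 * np.pi * 400.0    # radial trap frequency (rad/s)
a_s = 52 * a0                    # s-wave scattering length
N = 1e4                          # atom number

omega_beta = hbar / (2 * m * R**2)             # from Eq. (3)
g = 2 * hbar * omega_rho * a_s / R             # Eq. (6)
chi = g * N / (2 * np.pi * hbar * omega_beta)  # Eq. (5), scaled interaction

print(f"omega_beta / 2pi = {omega_beta / (2 * np.pi):.2f} Hz")
print(f"chi = {chi:.0f}")
```

The dissipation \(\Gamma\) of Eq. (4) follows in the same way once a persistent-current lifetime \(\omega_{m}^{-1}\) is specified.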
In our simulations, we have treated \(\alpha\) as a classical quantity, as this approximation has been shown to be adequate for similar setups experimentally. For example, in [55], although bistability is observed at intracavity photon numbers below unity (\(|\alpha|^{2}\lesssim 1\)), the corresponding experimental data is very well described using a classical theory for the optical field [69]. As explained in [70], this is due to the fact that in the 'bad cavity' limit [48], where the condensate mechanical (i.e., sidemode) oscillation frequencies are lower than the cavity linewidth, the number of photons passing through the cavity during one mechanical period is much larger than one. The quantum fluctuations in the photon number thus have a negligible effect on the dynamics of the condensate density modulations. In our simulations below, we have ensured that the bad cavity limit always applies.

In the first term inside the square bracket on the right-hand side of Eq. (2), \(\Delta_{c}\) signifies the detuning of the driving field frequency from the cavity resonance frequency \(\omega_{c}\). The second term represents the coupling between the light mode and the condensate, where the expectation value of the light potential \(\cos^{2}\left(l\phi\right)\), taken with respect to the condensate wave function \(\psi(\phi,\tau)\), \[\langle\cos^{2}\left(l\phi\right)\rangle_{\tau}=\int_{0}^{2\pi}\left|\psi\left(\phi,\tau\right)\right|^{2}\cos^{2}\left(l\phi\right)d\phi, \tag{7}\] is a time-dependent quantity. In the third term, \(\gamma_{0}\) is the energy decay rate of the cavity field. The last term inside the curly braces represents the laser drive with pump rate \(\eta=\sqrt{P_{in}\gamma_{0}/\hbar\omega_{c}}\), where \(P_{in}\) is the input optical power. The last term on the right-hand side of Eq. (2) signifies the optical shot noise present in the system.

The thermal and optical fluctuations each have zero mean, and their correlations are given by [71; 47] \[\langle\xi(\phi,\tau)\xi^{*}(\phi^{\prime},\tau^{\prime})\rangle=2\Gamma T\delta(\phi-\phi^{\prime})\delta(\tau-\tau^{\prime}), \tag{8}\] \[\langle\alpha_{in}(\tau)\alpha_{in}^{*}(\tau^{\prime})\rangle=\omega_{\beta}\delta(\tau-\tau^{\prime}), \tag{9}\] where \(T\) is the non-dimensionalized temperature in units of \(k_{B}/(\hbar\omega_{\beta})\), with \(k_{B}\) being the Boltzmann constant. For the numerical simulation of these stochastic equations, the noise terms are modeled as follows: \[\xi(\phi,\tau)=\sqrt{2\Gamma T/(d\phi\,d\tau)}\left[\mathcal{N}(0,1,N_{\phi})+i\,\mathcal{N}(0,1,N_{\phi})\right], \tag{10}\] \[\alpha_{in}(\tau)=\sqrt{\omega_{\beta}/d\tau}\,\mathcal{N}(0,1,1), \tag{11}\] where \(\mathcal{N}(0,1,N_{\phi})\) is a normally distributed random variable with zero mean and unit variance; the third argument refers to the size of the array containing the random numbers, and \(N_{\phi}\) is the number of grid points in the \(\phi\) direction. We have considered \(N_{\phi}=1024\) for all the simulation runs. To attain the dynamics of the persistent current, we have considered the initial state as a plane wave, and then we evolve the system in real time using the coupled BEC-cavity equations and the RK4 scheme [72].
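As a minimal illustration of how the discretized noise terms in Eqs. (10)-(11) can be generated (our sketch; \(\Gamma\), \(T\), and \(\omega_{\beta}\) below are placeholder values, while \(N_{\phi}\) and \(d\tau\) follow the values quoted above):

```python
import numpy as np

rng = np.random.default_rng()

N_phi = 1024                   # grid points in phi, as above
dphi = 2 * np.pi / N_phi
dtau = 1e-7                    # scaled time step, as above

Gamma, T, omega_beta = 1e-3, 10.0, 1.0   # placeholders, illustrative only

def thermal_noise():
    """One sample of the complex thermal noise xi(phi, tau) of Eq. (10),
    drawn independently on each phi grid point."""
    amp = np.sqrt(2 * Gamma * T / (dphi * dtau))
    return amp * (rng.standard_normal(N_phi) + 1j * rng.standard_normal(N_phi))

def shot_noise():
    """One sample of the optical input noise alpha_in(tau) of Eq. (11)."""
    return np.sqrt(omega_beta / dtau) * rng.standard_normal()

xi = thermal_noise()    # added to the right-hand side of Eq. (1) each step
a_in = shot_noise()     # enters Eq. (2) through its last term
```

Fresh samples are drawn at every step of the RK4 integration, which is what makes Eqs. (1)-(2) a pair of stochastic differential equations.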
A different approach is taken for the case of the soliton: initially, we prepare a localized state with a Gaussian density profile, and then we evolve it in imaginary time using the Strang splitting Fourier method [73] to reach a soliton, the ground state in the presence of the optical lattice as well as atomic interactions. The resulting ground state is then used as the initial state for the subsequent real-time evolution using the RK4 scheme. All the results in the paper have been presented for single realizations of a BEC, and each realization has been averaged over 5 \(\mu\)s. We have adopted the time step \(d\tau=10^{-7}\) for all the simulation runs.

## III Results

### Persistent current

#### iii.1.1 Rotational eigenstate

In this section, we present the dynamics accompanying the detection of a persistent current in the ring BEC. Such currents can exist for macroscopic times as metastable flow states of the condensate with atoms that weakly repel each other [1]. The basic idea is for the circular optical lattice to act as a probe of the angular momentum, and hence the winding number, of the condensate [47]. For low intra-cavity photon number, the matter wave Bragg diffracts from the weak optical lattice. This results, in the first order, in two additional OAM states (sidemodes), which modulate the condensate density. These modulations add sidebands to the optical modes, which can subsequently be detected in the cavity transmission.

In our simulation, for which we have used \({}^{23}\)Na atoms [1], a phase gradient is imprinted initially on the condensate, in order to impart a winding number \(L_{p}\) to it. The resulting persistent current then gets coupled to the angular optical lattice, which displays \(2l\) interference maxima along the ring on which the BEC is trapped. For all our simulations related to the persistent current, we consider an initial state for the condensate wave function of the form \[\psi(\phi)=\sqrt{\frac{N}{2\pi}}e^{iL_{p}\phi}, \tag{12}\] which corresponds to an eigenstate of condensate rotation in the absence of the optical lattice. The resulting condensate density \(|\psi|^{2}\) obtained from Eqs. (1)-(2), modulated by the presence of the condensate sidemodes created by the optical lattice, is shown in Fig. 2(a). The OAM content of the modulated condensate, \(|\tilde{\psi}|^{2}\), where \(\tilde{\psi}\) is the Fourier amplitude of the condensate wave function \(\psi\), is shown in Fig. 2(b), which displays the first-order peaks, resulting from matter-wave Bragg diffraction, at \(L_{p}\pm 2l\). The figure, which accounts for the full Gross-Pitaevskii condensate dynamics, implies that only three OAM modes are dominant and therefore provides justification for the few-mode model proposed earlier [47].

In Fig. 3 we show the phase quadrature of the resulting cavity transmission spectrum [47], \[S(\omega)=\left|\mathrm{Im}\left[\alpha_{out}(\omega)\right]\right|^{2}, \tag{13}\] where \(\alpha_{out}\) is the output field transmitted from the cavity; it is related to the input field through the input-output relation \(\alpha_{out}=-\alpha_{in}+\sqrt{\gamma_{0}}\alpha\) [48]. The vertical dashed lines for \(L_{p}=1\) correspond to the sidemode frequencies \(\omega_{c,d}\) defined in the Supplementary Material of [47], see Eq. S33. These definitions account for the atomic interactions. The agreement between the vertical lines and the peak locations shows that the Gross-Pitaevskii simulation retains the results predicted by the few-mode theory presented earlier.
It also shows that our classical treatment of \(\alpha\), the optical field, reproduces the results of [47], which treated the optical field quantum mechanically. It can also be seen that the peaks for \(L_{p}=2\) are spectrally distinct from those for \(L_{p}=1\). Thus our method can reliably distinguish between neighboring values of the condensate winding number \(L_{p}\). We have also analyzed the effect of a high cavity drive power \(P_{in}\) on the power spectra of the output phase of the cavity and find that increasing \(P_{in}\) leads to the deviation of the different peaks of \(S(\omega)\) from the analytical results, as well as the generation of other OAM modes at higher frequencies, as shown in Fig. A.1.

Figure 3: Power spectra of the output phase quadrature of the cavity field as a function of the system response frequency for \(L_{p}=1\) (blue) and \(L_{p}=2\) (red). The vertical dashed lines (grey and green) correspond to the analytical predictions for the side modes of \(L_{p}=1\) and \(L_{p}=2\) respectively, including atomic interactions, made earlier [47]. Other parameters are the same as mentioned in Fig. 2.

Figure 4: Persistent current eigenstate rotation measurement sensitivity \(\zeta\) [Eq. (14)] as a function of the system response frequency \(\omega\). Here \(G=2\pi\times 7.5\) kHz and \(|\alpha_{s}|^{2}=0.096\), which corresponds to \(P_{in}=0.2\) pW. Other parameters are the same as for Fig. 2.

Figure 5: Variation of the fidelity [Eq. (15)] of the ring condensate density for a persistent current with time. The red dashed line is a guide showing the fidelity value when the density profile of the condensate at different times is in phase with the initial state. The set of parameters is the same as in Fig. 2.

We quantify the performance of our scheme by using the sensitivity of the rotation measurement, defined as \[\zeta=\frac{S(\omega)}{\partial S(\omega)/\partial\Lambda}\times\sqrt{t_{meas}}, \tag{14}\] where \(t_{meas}^{-1}=8(\alpha_{s}G)^{2}/\gamma_{0}\) is the optomechanical measurement rate in the bad cavity limit, \(G=U_{0}\sqrt{N}/2\sqrt{2}\) [47], and \(\alpha_{s}\) is the steady state of the cavity field [47]. The sensitivity of Eq. (14), obtained from a fit to the spectrum [Eq. (13)], is displayed in Fig. 4, and matches the result from [47] quite well.

To quantify to what extent the measurement backaction affects the condensate, we display the fidelity \(F(t)\), i.e., the position-averaged autocorrelation function of the condensate density, \[F(t)=\int_{0}^{2\pi}\left[\psi^{*}(\phi,t)\psi(\phi,0)\right]^{2}d\phi, \tag{15}\] as a function of the unscaled time \(t\) in Fig. 5. As can be seen, the density fidelity stays close to unity for small times. For macroscopic times, it decays mainly due to the damping (\(\Gamma\)) and noise (\(\xi\)) of the persistent current. This indicates that the measurement backaction on the condensate is small. Certainly, unlike existing techniques, the measurement does not completely destroy the condensate.

Figure 6: (a) Angular profile of the condensate density per particle for a persistent current superposition [Eq. (16)] with \(L_{p1}=3,L_{p2}=4\) (b) OAM state content of the condensate. Here \(P_{in}=1\) pW and the other parameters are the same as in Fig. 2.

Figure 8: Variation of the fidelity [Eq. (15)] of the condensate density with time, resulting from the superposition of two persistent currents. The set of parameters is the same as in Fig. 6. The inset shows the fidelity from the initial time to 100 ms.
Figure 7: Persistent current superposition. (a) Power spectrum of the output phase quadrature of the cavity field as a function of the system response frequency. The vertical dashed lines correspond to the analytical predictions for the side modes corresponding to \(L_{p1}=3,L_{p2}=4\). (b) Rotation measurement sensitivity. The parameters used are the same as in Fig. 6.

#### iii.1.2 Two state superposition

In this section, we investigate the dynamics of the condensate prepared in a superposition state of two different winding numbers, i.e., \(L_{p1}\neq L_{p2}\). These states could be of interest in the context of quantum information processing, matter wave interferometry, as well as studies of mesoscopic quantum mechanics [74, 75]. We start with the initial state \[\psi(\phi)=\sqrt{\frac{N}{4\pi}}\left(e^{iL_{p1}\phi}+e^{iL_{p2}\phi}\right). \tag{16}\]

Fig. 6(a) shows the condensate density resulting from the superposition of two persistent currents. The modulation in the condensate density is relatively high due to the high input optical power required to observe condensate rotation. The OAM content of the corresponding state is shown in Fig. 6(b). The increased complexity in the OAM content of the state is due to the interference between the two persistent currents, which introduces additional modulations to the system. Despite these complicated modulations, sidebands corresponding to \(L_{p1}\) and \(L_{p2}\) can be observed in the phase quadrature of the resulting cavity transmission spectrum, which is presented in Fig. 7(a). The sensitivity of the rotation measurement is shown in Fig. 7(b) and is comparable to that of the measurement of the rotational eigenstate [Fig. 4]. The variation of the fidelity of the condensate density with time is shown in Fig. 8. In this case, due to the interference between multiple sidemodes, the fidelity of the superposition measurement is at best around 0.5.

### Bright soliton

In this section, we present the dynamics accompanying the detection of a bright soliton in the ring BEC. Such solitons can be sustained by condensates in which the atoms weakly attract each other [76; 77; 78; 79; 80; 81; 82; 83; 84; 85]. In our simulations, we imprint a density and phase modulation on a uniform condensate of \({}^{7}\)Li atoms [83], and this leads to a bright soliton rotating on the ring and carrying a winding number \(L_{p}\) (e.g., see Eq. (43) of [26]). For this situation, we consider the initial state \[\psi(\phi)=\sqrt{\frac{N}{\sqrt{\pi}}}e^{-\phi^{2}/2}e^{iL_{p}\phi}. \tag{17}\]

For \(L_{p}=1\), the temporal evolution of the soliton density profile is shown in Fig. 9(a). As can be seen, the spatial profile of the soliton stays close to its initial shape as it moves in the ring. The OAM distribution of the soliton, when it has interacted with the optical lattice, is shown in Fig. 9(b). The resulting cavity transmission spectrum used to detect the winding number of the soliton is shown in Fig. 10(a). As can be seen, the \(L_{p}=1\) and \(L_{p}=2\) peaks are resolvable, indicating that our method can distinguish between neighboring winding numbers for the soliton. Remarkably, the analytical predictions for the spectral locations of the sidemode peaks, derived for the persistent current case [47], agree quite well with the full Gross-Pitaevskii treatment of the soliton. This can be seen from the coincidence of the numerically obtained side mode peaks and the vertical dashed lines in Fig. 10(a). The corresponding measurement sensitivity is shown in Fig. 10(b).
As can be seen, the benefits of high measurement sensitivities in our optomechanical scheme carry over from the persistent current case (where the sensitivity was three orders of magnitude better than demonstrated previously [47]) to the case of the bright soliton. The fidelity of the density profile is shown in Fig. 11 and remains close to unity for macroscopic times. The decay in fidelity is largely due to the dissipation (\(\Gamma\)) and noise (\(\xi\)) in the system, and not so much due to measurement backaction. Hence our scheme represents a minimally destructive measurement of the motion of a bright soliton in a BEC. High input powers result in the appearance of more modes, accompanied by noise in the condensate, as shown in Fig. A.2.

Figure 9: (a) Temporal evolution of the moving soliton density profile (b) OAM content of the soliton. Here \(N=6000\), \(a_{s}=-27.6a_{0}\), where \(a_{0}\) is the Bohr radius, \(m=7.01\) amu, \(L_{p}=1\), and \(P_{in}=0.4\) pW, and all other parameters are the same as in Fig. 2.

## IV Summary and conclusions

Using a stochastic mean-field Gross-Pitaevskii formalism for modeling atoms in a BEC in a ring trap, and a classical approximation for the optical mode, we have demonstrated that cavity optomechanics can make real-time, _in situ_, and minimally destructive measurements of both persistent currents as well as bright solitons. In support of our conclusions, we have presented numerical simulations of cavity transmission spectra, measurement sensitivities as a function of the system response frequency, and the fidelity of condensate density profiles. Our numerical simulations have verified and extended the analytical model proposed by us earlier. Remarkably, the previously analytically-found locations of the peaks in the cavity transmission crucial for determining condensate rotation agree well with the numerical results for both persistent current states as well as solitons. We expect our findings to be of interest to studies of superfluid hydrodynamics, atomtronics, and soliton interferometry. The technique we have presented could be extended to other systems such as polariton ring condensates [86].

## V Acknowledgments

We thank the International Centre for Theoretical Sciences, Bengaluru, where this work was initiated, for hosting us. M.B. would like to thank the Air Force Office of Scientific Research (FA9550-23-1-0259) for support. R.K. acknowledges support from JSPS KAKENHI Grant No. JP21K03421. We also gratefully acknowledge our supercomputing facility Param-Ishan (IITG), where all the simulation runs were performed.

## Appendix A Noise for high input power

In this appendix, we present the effect of increasing the cavity drive power \(P_{in}\) and thus making a stronger measurement of the persistent current and bright soliton rotation. In Fig. A.1, we show the power spectra of the output phase quadrature of the cavity field in the frequency domain for three different input powers, namely, \(P_{in}=0.5\) pW, 1 pW, and 2 pW for the persistent current. We find that increasing the power (from the left column to the right) results in nonlinear behavior, as discussed below. Density profiles of the soliton are shown in Figs. A.2(a)-(c). We can see that at low powers \(P_{in}\) the density profile is only slightly modulated, while it is quite heavily modulated at high \(P_{in}\). Thus, as expected, the measurement backaction increases with cavity power.
However, in the regime of the powers presented, the soliton does not break up as a result of interaction with the probe lattice, and thus the measurement is not fully destructive. The OAM content of the soliton for the corresponding powers is shown in Figs. A.2(d)-(f). The spectra are displayed in Figs. A.2(g)-(i), with the peaks labeled by the winding number \(L_{p}\). As can be seen, the use of higher optical powers results in additional peaks and more noise. Finally, the sensitivities are shown in Figs. A.2(j)-(l); they are comparable to those at low power.

Figure 10: (a) Power spectrum of the output phase quadrature of the cavity field as a function of the system response frequency for a soliton. The vertical dashed lines (grey and green) correspond to the analytical predictions for the side modes of \(L_{p}=1\) and \(L_{p}=2\) respectively. (b) Variation of the rotation measurement sensitivity \(\zeta\) with the system response frequency \(\omega\) for \(L_{p}=1\). The set of parameters used here is the same as in Fig. 9.

Figure 11: Fidelity (Eq. 15) of the soliton density profile versus time for \(P_{in}=0.4\) pW. The red dashed line is a guide showing the fidelity value when the solitons at different times are in phase with the initial state. The remaining set of parameters is the same as in Fig. 9.

Figure A.1: Power spectrum of the output phase quadrature of the cavity field as a function of the system response frequency for different input powers, (a) \(P_{in}=0.5\) pW, (b) \(P_{in}=1\) pW, and (c) \(P_{in}=2\) pW. Vertical dashed lines indicate the analytical predictions of [47]. All other parameters are the same as in Fig. 2.

Figure A.2: (a)-(c) Temporal evolution of the moving soliton density profile (d)-(f) OAM states of the condensate occupied by the soliton (g)-(i) the power spectrum of the imaginary part of the cavity field versus response frequency (j)-(l) soliton rotation measurement sensitivity as a function of the system response frequency for \(P_{in}=0.7\) pW, 1 pW, and 2 pW, respectively. Here \(G=2\pi\times 5.8\) kHz and \(|\alpha_{s}|^{2}=0.33,0.48,0.96\) for the above input power values. The remaining set of parameters is the same as in Fig. 9.
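As a closing illustration, here is one way the diagnostics of Eqs. (13) and (15) could be evaluated from stored simulation output; this is our sketch, with `psi_t` (an array of wave-function snapshots of shape `[n_steps, N_phi]`) and `alpha_out_t` (the transmitted-field time series) as assumed inputs, and taking the real part of \(F(t)\) for plotting is our convention.

```python
import numpy as np

def fidelity(psi_t, dphi):
    """F(t) of Eq. (15): integrate [psi*(phi, t) psi(phi, 0)]^2 over phi."""
    integrand = (np.conj(psi_t) * psi_t[0]) ** 2
    return np.real(integrand.sum(axis=1) * dphi)

def phase_quadrature_spectrum(alpha_out_t, dtau):
    """S(omega) of Eq. (13): squared imaginary part of the transmitted
    field in the frequency domain."""
    alpha_w = np.fft.fft(alpha_out_t)
    omega = 2 * np.pi * np.fft.fftfreq(len(alpha_out_t), d=dtau)
    return omega, np.abs(np.imag(alpha_w)) ** 2
```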
2302.05906
On Comparing Fair Classifiers under Data Bias
In this paper, we consider a theoretical model for injecting data bias, namely, under-representation and label bias (Blum & Stangl, 2019). We empirically study the effect of varying data biases on the accuracy and fairness of fair classifiers. Through extensive experiments on both synthetic and real-world datasets (e.g., Adult, German Credit, Bank Marketing, COMPAS), we empirically audit pre-, in-, and post-processing fair classifiers from standard fairness toolkits for their fairness and accuracy by injecting varying amounts of under-representation and label bias in their training data (but not the test data). Our main observations are: 1. The fairness and accuracy of many standard fair classifiers degrade severely as the bias injected in their training data increases, 2. A simple logistic regression model trained on the right data can often outperform, in both accuracy and fairness, most fair classifiers trained on biased training data, and 3. A few, simple fairness techniques (e.g., reweighing, exponentiated gradients) seem to offer stable accuracy and fairness guarantees even when their training data is injected with under-representation and label bias. Our experiments also show how to integrate a measure of data bias risk in the existing fairness dashboards for real-world deployments.
Mohit Sharma, Amit Deshpande, Rajiv Ratn Shah
2023-02-12T13:04:46Z
http://arxiv.org/abs/2302.05906v2
# On Testing and Comparing Fair Classifiers under Data Bias

###### Abstract.

In this paper, we consider a theoretical model for injecting data bias, namely, under-representation and label bias (Blum & Stangl, 2019). We theoretically and empirically study its effect on the accuracy and fairness of fair classifiers. Theoretically, we prove that the Bayes optimal group-aware fair classifier on the original data distribution can be recovered by simply minimizing a carefully chosen reweighed loss on the bias-injected distribution. Through extensive experiments on both synthetic and real-world datasets (e.g., Adult, German Credit, Bank Marketing, COMPAS), we empirically audit pre-, in-, and post-processing fair classifiers from standard fairness toolkits for their fairness and accuracy by injecting varying amounts of under-representation and label bias in their training data (but not the test data). Our main observations are: (1) The fairness and accuracy of many standard fair classifiers degrade severely as the bias injected in their training data increases, (2) A simple logistic regression model trained on the right data can often outperform, in both accuracy and fairness, most fair classifiers trained on biased training data, and (3) A few, simple fairness techniques (e.g., reweighing, exponentiated gradients) seem to offer stable accuracy and fairness guarantees even when their training data is injected with under-representation and label bias. Our experiments also show how to integrate a measure of data bias risk in the existing fairness dashboards for real-world deployments.

algorithmic fairness, fair classification, under-representation bias, label bias, group imbalance

+ Footnote †: 2023 Association for Computing Machinery.
We consider simple models of data bias, namely under-representation, over-representation, and label biases [12], and study the change in accuracy and fairness of various classifiers as we inject bias into their training data. The central question we ask is: _Among the various fair classifiers available in the standard fairness toolkits (e.g., [7]), which ones are less vulnerable to data bias?_

The definitions of fairness prevalent in the literature fall into two broad types: individual fairness and group fairness [6; 8; 23; 27; 32; 50; 62]. Individual fairness promises similar treatment to similar individuals. Our focus is group-fair classification, which promises equal or near-equal outcomes across different sensitive groups (e.g., race, gender). Examples of group fairness include Equal Opportunity (equal false negative rates across groups) and Statistical Parity (equal acceptance rates across groups). Various bias mitigation techniques have been proposed for group-fair classification, motivated by different applications and stages of the machine learning pipeline [29; 44; 68]. They are broadly categorized into three types: (1) _Pre-Processing_, where one transforms the input data, independent of the subsequent stages of training and validation [15; 25; 36], (2) _In-Processing_, where models are trained by fairness-constrained optimization [1; 14; 16; 38; 63; 64; 66], and (3) _Post-Processing_, where the output predictions are adjusted afterward to improve fairness [32; 37; 52]. These techniques appear side by side in standard fairness toolkits with little normative prescription about which technique to choose under data bias. We attempt to close this gap by providing an auditing method to compare different fair classifiers under a simple model of data bias.

To examine the effect of data bias on group-fair classifiers, we use a theoretical model by Blum & Stangl [12] for under-representation and label bias. It is a simple model for demographic under-/over-sampling and implicit label bias seen in real-world data [13; 31; 40; 58; 61]. Given a train-test data split, we create biased training data by injecting under-representation and label bias in the training data (but not the test data). We then study the error rates and (un)fairness metrics of various pre-, in-, and post-processing fair classifiers on the original test data, as we vary the amount of bias in their training data. We inject two types of under-representation biases in the training data: \(\beta_{\text{pos}}\)-bias, where we undersample the positive labeled population of the minority group by a multiplicative factor of \(\beta_{\text{pos}}\), and \(\beta_{\text{neg}}\)-bias, where we undersample the negative labeled population of the minority group by a multiplicative factor of \(\beta_{\text{neg}}\). We examine the effect of increasing the parameters \(\beta_{\text{pos}}\) and \(\beta_{\text{neg}}\) separately as well as together. We also investigate the effect of the label bias parameter \(v\), where we flip the positive labels of the minority group to negative ones with probability \(v\).

**Our Contributions:** Our main contributions and observations can be summarized as follows.
* We prove that the Bayes optimal group-aware fair classifier on the original data distribution can be recovered by simply minimizing a carefully chosen reweighed loss on the \(\beta_{\text{pos}}\)-biased (or \(\beta_{\text{neg}}\)-biased) distribution; see Theorem 1. The reweighing requires the knowledge of the \(\beta_{\text{pos}}\) (or \(\beta_{\text{neg}}\)) parameter, which is usually not known in practice.
* Using synthetic and real-world datasets, we show that the fairness and accuracy guarantees of many fair classifiers from standard fairness toolkits are highly vulnerable to under-representation and label bias. In fact, often, an unfair classifier (e.g., a logistic regression classifier) trained on the correct data can be more accurate and fairer than fair classifiers trained on biased data.
* Some fair classification techniques (viz., reweighing [36], exponentiated gradients [1]) have stable accuracy and fairness guarantees even when their training data is injected with under-representation and label bias. We provide a theoretical justification to support the empirically observed stability of the reweighing classifier [36]; see Theorem 2.
* Our experimental pipeline can be leveraged to create audit and test suites to check the vulnerability of fair classifiers to simple models of data bias before their real-world deployment.

The rest of the paper is organized as follows. Section 2 covers related work to set the context for our work. Section 3 explains our experimental setup, and Section 4 discusses our key empirical observations. Section 5 contains the main theorems and proof outlines. Section 6 discusses our work in the context of other research directions on data bias in fair machine learning. The complete proofs and additional experimental results are contained in the Appendix.

## 2. Related Work

The robustness of machine learning models when there is a mismatch between their training and test distributions has been studied under various theoretical models of distribution shifts, e.g., covariate shift and label shift. Recent works have studied fair classification subject to these distribution shifts and proposed solutions under reasonable assumptions on the data distribution [10; 12; 18; 19; 21; 30; 43; 54; 57; 59]. As pointed out in the introduction, our goal is different, so we defer the discussion of the critical contributions of these works to Appendix A.1 due to a lack of space. However, later in Section 6, we discuss different distribution shifts and their relation to the data bias model of Blum and Stangl [12] used in our paper.

Our work builds on the under-representation and label bias model proposed in Blum & Stangl [12]. Under this model, they prove that the optimal fair classifier that maximizes accuracy subject to fairness constraints (equal opportunity constraints, to be precise) on the biased data distribution gives the maximum accuracy on the unbiased data distribution. For under-representation bias, the above result can be achieved by equalized odds and equal opportunity constraints but not by demographic parity constraints. Along similar lines, Maity et al. [43] derive necessary and sufficient conditions about when performance parity constraints in the presence of subpopulation shifts give better accuracy on the test distribution. Jiang et al. [35] have also considered the label bias model, although we look at the aggregate effect over varying amounts of label bias.
Our observations on label bias also corroborate various findings by other works investigating the role of label noise in fair classification [28; 41; 42; 60]. Parallel to our work, a recent preprint by Akpinar et al. [2] proposes a simulation toolbox based on the under-representation setting from Blum & Stangl [12] to stress test four different fairness interventions with user-controlled synthetic data and data biases. On the other hand, our work uses the under-representation framework from Blum & Stangl [12] to extensively examine the stability of different types of fair classifiers on various synthetic and real-world datasets. While Akpinar et al. [2] focus on proposing a simulation framework to extensively test all findings of Blum & Stangl [12], we focus on using the under-representation framework to highlight the differences in performance between various fair classifiers and on theoretically investigating the stability of one of the fair classifiers.

We compare various fair classifiers in the presence of under-representation and label bias. Our work complements the surveys in fair machine learning that compare different methods either conceptually [26; 44; 49; 68] or empirically, on important aspects like the choice of data processing, splits, data errors, and data pre-processing choices [10; 29; 34; 53]. We elaborate on these works in Appendix A.2.

## 3. Experimental Setup

For our analysis, we select a comprehensive suite of commonly used fair classifiers implemented in the open-source AI Fairness 360 (AIF-360) toolkit [7]. From the available choice of pre-processing classifiers, we include two: (1) Reweighing ('rew') [36], and (2) Adaptive Reweighing ('jiang_nachum') [35]. Since 'jiang_nachum' is not a part of AIF-360, we use the authors' implementation1. Among the in-processing classifiers in AIF-360, we choose three: (1) the Prejudice Remover classifier ('prej_remover') [38], (2) the Exponentiated Gradient Reduction classifier ('exp_grad') and its deterministic version ('grid_search') [1], and (3) the method from Kearns et al. ('gerry_fair') [39]. From post-processing classifiers, we choose three: (1) the Reject Option classifier ('reject') [37], (2) the Equalized Odds classifier ('eq') [32], and (3) the Calibrated Equalized Odds algorithm ('cal_eq') [52]. We omitted the use of some classifiers in the AIF-360 toolkit for two main reasons: resource constraints and their extreme sensitivity to the choice of hyperparameters.

Footnote 1: [https://github.com/google-research/google-research/tree/master/label_bias](https://github.com/google-research/google-research/tree/master/label_bias)

All fair classifiers use a base classifier in their optimization routines to mitigate unfairness. We experiment with two base classifiers: a logistic regression (LR) model and a Linear SVM classifier with Platt's scaling [51] for probability outputs. We use the Scikit-learn toolkit to implement the base classifiers [47]. The 'prej_remover' and 'gerry_fair' classifiers do not support SVM as a base classifier; hence, we only show results for those on LR.

The fairness metric we use in the main paper is Equal Opportunity Difference (EOD) [32], which is defined as \(|\Pr(\hat{Y}=1|Y=1,S=1)-\Pr(\hat{Y}=1|Y=1,S=0)|\), where \(\hat{Y}\in\{0,1\}\) denotes the predicted label, \(Y\in\{0,1\}\) denotes a binary class label, and \(S\in\{0,1\}\) denotes a binary sensitive attribute.
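For concreteness, both EOD and the Statistical Parity Difference (SPD) used below can be computed directly from model predictions. The following is a minimal sketch, not part of the original pipeline: the function names and the binary 0/1 array encoding are our assumptions.

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, s):
    """EOD = |Pr(Yhat=1 | Y=1, S=1) - Pr(Yhat=1 | Y=1, S=0)|.
    Assumes binary 0/1 arrays and that each group contains positives."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))
    # Group-wise true positive rate: fraction of positives predicted positive.
    tpr = lambda g: y_pred[(y_true == 1) & (s == g)].mean()
    return abs(tpr(1) - tpr(0))

def statistical_parity_difference(y_pred, s):
    """SPD = |Pr(Yhat=1 | S=1) - Pr(Yhat=1 | S=0)|."""
    y_pred, s = map(np.asarray, (y_pred, s))
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())
```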
Blum & Stangl [12] suggest that equal opportunity and equalized odds constraints on biased data can be more beneficial than demographic parity constraints. Therefore, we use the equalized odds constraint for 'jiang_nachum' and 'exp_grad'. We show our results on the test set with EOD as the unfairness metric. Different classifiers have been proposed for different fairness metrics and definitions; however, in practice we often compare different methods with one or two standard metrics. Hence, we stick with two fairness metrics, EOD and Statistical Parity Difference (SPD). SPD is defined as \(|\Pr(\hat{Y}=1|S=1)-\Pr(\hat{Y}=1|S=0)|\). The EOD and SPD metrics can easily be generalized for multiple classes and groups. For both metrics, values closer to \(0\) mean better fairness, whereas values closer to \(1\) indicate extreme parity difference between the two sensitive groups \(S=0\) and \(S=1\).

We train on multiple datasets to perform our analysis. We consider four standard real-world datasets from the fair classification literature: the Adult Income dataset [22], the Bank Marketing dataset [46], the COMPAS dataset [3], and the German Credit dataset [22]. We also consider a synthetic dataset setup from Zeng et al. [65], which consists of a binary label \(y\in\{0,1\}\) and a binary sensitive attribute \(s\in\{0,1\}\). We take the original train-test split for a given dataset, or in its absence, a \(70\%\)-\(30\%\) stratified split on the subgroups. Specific details for all datasets are given in Table 1 in Appendix B.

To perform our analysis, we use the data bias model from Blum & Stangl [12] and inject varying amounts of under-representation and label bias into the original training data before giving it to a classifier. We summarize our experimental setup in Algorithm 1. Let \(Y\in\{0,1\}\) represent the label, and \(S\in\{0,1\}\) represent the sensitive attribute. Let \(\beta_{pos}\) be the probability of retaining samples from the subgroup defined by the favorable label \(Y=1\) and underprivileged group \(S=0\) (\(10\)-subgroup). Similarly, let \(\beta_{neg}\) be the probability of retaining samples from the unfavorable label \(Y=0\) and underprivileged \(S=0\) group (\(00\)-subgroup). We inject under-representation bias into the training data by retaining samples from the \(10\)-subgroup and the \(00\)-subgroup with probability \(\beta_{pos}\) and \(\beta_{neg}\), respectively. We consider ten different values each for \(\beta_{pos}\) and \(\beta_{neg}\), varying over \(\{0.1,0.2,\ldots,1.0\}\). This results in \(100\) different settings (\(10\) \(\beta_{pos}\) factors \(\times\) \(10\) \(\beta_{neg}\) factors) for training data bias. We separately inject ten different levels of label bias by flipping the favorable label of the underprivileged group to an unfavorable one (\(10\to 00\)) with a probability \(v\) varying over \(\{0.0,0.1,\ldots,0.9\}\). The test set for all settings is the original split test set, either provided at the original dataset source or taken out separately beforehand. This results in \(110\) different datasets corresponding to different data bias settings, and consequently \(110\) different results for each fair classifier. Finally, the training of all classifiers and the procedures to create biased training data using \(\beta_{pos}\), \(\beta_{neg}\), and \(v\) are performed 5 times to account for randomness in sampling and optimization procedures. We also note that some fair classifiers in our list, like 'exp_grad' and 'cal_eq', are randomized classifiers. A minimal sketch of the bias-injection step of Algorithm 1 is given below.
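The sketch assumes a pandas DataFrame with binary columns 'y' (label, 1 favorable) and 's' (sensitive attribute, 0 underprivileged); these column names are ours, not the paper's.

```python
import numpy as np
import pandas as pd

def inject_bias(train: pd.DataFrame, beta_pos: float, beta_neg: float,
                v: float, rng: np.random.Generator) -> pd.DataFrame:
    """Under-representation (beta_pos, beta_neg) and label bias (v) in the
    style of Blum & Stangl [12]; set v=0 or beta_*=1 to inject one bias only."""
    df = train.copy()
    keep = rng.random(len(df))
    # Retain the 10-subgroup (y=1, s=0) w.p. beta_pos and the 00-subgroup w.p. beta_neg.
    drop10 = (df['y'] == 1) & (df['s'] == 0) & (keep > beta_pos)
    drop00 = (df['y'] == 0) & (df['s'] == 0) & (keep > beta_neg)
    df = df[~(drop10 | drop00)].copy()
    # Label bias: flip favorable labels of the underprivileged group (10 -> 00) w.p. v.
    flip = (df['y'] == 1) & (df['s'] == 0) & (rng.random(len(df)) < v)
    df.loc[flip, 'y'] = 0
    return df
```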
Repeating the entire pipeline multiple times and reporting means and standard deviations therefore normalizes the random behavior of these methods to some extent, beyond the measures taken during the implementation of these methods in widely used toolkits like the one we use, AIF-360 [7]. We include all our code for data preparation, implementation of fair algorithms, and result analysis in the supplementary material.

## 4. Stability of Fair Classifiers: Under-Representation and Label Bias

In this section, we present our experimental results about the stability of the fairness and accuracy guarantees of various fair classifiers after injecting under-representation and label bias in their training data. Stability refers to whether the error rate and unfairness show high variance/spread in response to increasing or decreasing under-representation or label bias. A stable behavior is indicated by small changes in error rate and unfairness as \(\beta_{pos}\), \(\beta_{neg}\) or \(v\) change.

We first plot the unfairness (Equal Opportunity Difference) and error rates of various fair classifiers on the original training splits. Figure 1 shows the results of various fair classifiers with a logistic regression model on the original dataset splits without any explicitly injected under-representation or label bias. While most fair classifiers exhibit the expected behavior of trading off some accuracy in favor of minimizing unfairness, some classifiers perform worse in mitigating unfairness than the unfair LR model. We attribute this to the strong effects of different kinds of data preprocessing on the performance of fair classifiers, as noted in previous works. Ideally, fair classifiers trained on data without any explicit under-representation or label bias lie in the fourth quadrant, as they trade off some accuracy to mitigate unfairness. Any method lying in the first quadrant is poor, as it is worse than the base LR model in terms of both error rate and unfairness. Finally, any method lying in the third quadrant is excellent, as it improves over both error rate and unfairness compared to the LR model.

Figure 1 shows how various fair classifiers perform without explicit under-representation and label bias. We can now look at the individual effects of both under-representation factors (\(\beta_{pos}\), \(\beta_{neg}\)) and label bias (\(v\)), with the LR model as a reference, denoted by the vertical and horizontal error rate and Equal Opportunity Difference (EOD) lines, respectively. Figures 2(a) and 3(a) show those results for the Synthetic and the Adult dataset, respectively. The results for the Bank Marketing, Credit, and Compas datasets are given in Figures 7(a), 8(a) and 9(a) in Section D of the Appendix, respectively.

We first observe that many fair classifiers exhibit a large variance in the unfairness-error rate plane and often become unfair as the bias factors increase. This is also indicated by how the fair classifiers jump from quadrant four to quadrant one as \(\beta_{pos}\), \(\beta_{neg}\), or \(v\) increase. In fact, for \(\beta_{pos}\), the 'gerry_fair' classifier is so unstable on the Adult dataset that it lies outside the plot limits (Figure 3(a)). However, the 'rew', 'jiang_nachum', and 'exp_grad' fair classifiers remain stable across all three kinds of biases for both error rate and EOD and show a minimal spread across both the error rate and unfairness (EOD) axes. They tend to remain in the same quadrant with varying bias factors. On the Synthetic dataset, 'prej_remover' tends to do better for both error rate and unfairness.
However, that performance quickly degrades and becomes unstable for the Adult dataset and others (indicated by Figures 7(a), 8(a) and 9(a) in the Appendix). Furthermore, the 'eq' classifier tends to remain stable for EOD but not for error rate across all datasets.

We can also look at the combined effect of \(\beta_{pos}\) and \(\beta_{neg}\) in Figures 2(b) and 3(b) with the help of heatmaps. Each cell in the heatmap denotes a possible \(\beta_{pos}\), \(\beta_{neg}\) setting, thus covering all the 100 possible settings. Darker color values in cells denote lower error rates and unfairness. For this analysis, to examine stability with heatmaps, we look for uniformity of colors in the entire grid, which means that the error rate and unfairness values do not change with different \(\beta_{pos}\), \(\beta_{neg}\) values. Figures 2(b) and 3(b) again confirm the stability of the 'rew', 'jiang_nachum' and 'exp_grad' fair classifiers even when they are trained on datasets with combined \(\beta_{pos}\) and \(\beta_{neg}\) biases. However, 'jiang_nachum' and 'exp_grad' emerge as more stable and better classifiers in terms of their unfairness values than the 'rew' classifier, because they have darker uniform colors across their entire error rate and unfairness grid. The 'eq' classifier also remains stable for unfairness, except on the Credit dataset (Figure 8 in Appendix D). Similar observations hold for the Bank Marketing, Credit, and Compas datasets in Figures 7(b), 8(b), and 9(b) in Appendix D, respectively.

Figure 1: Error Rate-EOD values for various classifiers on all datasets, with no explicit under-representation and label bias. The blue horizontal and vertical lines denote the error rate and EOD of a base LR model without any fairness constraints.

Figure 2: Stability analysis of various fair classifiers on the Synthetic dataset (the lighter the shade, the more the under-representation and label bias): (a) The spread of error rates and Equal Opportunity Difference (EOD) of various fair classifiers as we vary \(\beta_{pos}\), \(\beta_{neg}\) and label bias separately. The vertical and horizontal reference blue lines denote the performance of an unfair logistic regression model on the original dataset without any under-representation or label bias. (b) Heatmap for Error Rate and EOD across all different settings of \(\beta_{pos}\) and \(\beta_{neg}\). Darker values denote lower error rates and unfairness. Uniform values across the grid indicate stability to different \(\beta_{pos}\) and \(\beta_{neg}\). (c) Mean error rates, Statistical Parity Difference (SPD), and Equal Opportunity Difference (EOD) across all \(\beta_{pos}\) and \(\beta_{neg}\) settings for both kinds of base classifiers (SVM and LR).

Figure 3: Stability analysis of various fair classifiers on the Adult dataset (the lighter the shade, the more the under-representation and label bias): (a) The spread of error rates and Equal Opportunity Difference (EOD) of various fair classifiers as we vary \(\beta_{pos}\), \(\beta_{neg}\) and label bias separately. The vertical and horizontal reference blue lines denote the performance of an unfair logistic regression model on the original dataset without any under-representation or label bias. (b) Heatmap for Error Rate and EOD across all different settings of \(\beta_{pos}\) and \(\beta_{neg}\). Darker values denote lower error rates and unfairness. Uniform values across the grid indicate stability to different \(\beta_{pos}\) and \(\beta_{neg}\).
(c) Mean error rates, Statistical Parity Difference (SPD), and Equal Opportunity Difference (EOD) across all \(\beta_{pos}\) and \(\beta_{neg}\) settings for both kinds of base classifiers (SVM and LR).

To quantitatively summarize the findings from the heatmap plots and to show this statistic with other choices of base classifiers (LR and SVM) and unfairness metrics (SPD and EOD), we look at the average performance across the 100 different settings of \(\beta_{pos}\) and \(\beta_{neg}\) in Figures 2(c) and 3(c) for the Synthetic and the Adult dataset, respectively. An indication of stability in these results is whether the mean error rate and unfairness across the 100 different under-representation settings stay low and whether this mean performance has a low standard deviation. Because we run the whole experimental setup 5 times with different random seeds for reproducibility, the values represent the mean of the means across 5 runs for each \(\beta_{pos},\beta_{neg}\) setting, and its standard deviation over the 100 settings. We embolden the top-3 mean and top-3 standard deviation values for each column, where top-3 means the lowest 3 values of the error rate and the unfairness metrics (SPD and EOD).

For all datasets, the observations across all emboldened values corroborate our findings from the heatmap plots. For both unfairness metrics (SPD and EOD), 'exp_grad' and 'jiang_nachum' often appear in the top-3 mean performance across all settings, whereas 'rew', 'jiang_nachum' and 'exp_grad' often appear among the top-3 smallest standard deviation values, indicating their stability. Furthermore, 'rew' and 'jiang_nachum' often appear in the top-3 mean and standard deviation lists for error rate. Finally, 'rew' often appears in the top-3 mean performance for Statistical Parity Difference (SPD). As expected, the unfair base classifier often appears in the top-3 mean error rate list, since its objective always is to minimize error rates without regard for unfairness. Another interesting observation across the results of all datasets is that the deterministic version of the 'exp_grad' fair algorithm, the 'grid_search' classifier [1], is very unstable compared to the 'exp_grad' classifier, which is the most stable amongst the lot. Other results, such as Figures 2, 3 (a,b) with SPD as the unfairness metric, could not be included in the manuscript; they can, however, be generated using the code provided with the supplementary material. As indicated by Figures 2, 3(c), the observations with EOD also hold with SPD.

## 5. Theoretical Results

In this section, we theoretically analyze the effect of under-representation bias on the Bayes optimal classifiers with and without fairness constraints. Our analysis is for group-aware binary fair classification with a binary sensitive attribute or group (e.g., two races, two genders). Group-aware classification means that the use of the sensitive attribute is allowed for training and prediction. We use \((X,S,Y)\) to denote a randomly sampled data point from the original joint distribution \(D\), where \(X\) denotes the features, \(S\) denotes the binary sensitive attribute, and \(Y\) denotes the binary class label. Similarly, \((X^{\prime},S^{\prime},Y^{\prime})\) denotes the features, binary sensitive attribute and binary class label of a randomly sampled point from the \(\beta\)-biased distribution \(D_{\beta}\).
We abuse this notation to denote both \(\beta_{\text{pos}}\)-bias and \(\beta_{\text{neg}}\)-bias, except when deriving the exact mathematical expressions that are different for these two cases.

Note that the Bayes optimal classifier in our group-aware setting (i.e., the group is allowed to be used for prediction) is the binary classifier \(f^{*}=\text{argmin}_{f}Pr(f(X,S)\neq Y)\), and it is folklore that it can be written as \(f^{*}(x,s)=\llbracket\eta(x,s)>1/2\rrbracket+\alpha\ \llbracket\eta(x,s)=1/2\rrbracket\), for any \(\alpha\in[0,1]\), where \(\eta(x,s)=Pr(Y=1|X=x,S=s)\) and \(\llbracket\cdot\rrbracket\) denotes the \(0/1\)-indicator function (i.e., \(\llbracket E\rrbracket=1\) if and only if \(E\) holds true). It is known that if we instead want to minimize a weighted \(0/1\) loss (also known as _cost-sensitive risk_) that weights false positives as \(c\) and false negatives as \(1-c\), for some \(c\in(0,1)\), then the corresponding optimal classifier is given by \(f^{*}(x,s)=\llbracket\eta(x,s)>c\rrbracket+\alpha\ \llbracket\eta(x,s)=c\rrbracket\), for any \(\alpha\in[0,1]\) (Kang et al., 2016).

### Bayes Optimal Classifiers under data bias

First, we show our results for the Bayes optimal classifiers that maximize accuracy without any fairness constraints. All the proofs are included in Appendix E.1.

**Lemma 1**: _For any data distribution \(D\), let us consider the Bayes optimal classifier without any fairness constraints._

\[Pr(Y=y|X=x,S=s)=\frac{Pr(X=x|S=s,Y=y)Pr(S=s,Y=y)}{\sum_{y\in\{0,1\}}Pr(X=x|S=s,Y=y)Pr(S=s,Y=y)}\]

_Then the following holds for the Bayes optimal classifier on the \(D_{\beta}\) distribution, \(Pr(Y^{\prime}=y|X^{\prime}=x,S^{\prime}=s)\):_

1. _For a_ \(D_{\beta}\) _with the same_ \(\beta_{pos}\) _and_ \(\beta_{neg}\) _factors,_ \(Pr(Y^{\prime}=y|X^{\prime}=x,S^{\prime}=s)=Pr(Y=y|X=x,S=s).\) _(The Bayes optimal classifier does not change for symmetric_ \(\beta_{pos}/\beta_{neg}\)_.)_
2. _For a_ \(D_{\beta}\) _with only_ \(\beta_{pos}\) _or_ \(\beta_{neg}\)_,_ \(Pr(Y^{\prime}=y|X^{\prime}=x,S^{\prime}=1)=Pr(Y=y|X=x,S=1).\) _(The Bayes optimal classifier is unaffected for the_ \(S=1\) _(privileged) group.)_
3. _For a_ \(D_{\beta}\) _with only_ \(\beta_{pos}\)_,_ \(Pr(Y^{\prime}=1|X^{\prime}=x,S^{\prime}=0)\to 0\) _as_ \(\beta_{pos}\to 0\)_, and for a_ \(D_{\beta}\) _with only_ \(\beta_{neg}\)_,_ \(Pr(Y^{\prime}=0|X^{\prime}=x,S^{\prime}=0)\to 0\) _as_ \(\beta_{neg}\to 0\)_._

We can similarly investigate the impact of \(\beta_{pos}\) and \(\beta_{neg}\) on fair Bayes optimal classifiers using their characterizations given in Corollaries 5 and 7 from Menon & Williamson [45]. We first restate these characterizations of fair Bayes optimal classifiers known for Statistical Parity and Equal Opportunity.
Here \(\lambda\) is a given accuracy-fairness trade-off parameter; the objective for the SP-fair Bayes optimal classifier is to minimize \(Pr(f(X,S)\neq Y)-\lambda\left(Pr(f(X,S)=1|S=0)-Pr(f(X,S)=1|S=1)\right)\), whereas the objective for the EO-fair Bayes optimal classifier is to minimize \(Pr(f(X,S)\neq Y)-\lambda\left(Pr(f(X,S)=0|Y=1,S=0)-Pr(f(X,S)=0|Y=1,S=1)\right)\).

**Corollary 1**: _(Restatement of the solution of Problem 3.2 in Corollary 5 for Statistical Parity Difference using Remark 4.1 and \(c=1/2\) in Menon & Williamson [45]) Let \(\eta(x,s)=Pr(Y=1|X=x,S=s).\) For Statistical Parity Difference, the fair Bayes optimal classifier with a tradeoff parameter \(\lambda\in\mathbb{R}\) can be expressed as \(H_{\alpha}\circ s^{*}(x,s)\), where:_

\[s^{*}(x,0)\doteq\eta(x,0)-\frac{1}{2}\left(1-\lambda\right),\qquad s^{*}(x,1)\doteq\eta(x,1)-\frac{1}{2}\left(1+\lambda\right),\]
\[\text{and}\quad H_{\alpha}\circ s=\llbracket s>0\rrbracket+\alpha\llbracket s=0\rrbracket,\ \text{for any}\ \alpha\in[0,1].\]

**Corollary 2**: _(Restatement of the solution of Problem 3.2 in Corollary 7 for Equal Opportunity Difference using Remark 4.1 and \(c=1/2\) in Menon & Williamson [45]) Let \(\eta(x,s)=Pr(Y=1|X=x,S=s).\) For Equal Opportunity Difference, the fair Bayes optimal classifier with a tradeoff parameter \(\lambda\in\mathbb{R}\) can be expressed as \(H_{\alpha}\circ s^{*}(x,s)\), where:_

\[s^{*}(x,0)\doteq\left(1+\frac{\lambda}{2\,Pr(Y=1)}\right)\eta(x,0)-\frac{1}{2},\qquad s^{*}(x,1)\doteq\left(1-\frac{\lambda}{2\,Pr(Y=1)}\right)\eta(x,1)-\frac{1}{2},\]
\[\text{and}\quad H_{\alpha}\circ s=\llbracket s>0\rrbracket+\alpha\llbracket s=0\rrbracket,\ \text{for any}\ \alpha\in[0,1].\]

It is important to observe that the optimal fair classifiers apply group-dependent thresholds, and thus can be equivalently thought of as minimizing group-dependent weighted loss functions (or cost-sensitive risks) that use group-dependent weights \(c\) to weigh false positive rates (and corresponding \(1-c\)'s to weigh false negative rates) for different groups. For example, Corollary 1 implies that the Bayes optimal classifier for Statistical Parity with the fairness trade-off parameter \(\lambda\) can be equivalently obtained by minimizing a weighted loss with \(c=(1-\lambda)/2\) on the group \(s=0\) and \(c=(1+\lambda)/2\) on the group \(s=1\).

Next we show the effect of \(\beta_{pos}\) or \(\beta_{neg}\) on the Bayes optimal classifier under fairness. We prove that injecting under-representation bias shifts the threshold (and equivalently, the weight \(c\)) only for the underprivileged group \(s=0\). Moreover, since we can find the shifted threshold as a function of the fairness trade-off parameter \(\lambda\) and the bias parameter \(\beta_{\text{pos}}\) (or \(\beta_{\text{neg}}\)), it can be corrected, and equivalently, the fair Bayes optimal classifier on the original distribution \(D\) can be recovered by minimizing a carefully chosen expected weighted loss on the biased distribution \(D_{\beta}\).

**Lemma 2**: _Consider the form of the fair Bayes optimal classifier in Corollaries 1 and 2 for Statistical Parity and Equal Opportunity Difference, respectively. Then, the following holds for the limiting cases of \(\beta\) associated to the biased distribution \(D_{\beta}\):_

1. _For a_ \(D_{\beta}\) _with the same_ \(\beta_{pos}\) _and_ \(\beta_{neg}\) _factors, the fair Bayes optimal classifier remains the same._
2. _For a_ \(D_{\beta}\) _with only_ \(\beta_{pos}\) _or_ \(\beta_{neg}\)_, the Bayes optimal classifier for the group_ \(S=1\) _does not change._
3. _For a_ \(D_{\beta}\) _with only_ \(\beta_{pos}\)_, when_ \(\beta_{pos}\to 0\)_, the optimal fair classifier for group_ \(S=0\) _for Statistical Parity Difference from Corollary 1 takes the following form:_ \(H_{\alpha}(X=x,S=0,\lambda)=\llbracket\lambda>1\rrbracket+\alpha\llbracket\lambda=1\rrbracket\)_. Similarly, the optimal fair classifier for group_ \(S=0\) _for Equal Opportunity Difference from Corollary 2 becomes:_ \(H_{\alpha}(X=x,S=0,\lambda)=0\)_._
4. _For a_ \(D_{\beta}\) _with only_ \(\beta_{neg}\)_, when_ \(\beta_{neg}\to 0\)_, the optimal fair classifier for group_ \(S=0\) _for Statistical Parity Difference from Corollary 1 takes the following form:_ \(H_{\alpha}(X=x,S=0,\lambda)=\llbracket\lambda>-1\rrbracket+\alpha\llbracket\lambda=-1\rrbracket\)_. Similarly, the optimal fair classifier for group_ \(S=0\) _for Equal Opportunity Difference from Corollary 2 becomes:_ \(H_{\alpha}(X=x,S=0,\lambda)=\llbracket Pr(Y=1)>\lambda\rrbracket+\alpha\llbracket Pr(Y=1)=\lambda\rrbracket\)_,_

_where \(\lambda\in\mathbb{R}\) and \(\alpha\in[0,1]\)._

Note that when \(\beta_{pos}=\beta_{neg}\), the Bayes optimal classifier (with or without fairness) does not change. This is also empirically confirmed by the diagonals of the heatmaps in Figures 2(b) and 3(b) (and in Figures 7(b), 8(b) and 9(b) in Appendix D). An interesting observation from Lemma 2 is that for \(\beta_{pos}\to 0\), getting a positive decision from the statistical-parity optimal classifier depends on the fairness penalty \(\lambda\) being \(>1\), whereas for equal opportunity, the resulting classifier just outputs \(0\). However, for \(\beta_{neg}\to 0\), for statistical parity, \(\lambda\) must only be greater than \(-1\), and for equal opportunity, the base rate \(Pr(Y=1)\) must be greater than the tradeoff parameter \(\lambda\).

Now we prove that if one knows \(\beta\), then the fair Bayes optimal classifiers from Corollary 1 and Corollary 2 on the original distribution \(D\) can be recovered by minimizing a reweighed loss on the biased distribution \(D_{\beta}\).

**Theorem 1**: _Consider \(f_{\lambda}^{*}\), the fair Bayes optimal classifier with a tradeoff parameter \(\lambda\) for either Statistical Parity or Equal Opportunity Difference: \(f_{\lambda}^{*}\in\operatorname*{argmin}_{f\in F}\big\{Pr_{(X,Y,S)\sim D}\big[f(X,S)\neq Y\big]+\lambda\cdot abs\left(fair\_constraint\right)\big\}\), where \(fair\_constraint\) can be either:_

1. _Statistical Parity Difference (SPD)_ \(\doteq Pr_{(X,Y,S)\sim D}\left(f(X,S)=1|S=1\right)-Pr_{(X,Y,S)\sim D}\left(f(X,S)=1|S=0\right)\)
2. _Equal Opportunity Difference (EOD)_ \(\doteq Pr_{(X,Y,S)\sim D}\left(f(X,S)=1|Y=1,S=1\right)-Pr_{(X,Y,S)\sim D}\left(f(X,S)=1|Y=1,S=0\right)\)

_and \(F\) is the set of all deterministic classifiers achievable via the \(0\)-\(1\) loss._
_Then, \(f_{\lambda}^{*}\) is also achievable by weighted risk minimization on the distribution \(D_{\beta}\) (with \(\beta_{pos}\)): \(f_{\lambda}^{*}\in\operatorname*{argmin}_{h\in F}\mathbb{E}_{(X,Y,S)\sim D_{\beta}}\left[w(Y,S,\beta,\lambda)\cdot\mathbb{1}\left(h(X,S)\neq Y\right)\right]\), where \(w(y,s,\beta,\lambda)\) is defined so that the above reweighed expected loss corresponds to certain (different) cost-sensitive risks on the two groups in the biased distribution \(D_{\beta}\) but is in turn equivalent to the optimization problem with fairness trade-off parameter \(\lambda\) for obtaining the fair Bayes optimal classifier on the original distribution \(D\)._

1. _For Statistical Parity Difference:_
\[w(y,s,\beta,\lambda)=\begin{cases}\frac{1}{2}(1+\lambda),&\text{if }s=1,h(x,s)=1,y=0\\ \frac{1}{2}(1-\lambda),&\text{if }s=1,h(x,s)=0,y=1\\ \left(\frac{1+\lambda}{\beta(1-\lambda)}+1\right)^{-1},&\text{if }s=0,h(x,s)=1,y=0\\ 1-\left(\frac{1+\lambda}{\beta(1-\lambda)}+1\right)^{-1},&\text{if }s=0,h(x,s)=0,y=1\end{cases}\]

2. _For Equal Opportunity Difference:_
\[w(y,s,\beta,\lambda)=\begin{cases}\frac{1}{2\left(1-\frac{\lambda}{2(\beta p_{10}+p_{11})}\right)},&\text{if }s=1,h(x,s)=1,y=0\\ 1-\frac{1}{2\left(1-\frac{\lambda}{2(\beta p_{10}+p_{11})}\right)},&\text{if }s=1,h(x,s)=0,y=1\\ \left(\frac{1+\beta}{\beta}+\frac{\lambda}{\beta(\beta p_{10}+p_{11})}\right)^{-1},&\text{if }s=0,h(x,s)=1,y=0\\ 1-\left(\frac{1+\beta}{\beta}+\frac{\lambda}{\beta(\beta p_{10}+p_{11})}\right)^{-1},&\text{if }s=0,h(x,s)=0,y=1\end{cases}\]

A similar statement can be written down for when \(D_{\beta}\) only contains \(\beta_{neg}\). The proof of this theorem is given in Appendix E.4. The proof uses the observation, noted earlier, that a thresholding classifier with respect to \(\eta(x,s)\) can also be realized by optimizing a reweighed combination of False Positive and False Negative rates, where the weights are given by the threshold (Kang and Zadeh, 2016). We take this idea and apply it to the fairness-adjusted scores from Corollaries 1 and 2.

### Stability Theorems for the Reweighing Classifier

The following theorems give a theoretical justification for the stability of test accuracy when the reweighing classifier [36] is trained even on data injected with extreme under-representation, such as \(\beta_{\text{pos}}\to 0\) or \(\beta_{\text{neg}}\to 0\).

**Theorem 2**: _Let \(D\) be the original data distribution and \(\beta\in(0,1]\). Let \(D_{\beta}\) be the biased data distribution after injecting under-representation bias in \(D\) using \(\beta_{\text{pos}}=\beta\) (or similarly \(\beta_{neg}=\beta\)). Let \(f\) be any binary classifier and \(L(f)\) be any non-negative loss function on the prediction of \(f\) and the true label. Let \(L^{\prime}\) denote the reweighed loss optimized by the reweighing classifier [36]. Then_

\[\frac{\alpha^{2}}{4}\ \mathbb{E}_{D}[L(f)]\leq\mathbb{E}_{D_{\beta}}[L^{\prime}(f)]\leq\frac{4}{\alpha}\ \mathbb{E}_{D}[L(f)],\]

_where \(\alpha=\frac{\min_{ij}\mathrm{P}\left(Y=i,S=j\right)}{\max_{ij}\mathrm{P}\left(Y=i,S=j\right)}\) denotes the subgroup imbalance in the original distribution \(D\)._

The proof of this theorem is given in Section E.2 of the Appendix. The proof involves bounding subgroup and group/label probabilities using \(\alpha\), under the reasonable assumption that the conditional distribution of \(X^{\prime}\) given \(Y^{\prime}=y,S^{\prime}=s\) under \(D_{\beta}\) is the same as that of \(X\) given \(Y=y,S=s\) under \(D\).
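For reference, the reweighed loss \(L^{\prime}\) of Theorem 2 weights each training point by the ratio of expected to observed subgroup frequency, as in Kamiran & Calders [36]. A minimal sketch follows; the column names 'y' and 's' are our naming, as before.

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame) -> pd.Series:
    """Kamiran-Calders weights: w(s, y) = P(S=s) * P(Y=y) / P(S=s, Y=y),
    with all probabilities estimated on the (possibly biased) training data."""
    p_s = df['s'].value_counts(normalize=True)
    p_y = df['y'].value_counts(normalize=True)
    p_sy = df.groupby(['s', 'y']).size() / len(df)
    return df.apply(lambda r: p_s[r['s']] * p_y[r['y']] / p_sy[(r['s'], r['y'])],
                    axis=1)

# The 'rew' classifier then minimizes the reweighed loss L' via sample weights, e.g.:
# LogisticRegression().fit(X_train, y_train, sample_weight=reweighing_weights(df))
```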
The main take-away from Theorem 2 is the following: when we reweigh a given non-negative loss function using the reweighing scheme from Kamiran and Calders [36] on samples from \(D_{\beta}\), we always lie within some constant 'radius' of the expected loss on the true distribution \(D\), for any classifier \(f\). This does not tell us anything about the expected loss on \(D\) itself, but rather about the ability of the reweighed loss \(L^{\prime}\) to recover from any arbitrary under-representation bias \(\beta\). However, the constant factors depend on \(\alpha\), the worst-case imbalance between the subgroups for a given distribution.

**Remark 1**: _Whenever \(\mathbb{E}_{D}[L(f)]\) is small, Theorem 2 says that the expected reweighed loss on the biased distribution \(\mathbb{E}_{D_{\beta}}[L^{\prime}(f)]\) will also be small, for any classifier \(f\). This means that if we are given a non-negative loss function which also subsumes a given fairness constraint (let's call it \(L_{fair}\)), then reweighing with \(L_{fair}\) can give us low unfairness and stable classifiers._

Using Theorem 2, we prove the following bound on the expected loss of the reweighing classifier that is obtained by empirical minimization of the reweighed loss on the biased distribution \(D_{\beta}\) but tested on the original distribution \(D\).

**Theorem 3**: _For any \(\beta\in(0,1]\), let \(\hat{g}_{\beta}\) be the classifier obtained by empirical minimization of the reweighed loss \(L^{\prime}\) on \(N\) samples from the biased distribution \(D_{\beta}\) defined as in Theorem 2. Then, with probability at least \(1-\delta\), we have_

\[\mathbb{E}_{D}[L(\hat{g}_{\beta})]\leq\frac{16}{\alpha^{3}\beta}\sqrt{\frac{\ln{(2/\delta)}}{2N}}+\frac{16}{\alpha^{3}}\ \mathbb{E}_{D}[L(f^{*})],\]

_where \(f^{*}\) is the Bayes optimal classifier on the original distribution \(D\), and \(\alpha\) is the subgroup imbalance as in Theorem 2._

The proof of this theorem is given in Section E.2 of the Appendix. It uses a generalized version of Hoeffding's inequality (Hoeffding, 1963), the proof of Theorem 2 of Zhu et al. (2017), and our Theorem 2. Note that Theorem 3 implies that given any \(\epsilon>0\) and \(\delta>0\), we can get the guarantee

\[\mathbb{E}_{D}[L(\hat{g}_{\beta})]\leq\frac{16}{\alpha^{3}}\ \mathbb{E}_{D}[L(f^{*})]+\epsilon,\]

with probability at least \(1-\delta\), by simply choosing the number of samples \(N\) from \(D_{\beta}\) in the empirical reweighed loss minimization to be large enough so that

\[N\geq\frac{128\log(2/\delta)}{\alpha^{6}\beta^{2}\epsilon^{2}}.\]

It is important to note that Theorems 2 and 3 also hold for data without any injected bias, i.e., \(\beta_{\text{pos}}=\beta_{\text{neg}}=1\), where such theoretical guarantees for the reweighing classifier [36] were not known earlier.

## 6. Discussion

In this section, we discuss where our results stand in the context of related works on distribution shifts and data bias in real-world AI/ML models, and make a few practical recommendations based on our work.

### Under-Representation and Distribution Shifts

Under-representation is a simple distribution shift that is different from and complements other distribution shifts studied in the fair classification literature. Let \(P_{\text{train}}\) and \(P_{\text{test}}\) define the probabilities for the training and testing distributions, respectively, with a random data point denoted by \((X,S,Y)\) with features \(X\), sensitive attribute \(S\), and class label \(Y\).
Let \(Y=1\) be the favorable label and \(S=0\) be the underprivileged group. A covariate shift in fair classification is defined by the condition \(P_{\text{train}}(Y|X,S)=P_{\text{test}}(Y|X,S)\), while the covariate distribution may change (Sirsh et al., 2017). Singh et al. (2018) look at fairness and covariate shift from a causal perspective using the notion of a separating set of features that induce a covariate shift. Coston et al. (2018) look at covariate shift and domain adaptation when sensitive attributes are not available at the source (train) or target (test). Distribution shifts can also be induced via class labels. Dai et al. (2019) present a more general model for label bias, building upon the work of Blum & Stangl [12]. They present a model for label shift where \(P_{\text{train}}(Y)\neq P_{\text{test}}(Y)\), but \(P_{\text{train}}(X|Y,S)=P_{\text{test}}(X|Y,S)\). Biswas et al. (2019) study model shifts in prior probability, where \(P_{\text{train}}(Y=1|S)\neq P_{\text{test}}(Y=1|S)\) but \(P_{\text{train}}(X|Y=1,S)=P_{\text{test}}(X|Y=1,S)\). Our work is also different from sample selection bias (Bum et al., 2019), where the objective is to train only on selected samples (\(\mathcal{S}=1\)), but generalize well for fairness and accuracy on the overall population (unselected samples \(\mathcal{S}=0\) and \(\mathcal{S}=1\)).

The under-representation model used in our paper comes from Blum & Stangl [12] and can be thought of as the following distribution shift: \(P_{\text{train}}(Y=0,S=0)=\beta_{\text{neg}}\cdot P_{\text{test}}(Y=0,S=0)\) and \(P_{\text{train}}(Y=1,S=0)=\beta_{\text{pos}}\cdot P_{\text{test}}(Y=1,S=0)\) (up to normalization), while the other two subgroups \((Y=0,S=1)\) and \((Y=1,S=1)\) are left untouched. The above distribution shift is different from covariate and label shifts in that it affects both the joint marginal \(P(X,S)\) and \(P(X|Y,S)\).

In a broader sense, beyond the mathematical definition used in our work, under-representation of a demographic and implicit bias in the labels are known problems observed in real-world data. Shankar et al. (2018) highlight geographic over-representation issues in public image datasets and how that can harm use cases concerning prediction tasks involving developing countries. Buolamwini et al. (2019) highlight a similar under-representation issue with gender classification systems trained on over-represented lighter-skinned individuals. Wilson et al. (2019) highlight similar disparities for pedestrian detection tasks. Biased labels in the training data have also been observed in the context of implicit bias in the literature (Wilson et al., 2019; Wilson et al., 2020).

### Practical Recommendations

We show that the performance guarantees of commonly used fair classifiers can be highly unstable when their training data has under-representation and label bias. Our experimental setup can serve as a template for a dashboard that checks the vulnerability of fair classifiers to data bias, by injecting bias using simple models similar to Blum & Stangl [12]. Our work motivates the importance of creating test suites for robustness to data bias and incorporating them into modern fairness toolkits such as AIF-360 [7]. Our results complement previous studies on the stability of classifiers across different train-test splits or different data pre-processing strategies.
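As a concrete illustration of such a dashboard, the hypothetical helpers sketched earlier (`inject_bias` and the metric functions) could be combined into a stress test over the bias grid; the unconstrained LR baseline stands in here for any classifier under audit, and numeric features are assumed.

```python
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

def stress_test(train, test, betas=np.arange(0.1, 1.01, 0.1), seeds=range(5)):
    """Train on biased data, evaluate on the untouched test split; a small
    spread of error/EOD across the grid indicates a stable classifier."""
    results = []
    for bp, bn, seed in itertools.product(betas, betas, seeds):
        rng = np.random.default_rng(seed)
        biased = inject_bias(train, beta_pos=bp, beta_neg=bn, v=0.0, rng=rng)
        clf = LogisticRegression(max_iter=1000).fit(
            biased.drop(columns='y'), biased['y'])
        pred = clf.predict(test.drop(columns='y'))
        results.append((bp, bn, seed,
                        (pred != test['y']).mean(),
                        equal_opportunity_difference(test['y'], pred, test['s'])))
    return results
```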
We experiment with a range of \(\beta_{pos}\), \(\beta_{neg}\) and \(v\) factors. In practice, this can be done with a user-supplied range of under-representation and label bias relevant for specific use cases. The ability to measure the stability of chosen fairness metrics across dynamically specified bias factors can be a great first step towards the safe deployment of fair classifiers, similar to recent works like Shifty. Motivated by the same setup of Blum & Stangl [12], Akpinar et al. [2] also propose a toolkit and dashboard to stress test fair algorithms under a user-controlled synthetic setup, which allows the user to control various aspects of the simulation. A handy addition to existing fairness toolkits, such as AIF-360 [7] and Fairlearn, would be to incorporate input-distribution stability routines from our work and Akpinar et al. [2], along with some comparative analysis routines from Friedler et al. (2018), to create a practical checklist for any fairness algorithm. Finally, our experimental setup can provide an additional tool for checking the robustness of models to data bias, along with existing robustness test suites, e.g., CheckList (Ribeiro et al., 2020).

## 7. Conclusion

We show that many state-of-the-art fair classifiers exhibit a large variance in their performance when their training data is injected with under-representation and label bias. We propose an experimental setup to compare different fair classifiers under data bias. We show how the Bayes optimal unfair and fair classifiers react to under-representation bias and propose a reweighing scheme to recover the true fair Bayes optimal classifiers while optimizing over biased distributions with a known bias. We give a theoretical bound on the accuracy of the reweighing classifier [36] that holds even under extreme data bias. Finally, we discuss our work in the broader context of the data bias literature and make practical recommendations.

A limitation of our work is that we use a simple model for injecting under-representation and label bias, with which one cannot hope to address the root cause behind different kinds of biases in real-world data. Thinking about the causal origin of data bias instead may allow one to model more types of biases and obtain better fixes for them under reasonable assumptions. Theoretically explaining the stability of the 'exp_grad' [1] and 'jiang_nachum' [35] fair classifiers under under-representation bias is also an interesting direction for future work.
2307.06409
Faster Control Plane Experimentation with Horse
Simulation and emulation are popular approaches for experimentation in Computer Networks. However, due to their respective inherent drawbacks, existing solutions cannot perform both fast and realistic control plane experiments. To close this gap, we introduce Horse. Horse is a hybrid solution with an emulated control plane, for realism, and simulated data plane, for speed. Our decoupling of the control and data plane allows us to speed up the experiments without sacrificing control plane realism.
Eder Leao Fernandes, Gianni Antichi, Timm Boettger, Ignacio Castro, Steve Uhlig
2023-07-12T19:02:43Z
http://arxiv.org/abs/2307.06409v1
# Faster Control Plane Experimentation with Horse

###### Abstract

Simulation and emulation are popular approaches for experimentation in Computer Networks. However, due to their respective inherent drawbacks, existing solutions cannot perform both fast and realistic control plane experiments. To close this gap, we introduce Horse. Horse is a hybrid solution with an emulated control plane, for realism, and a simulated data plane, for speed. Our decoupling of the control and data plane allows us to speed up the experiments without sacrificing control plane realism.

Network Simulation, Network Emulation, Traffic Engineering

The emulated control plane can execute actual SDN applications. The Connection Manager (CM) is the bridge between the emulation and simulation. The CM has visibility into control plane packets and is responsible for sending events that trigger a change to the FTI mode. The data plane is a typical DES engine, with an Event Queue, Scheduler and a simulated model of the nodes of the network topology. The current implementation of Horse supports routers with Quagga as the routing daemon and OpenFlow-enabled switches. In the future, we plan to also support P4 switches. The code of Horse is Open Source and is available online1.

Footnote 1: [https://github.com/ederl/horse](https://github.com/ederl/horse)

## 3. Demonstration

The demonstration presents how Horse enables quicker experimentation with BGP and SDN control planes. We use a Fat-Tree topology (Bogorty et al., 2010) to demonstrate the time required to perform experiments on different network sizes. Even though the demonstration focuses on a Data Center (DC) scenario, Horse is not restricted to DCs and can also be used for other types of networks, e.g., Wide Area Networks (WAN).

The demonstration consists of three experiments showcasing different Traffic Engineering (TE) approaches to achieve better link utilization: (i) BGP plus Equal Cost Multipath (ECMP) path selection by hashing of IP source and destination; (ii) Hedera (Hedera, 2008); (iii) SDN 5-tuple (IP source and destination, IP protocol, transport source and destination ports) ECMP. The Fat-Trees for the scenarios have 4, 6 and 8 pods with links of \(1Gbps\). A single traffic pattern is used for all experiments: each server of the DC sends a single UDP flow to another server inside the DC, at the constant rate of \(1Gbps\). If no congestion occurred, the total traffic rate expected in the network would be equal to the number of hosts times \(1Gbps\) (for example, the 4-pod Fat-Tree with 16 hosts yields a total of \(16Gbps\)).

The execution of the experiments for each topology starts simultaneously. For each experiment, we show the amount of time required to create the topology and the consolidated time to execute the three TE approaches. At the end of each execution, we show a graph of the aggregated rate of all flows arriving at the hosts for each TE case. These three TE approaches are chosen because of their different levels of control plane interaction. In the cases of ECMP for SDN and BGP, control plane events are concentrated at the beginning of the simulation, while our implementation of Hedera queries for network statistics every 5 seconds. The demonstration of different scenarios is meant to help researchers understand how the tool can be used and whether it is a good match for their use cases. For comparison, it would be interesting to demonstrate Horse along with Mininet.
However, as shown by Figure 3, Mininet takes 5 times longer than Horse to finish the largest topology2. The difference in time would break the flow of the demonstration.

Footnote 2: Experiments performed in a virtual machine with 4GB of RAM and four cores of an Intel(R) Xeon(R) Silver 4114 @ 2.20GHz CPU assigned.

## Acknowledgments

This research is supported by the UK's Engineering and Physical Sciences Research Council (EPSRC) under the EARL: sdn EnAbled MeasuRement for all project (Project Reference EP/P025374/1).
2307.11613
Charge density response in layered metals: retardation effects, generalized plasma waves and their spectroscopic signatures
Transverse plasma polaritons and longitudinal plasmons describe the propagation of light-matter modes in an isotropic metal. However, in a layered metal the anisotropy of the bare electromagnetic response mixes the longitudinal and transverse excitations, making the distinction between polariton and plasmon blurred at small wavevectors, where retardation effects of the electromagnetic interactions become quantitatively relevant. In the usual Kubo approach for the linear response, this effect appears as a mixing between the density and the transverse current fluctuations, that requires to revise the standard RPA approach for density correlations where only the instantaneous Coulomb potential is included. In this paper we derive the general expression for the density and current correlation functions at long wavelength in a layered metal, showing that below a crossover scale set by the anisotropy of the plasma frequencies retardation effects make the dispersion of the generalized plasma modes different from the standard RPA result. In addition, the mixed longitudinal and transverse nature of these excitations reflects in a double-peak structure for the density response, that can be eventually accessed by means of high-momentum resolution electron-energy-loss or X-rays spectroscopies.
Francesco Gabriele, Riccardo Senese, Claudio Castellani, Lara Benfatto
2023-07-21T14:33:09Z
http://arxiv.org/abs/2307.11613v1
# Charge density response in layered metals: retardation effects, generalized plasma waves and their spectroscopic signatures

###### Abstract

Transverse plasma polaritons and longitudinal plasmons describe the propagation of light-matter modes in an isotropic metal. However, in a layered metal the anisotropy of the bare electromagnetic response mixes the longitudinal and transverse excitations, making the distinction between polariton and plasmon blurred at small wavevectors, where retardation effects of the electromagnetic interactions become quantitatively relevant. In the usual Kubo approach for the linear response, this effect appears as a mixing between the density and the transverse current fluctuations, which requires one to revise the standard RPA approach for density correlations where only the instantaneous Coulomb potential is included. In this paper we derive the general expression for the density and current correlation functions at long wavelength in a layered metal, showing that below a crossover scale set by the anisotropy of the plasma frequencies, retardation effects make the dispersion of the generalized plasma modes different from the standard RPA result. In addition, the mixed longitudinal and transverse nature of these excitations reflects in a double-peak structure for the density response, that can be eventually accessed by means of high-momentum-resolution electron-energy-loss or X-ray spectroscopies.

## I Introduction

The propagation of electromagnetic (e.m.) waves in metals represents one of the main knobs to investigate the collective properties of the electronic system. On general grounds, in an isotropic metal transverse electromagnetic waves hybridize with the conduction-electron excitations, giving rise to the so-called plasma polaritons, normal modes of the system propagating at a renormalized light velocity in the region of positive permittivity[1]. For zero momentum the frequency of plasma polaritons coincides with the frequency of longitudinal bulk plasmons, which are characterized by zero magnetic field and longitudinal electric field (\(\mathbf{\nabla}\times\mathbf{E}=\mathbf{0}\)), so that they satisfy Maxwell's equations under the condition of vanishing permittivity [1]. While plasma polaritons can be excited by an external e.m. radiation, longitudinal plasmons couple efficiently to density fluctuations and as such they are measured via electron-energy-loss spectroscopy[2] (EELS) or resonant inelastic X-ray scattering[3] (RIXS), which access the charge-density response of the electron gas [4; 5]. Nowadays, the significant advances in spectroscopic techniques using either confined light, as e.g. in near-field optics [6; 7], or integrating EELS with scanning transmission electron microscopy[8; 9; 2], have made possible a detailed investigation of the energy-momentum dispersion of plasma modes at various length scales. Particular attention has been put on the wide category of layered metals, ranging from van der Waals materials[7; 10] to layered high-\(T_{c}\) cuprate superconductors[11].

From the theoretical point of view, the behavior of metallic plasmons in a layered geometry is actually a very old problem, which has been studied since the late 1970s in connection with the physics of semiconducting superlattices [12; 13; 14; 15].
The basic observation has to do with the fact that when conduction in the stacking-layers direction is poor, the lack of screening of the inter-plane Coulomb interactions strongly modifies the plasmon dispersion with respect to the bulk isotropic case. For zero inter-layer momentum \(q_{z}=0\), with \(z\) being the stacking direction, one recovers the standard weakly dispersing plasmon as a function of the momentum \(q_{\parallel}\) in the \(xy\) plane, with a large (of the order of the eV) value \(\omega_{xy}\) at \(q_{\parallel}=0\). However, at finite \(q_{z}\) the dispersion changes drastically, with a severe softening of the plasma energy towards \(q_{\parallel}=0\). In the limit of zero inter-layer hopping the plasmon energy goes to zero as \(\sim q_{\parallel}\) at finite \(q_{z}=\pi/d\)[13; 14; 15], \(d\) being the inter-layer distance, or it reaches a finite value of order \(\omega_{z}\ll\omega_{xy}\) when a finite hopping is allowed[12]. With the discovery of high-temperature superconductivity in layered cuprates, which are well modelled by a stacking of weakly-coupled layers in the metallic phase, the possibility that such "acoustic-like" plasmon branches can play a role in the superconducting phenomenon itself has also been explored[16; 17; 18]. However, the direct detection of the dispersive plasmon branches remained elusive for a long time, mostly because in hole-doped cuprates the RIXS signal is dominated by the spin excitations, which emerge strongly in proximity of the Mott-insulating antiferromagnetic phase. Only recently has the existence of acoustic-like plasmon branches been proven by RIXS, first in different families of electron-doped cuprates[19; 20] and more recently also in hole-doped cuprates[21; 22]. In all these cases the plasmon dispersion follows qualitatively the prediction of a weakly-correlated layered electron model[23; 24; 25], even though it does not capture the significant broadening of the plasmon observed at increasing in-plane momentum. Such an effect is even more pronounced in recent EELS measurements of the plasma dispersion at larger momenta[26; 27; 28; 8], such that they have been interpreted as signatures of the so-called "strange metal" regime[29]. At the same time, substantial work has been devoted in recent years to the possibility of driving non-linearly the soft inter-layer plasmon of cuprates at \(\omega_{z}\) of a few THz with strong THz light pulses[30; 31; 32]. Indeed, on one side the gap opening below the superconducting critical temperature \(T_{c}\) makes it undamped[33; 34; 35; 36; 37], in contrast to what happens in the metallic phase. On the other side, in the superconducting state the plasma modes appear also in the spectrum of the superconducting phase of the complex order parameter, allowing for its non-linear driving via optical probes[30; 31].

In the theoretical work aimed at describing the plasma modes measured by RIXS or EELS, the usual approach [12; 14; 15; 19; 20; 21; 22; 25] consists in computing the density response of the anisotropic electron system by including at the RPA level the effect of a Coulomb-like interaction term, in analogy with the usual isotropic case[4; 5; 38]. This means that retardation effects, corresponding to the coupling of the charge density to the magnetic field induced via current fluctuations, have not been included[13].
In the isotropic case this is actually not an approximation, but an exact result: indeed, due to the complete decoupling between longitudinal and transverse degrees of freedom, density fluctuations only induce longitudinal current fluctuations, remaining then decoupled from the magnetic field. On the other hand, in a layered system the anisotropy of the current response with respect to the in-plane and out-of-plane directions leads to an unavoidable coupling between charge and transverse current fluctuations, making magnetic-field effects in general non-zero[39; 40]. The main consequence of the imperfect longitudinal/transverse decoupling is that in a layered metal there is an intrinsic mixing between plasmons and polaritons at generic wavevectors, while an almost perfect decoupling is only reached at a momentum scale larger than a threshold \(q_{c}\sim\sqrt{\omega_{xy}^{2}-\omega_{z}^{2}}/c\), set by the plasma-mode anisotropy[41; 42]. Above this scale retardation effects are irrelevant and the standard approach including only Coulomb interactions between electrons is quantitatively correct. For typical values of \(\omega_{xy}\) and \(\omega_{z}\) the scale is \(q_{c}\sim 1-10\)\(\mu\)m\({}^{-1}\), so it is much smaller than the state-of-the-art momenta accessible by RIXS and EELS. However, for the experiments with THz light mentioned above the regime \(q<q_{c}\) is being probed, and indeed the role of magnetic-field effects has been discussed within the recent literature focusing on THz driving of the soft inter-layer plasmon below \(T_{c}\)[30; 31; 39; 40; 43; 44; 45; 46; 47]. It is worth noting that the theoretical investigation of plasma modes in the superconducting (SC) state is actually easier than in the metal, since plasmons appear in the response of the superconducting phase, whose dynamics has a relatively simple description at long wavelength[41; 42]. In a recent publication[41] three of us took advantage of this peculiarity to derive an analytical expression for the generalized plasma modes in the SC state of layered superconductors via an effective-action formalism for the phase degrees of freedom, further extended to the bilayer case in Ref. [42]. In this manuscript we aim at providing an analogous derivation for the layered metal, by employing again an effective-action formalism where both Coulomb and retardation effects are taken into account by integrating out the fluctuations of the internal e.m. degrees of freedom. This procedure is formally equivalent to an RPA approximation for both the density-density and current-current interactions, mediated respectively by the Coulomb potential and the transverse e.m. propagator. This approach allows us to derive in a rather compact and elegant way the density and current response to an external perturbation, valid at any momentum, in terms of the bare susceptibility of the layered system. Such a formulation has the twofold advantage of allowing for an analytical derivation of the generalized plasma waves for the uncorrelated layered metal, and of providing a general expression where short-range correlation effects can be included in the electronic susceptibilities. Indeed, as long as short-range interactions are included by preserving the gauge-invariance relations for the electronic response functions[48; 49], our scheme allows one to derive the plasma modes including retardation effects.
As an example we study specifically the density response, as accessed by EELS and RIXS experiments, and we show that the coupling between longitudinal and transverse degrees of freedom leads to a doubling of the peaks of the loss function at small momenta, which can eventually become accessible by improving the momentum resolution of these probes. The plan of the paper is the following. In Sec. II an introductory analysis of the influence of retardation effects on the longitudinal propagation of plasmons is provided within the framework of Maxwell's equations. As anticipated before, while in isotropic systems plasmons are never influenced by those effects, in layered systems the anisotropy of the bare electronic response leads to a non-trivial mixing between instantaneous (longitudinal) and retarded (transverse) fields. The same issue is then analyzed in detail within a microscopic many-body approach in Secs. III and IV. In Sec. III, in order to give a pedagogical illustration of the formalism employed throughout the manuscript, we consider the linear response of an isotropic electron gas in the absence and in the presence of e.m. interactions, finding in both cases the results usually discussed in the literature: in the first case we find the Lindhard response functions, which are then renormalized at the standard RPA level when e.m. interactions are included. In Sec. IV we address the linear-response theory of a layered system: we find that the mixing discussed in Sec. II has the crucial consequence that the response functions deviate, at low momenta, from their standard-RPA counterparts. A remarkable consequence is that the correct density-density response function accounts for the propagation of two mixed longitudinal-transverse modes, which both appear in the spectrum of density fluctuations, and which we discuss in detail in Sec. V for the case of a weakly-correlated layered metal. We then conclude the paper with a general discussion of the results in Sec. VI. ## II Retardation effects and transverse/longitudinal mixing from Maxwell's equations To outline the physical mechanism behind the propagation of e.m. modes in a layered metal, which will be addressed in the rest of the paper by using a general many-body formalism, we start from the framework of classical Maxwell's equations, connecting the density \(\rho\) and current \({\bf J}\) fluctuations to the electric \({\bf E}\) and magnetic \({\bf B}\) fields. The former obeys Gauss's law and Faraday's law[50] \[\mathbf{\nabla}\cdot{\bf E}({\bf r},t)=4\pi\rho({\bf r},t),\quad\mathbf{\nabla}\times{\bf E}({\bf r},t)=-\frac{1}{c}\frac{\partial{\bf B}}{\partial t}({\bf r},t), \tag{1}\] where \(c\) is the light velocity in vacuum, while the magnetic field is defined by the divergence-free condition along with the Ampere-Maxwell equation \[\mathbf{\nabla}\cdot{\bf B}({\bf r},t)=0,\quad\mathbf{\nabla}\times{\bf B}({\bf r},t)=\frac{4\pi}{c}{\bf J}({\bf r},t)+\frac{1}{c}\frac{\partial{\bf E}}{\partial t}({\bf r},t). \tag{2}\]
The above equations can also be formulated in terms of the scalar \(\phi\) and vector \({\bf A}\) potentials as \[{\bf E}({\bf r},t)=-\mathbf{\nabla}\phi({\bf r},t)-\frac{1}{c}\frac{\partial{\bf A}}{\partial t}({\bf r},t),\ {\bf B}({\bf r},t)=\mathbf{\nabla}\times{\bf A}({\bf r},t), \tag{3}\] which make explicit the fact that while \({\bf B}\) is always a transverse field, the electric field \({\bf E}={\bf E}_{L}+{\bf E}_{T}\) has in general both a longitudinal \(\mathbf{\nabla}\times{\bf E}_{L}={\bf 0}\) and a transverse \(\mathbf{\nabla}\cdot{\bf E}_{T}=0\) component. As discussed in standard textbooks[51], the \({\bf E}_{T}\) component, which according to Faraday's law (1) is induced by a time-varying magnetic field, is the main source of retardation effects, i.e. it mediates an interaction that takes a finite amount of time \(\Delta t\equiv|{\bf r}-{\bf r}^{\prime}|/c\) to propagate to the point \(({\bf r},t)\) from the source at \(({\bf r}^{\prime},t-|{\bf r}-{\bf r}^{\prime}|/c)\). This can be easily seen e.g. by using the Coulomb gauge \(\mathbf{\nabla}\cdot{\bf A}=0\), where \({\bf A}\equiv{\bf A}_{T}\) is purely transverse. In this gauge, which we will also use in the derivation below, the equations for the scalar and vector potentials decouple. The scalar potential satisfies the same Poisson equation \(\nabla^{2}\phi=-4\pi\rho\) of electrostatics, so that the longitudinal electric-field component \({\bf E}_{L}=-\nabla\phi\) is not retarded. However, in the same gauge the vector potential \({\bf A}_{T}\) satisfies an inhomogeneous d'Alembert equation with the transverse component of the current \({\bf J}_{T}\) as the only source, and whose solution is a retarded potential[51] \[{\bf A}({\bf r},t)=\frac{1}{c}\int d^{3}r^{\prime}\frac{{\bf J}_{T}({\bf r}^{\prime},t-|{\bf r}-{\bf r}^{\prime}|/c)}{|{\bf r}-{\bf r}^{\prime}|}. \tag{4}\] One then sees that the \({\bf E}_{T}\) component, expressed by Faraday's law as \({\bf E}_{T}=-(\partial{\bf A}_{T}/\partial t)/c\), accounts for retardation effects and vanishes as \(c\to\infty\): for this reason, such a feedback of the current fluctuations on the electric field is sometimes referred to as a "relativistic" effect. The previous discussion does not yet include the effect, specific to metals, of the current induced as a local response to an electric field. As we shall clarify, such an induced response is responsible for the mixing between longitudinal and transverse e.m. modes in a layered metal. We consider first an isotropic conducting medium where \({\bf J}=\sigma{\bf E}\), with \(\sigma\) being a scalar conductivity, so that \({\bf J}\) and \({\bf E}\) are parallel, and we switch for convenience to a Fourier-space notation. Let us assume that the external perturbation induces a finite charge fluctuation \(\rho\), which in turn induces a longitudinal electric field with magnitude \(|{\bf E}_{L}|=(4\pi/|{\bf q}|)\rho\), as given by Gauss's law. Since \({\bf J}\) is parallel to \({\bf E}\), the field \({\bf E}_{L}\) due to \(\rho\) can only induce a longitudinal current \({\bf J}_{L}\). This implies that we cannot have any source for the magnetic field in the Ampere-Maxwell equation nor, therefore, any finite transverse field \({\bf E}_{T}\) from Faraday's law.
Finally, by approximating the conductivity with the Drude model at frequencies larger than the inverse electronic scattering rate as \(\sigma\simeq-\omega_{p}^{2}/(i4\pi\omega)\), \(\omega_{p}\equiv\sqrt{4\pi e^{2}n/m}\) being the 3D plasma frequency, and using the continuity equation \(\partial_{t}\rho=-\mathbf{\nabla}\cdot{\bf J}_{L}\), we end up with a closed equation for \({\bf E}_{L}\): \[\left(1-\frac{\omega_{p}^{2}}{\omega^{2}}\right)|{\bf E}_{L}({\bf q},\omega)|=0. \tag{5}\] The solutions of Eq. (5) with \(|{\bf E}_{L}|\neq 0\) require \(\omega=\omega_{p}\), which is the (dispersionless) expression for the longitudinal plasma mode in an isotropic conductor. In isotropic systems retardation effects do not affect longitudinal excitations, i.e. plasmons, since a longitudinal electric field never induces a transverse current as a source of a magnetic field. Conversely, a transverse current perturbation, induced in the metal in response to external transverse waves in the vacuum, does not induce any longitudinal response, making the polariton propagation independent of the plasmon. In layered materials the situation is radically different. The electronic excitations in these systems can be modelled, in first approximation, with anisotropic effective masses for propagation in the planes or perpendicular to them, i.e. \(m_{xy}\neq m_{z}\). This results in an anisotropy of the conductivity tensor given, in Cartesian coordinates (restricting for simplicity to the \(xz\) plane containing the wavevector), by \(\hat{\sigma}=\begin{pmatrix}\sigma_{xy}&0\\ 0&\sigma_{z}\end{pmatrix}\), with \(\sigma_{xy/z}\simeq-\omega_{xy/z}^{2}/(i4\pi\omega)\), \(\omega_{xy/z}\equiv\sqrt{4\pi e^{2}n/m_{xy/z}}\) being the plasma frequency along the \(xy\) plane/\(z\) axis, so that in general \({\bf J}\) and \({\bf E}\) are no longer parallel. For a perturbation with momentum \({\bf q}\) forming a generic angle \(\eta\) with the \(z\)-axis we obtain, by simple rotation to the longitudinal/transverse basis, the general relation between the current and the electric field[52] as \[\begin{pmatrix}\mathbf{J}_{L}\\ \mathbf{J}_{T}\end{pmatrix}=\begin{pmatrix}\sigma_{L}&\sigma_{mix}\\ \sigma_{mix}&\sigma_{T}\end{pmatrix}\begin{pmatrix}\mathbf{E}_{L}\\ \mathbf{E}_{T}\end{pmatrix} \tag{6}\] where \(\sigma_{L/T}=\left(\sigma_{xy}q_{xy/z}^{2}+\sigma_{z}q_{z/xy}^{2}\right)/|\mathbf{q}|^{2}\) is the longitudinal/transverse part of the conductivity tensor and the off-diagonal element is defined as \[\sigma_{mix} = -\left(\sigma_{xy}-\sigma_{z}\right)\frac{q_{x}q_{z}}{|\mathbf{q}|^{2}} = -\frac{\sigma_{xy}-\sigma_{z}}{2}\sin(2\eta). \tag{7}\] Therefore, if we now introduce, as in the isotropic case, a charge-density perturbation that induces a longitudinal electric field, a transverse current \(\mathbf{J}_{T}=\sigma_{mix}\mathbf{E}_{L}\) is also produced. \(\mathbf{J}_{T}\) acts as a source for the magnetic field in the Ampere-Maxwell equation, \(\mathbf{B}=4\pi/(ic|\mathbf{q}|)\mathbf{J}_{T}=4\pi/(ic|\mathbf{q}|)\sigma_{mix}\mathbf{E}_{L}\), where to a first approximation we neglected the contribution of the displacement current. In the end, a _transverse_ electric field \(\mathbf{E}_{T}=(\omega/c|\mathbf{q}|)\mathbf{B}\), as prescribed by Faraday's law, appears in response to a _longitudinal_ perturbation.
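As a quick numerical aside (not part of the original derivation; all parameter values are illustrative placeholders), the following sketch checks the rotation leading to Eqs. (6)-(7) by projecting the Cartesian conductivity tensor onto the longitudinal direction \(\hat{\bf q}\) and the transverse direction \(\hat{\bf q}\times\hat{\bf y}\), whose sign convention reproduces \(\sigma_{mix}\); it then evaluates, anticipating the crossover momentum \(q_{c}\) introduced in Eq. (9) below, the order of magnitude of the mixing for the representative cuprate numbers discussed in the text.

```python
# A minimal sketch (illustrative placeholder values throughout), assuming
# a wavevector q in the xz plane at angle eta from the z-axis.
import numpy as np

# --- Part 1: rotation of the conductivity tensor, Eqs. (6)-(7) ---
sigma_xy, sigma_z = 5.0, 0.1                    # in-plane / out-of-plane conductivities
eta = 0.6                                       # angle between q and the z-axis (rad)

q_hat = np.array([np.sin(eta), np.cos(eta)])    # longitudinal unit vector
t_hat = np.array([-np.cos(eta), np.sin(eta)])   # transverse unit vector (q x y convention)
sigma_cart = np.diag([sigma_xy, sigma_z])       # Cartesian (x, z) block

print(np.isclose(q_hat @ sigma_cart @ q_hat,
                 sigma_xy*np.sin(eta)**2 + sigma_z*np.cos(eta)**2))   # sigma_L
print(np.isclose(t_hat @ sigma_cart @ t_hat,
                 sigma_xy*np.cos(eta)**2 + sigma_z*np.sin(eta)**2))   # sigma_T
print(np.isclose(t_hat @ sigma_cart @ q_hat,
                 -(sigma_xy - sigma_z)*np.sin(2*eta)/2))              # sigma_mix, Eq. (7)

# --- Part 2: crossover momentum and mixing ratio, Eqs. (8)-(9) below ---
hbar_c = 197.0                         # eV nm
w_xy, w_z = 1.0, 1e-3                  # plasma energies in eV (cuprate-like)
q_c = np.sqrt(w_xy**2 - w_z**2) / hbar_c
print(f"q_c ~ {q_c:.1e} nm^-1")        # ~5e-3 nm^-1, i.e. a few um^-1
for q in (0.1, 1e-3):                  # RIXS-like vs THz-like momenta, nm^-1
    print(f"q={q:.0e} nm^-1 -> |E_T|/|E_L| ~ {(q_c/q)**2/2:.1e} (eta = pi/4)")
```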
The relative magnitude of the two components, \(|\mathbf{E}_{T}|/|\mathbf{E}_{L}|\), is approximately given by \[\frac{|\mathbf{E}_{T}|}{|\mathbf{E}_{L}|}=\frac{4\pi\omega}{c^{2}|\mathbf{q}|^{2}}|\sigma_{mix}|\simeq\frac{q_{c}^{2}}{|\mathbf{q}|^{2}}\frac{\sin(2\eta)}{2}, \tag{8}\] where we defined the momentum \(q_{c}\) as \[q_{c}\equiv\frac{\sqrt{\omega_{xy}^{2}-\omega_{z}^{2}}}{c}. \tag{9}\] A better estimate of the ratio between the longitudinal and transverse components within Maxwell's formalism is presented in Appendix A, and an analogous one will be derived below within the many-body approach. Nonetheless, Eq. (8) already gives an idea of the mechanism at play in layered systems. First of all, Eq. (8) is zero for purely in-plane (\(\eta=\pi/2\)) or out-of-plane propagation (\(\eta=0\)), showing that no mixing occurs in these cases. At a generic angle, according to the same equation, when \(|\mathbf{q}|\gg q_{c}\) one can neglect the induced transverse electric field, and thus retardation effects, so that the plasmon decouples from the polariton and one recovers the result of the isotropic case. In the many-body language, we expect this to be the regime where transverse current fluctuations induced by a density perturbation are negligible. Conversely, when \(|\mathbf{q}|\sim q_{c}\) one must account for retardation effects, and longitudinal and transverse modes become intrinsically mixed. Eq. (8) allows one to estimate the relevance of such effects depending on the probe under consideration, which sets the value of the momentum \(\mathbf{q}\). In systems like cuprates typical values of the plasma frequencies are \(\omega_{xy}\sim 1\) eV and \(\omega_{z}\sim 10^{-3}\omega_{xy}\). Using \(\hbar c\sim 1.9\cdot 10^{2}\) eV nm one obtains that the largest value of the crossover momentum is \(q_{c}\sim 10^{-3}-10^{-2}\) nm\({}^{-1}\). At present the typical momentum resolution of RIXS does not exceed \(\sim 0.1\) nm\({}^{-1}\), and it can be even larger for EELS, thus pushing the measurements into a regime where retardation effects cannot be appreciated. Conversely, for light propagation the momentum is set by the frequency of the probe, being \(q=\omega/c\). It then turns out that the maximum value of \(|\mathbf{E}_{L}|/|\mathbf{E}_{T}|\) scales as \(\sqrt{\omega_{xy}^{2}-\omega_{z}^{2}}/\omega\simeq\omega_{xy}/\omega\), where \(\omega\) is the frequency of the e.m. radiation. One then understands why the mixing is crucial for \(\omega\) of the order of a few THz (1 THz \(\simeq 4.1\) meV), as indeed discussed within the context of layered superconductors[30; 31; 39; 40; 43; 44; 45; 46; 47]. In the next sections the above results will be derived within a many-body approach to linear-response theory, with the aim of providing a general structure of the density response in a layered metal that includes retardation effects when needed, and that can be extended to the case of correlated metals, where short-range interactions also play a crucial role in determining the electronic response. ## III Response functions for the isotropic system ### Path-integral approach to linear-response theory In order to provide a pedagogical illustration of the formalism, we consider the case of an isotropic free-electron gas, which is widely discussed in textbooks in the context of the many-body Green's-function formalism [5; 53]. For the sake of simplicity, we put \(\hbar=k_{B}=1\) in the following.
We start from the imaginary-time action for the non-interacting electrons, which reads, in real and Fourier space respectively, \[S_{0}[\overline{\psi},\psi] = \sum_{\sigma}\int_{0}^{\frac{1}{T}}d\tau\int d\mathbf{r}\,\overline{\psi}_{\sigma}(\mathbf{r},\tau)\left(\partial_{\tau}-\frac{\mathbf{\nabla}^{2}}{2m}-\mu\right)\psi_{\sigma}(\mathbf{r},\tau) = -\sum_{k}\sum_{\sigma}\overline{\psi}_{\sigma}(k)\mathcal{G}_{0}^{-1}(k)\psi_{\sigma}(k), \tag{10}\] where \(\sigma\) is the spin index and \(k\) is a shortcut for the momentum \(\mathbf{k}\) and the fermionic Matsubara frequency \(\omega_{l}=(2l+1)\pi T\). In the last row of Eq. (10) we introduced the free-electron Matsubara Green's function \(\mathcal{G}_{0}(k)=1/\left(i\omega_{l}-\xi_{\mathbf{k}}\right)\), where \(\xi_{\mathbf{k}}=|\mathbf{k}|^{2}/(2m)-\mu\) is the free-electron energy dispersion with respect to the chemical potential \(\mu\), \(m\) being the effective mass of the electron. Since we are interested in computing the electromagnetic (e.m.) response, we introduce the scalar and vector potentials \(\phi\) and \(\mathbf{A}\) associated with the e.m. fields by means of the usual minimal-coupling substitution on both time and space derivatives [53; 54], \[i\partial_{\mu}\to i\partial_{\mu}-\frac{e}{c}A_{\mu}, \tag{11}\] where \(e>0\) is the absolute value of the electron charge, \(\partial_{\mu}=(\partial_{t},\mathbf{\nabla})\) is the 4-gradient operator and we introduced \(A^{\mu}=(c\phi,\mathbf{A})\) and \(A_{\mu}=(-c\phi,\mathbf{A})\) as the contravariant and covariant 4-potentials, respectively. Eq. (11) ensures that the total action exhibits invariance under simultaneous gauge transformations of the fermionic and e.m. potentials, i.e. \[\begin{cases}&\psi(\mathbf{r},t)\rightarrow\psi(\mathbf{r},t)\exp\left(-\frac{ie}{c}\lambda(\mathbf{r},t)\right),\\ &\\ &A_{\mu}(\mathbf{r},t)\to A_{\mu}(\mathbf{r},t)+\partial_{\mu}\lambda(\mathbf{r},t),\end{cases} \tag{12}\] where \(\lambda\) is an arbitrary function. In the imaginary-time formalism where \(it\rightarrow\tau\) one equivalently replaces \(\partial_{\tau}\rightarrow\partial_{\tau}-e\phi\). Since the charge density and current are defined, as usual, as functional derivatives of the action with respect to the e.m. potentials, one can express the induced 4-current \(J^{\mu}=(\rho,\mathbf{J})\) (\(\rho\) and \(\mathbf{J}\) being, respectively, the induced density and current) in response to an external source field in linear-response theory as \[J^{\mu}(q)=-\frac{e^{2}}{c}K^{\mu\nu}(q)A_{\nu}(q), \tag{13}\] where \(q=(\mathbf{q},i\Omega_{n})\) is a compact shortcut notation for the momentum \(\mathbf{q}\) and the bosonic Matsubara frequency \(i\Omega_{n}=i2n\pi T\), and the response function \(K^{\mu\nu}\) can be readily obtained as \[K^{\mu\nu}(q)=-\frac{c^{2}}{e^{2}}\frac{\delta^{2}\ln Z[A]}{\delta A_{\mu}(q)\delta A_{\nu}(-q)}|_{A_{\mu},A_{\nu}=0}, \tag{14}\] where the partition function reads \(Z[A]=\int\mathcal{D}[\psi,\overline{\psi}]e^{-S[\psi,\overline{\psi},A_{\mu}]}\), \(S\) being the imaginary-time action describing the quantum dynamics of the fermions in the presence of the e.m. fields. In a charged system the e.m. fields induce charge and current fluctuations within the medium that should be included in the density response, which is the main focus of the present work. In the usual perturbative approach one accounts for this effect by adding to the electronic Hamiltonian an interaction term describing density-density or current-current interactions.
Here we will follow a different but completely equivalent approach, by making an explicit distinction between the internal statistical potentials, which, from now on, will be denoted by \(A_{\mu}=(-c\phi,\mathbf{A})\), and the auxiliary external "source" fields, denoted by \(A_{\mu}^{ext}=(-c\phi^{ext},\mathbf{A}^{ext})\). In this case one can define the response to the external perturbation as \[K_{ext}^{\mu\nu}(q)=-\frac{c^{2}}{e^{2}}\frac{\delta^{2}\ln Z[A^{ext}]}{\delta A_{\mu}^{ext}(q)\delta A_{\nu}^{ext}(-q)}|_{A_{\mu}^{ext},A_{\nu}^{ext}=0}, \tag{15}\] where \(Z[A^{ext}]=\int\mathcal{D}[\psi,\overline{\psi},A]e^{-S[\psi,\overline{\psi},A,A^{ext}]}\). The integration over the internal e.m. degrees of freedom will account for the e.m. interaction among the electrons, and the 4-current (13) will be the response to the external perturbation. In order to highlight the role of the e.m. interactions and to make a direct analogy with known results, let us first neglect the effect of the internal e.m. fields and just compute the response to the external sources. Once \(A^{ext}\) is introduced by means of the prescription (11) we get the action \[S[\overline{\psi},\psi,A^{ext}]=S_{0}[\overline{\psi},\psi]+S_{el+e.m.}[\overline{\psi},\psi,A^{ext}], \tag{16}\] where the coupling between the electrons and the auxiliary e.m. fields is encoded into \(S_{el+e.m.}\), which reads \[S_{el+e.m.}[\overline{\psi},\psi,A^{ext}] = \frac{e}{c}\sum_{\sigma}\sqrt{\frac{T}{V}}\sum_{k,k^{\prime}}\overline{\psi}_{\sigma}(k)\psi_{\sigma}(k^{\prime})s^{\mu}(\mathbf{k},\mathbf{k^{\prime}})A_{\mu}^{ext}(k-k^{\prime})+\frac{e^{2}}{2mc^{2}}\sum_{\sigma}\frac{T}{V}\sum_{k,k^{\prime},q}\overline{\psi}_{\sigma}(k)\psi_{\sigma}(k^{\prime})\mathbf{A}^{ext}(k-k^{\prime}+q)\cdot\mathbf{A}^{ext}(-q), \tag{17}\] where \(s^{\mu}(\mathbf{k},\mathbf{k^{\prime}})=\left(1,\left(\mathbf{k}+\mathbf{k^{\prime}}\right)/(2m)\right)\) is the density-current vertex. Since Eq. (16) is quadratic in the fermionic fields, they can be integrated out exactly, leading to an effective action \(S_{eff}[A^{ext}]\) which includes all powers of \(A^{ext}\). However, in order to compute the response function \(K^{\mu\nu}\) through Eq. (15) it is sufficient to retain terms quadratic in the external fields, i.e. to define an effective Gaussian action \(S_{G}\), such that \[S_{G}[A^{ext}]=\frac{e^{2}}{2c^{2}}\sum_{q}A_{\mu}^{ext}(q)K^{\mu\nu}(q)A_{\nu}^{ext}(-q), \tag{18}\] so that \(Z=e^{-S_{G}}\) and \(K^{\mu\nu}(q)\equiv\frac{c^{2}}{e^{2}}\frac{\delta^{2}S_{G}[A^{ext}]}{\delta A_{\mu}^{ext}(q)\delta A_{\nu}^{ext}(-q)}\), as a direct consequence of Eq. (15). It is worth noting that the response functions \(K^{\mu\nu}\) cannot be independent of each other. Indeed, Eq. (18), which depends on \(A_{\mu}^{ext}\) only, must still be invariant with respect to the second transformation of Eq. (12), which reads, in Fourier space, \[A_{\mu}^{ext}(q)\to A_{\mu}^{ext}(q)+iq_{\mu}\lambda(q), \tag{19}\] where the 4-momentum is defined as \(q_{\mu}=(-i\Omega_{n},\mathbf{q})\). Gauge invariance requires that any additional term introduced into the action (18) by Eq. (19), i.e. the terms proportional to \(iq_{\mu}K^{\mu\nu}A_{\nu}^{ext}\), \(iA_{\mu}^{ext}K^{\mu\nu}q_{\nu}\) and \(q_{\mu}K^{\mu\nu}q_{\nu}\), must vanish. This is guaranteed only if the linear-response functions obey the following gauge-invariance conditions: \[q_{\mu}K^{\mu\nu}(q)=0,\qquad K^{\mu\nu}(q)q_{\nu}=0. \tag{20}\]
In the following we will check that both in the isotropic and in the anisotropic case such conditions are satisfied. Let us then briefly recall the result obtained without the contribution of the internal e.m. fields, which is equivalent to the standard "bare" response of the non-interacting electron gas. It is straightforward to show that in this case the action (18) has the form \[S_{G}^{0}[A^{ext}] = \frac{e^{2}}{2c^{2}}\sum_{q}A_{\mu}^{ext}(q)\chi_{0}^{\mu\nu}(q)A_{\nu}^{ext}(-q) \tag{21}\] where the bare susceptibilities \(\chi_{0}^{\mu\nu}\) are the standard imaginary-time linear-response functions of the free-electron gas[5], i.e. \[\chi_{0}^{\mu\nu}(q)=\frac{n}{m}\delta^{\mu\nu}\left(1-\delta^{\mu 0}\right)+\tilde{\chi}_{0}^{\mu\nu}(q). \tag{22}\] They are given by the sum of a diamagnetic-like term, i.e. the first one, and a paramagnetic-like one \(\tilde{\chi}_{0}^{\mu\nu}\), which is given by[5] \[\tilde{\chi}_{0}^{\mu\nu}(q) = \frac{2T}{V}\sum_{k}\gamma^{\mu}\left({\bf k},{\bf q}\right)\gamma^{\nu}\left({\bf k},{\bf q}\right){\cal G}_{0}(k+q){\cal G}_{0}(k) = \frac{2}{V}\sum_{\bf k}\gamma^{\mu}\left({\bf k},{\bf q}\right)\gamma^{\nu}\left({\bf k},{\bf q}\right)\frac{f(\xi_{\bf k})-f(\xi_{\bf k+q})}{i\Omega_{n}-\left(\xi_{\bf k+q}-\xi_{\bf k}\right)} \tag{23}\] where the overall factor 2 accounts for the spin degeneracy and \(f(\xi)=1/\left(\exp\left(\xi/T\right)+1\right)\) is the Fermi distribution. In Eq. (22), \(n=2(T/V)\sum_{i\omega_{l},{\bf k}}{\cal G}_{0}(i\omega_{l},{\bf k})e^{-i\omega_{l}0^{-}}=2/V\sum_{k}f(\xi_{\bf k})\) is the density of electrons (with spin degeneracy included). Also, in Eq. (23), the density-current vertex \(\gamma^{\mu}\) is now defined as \(\gamma^{\mu}({\bf k},{\bf q})=\left(1,\left({\bf k}+{\bf q}/2\right)/m\right)\). It is easy to prove that the bare response functions indeed fulfill the gauge-invariance conditions prescribed by Eq. (20). To this aim, we notice that the isotropic bare density-current function is always longitudinal, i.e. parallel to \({\bf q}\), since[5] \[\chi_{0}^{0i}(q)=\frac{i\Omega_{n}q_{i}}{|{\bf q}|^{2}}\chi_{0}^{00}(q) \tag{24}\] and the isotropic current-current function always allows for the following longitudinal-transverse decomposition[5]: \[\chi_{0}^{ij}(q)=\frac{(i\Omega_{n})^{2}}{|{\bf q}|^{2}}\chi_{0}^{00}(q)\left(\hat{P}_{L}({\bf q})\right)^{ij}+\chi_{0}^{T}(q)\left(\hat{P}_{T}({\bf q})\right)^{ij}. \tag{25}\] In Eq. (25) \(\chi_{0}^{T}=(1/2)\left(\hat{P}_{T}\right)_{ij}\chi_{0}^{ji}\) is the transverse part of \(\chi_{0}^{ij}\), and \(\hat{P}_{L}\) and \(\hat{P}_{T}\) are, respectively, the longitudinal and transverse projection operators, defined as \[\left(\hat{P}_{L}({\bf q})\right)_{ij}=\frac{q_{i}q_{j}}{|{\bf q}|^{2}},\quad\left(\hat{P}_{T}({\bf q})\right)_{ij}=\delta_{ij}-\frac{q_{i}q_{j}}{|{\bf q}|^{2}}. \tag{26}\] Given the two identities (24) and (25), it is trivial to prove that \(\chi_{0}^{\mu\nu}\) indeed satisfies the gauge-invariance conditions given by Eq. (20), i.e. \[q_{\mu}\chi_{0}^{\mu\nu}(q)=0\qquad\chi_{0}^{\mu\nu}(q)q_{\nu}=0. \tag{27}\] For instance, by virtue of Eqs. (24) and (25), we have that \(-i\Omega_{n}\chi_{0}^{00}+\chi_{0}^{0i}q_{i}=-i\Omega_{n}\chi_{0}^{00}+i\Omega_{n}\chi_{0}^{00}q_{i}q_{i}/|{\bf q}|^{2}=0\) and \(-i\Omega_{n}\chi_{0}^{0i}+\chi_{0}^{ji}q_{j}=-i\Omega_{n}\chi_{0}^{0i}+(i\Omega_{n}/|{\bf q}|)^{2}\chi_{0}^{00}q_{i}=0\), which are, respectively, the time-like (\(\nu=0\)) and the space-like (\(\nu=i\)) components of Eq. (27).
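As a simple numerical consistency check (a sketch with arbitrary placeholder values, not part of the paper), one can build the \(4\times 4\) bare response \(\chi_{0}^{\mu\nu}\) from the decompositions (24)-(25) and verify the gauge-invariance conditions (27) directly:

```python
# Build chi_0^{mu nu} of the isotropic system from Eqs. (22)-(26) with
# placeholder values for chi_0^{00} and chi_0^T, and verify Eq. (27):
# q_mu chi_0^{mu nu} = 0 = chi_0^{mu nu} q_nu, with q_mu = (-i Omega_n, q).
import numpy as np

rng = np.random.default_rng(0)
q_vec   = rng.normal(size=3)            # spatial momentum q
i_Omega = 1j * 0.7                      # bosonic Matsubara frequency i*Omega_n
chi00   = 0.3 + 0.1j                    # placeholder chi_0^{00}(q)
chiT    = -0.2 + 0.05j                  # placeholder chi_0^T(q)

q2  = q_vec @ q_vec
P_L = np.outer(q_vec, q_vec) / q2       # longitudinal projector, Eq. (26)
P_T = np.eye(3) - P_L                   # transverse projector, Eq. (26)

chi = np.zeros((4, 4), dtype=complex)
chi[0, 0]   = chi00
chi[0, 1:]  = i_Omega * q_vec / q2 * chi00                   # Eq. (24)
chi[1:, 0]  = chi[0, 1:]
chi[1:, 1:] = (i_Omega**2 / q2) * chi00 * P_L + chiT * P_T   # Eq. (25)

q_mu = np.concatenate(([-i_Omega], q_vec))  # 4-momentum (-i Omega_n, q)
print(np.allclose(q_mu @ chi, 0))           # q_mu chi_0^{mu nu} = 0
print(np.allclose(chi @ q_mu, 0))           # chi_0^{mu nu} q_nu = 0
```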
### Linear-response theory in the presence of electromagnetic interaction: isotropic systems Let us now show how the bare electronic response is dressed by the integration of the internal e.m. fields. To this aim, we couple the electrons to both internal and external fields by means of the minimal-coupling substitution (11), and we add the e.m. action of the internal fields, which reads, in real and Fourier space respectively[53; 54], \[S_{\rm e.m.}[A] = \int d\tau d{\bf r}\left[\frac{(\mathbf{\nabla}\times{\bf A})^{2}}{8\pi}-\frac{\varepsilon}{8\pi}\left(\frac{i\partial_{\tau}{\bf A}}{c}+\mathbf{\nabla}\phi\right)^{2}\right] = \frac{\varepsilon}{8\pi}\sum_{q}\left[-|{\bf q}|^{2}|\phi(q)|^{2}+\left(\Omega_{n}^{2}+\frac{c^{2}}{\varepsilon}|{\bf q}|^{2}\right)\frac{|{\bf A}_{T}(q)|^{2}}{c^{2}}+\Omega_{n}^{2}|{\bf A}_{L}(q)|^{2}+i\Omega_{n}{\bf q}\cdot\left(\phi(q)\frac{{\bf A}(-q)}{c}+\phi(-q)\frac{{\bf A}(q)}{c}\right)\right]. \tag{28}\] Eq. (28) is the transcription in imaginary time of the usual Lagrangian density \((-\varepsilon|{\bf E}|^{2}+|{\bf B}|^{2})/(8\pi)\), where \({\bf B}=\mathbf{\nabla}\times{\bf A}\) is the magnetic field and the electric field \({\bf E}\) reads \({\bf E}=-(i/c)\partial_{\tau}{\bf A}-\mathbf{\nabla}\phi\), with \(\varepsilon\) being a background dielectric constant which accounts for ionic screening. It is worth noting that in order to have a definition of \({\bf E}\) analogous to the one valid in real time one should assume that \(\phi\) is purely imaginary, i.e. one should replace \(\phi\to i\phi\). In this case, by defining the imaginary-time electric field as \({\bf E}\equiv-\frac{1}{c}\partial_{\tau}{\bf A}-\mathbf{\nabla}\phi\), the action for the free e.m. fields would read \(\frac{\varepsilon|{\bf E}|^{2}+|{\bf B}|^{2}}{8\pi}\). Such a rescaling of the scalar potential would also make the quadratic term in the scalar potential arising from \((\mathbf{\nabla}\phi)^{2}\) positive definite, as required to perform the Gaussian integration. To make the notation more compact we will not explicitly rescale the potential in what follows, but we will implicitly assume that a formal definition of the Gaussian integration in the imaginary-time formalism requires such a regularization. Finally, in Eq. (28) we introduced explicitly the longitudinal-transverse decomposition \({\bf A}={\bf A}_{L}+{\bf A}_{T}\) for the vector potential, where \({\bf A}_{L}=\hat{\bf q}\left(\hat{\bf q}\cdot{\bf A}\right)\) is the longitudinal part, such that \({\bf q}\times{\bf A}_{L}={\bf 0}\), and \({\bf A}_{T}={\bf A}-{\bf A}_{L}=(\hat{\bf q}\times{\bf A})\times\hat{\bf q}\) is the transverse part, obeying \({\bf q}\cdot{\bf A}_{T}=0\). This allows one to clearly identify the bare propagators for the internal gauge fields. Indeed, the coefficient of the quadratic term in \(\phi\) can be recast as \(-e^{2}/\left(2V_{C}(\mathbf{q})\right)\), where \[V_{C}(\mathbf{q})=\frac{4\pi e^{2}}{\varepsilon|\mathbf{q}|^{2}} \tag{29}\] is the Coulomb potential, while \(-\left(e/c\right)^{2}/(2D_{T})\) is the coefficient of the quadratic term in the transverse gauge field \(\mathbf{A}_{T}\), with \[D_{T}(q)=\frac{4\pi e^{2}/\varepsilon}{\left(i\Omega_{n}\right)^{2}-\tilde{c}^{2}|\mathbf{q}|^{2}}, \tag{30}\] being the transverse propagator.
The poles of Eq. (30) yield, after the analytic continuation \(i\Omega_{n}\rightarrow\omega+i0^{+}\), the light dispersion \(\omega=\tilde{c}|\mathbf{q}|\), where \(\tilde{c}=c/\sqrt{\varepsilon}\) is the renormalized light velocity. Moreover, the last line of Eq. (28) shows that the scalar potential only couples to the longitudinal component of \(\mathbf{A}\), so that for isotropic systems the Coulomb gauge \(\nabla\cdot\mathbf{A}=0\), i.e. \(\mathbf{A}_{L}=0\), allows one to completely decouple the scalar and the vector potentials. Since in this gauge the longitudinal part of the electric field \(\mathbf{E}_{L}=-\nabla\phi\) is controlled by the scalar potential only, one also achieves for the isotropic system a complete decoupling between transverse and longitudinal degrees of freedom. However, as we shall see in the next section, in the anisotropic case such a decoupling is not possible even in the Coulomb gauge. It is then more convenient for what follows to rewrite in Fourier space the \((\nabla\times\mathbf{A})^{2}\) term as \(|\mathbf{q}\times A(\mathbf{q})|^{2}=|\mathbf{q}|^{2}|A(\mathbf{q})|^{2}-|\mathbf{q}\cdot A(\mathbf{q})|^{2}\), so that Eq. (28) reads \[S_{e.m.}[A] = \frac{e^{2}}{2}\sum_{q}\left[-\frac{|\phi(q)|^{2}}{V_{C}(\mathbf{q})}-\frac{|\mathbf{A}(q)|^{2}/c^{2}}{D_{T}(q)}-\frac{1}{4\pi e^{2}}|\mathbf{q}\cdot\mathbf{A}(q)|^{2}+\frac{\varepsilon}{4\pi e^{2}}i\Omega_{n}\mathbf{q}\cdot\left(\phi(q)\frac{\mathbf{A}(-q)}{c}+\phi(-q)\frac{\mathbf{A}(q)}{c}\right)\right]. \tag{31}\] Once the coupling of \(A_{\mu}\) to the fermionic fields is introduced according to Eq. (11), we obtain the total action \(S[\overline{\psi},\psi,A+A^{ext}]+S_{e.m.}[A]\), where \(S\) is given by Eq. (16), apart from the fact that it now depends on the total field \(A_{\mu}(q)+A_{\mu}^{ext}(q)\). Then, in full analogy with the free-electron case, we can integrate out the fermionic fields, which still appear at quadratic order. The result of the integration is twofold: on one side we recover the effect of matter on the bare e.m. response, and on the other we describe the perturbation due to the external source fields \(A_{\mu}^{ext}\). The Gaussian action for both the internal and the external e.m. fields is explicitly given by \[S_{G}^{iso}[A,A^{ext}] =\frac{e^{2}}{2c^{2}}\sum_{q}\left(A_{\mu}(q)+A_{\mu}^{ext}(q)\right)\chi_{0}^{\mu\nu}(q)\left(A_{\nu}(-q)+A_{\nu}^{ext}(-q)\right)+S_{e.m.}[A]\] \[=\frac{e^{2}}{2}\sum_{q}\left[\chi_{0}^{00}(q)|\phi^{ext}(q)|^{2}+\chi_{0}^{ij}(q)\frac{A_{i}^{ext}(q)}{c}\frac{A_{j}^{ext}(-q)}{c}-\chi_{0}^{0i}(q)\phi^{ext}(q)\frac{A_{i}^{ext}(-q)}{c}+c.c.\right. \tag{32a}\] \[+\chi_{0}^{00}(q)\phi^{ext}(q)\phi(-q)+\chi_{0}^{ij}(q)\frac{A_{i}^{ext}(q)}{c}\frac{A_{j}(-q)}{c}-\phi^{ext}(q)\chi_{0}^{0i}(q)\frac{A_{i}(-q)}{c}-\phi(q)\chi_{0}^{0i}(q)\frac{A_{i}^{ext}(-q)}{c}+c.c.\] (32b) \[+\left(\chi_{0}^{00}(q)-\frac{1}{V_{C}(\mathbf{q})}\right)|\phi(q)|^{2}+\left(\chi_{0}^{ij}(q)-\frac{1}{D_{T}(q)}\delta_{ij}\right)\frac{A_{i}(q)}{c}\frac{A_{j}(-q)}{c}-\phi(q)\chi_{0}^{0i}(q)\frac{A_{i}(-q)}{c}+c.c.\] (32c) \[+\left.\frac{\varepsilon}{4\pi e^{2}}i\Omega_{n}\mathbf{q}\cdot\left(\phi(q)\frac{\mathbf{A}(-q)}{c}+\phi(-q)\frac{\mathbf{A}(q)}{c}\right)-\frac{1}{4\pi e^{2}}\left(\mathbf{q}\cdot\mathbf{A}(q)\right)^{2}\right]. \tag{32d}\] As for the free-electron case, the invariance of the action (32) under local gauge transformations of both \(A_{\mu}\) and \(A_{\mu}^{ext}\) is ensured by Eq. (27) for the bare response functions \(\chi_{0}^{\mu\nu}\).
As a last step one integrates out the internal potential \(A_{\mu}\) in Eq. (32), which is equivalent to computing the response functions at the RPA level in the usual diagrammatic approach to fermionic models[5]. The inclusion in Eq. (32) of higher-order terms in \(A_{\mu}\) would yield, once integrated out, beyond-RPA corrections to the linear-response functions: these will not be addressed in the present work. Before solving the integral, one must fix the gauge for the internal potentials, in order to get rid of the divergence due to the redundancy of \(A_{\mu}\): indeed, there are infinitely many potentials \(A_{\mu}+iq_{\mu}\lambda\) accounting for the same physical configuration of the e.m. fields \(\mathbf{E}\) and \(\mathbf{B}\), and such an arbitrariness leads to a divergence of the functional integral. A proper gauge-fixing procedure prevents the Gaussian integral from being singular. For the isotropic case, the Coulomb gauge \(\mathbf{\nabla}\cdot\mathbf{A}=0\), i.e. \(q_{i}A_{i}=0\) in Fourier space, makes the computations rather straightforward. Indeed, since the bare isotropic density-current function (24) is always longitudinal, i.e. \(\chi_{0}^{0i}\propto q_{i}\), it follows that all the terms proportional to \(\chi_{0}^{0i}A_{i}\propto q_{i}A_{i}\) vanish in the Coulomb gauge. It then follows that the last two terms of Eq. (32b), the last term of Eq. (32c) and the whole Eq. (32d) cancel out, leading to a complete decoupling between the scalar and the vector potentials, which can be integrated out separately to obtain the full response functions \(\tilde{\chi}_{iso}^{\mu\nu}\) of the isotropic case. Integrating out \(\phi\) is then equivalent to the standard RPA dressing of the bare bubbles \(\chi_{0}^{00}\), \(\chi_{0}^{0i}\) and \(\chi_{0}^{ij}\) with respect to the Coulomb potential: \[\tilde{\chi}_{iso}^{00}(q)\equiv\chi_{RPA}^{00}(q)=\frac{\chi_{0}^{00}(q)}{1-V_{C}(\mathbf{q})\chi_{0}^{00}(q)}, \tag{33}\] \[\tilde{\chi}_{iso}^{0i}(q)\equiv\chi_{RPA}^{0i}(q)=\frac{\chi_{0}^{0i}(q)}{1-V_{C}(\mathbf{q})\chi_{0}^{00}(q)}, \tag{34}\] \[\chi_{RPA}^{ij}(q)=\chi_{0}^{ij}(q)+V_{C}(\mathbf{q})\frac{\chi_{0}^{i0}(q)\chi_{0}^{0j}(q)}{1-V_{C}(\mathbf{q})\chi_{0}^{00}(q)}. \tag{35}\] First of all, we notice that, thanks to the structure encoded into Eqs. (33)-(35) and to the relations (27), the standard-RPA response functions also obey the gauge-invariance conditions prescribed by Eq. (20), i.e. \(q_{\mu}\chi_{RPA}^{\mu\nu}=0\) and \(\chi_{RPA}^{\mu\nu}q_{\nu}=0\). This is a direct consequence of the fact that, in the current-current function given by Eq. (35), the standard-RPA correction has a purely longitudinal structure, i.e. it only renormalizes the longitudinal part \(\chi_{0}^{L}\) of the bare current function, which already satisfies the condition (20). This is in agreement with the observation made before that for the isotropic system the longitudinal degrees of freedom in the Coulomb gauge are fully described by the scalar potential. The most complete current-current function \(\tilde{\chi}_{iso}^{ij}\) is, in fact, obtained after the integration of \(\mathbf{A}\) as well, which only dresses the transverse sector \(\chi_{0}^{T}\) of \(\chi_{0}^{ij}\) with respect to the transverse propagator \(D_{T}\), due to the absence of a coupling term between \(\mathbf{A}\) and \(\phi^{ext}\).
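The purely longitudinal structure of the standard-RPA correction noted above can also be verified numerically. The following sketch (placeholder inputs throughout, not part of the paper) implements the dressing of Eqs. (33)-(35) and checks that the correction term in Eq. (35) is annihilated by any vector transverse to \({\bf q}\):

```python
# RPA dressing of Eqs. (33)-(35) with placeholder values; the correction
# in Eq. (35) is proportional to outer(q, q), hence purely longitudinal.
import numpy as np

rng = np.random.default_rng(1)
q_vec = rng.normal(size=3)
q2 = q_vec @ q_vec
i_Omega = 1j * 0.5
chi00 = 0.4 + 0.02j                      # placeholder bare chi_0^{00}
V_C = 12.0 / q2                          # Coulomb-like potential ~ 4 pi e^2/(eps q^2)

chi0i = i_Omega * q_vec / q2 * chi00     # bare chi_0^{0i}, Eq. (24)
den = 1.0 - V_C * chi00

chi00_RPA = chi00 / den                  # Eq. (33), kept for illustration
chi0i_RPA = chi0i / den                  # Eq. (34)
corr_ij = V_C * np.outer(chi0i, chi0i) / den   # RPA correction in Eq. (35)

t = np.cross(q_vec, rng.normal(size=3))  # some vector transverse to q
print(chi00_RPA)                         # dressed density-density function
print(np.allclose(corr_ij @ t, 0))       # True: purely longitudinal
print(np.allclose(t @ corr_ij, 0))       # True
```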
Once both integrations are carried out, one finds that the full current function \(\tilde{\chi}_{iso}^{ij}\) is given by \[\tilde{\chi}_{iso}^{ij}(q)=\tilde{\chi}_{iso}^{L}(q)\left(\hat{P}_{L}(\mathbf{q})\right)^{ij}+\tilde{\chi}_{iso}^{T}(q)\left(\hat{P}_{T}(\mathbf{q})\right)^{ij}, \tag{36}\] where \(\tilde{\chi}_{iso}^{L}=\chi_{RPA}^{L}=\left(i\Omega_{n}/|\mathbf{q}|\right)^{2}\tilde{\chi}_{iso}^{00}\) equals the longitudinal part of Eq. (35), while \(\tilde{\chi}_{iso}^{T}\) is given by \[\tilde{\chi}_{iso}^{T}(q)=\frac{\chi_{0}^{T}(q)}{1-D_{T}(q)\chi_{0}^{T}(q)}. \tag{37}\] Having computed the electronic response in the presence of internal e.m. fields, we now briefly recall the standard outcomes for the collective modes of an isotropic system, in which Eq. (13) reduces to the following two independent equations for the density \(\rho\) and the transverse current \(\mathbf{J}_{T}\): \[\rho(q)=\tilde{\chi}_{iso}^{00}(q)\phi^{ext}(q),\quad\mathbf{J}_{T}(q)=\tilde{\chi}_{iso}^{T}(q)\mathbf{A}_{T}^{ext}(q). \tag{38}\] We did not mention the equation for the longitudinal current \(\mathbf{J}_{L}=\tilde{\chi}_{iso}^{L}\mathbf{A}_{L}^{ext}\), since it carries the same information as the first of Eqs. (38), \(\tilde{\chi}_{iso}^{L}\) being proportional to \(\tilde{\chi}_{iso}^{00}\) and \(\mathbf{J}_{L}\) being related to \(\rho\) through the continuity equation. Within this context, the longitudinal plasmon and the transverse plasma polariton appear as poles of the density-density response \(\tilde{\chi}_{iso}^{00}\) and of the transverse current response \(\tilde{\chi}_{iso}^{T}\), respectively. At long wavelength we can derive an analytical expression for both modes by using the approximate behavior[5] of the Lindhard functions in the long-wavelength dynamical limit \(v_{F}|\mathbf{q}|\ll\omega\) (\(v_{F}\) being the Fermi velocity), such that \[\chi_{0}^{00}(q) \simeq \frac{n|\mathbf{q}|^{2}}{m\omega^{2}}\left(1+\frac{3}{5}\frac{v_{F}^{2}|\mathbf{q}|^{2}}{\omega^{2}}\right), \tag{39}\] \[\chi_{0}^{T}(q) \simeq \frac{n}{m}. \tag{40}\] As a consequence, for the longitudinal mode the pole of Eq. (33) gives \[1-V_{C}(\mathbf{q})\chi_{0}^{00}(q)=0\Longrightarrow\omega_{L}(\mathbf{q})=\omega_{p}\sqrt{1+v_{p}^{2}|\mathbf{q}|^{2}}, \tag{41}\] with \(v_{p}^{2}=3v_{F}^{2}/(5\omega_{p}^{2})\) setting the scale of the plasmon dispersion, where \(\omega_{p}\) is the 3D isotropic plasma frequency defined as \[\omega_{p}\equiv\sqrt{\frac{4\pi e^{2}n}{\varepsilon m}}. \tag{42}\] On the other hand, the plasma polariton is defined by the pole of Eq. (37), and it is given by \[1-D_{T}(q)\chi_{0}^{T}(q)=0\Longrightarrow\omega_{T}(\mathbf{q})=\sqrt{\omega_{p}^{2}+\tilde{c}^{2}|\mathbf{q}|^{2}}. \tag{43}\] Notice that terms of order \(v_{F}^{2}|\mathbf{q}|^{2}\) have been neglected in the dispersion of the polariton (43) since, as usual, \(v_{F}\ll\tilde{c}\), with \(\tilde{c}\sim 10^{8}\) m/s far bigger than the typical Fermi velocity \(v_{F}\sim 10^{6}\) m/s of an isotropic metal.
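As an illustration (a sketch in arbitrary units, with placeholder parameter values), the pole condition \(1-V_{C}\chi_{0}^{00}=0\) with the expansion (39) can be solved exactly for \(\omega^{2}\) and compared with the approximate dispersion (41), alongside the polariton branch (43):

```python
# Compare the exact root of the pole condition with Eq. (39) inserted,
# w^4 - w_p^2 w^2 - (3/5) w_p^2 v_F^2 q^2 = 0, against the leading-order
# plasmon dispersion of Eq. (41); also evaluate the polariton, Eq. (43).
import numpy as np

w_p, v_F, c_t = 1.0, 0.05, 1.0        # plasma freq., Fermi and light velocities (illustrative)
v_p2 = 3 * v_F**2 / (5 * w_p**2)

for q in (0.05, 0.2, 0.5):
    w2 = (w_p**2 + np.sqrt(w_p**4 + 4 * (3/5) * w_p**2 * v_F**2 * q**2)) / 2
    w_L_exact  = np.sqrt(w2)                              # exact root of the quartic
    w_L_approx = w_p * np.sqrt(1 + v_p2 * q**2)           # Eq. (41)
    w_T        = np.sqrt(w_p**2 + c_t**2 * q**2)          # Eq. (43)
    print(f"q={q}:  w_L(exact)={w_L_exact:.6f}  w_L(Eq.41)={w_L_approx:.6f}  w_T={w_T:.4f}")
```

The two plasmon values agree up to corrections of order \(|\mathbf{q}|^{4}\), consistent with the long-wavelength expansion used in the text.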
Better insight into the role of the plasmon in the density response is obtained by deriving the general expression for its spectral function \(S(q)\equiv-\operatorname{Im}\tilde{\chi}_{iso}^{00}(q)\) as \[S(q)=-\frac{\chi_{0}^{00}{}^{\prime\prime}(q)}{\left(1-V_{C}(\mathbf{q})\chi_{0}^{00}{}^{\prime}(q)\right)^{2}+\left(V_{C}(\mathbf{q})\chi_{0}^{00}{}^{\prime\prime}(q)\right)^{2}}, \tag{44}\] where the single and double primes denote, respectively, the real and the imaginary parts of the bare density bubble, and the overall minus sign is due to the fact that we are considering retarded response functions[5]. In the simplified case in which short-range interactions are negligible the long-wavelength dynamical bare density bubble has a vanishing imaginary part, i.e. \(\chi_{0}^{00^{\prime\prime}}\to 0^{-}\), due to the absence of particle-hole excitations at \(v_{F}|\mathbf{q}|\ll\omega\) in a free-electron gas[5; 38], while the real part can be once again expanded as \(\chi_{0}^{00^{\prime}}\simeq n|\mathbf{q}|^{2}/(m\omega^{2})\). One then finds that in this limit Eq. (44) displays a delta-like peak centered around \(\omega_{L}\), i.e. \[S(q)\simeq\pi I_{L}(\mathbf{q})\delta\left(\omega-\omega_{L}(\mathbf{q})\right), \tag{45}\] where the overall spectral weight is given by \[I_{L}(\mathbf{q})=\frac{\omega_{L}(\mathbf{q})}{2V_{C}(\mathbf{q})}. \tag{46}\] One then sees that the plasmon peak in the density response has vanishing spectral weight as \(\mathbf{q}\rightarrow\mathbf{0}\), as expected from charge conservation[5]. ## IV Response functions for a layered system So far we considered the case of an isotropic system: in this sense, we were allowed to treat the effective mass \(m\) as a scalar quantity. Here we will be interested instead in describing _layered_ materials, which are made of weakly coupled 2D conducting planes. Such a weak interaction between the layers strongly suppresses the out-of-plane transport: one can account for such an _anisotropy_ within an approximate free-electron continuum model with an effective mass \(m_{i}\) depending on the direction \(i\), with \(m_{i}=m_{xy}\) for \(i=x,y\) and \(m_{i}=m_{z}\) for \(i=z\), where \(m_{xy}<m_{z}\). Substituting the isotropic mass \(m\) with the index-dependent one \(m_{i}\) yields the _anisotropic_ bare Lindhard bubbles, which we denote here by \(\Pi_{0}^{\mu\nu}\). Their form can be easily derived from the known expansion of the isotropic case[5] by mapping the anisotropic electron gas with effective masses \(m_{xy}\) and \(m_{z}\) into a fictitious isotropic one with effective mass \(m^{*}\equiv\left(m_{xy}^{2}m_{z}\right)^{1/3}\). Such a procedure, which is shown in detail in Appendix B, leads to the following identities: \[\Pi_{0}^{00}(\mathbf{q},\omega)=\chi_{0*}^{00}(\tilde{\mathbf{q}},\omega) \tag{47}\] \[\Pi_{0}^{0i}(\mathbf{q},\omega) = \sqrt{\frac{m^{*}}{m_{i}}}\chi_{0*}^{0i}(\tilde{\mathbf{q}},\omega) = \sqrt{\frac{m^{*}}{m_{i}}}\frac{i\Omega_{n}\tilde{q}_{i}}{|\tilde{\mathbf{q}}|^{2}}\chi_{0*}^{00}(\tilde{\mathbf{q}},\omega) \tag{48}\] \[\Pi_{0}^{ij}(\mathbf{q},\omega) = \sqrt{\frac{m^{*}}{m_{i}}}\sqrt{\frac{m^{*}}{m_{j}}}\chi_{0*}^{ij}(\tilde{\mathbf{q}},\omega) \tag{49}\] The momentum \(\tilde{\mathbf{q}}\) is rescaled such that its components satisfy \(\tilde{q}_{i}=\sqrt{\frac{m^{*}}{m_{i}}}q_{i}\), and \(\chi_{0*}^{\mu\nu}\) denotes the generic response function of the isotropic free-electron gas with effective mass \(m^{*}\).
The second row of Eq. (48) has been rewritten, for the sake of the following discussion, by taking advantage of the expression of the isotropic density-current function provided by Eq. (24). The rescaling encoded in Eqs. (48)-(49) does not affect the gauge-invariance conditions of the non-interacting electron system, so that the \(\Pi_{0}^{\mu\nu}\) functions still satisfy Eq. (20): \[q_{\mu}\Pi_{0}^{\mu\nu}(q)=0,\qquad\Pi_{0}^{\mu\nu}(q)q_{\nu}=0. \tag{50}\] On the other hand, from Eq. (48) it follows that, in contrast to the isotropic case where \(\chi_{0}^{0i}\propto q_{i}\) and therefore \(\chi_{0}^{0j}\left(\hat{P}_{T}\right)_{ji}=0\), the anisotropic density-current function acquires a finite _transverse_ component: \[\Pi_{0}^{0i}(q)=\frac{i\Omega_{n}q_{i}}{|\mathbf{q}|^{2}}\Pi_{0}^{00}(q)+\Pi_{0}^{0j}(q)\left(\hat{P}_{T}(\mathbf{q})\right)_{ji}. \tag{51}\] Analogously, the current-current function \(\Pi_{0}^{ij}\) does not admit a longitudinal-transverse decomposition as in Eq. (25), but it reads \[\Pi_{0}^{ij}(q)=\frac{(i\Omega_{n})^{2}}{|\mathbf{q}|^{2}}\Pi_{0}^{00}(q)\left(\hat{P}_{L}(\mathbf{q})\right)^{ij}+\Pi_{0}^{T}(q)\left(\hat{P}_{T}(\mathbf{q})\right)^{ij}+\frac{i\Omega_{n}q_{i}}{|\mathbf{q}|^{2}}\Pi_{0}^{0k}(q)\left(\hat{P}_{T}(\mathbf{q})\right)_{kj}+\frac{i\Omega_{n}q_{j}}{|\mathbf{q}|^{2}}\Pi_{0}^{0k}(q)\left(\hat{P}_{T}(\mathbf{q})\right)_{ki}, \tag{52}\] where \(\Pi_{0}^{T}\equiv(1/2)\left(\hat{P}_{T}\right)_{ij}\Pi_{0}^{ji}\) is the purely transverse part. Eq. (51) encodes the physical mechanism highlighted in Sec. II within the formalism of Maxwell's equations, i.e. the possibility in a layered system to get a density fluctuation in response to a transverse current perturbation and vice versa. From the point of view of the present derivation, the crucial consequence of Eq. (51) is that the terms \(\phi\Pi_{0}^{0i}A_{i}\), which now replace the corresponding ones of the isotropic case in Eqs. (32b)-(32c), are no longer zero, even in the Coulomb gauge, leading to a finite coupling between internal (or external) scalar and vector potentials. In other words, the gauge-fixing procedure does not provide a decoupling between longitudinal and transverse degrees of freedom, as represented by the internal scalar and vector potentials, respectively. If one ignores the latter and retains only the former, as is usually done in the context of RIXS and EELS experiments[12; 14; 15; 19; 20; 21; 22; 25], one obtains the generalization of Eqs. (33)-(35) to the layered metal: \[\Pi_{RPA}^{00}(q)=\frac{\Pi_{0}^{00}(q)}{1-V_{C}(\mathbf{q})\Pi_{0}^{00}(q)}, \tag{53}\] \[\Pi_{RPA}^{0i}(q)=\frac{\Pi_{0}^{0i}(q)}{1-V_{C}(\mathbf{q})\Pi_{0}^{00}(q)}, \tag{54}\] \[\Pi^{ij}_{RPA}(q)=\Pi^{ij}_{0}(q)+V_{C}({\bf q})\frac{\Pi^{i0}_{0}(q)\Pi^{0j}_{0}(q)}{1-V_{C}({\bf q})\Pi^{00}_{0}(q)}. \tag{55}\] The standard-RPA anisotropic functions defined above satisfy, as their isotropic counterparts, the gauge-invariance conditions \[q_{\mu}\Pi^{\mu\nu}_{RPA}(q)=0,\quad\Pi^{\mu\nu}_{RPA}(q)q_{\nu}=0. \tag{56}\] Also, Eqs. (54) and (55) can be put in the forms prescribed by Eqs. (51) and (52) for their bare counterparts, provided that one defines the transverse part of (55) as \(\Pi^{T}_{RPA}\equiv(1/2)\left(\hat{P}_{T}\right)_{ij}\Pi^{ji}_{RPA}\). In such a "non-relativistic" limit the density-density and density-current response functions of a layered system are fully exhausted by Eqs. (53) and (54), respectively. In particular, following the same reasoning as in Eq. (41) above, the poles of Eq. (53)
yield, in the long-wavelength dynamical limit in which \(\Pi^{00}_{0}\simeq n/\omega^{2}\left(q_{xy}^{2}/m_{xy}+q_{z}^{2}/m_{z}\right)\) (\(q_{xy}\equiv\sqrt{q_{x}^{2}+q_{y}^{2}}\) being the in-plane momentum), the dispersion of the purely longitudinal layered plasmon usually quoted in the literature [13; 16; 23; 55], i.e. \[\omega^{2}_{L}({\bf q}) = \omega^{2}_{xy}\frac{q_{xy}^{2}}{|{\bf q}|^{2}}+\omega^{2}_{z}\frac{q_{z}^{2}}{|{\bf q}|^{2}} = \omega^{2}_{xy}\sin^{2}\eta+\omega^{2}_{z}\cos^{2}\eta, \tag{57}\] where \(\eta\) denotes as before the angle between \({\bf q}\) and the \(z\)-axis. In full analogy, one could define the dispersion of the anisotropic plasma polariton by the generalization of Eq. (43) to the anisotropic case, i.e. \(1-D_{T}\Pi^{T}_{0}=0\). In this case, by exploiting the long-wavelength dynamical limit \(\Pi^{T}_{0}\simeq n\left(q_{z}^{2}/m_{xy}+q_{xy}^{2}/m_{z}\right)/|{\bf q}|^{2}\) of the transverse anisotropic current response, one gets \[\omega^{2}_{T}({\bf q}) = \omega^{2}_{xy}\frac{q_{z}^{2}}{|{\bf q}|^{2}}+\omega^{2}_{z}\frac{q_{xy}^{2}}{|{\bf q}|^{2}}+\tilde{c}^{2}|{\bf q}|^{2} = \omega^{2}_{xy}\cos^{2}\eta+\omega^{2}_{z}\sin^{2}\eta+\tilde{c}^{2}|{\bf q}|^{2}. \tag{58}\] One can immediately notice that the expressions (57)-(58) are non-analytic functions as \({\bf q}\rightarrow{\bf 0}\). As such, they predict a continuum of possible values as \({\bf q}\rightarrow{\bf 0}\), with \(\omega_{T}\) being even smaller than \(\omega_{L}\) at specific values of the angle \(\eta\), leading to a crossing between the two dispersions as \(|{\bf q}|\) is increased. These features, in particular the continuum of \({\bf q}={\bf 0}\) values, are unphysical and in direct contrast with the expectation from Maxwell's equations, as we will discuss below. On the other hand, they provide a valid approximation for the non-relativistic limit (i.e. when the coupling is negligible, see below) of the generalized plasma modes of the anisotropic case at generic values of the wavevector. In order to account for the finite coupling between the scalar and vector potentials we must go back to Eq. (32) in the anisotropic case (i.e. with \(\chi\rightarrow\Pi\)) and integrate out the e.m. potentials \(A_{\mu}\). In the following, instead of choosing a particular gauge we will employ the so-called Faddeev-Popov gauge-fixing procedure[53], which consists in explicitly spoiling the gauge invariance of the model by adding a term \[S_{gf}[A_{\mu}]=\frac{1}{2\alpha}\int d\tau d{\bf r}\left[f\left(A_{\mu}\right)\right]^{2} \tag{59}\] that is not gauge-invariant. In Eq. (59) \(\alpha\) is an arbitrary multiplicative constant, and \(f\) is a generic linear function of the 4-potential. \(f^{2}\) is thus quadratic in \(A_{\mu}\), and therefore Eq. (59) is a Gaussian function of the 4-potential centered around the zeroes of \(f\). As a consequence, contributions from \(A_{\mu}\) that do not satisfy the condition \(f(A_{\mu})=0\) are exponentially suppressed, and one is guaranteed that, whilst the gauge invariance of the physical quantities is preserved, the divergence associated with the infinite gauge orbits is eliminated. A particular case is the \(\alpha\to 0\) one, i.e. when the width of the Gaussian vanishes and only fields obeying exactly \(f(A_{\mu})=0\) survive in the functional integral. In our case, in order to reproduce the Coulomb gauge in the \(\alpha\to 0\) limit we choose \(f(A_{\mu})=\mathbf{\nabla}\cdot{\bf A}\).
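Before moving on, the non-analytic \({\bf q}\rightarrow{\bf 0}\) behavior of Eqs. (57)-(58) and the crossing of the two branches can be made concrete with a short numerical sketch (illustrative parameters, with \(\tilde{c}\) much larger than the plasma scales):

```python
# Standard-RPA layered plasmon, Eq. (57), and anisotropic polariton,
# Eq. (58), vs the angle eta between q and the z-axis. Placeholder units.
import numpy as np

w_xy, w_z, c_t = 1.0, 0.05, 200.0     # illustrative: w_z << w_xy, c_t >> plasma scales

def w_L(q, eta):                      # Eq. (57)
    return np.sqrt(w_xy**2 * np.sin(eta)**2 + w_z**2 * np.cos(eta)**2)

def w_T(q, eta):                      # Eq. (58)
    return np.sqrt(w_xy**2 * np.cos(eta)**2 + w_z**2 * np.sin(eta)**2 + c_t**2 * q**2)

# As q -> 0 both limits depend on eta: a continuum of q = 0 values
for eta in np.linspace(0.0, np.pi / 2, 5):
    print(f"eta={eta:.2f}:  w_L(q->0)={w_L(0, eta):.3f}  w_T(q->0)={w_T(0, eta):.3f}")

# Near eta = pi/2, w_T(q->0) < w_L, so the branches cross as q grows
# and the c_t^2 q^2 term takes over (the sign of the difference flips).
eta = 0.45 * np.pi
qs = np.linspace(0, 2 * w_xy / c_t, 5)
print([f"{w_T(q, eta) - w_L(q, eta):+.3f}" for q in qs])
```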
The advantage of the Faddeev-Popov method lies in the fact that the potentials remain linearly independent, so that there is no need to parameterize them according to the chosen gauge and one can easily integrate out all four components of \(A_{\mu}\). If we fix the multiplicative constant as \(\alpha=4\pi\), Eq. (59) becomes, in Fourier space, \[S_{gf}[A_{\mu}] = \frac{1}{8\pi}\sum_{q}q_{i}q_{j}A_{i}(q)A_{j}(-q) = \frac{1}{8\pi}\sum_{q}\left|{\bf q}\cdot{\bf A}(q)\right|^{2}, \tag{60}\] which, as one immediately sees, cancels out with the \(-1/(8\pi)\sum_{q}\left|{\bf q}\cdot{\bf A}\right|^{2}\) coming from Eq. (31). The total action for the layered system will then be given by Eq. (32), without the last term of Eq. (32d), once the \(\chi\)'s are replaced with the \(\Pi\)'s. We then integrate out, as a first step, the scalar potential. This amounts to the standard-RPA dressing of the bare response functions, plus a dressing of the remaining terms proportional to the vector potential: \[S[{\bf A},A_{\mu}^{ext}] = \frac{e^{2}}{2}\sum_{q}\left[\Pi_{RPA}^{00}(q)|\phi^{ext}(q)|^{2}+\Pi_{RPA}^{ij}(q)\frac{A_{i}^{ext}(q)}{c}\frac{A_{j}^{ext}(-q)}{c}\right.\] \[+ \left(-\Pi_{RPA}^{0i}(q)\phi^{ext}(q)\frac{A_{i}^{ext}(-q)}{c}-\Pi_{MIX}^{0i}(q)\phi^{ext}(q)\frac{A_{i}(-q)}{c}+\Pi_{MIX}^{ij}(q)\frac{A_{i}^{ext}(q)}{c}\frac{A_{j}(-q)}{c}+c.c.\right)\] \[+ \left.\left(\Lambda^{-1}(q)\right)^{ij}\frac{A_{i}(q)}{c}\frac{A_{j}(-q)}{c}\right]. \tag{61}\] The coefficient of \(A_{i}A_{j}\) is defined as the inverse of the \(3\times 3\) tensor \(\Lambda^{ij}\) as \[(\Lambda^{-1})^{ij}(q)=\frac{c^{2}|{\bf q}|^{2}}{4\pi e^{2}}\left(\hat{P}_{L}({\bf q})\right)^{ij}+\left(\Pi_{RPA}^{T}(q)-\frac{1}{D_{T}(q)}\right)\left(\hat{P}_{T}({\bf q})\right)^{ij}, \tag{62}\] and the coefficients of the mixed terms in \(\phi^{ext}A_{i}\) and in \(A_{i}^{ext}A_{j}\) are given, respectively, by \[\Pi_{MIX}^{0i}(q)=\Pi_{RPA}^{0i}(q)-\frac{i\Omega_{n}q_{i}}{|{\bf q}|^{2}}\Pi_{RPA}^{00}(q), \tag{63}\] \[\Pi_{MIX}^{ij}(q)=\Pi_{RPA}^{ij}(q)-\frac{i\Omega_{n}q_{j}}{|{\bf q}|^{2}}\Pi_{RPA}^{0i}(q). \tag{64}\] \(\Pi_{MIX}^{0i}\) is purely transverse, since \(\Pi_{MIX}^{0i}q_{i}=0\), while \(\Pi_{MIX}^{ij}\), which is not symmetric under exchange of the indices, is such that \(\Pi_{MIX}^{ij}q_{j}=0\) but \(q_{i}\Pi_{MIX}^{ij}\) is finite. This is even clearer when one writes \[\Pi_{MIX}^{0i}(q)\equiv\Pi_{RPA}^{0j}(q)\left(\hat{P}_{T}({\bf q})\right)_{ji} \tag{65}\] and \[\Pi_{MIX}^{ij}(q) \equiv \Pi_{RPA}^{T}(q)\left(\hat{P}_{T}({\bf q})\right)^{ij}+\frac{i\Omega_{n}q_{i}}{|{\bf q}|^{2}}\Pi_{RPA}^{0k}(q)\left(\hat{P}_{T}({\bf q})\right)_{kj}, \tag{66}\] which can be straightforwardly obtained by taking advantage of the fact that \(\Pi_{RPA}^{0i}\) and \(\Pi_{RPA}^{ij}\) admit two decompositions similar, respectively, to those of Eqs. (51) and (52). Eqs. (65) and (66) clarify two main aspects. First of all, by direct comparison between Eq. (63) and Eq. (51) one sees that \(\Pi_{MIX}^{0i}\) is exactly the transverse part of the standard-RPA density-current response function, which is non-zero only in the layered case. Thus, the coupling between the scalar and vector potentials in Eq. (61) is a direct consequence of the system anisotropy. At the same time, since \(\Pi_{MIX}^{0i}q_{i}=0\) and \(\Pi_{MIX}^{ij}q_{j}=0\), the external scalar \(\phi^{ext}\) and vector \({\bf A}^{ext}\) potentials in Eq. (61) couple only to the transverse part of the internal vector potential \({\bf A}_{T}\).
As a consequence, the gauge-dependent longitudinal component \({\bf A}_{L}\), which only appears in the last term of Eq. (61) via the quadratic contribution \((1/\Lambda_{L})|A_{L}|^{2}/c^{2}\), with \(1/\Lambda_{L}=c^{2}|{\bf q}|^{2}/(4\pi e^{2})\), does not contribute to the dressing of the response functions. Once the integration of \({\bf A}\) in Eq. (61) is carried out we are left with the action for the auxiliary fields only \[S[A_{\mu}^{ext}]=\frac{e^{2}}{2c^{2}}\sum_{q}A_{\mu}^{ext}(q)\tilde{\Pi}^{\mu\nu}(q)A_{\nu}^{ext}(-q). \tag{67}\] The full response functions \(\tilde{\Pi}^{\mu\nu}\) are given by \[\tilde{\Pi}^{\mu\nu}(q) = \Pi_{RPA}^{\mu\nu}(q)-\Pi_{MIX}^{\mu i}(q)\Lambda_{ij}^{T}(q)\Pi_{MIX}^{\nu j}(q), \tag{68}\] where \[\Lambda_{ij}^{T}\equiv\left(\hat{P}_{T}\right)_{ij}/\left(\Pi_{RPA}^{T}-1/D_{T}\right) \tag{69}\] and we used a compact 4-vector notation in which the mixing coefficients (63)-(64) are defined as \[\Pi_{MIX}^{\mu i}(q)=\Pi_{RPA}^{\mu i}(q)-\frac{i\Omega_{n}q_{i}}{|{\bf q}|^{2}}\Pi_{RPA}^{0\mu}(q). \tag{70}\] The full response functions (68) are still gauge invariant. Indeed, thanks to the gauge-invariance conditions (50) and (56) for the bare and standard-RPA response functions respectively, we also have \[q_{\mu}\tilde{\Pi}^{\mu\nu}(q)=0,\quad\tilde{\Pi}^{\mu\nu}(q)q_{\nu}=0 \tag{71}\] for the full response functions. To show that Eq. (71) is indeed satisfied, we note that each \(\tilde{\Pi}^{\mu\nu}\) is given as the sum of a standard-RPA function \(\Pi_{RPA}^{\mu\nu}\), which already obeys a gauge-invariance constraint, i.e. Eq. (56), plus the mixing term (70), which can be proven to obey a similar condition. Indeed, from \(q_{\mu}\Pi_{RPA}^{\mu\nu}=0\) and \(\Pi_{RPA}^{\mu\nu}q_{\nu}=0\), it follows that \[q_{\mu}\Pi_{MIX}^{\mu i}=q_{\mu}\Pi_{RPA}^{\mu i}-\frac{i\Omega_{n}q_{i}}{|{\bf q}|^{2}}\Pi_{RPA}^{0\nu}q_{\nu}=0, \tag{72}\] where the vanishing of the first and the second terms comes from Eq. (56) for \(\nu=i\) and \(\mu=0\), respectively. In the following we will be interested in the density-density function \(\tilde{\Pi}^{00}\), whose expression is given by Eq. (68) for time-like indices \(\mu=\nu=0\) and reads explicitly \[\tilde{\Pi}^{00}(q) = \Pi^{00}_{RPA}(q)-\Pi^{0i}_{MIX}(q)\Lambda^{T}_{ij}(q)\Pi^{0j}_{MIX}(q). \tag{73}\] Eq. (73) is the first central result of our work. It provides an analytical expression for the gauge-invariant density-density response function in a layered metal at arbitrary momentum and frequency. It can be readily extended to the case of interacting electron systems, once short-range interactions are included[48; 49] in a way that preserves the gauge-invariance condition (50) for the bare response functions. Indeed, all effects coming from the coupling to the long-range part of the interaction are included in an exhaustive way by Eq. (68). In the next section we will provide additional analytical insights into the nature of the plasma modes obtained as poles of the general structure (73) for the non-interacting case. ## V Collective modes of a layered system and their spectral features ### Generalized plasma modes dispersion To analyze the spectral function of the density response (73) we set the momentum \(\mathbf{q}\), for the sake of simplicity, within the \(xz\) plane. With such a choice the longitudinal-transverse basis is spanned by the following orthogonal vectors: \[\hat{\mathbf{v}}_{L}=\hat{\mathbf{q}},\quad\hat{\mathbf{v}}_{T}^{y}=\hat{\mathbf{y}},\quad\hat{\mathbf{v}}_{T}^{xz}=\hat{\mathbf{q}}\times\hat{\mathbf{y}}. \tag{74}\]
\tag{74}\] \(\hat{\mathbf{v}}_{L}\) is the longitudinal versor parallel to \(\mathbf{q}\), \(\hat{\mathbf{v}}_{T}^{y}\) and \(\hat{\mathbf{v}}_{T}^{xz}\) are the transverse ones along the \(y\)-direction and in the \(xz\)-plane respectively. To make a bridge between the longitudinal and transverse projectors defined in Eq. (26) and the basis versors (74), we notice that the former can be expressed in terms of outer products of the latter as \(\hat{P}_{L}=\hat{\mathbf{v}}_{L}\otimes\hat{\mathbf{v}}_{L}\) and \(\hat{P}_{T}=\hat{\mathbf{v}}_{T}^{y}\otimes\hat{\mathbf{v}}_{T}^{y}+\hat{\mathbf{v}}_{T}^{xz}\otimes\hat{\mathbf{v}}_{T}^{xz}\). Let us now write the relevant layered response functions within the new basis (74). First of all, we note that the mixing terms (63) and (64) couple longitudinal excitations to transverse ones polarized along the \(xz\) plane but not to those polarized along \(y\). Indeed, since in our frame of reference \(q_{y}=0\), we have that \(\Pi^{0y}_{0}=0\) (which follows trivially from Eq. (48)) and, as a consequence, \(\Pi^{0y}_{MIX}=0\). Therefore \[(\hat{\mathbf{v}}_{T}^{y})_{i}\Pi^{0i}_{MIX}(q)=0. \tag{75}\] Similarly, Eq. (49), for \(i=y\) or \(j=y\), has only a diamagnetic contribution proportional to a delta \(\delta_{yj}\) or \(\delta_{iy}\), so that \[(\hat{\mathbf{v}}_{T}^{y})_{i}(\hat{\mathbf{v}}_{L})_{j}\Pi^{ij}_{MIX}(q)=0,\quad(\hat{\mathbf{v}}_{T}^{y})_{i}(\hat{\mathbf{v}}_{T}^{xz})_{j}\Pi^{ij}_{MIX}(q)=0 \tag{76}\] which implies that current fluctuations along \(y\) never get coupled with those along the longitudinal and the transverse \(xz\) directions. As a consequence, the current-current response function, as given by Eq. (68) for space-like indices \(\mu=i\) and \(\nu=j\), reads, for \(i=j=y\), \[\tilde{\Pi}^{yy}(q)=\frac{\Pi^{yy}_{0}(q)}{1-D_{T}(q)\Pi^{yy}_{0}(q)}. \tag{77}\] Eq. (77) identifies a transverse mode with electric field polarized along \(y\) and whose long-wavelength propagation is given (being \(\Pi^{yy}_{0}\simeq n/m_{xy}\) in the dynamical long-wavelength limit) by \(\omega^{2}=\omega_{xy}^{2}+\tilde{c}^{2}|\mathbf{q}|^{2}\), which coincides with the standard polariton dispersion (43). Conversely, Eqs. (65) and (66) account for a finite coupling between longitudinal modes and transverse ones polarized along \(xz\). To investigate this effect it is convenient to express Eq. (73) as \[\tilde{\Pi}^{00}(q)=\frac{\Pi^{00}_{ret}(q)}{1-V_{C}(\mathbf{q})\Pi^{00}_{ret}(q)}, \tag{78}\] where \[\Pi^{00}_{ret}(q)=\Pi^{00}_{0}(q)+\frac{\left(\Pi^{0J}_{T}(q)\right)^{2}}{D_{T}^{-1}(q)-\Pi^{JJ}_{T}(q)} \tag{79}\] and we introduced the projections of the bare density-current and current-current functions along the transverse \(xz\) direction: \[\Pi^{0J}_{T}(q) \equiv (\hat{\mathbf{v}}_{T}^{xz})_{i}\Pi^{0i}_{0}(q),\] \[\Pi^{JJ}_{T}(q) \equiv (\hat{\mathbf{v}}_{T}^{xz})_{i}(\hat{\mathbf{v}}_{T}^{xz})_{j}\Pi^{ij}_{0}(q). \tag{80}\] Eq. (78), whose derivation is detailed in Appendix C, is the second central result of this work, as it provides an expression for the full density-density response of the layered metal in terms of the bare anisotropic response functions, with the inclusion of the retardation effects. It is important to stress that both Eq. (68) and Eq.
(78) emphasize two main aspects: the former underlines that the anisotropy mixes the standard-RPA result, given by the integration of \(\phi\), with the propagating transverse modes encoded into \(\mathbf{A}\); the latter suggests that this is equivalent to resumming first the retarded interaction mediated by the transverse modes and then the Coulomb interaction. Indeed, according to Eqs. (78) and (79), the full density response contains an RPA resummation both in the current and density sector: the former accounts for the replacement of the bare bubble \(\Pi^{00}_{0}\) with the correction due to the transverse gauge field, whose propagator is proportional to \(1/\left(D_{T}^{-1}-\Pi^{JJ}_{T}\right)\), with a strength controlled by the non-zero value of \(\Pi^{0J}_{T}\) for a layered system; the latter is the usual RPA dressing with the Coulomb interaction \(V_{C}(\mathbf{q})\). We now focus, in full analogy with Sec. III, on the simplified long-wavelength dynamical case in which the anisotropic Lindhard bubbles (47)-(49) can be expanded, at leading order in the momentum, as \[\Pi_{0}^{00}(q) \simeq \frac{n}{\omega^{2}}\left(\frac{q_{x}^{2}}{m_{xy}}+\frac{q_{z}^{2}}{m_{z}}\right), \tag{81}\] \[\Pi_{T}^{0J}(q) \simeq -\frac{q_{x}q_{z}}{|\mathbf{q}|}\frac{n}{\omega}\left(\frac{1}{m_{xy}}-\frac{1}{m_{z}}\right), \tag{82}\] \[\Pi_{T}^{JJ}(q) \simeq \frac{n}{|\mathbf{q}|^{2}}\left(\frac{q_{x}^{2}}{m_{z}}+\frac{q_{z}^{2}}{m_{xy}}\right). \tag{83}\] The derivation of these limits is discussed in Appendix B. By using Eqs. (81)-(83) we can immediately find an estimate of the retardation corrections to the density response encoded in Eq. (79), in the same spirit as the discussion in Sec. II. Indeed, we see that the relative correction to the density bubble \(\Pi_{0}^{00}\) encoded into Eq. (79) can be expressed, by means of the expansions (81)-(83), as \[\frac{\left(\Pi_{T}^{0J}(q)\right)^{2}}{\Pi_{0}^{00}(q)(D_{T}^{-1}(q)-\Pi_{T}^{JJ}(q))}\simeq\frac{\left(\omega_{xy}^{2}-\omega_{z}^{2}\right)^{2}}{\omega_{L}^{2}(\mathbf{q})\left(\omega^{2}-\omega_{T}^{2}(\mathbf{q})\right)}\frac{q_{x}^{2}q_{z}^{2}}{|\mathbf{q}|^{4}}\equiv\frac{\tilde{c}^{4}q_{c}^{4}}{\omega_{L}^{2}(\mathbf{q})\left(\omega^{2}-\omega_{T}^{2}(\mathbf{q})\right)}\frac{\sin^{2}(2\eta)}{4} \tag{84}\] where \[q_{c}\equiv\frac{\sqrt{\omega_{xy}^{2}-\omega_{z}^{2}}}{\tilde{c}} \tag{85}\] as already defined (for \(\varepsilon=1\)) in Eq. (9) above, and with the definitions (57) and (58) of \(\omega_{L}\) and \(\omega_{T}\). Outside the light cone, as is the case for EELS and RIXS, momenta are such that \(|\mathbf{q}|\gg\omega/c\). In this regime the term \(\tilde{c}^{2}|\mathbf{q}|^{2}\) in \(\omega_{T}(\mathbf{q})\) dominates in the denominator of Eq. (84), while \(\tilde{c}q_{c}\simeq\omega_{xy}\) is comparable to \(\omega_{L}\). One then recovers the same scaling condition \(\sim q_{c}^{2}/q^{2}\ll 1\) of Eq. (9) for the quantitative irrelevance of retardation effects. Conversely, for experiments with THz light where \(\omega\simeq\omega_{z}\) and \(q=\omega_{z}/\tilde{c}\), which is far smaller than the crossover value \(q_{c}\sim\omega_{xy}/\tilde{c}\), one sees that the denominator of Eq. (84) scales at leading order with \(\omega_{z}^{2}(\tilde{c}q_{c})^{2}\): \[\omega_{L}^{2}(\omega_{T}^{2}-\omega_{z}^{2}) = (\omega_{z}^{2}+(\tilde{c}q_{c})^{2}\cos^{2}\eta) \tag{86}\] \[\times (\omega_{z}^{2}+(\tilde{c}q_{c})^{2}\sin^{2}\eta)\simeq\omega_{z}^{2}(\tilde{c}q_{c})^{2}.\] When replaced into Eq.
(84) one finds again an overall factor scaling as \((\tilde{c}q_{c}/\omega_{z})^{2}=(q_{c}/q)^{2}\gg 1\), and one recovers that relativistic corrections become crucial. In the following we will see how the above estimate is reflected in the crossover from the relativistic to the standard RPA regime for the response function. Let us first determine the general dispersion of the plasma modes from the zeros of Eq. (78). With lengthy but straightforward calculations one obtains the expressions equivalent to those derived recently in Ref. [41] for an anisotropic superconductor in the case of zero screening length, i.e. \[\omega_{\pm}^{2}(\mathbf{q})=\frac{1}{2}\left(\omega_{xy}^{2}+\omega_{z}^{2}+\tilde{c}^{2}|\mathbf{q}|^{2}\right.\] \[\pm \left.\sqrt{(\omega_{xy}^{2}-\omega_{z}^{2})^{2}+\tilde{c}^{4}|\mathbf{q}|^{4}-2\tilde{c}^{2}(q_{x}^{2}-q_{z}^{2})(\omega_{xy}^{2}-\omega_{z}^{2})}\right). \tag{87}\] As already discussed in Ref. [41], in contrast to the RPA solution \(\omega_{L/T}(\mathbf{q})\) of Eq. (57)-(58), the two solutions \(\omega_{\pm}\) are analytic functions as \(\mathbf{q}\rightarrow\mathbf{0}\), with \[\lim_{\mathbf{q}\rightarrow\mathbf{0}}\omega_{\pm}(\mathbf{q})=\omega_{xy/z}, \tag{88}\] which is consistent with the expectation that as \(\mathbf{q}\rightarrow\mathbf{0}\) the only solution of the Ampere-Maxwell law \(4\pi\mathbf{J}-i\omega\varepsilon\mathbf{E}=0\) requires \(\mathbf{J}\) being parallel to \(\mathbf{E}\), which is only possible in a layered system for electric fields polarized along \(z\) with \(\omega=\omega_{z}\), or polarized in the \(xy\) plane with \(\omega=\omega_{xy}\). Since at small but finite \(\mathbf{q}\) the two e.m. modes (87) preserve the same polarization, for a generic direction of \(\mathbf{q}\) one has a mixture of longitudinal and transverse components. This can be explicitly seen by computing the polarization of the electric fields \(\mathbf{E}_{-}\) and \(\mathbf{E}_{+}\) corresponding to the two solutions \(\omega_{\mp}\). Their expressions have been derived previously in the superconducting case [41], and they can be obtained again within the framework of Maxwell's equations (see Appendix A). By denoting with \(E_{L/T}\) the longitudinal/transverse component of the \(\omega_{-}\) solution one finds \[\left\{\begin{array}{c}\mathbf{E}_{-}(\mathbf{q})=E_{L}(\mathbf{q})\hat{\boldsymbol{v}}_{L}(\mathbf{q})+E_{T}(\mathbf{q})\hat{\boldsymbol{v}}_{T}^{xz},\\ \\ \mathbf{E}_{+}(\mathbf{q})=E_{L}(\mathbf{q})\hat{\boldsymbol{v}}_{T}^{xz}-E_{T}(\mathbf{q})\hat{\boldsymbol{v}}_{L}(\mathbf{q}),\end{array}\right. \tag{89}\] where: \[E_{L}(\mathbf{q})=\frac{\frac{q_{x}q_{z}}{|\mathbf{q}|^{2}}\left(\omega_{xy}^{2}-\omega_{z}^{2}\right)}{\sqrt{\frac{q_{x}^{2}q_{z}^{2}}{|\mathbf{q}|^{4}}\left(\omega_{xy}^{2}-\omega_{z}^{2}\right)^{2}+\left(\omega_{+}^{2}(\mathbf{q})-\omega_{T}^{2}(\mathbf{q})\right)^{2}}}, \tag{90}\] \[E_{T}(\mathbf{q})=\frac{\omega_{+}^{2}(\mathbf{q})-\omega_{T}^{2}(\mathbf{q})}{\sqrt{\frac{q_{x}^{2}q_{z}^{2}}{|\mathbf{q}|^{4}}\left(\omega_{xy}^{2}-\omega_{z}^{2}\right)^{2}+\left(\omega_{+}^{2}(\mathbf{q})-\omega_{T}^{2}(\mathbf{q})\right)^{2}}}. \tag{91}\] As expected, for \(\mathbf{E}_{+}\) the role of \(E_{L/T}\) is exchanged. The polarization vectors are shown along with the eigenmodes in Fig. 1(a)-(b) as functions of \(|\mathbf{q}|\) at fixed propagation angle \(\eta\), and in Fig. 1(c)-(d) as functions of \(q_{x}\) at fixed value of \(q_{z}\).
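A short numerical sketch, assuming \(\hbar c\simeq 0.1973\) eV \(\mu\)m to convert momenta to \(\mu\)m\({}^{-1}\) and the parameter values quoted for Fig. 1 below (\(\omega_{xy}=1\) eV, \(\omega_{z}=0.05\) eV, \(\varepsilon=1\)), illustrates Eqs. (87), (90) and (91): it checks the \(\mathbf{q}\rightarrow\mathbf{0}\) limits of Eq. (88) and the recovery of the standard-RPA plasmon at \(|\mathbf{q}|\gg q_{c}\):

```python
import numpy as np

HBAR_C = 0.1973  # eV*um (assumed conversion constant)

def omega_pm(qx, qz, w_xy=1.0, w_z=0.05):
    """Generalized plasma modes (omega_-, omega_+) in eV from Eq. (87);
    momenta in 1/um, eps = 1 so that c~ = c."""
    cq2 = HBAR_C**2 * (qx**2 + qz**2)      # (c~ |q|)^2
    s = w_xy**2 + w_z**2 + cq2
    root = np.sqrt((w_xy**2 - w_z**2)**2 + cq2**2
                   - 2*HBAR_C**2*(qx**2 - qz**2)*(w_xy**2 - w_z**2))
    return np.sqrt(0.5*(s - root)), np.sqrt(0.5*(s + root))

def polarization(qx, qz, w_xy=1.0, w_z=0.05):
    """Longitudinal/transverse content (E_L, E_T) of the omega_- mode,
    Eqs. (90)-(91)."""
    q2 = qx**2 + qz**2
    w_T2 = (w_xy**2*qz**2 + w_z**2*qx**2)/q2 + HBAR_C**2*q2   # omega_T^2, Eq. (58)
    wp2 = omega_pm(qx, qz, w_xy, w_z)[1]**2                   # omega_+^2
    num_L = qx*qz/q2 * (w_xy**2 - w_z**2)
    norm = np.hypot(num_L, wp2 - w_T2)
    return num_L/norm, (wp2 - w_T2)/norm

# q -> 0 limit, Eq. (88): omega_- -> omega_z, omega_+ -> omega_xy
print(omega_pm(1e-6, 1e-6))                # ~ (0.05, 1.0)

# |q| >> q_c ~ 5/um: omega_- approaches the standard-RPA plasmon omega_L
qx, qz = 50.0, 50.0
w_L = np.sqrt((1.0**2*qx**2 + 0.05**2*qz**2)/(qx**2 + qz**2))  # Eq. (57)
print(omega_pm(qx, qz)[0], w_L)            # nearly equal
E_L, E_T = polarization(qx, qz)
print(E_L**2 + E_T**2, abs(E_L))           # normalization 1; almost purely longitudinal
```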
In both cases, one sees that as soon as \(|\mathbf{q}|\gg q_{c}\) the generalized modes tend to their RPA counterparts: \[|\mathbf{q}|\gg q_{c}\Longrightarrow\omega_{-}(\mathbf{q})\simeq\omega_{L}(\mathbf{q}),\quad\omega_{+}(\mathbf{q})\simeq\omega_{T}(\mathbf{q}). \tag{92}\] This can be easily understood from Eq. (87), where at large \(|\mathbf{q}|\) the square-root term can be recast and expanded in powers of the small variable \(q_{c}/|\mathbf{q}|\), and one easily recovers the two analytical expressions of the standard-RPA purely longitudinal and transverse modes (57) and (58). In full agreement with this result, one sees that in such a non-relativistic regime \(E_{L}\simeq 1\) and \(E_{T}\simeq 0\), so that \(\mathbf{E}_{-}\) describes a purely longitudinal mode while \(\mathbf{E}_{+}\) reduces to the purely transverse mode along \(\hat{\boldsymbol{v}}_{T}^{xz}\), associated with the standard-RPA mode \(\omega_{T}\). In Fig. 1(a) and (c) we also show for comparison the standard-RPA results, for angle \(\eta=\pi/3\) in Fig. 1(a) and for \(q_{z}=1.5\)\(\mu\)m\({}^{-1}\) in Fig. 1(c). As one can see, at small momenta the RPA solutions lead to the unphysical crossing of the transverse and longitudinal solutions, the former being even smaller in energy than the latter. Such a pathological behavior, which can be understood from the strong mixing of the L/T character in the same regime of momenta (see Fig. 1(b) and (d)), is completely resolved by considering the generalized solutions. Figure 1: **Momentum dependence of the mixed longitudinal-transverse modes and their polarizations.** Momentum dependence of the mixed longitudinal-transverse modes \(\omega_{-}\) and \(\omega_{+}\) (panel (a) and (c)), as given by Eq. (87), and of the longitudinal and transverse components \(E_{L}\) and \(E_{T}\) of the \(\omega_{-}\) mode, as given by Eqs. (90) and (91) (panel (b) and (d)). In panel (a) and (b) the quantities are shown as functions of \(|\mathbf{q}|\) at three selected values of the angle \(\eta\), while in panel (c) and (d) they are shown as functions of \(q_{x}\) at three selected values of the out-of-plane momentum \(q_{z}\). Here red/blue solid lines are associated with \(\omega_{-}/\omega_{+}\) and \(E_{L}/E_{T}\), respectively, with same shades of red or blue associated with the same value of \(\eta\) or \(q_{z}\). In panel (a) and (c) we also show, for comparison, the standard-RPA modes \(\omega_{L}\) (red dot-dashed line) and \(\omega_{T}\) (blue dot-dashed line) at \(\eta=\pi/3\) in panel (a) and \(q_{z}=1\)\(\mu\)m\({}^{-1}\) in panel (c), respectively. We set the values of the plasma frequencies as \(\omega_{xy}=1\) eV and \(\omega_{z}=0.05\) eV and for simplicity we assumed \(\varepsilon=1\). ### Density response Having clarified the behavior of the generalized plasma modes, let us study their contribution to the density response function \(\tilde{S}(q)\equiv-\operatorname{Im}\tilde{\Pi}^{00}(q)\), whose general expression in terms of the retarded density function (79) is given by \[\tilde{S}(q)=\frac{-\Pi^{00\ \prime\prime}_{ret}(q)}{\left(1-V_{C}(\mathbf{q})\Pi^{00\ \prime}_{ret}(q)\right)^{2}+\left(V_{C}(\mathbf{q})\Pi^{00\ \prime\prime}_{ret}(q)\right)^{2}}. \tag{93}\] The zeroes of \(1-V_{C}(\mathbf{q})\Pi^{00\ \prime}_{ret}(q)\) identify the dispersions of the modes as given by Eq. (87).
The imaginary part \(\Pi^{00\ \prime\prime}_{ret}\) sets the widths associated with their peaks, and it vanishes, as in the isotropic case, in the long-wavelength dynamical limit: in particular, it is \(\Pi^{00\ \prime\prime}_{ret}\to 0^{-}\) for \(|\mathbf{q}|<q_{c}\) and \(\omega\) such that \(\omega=\omega_{\pm}(\mathbf{q})\) (as we discuss at the end of Appendix C). In such a momentum range \(\omega_{-}\) and \(\omega_{+}\) are therefore well-defined modes and they both appear, in the density spectrum identified by \(\tilde{S}\), as two sharp delta-like peaks centered around their respective frequencies, i.e. \[\tilde{S}(q) \simeq \pi I_{-}(\mathbf{q})\delta\left(\omega-\omega_{-}(\mathbf{q})\right) \tag{94}\] \[+ \pi I_{+}(\mathbf{q})\delta\left(\omega-\omega_{+}(\mathbf{q})\right),\] where the overall peak intensities are given by \[I_{\pm}(\mathbf{q}) = \pm\frac{\omega_{\pm}(\mathbf{q})}{2V_{C}(\mathbf{q})}\frac{\omega_{\pm}^{2}(\mathbf{q})-\omega_{T}^{2}(\mathbf{q})}{\omega_{+}^{2}(\mathbf{q})-\omega_{-}^{2}(\mathbf{q})}. \tag{95}\] As a first observation, we notice that Eq. (94) satisfies the \(f\)-sum rule, as one can prove by computing the first-moment integral \(-(1/\pi)\int d\omega\,\omega\operatorname{Im}\tilde{\Pi}^{00}(\omega,\mathbf{q})=(1/\pi)\int d\omega\,\omega\tilde{S}(\omega,\mathbf{q})\) over all the positive frequencies. Taking advantage of the identity \(\omega_{-}^{2}+\omega_{+}^{2}=\omega_{xy}^{2}+\omega_{z}^{2}+\tilde{c}^{2}|\mathbf{q}|^{2}\), it yields \[\int_{0}^{+\infty}\frac{d\omega}{\pi}\omega\tilde{S}(\omega,\mathbf{q})=\omega_{+}(\mathbf{q})I_{+}(\mathbf{q})+\omega_{-}(\mathbf{q})I_{-}(\mathbf{q})=\frac{n}{2}\left(\frac{q_{x}^{2}}{m_{xy}}+\frac{q_{z}^{2}}{m_{z}}\right), \tag{96}\] i.e. the expected result for the \(f\)-sum rule of an anisotropic electron gas [38]. The momentum dependence of the spectral weights is shown in Fig. 2(a) and Fig. 2(b) as a function of \(|\mathbf{q}|\) at fixed angle and as a function of \(q_{x}\) at fixed \(q_{z}\), respectively. As one can see, as \(\mathbf{q}\to\mathbf{0}\) in general the spectral function displays a two-peak structure, also shown explicitly in Fig. 2(c)-(f). This is a direct consequence of the fact, highlighted above, that at generic wavevector both modes have a finite longitudinal component in the relativistic regime. As a consequence, since the density response projects out the longitudinal fluctuations, it carries a finite spectral weight at both modes. This is the third relevant result of the present work, which shows the emergence of a double-peak structure in the density spectral function in the relativistic regime. On the other hand, as the momentum increases and exceeds \(q_{c}\) one sees from Eq. (95) that \[I_{+}(\mathbf{q})\simeq 0,\quad I_{-}(\mathbf{q})\simeq I_{L}(\mathbf{q})\equiv\frac{\omega_{L}(\mathbf{q})}{2V_{C}(\mathbf{q})},\quad|\mathbf{q}|\gg q_{c} \tag{97}\] i.e. \(I_{+}\) vanishes and \(I_{-}\) approaches the spectral weight expected for a standard-RPA longitudinal mode, i.e. the one given by Eq. (46) (see Fig. 2(a) and (c)) with \(\omega_{L}\) being now the standard-RPA plasmon defined in Eq. (57). Therefore, provided that \(\mathbf{q}\) does not exceed the value above which plasmons are damped by the particle-hole continuum (see Appendix C), one finds that \[\tilde{S}(q)\simeq\pi I_{L}(\mathbf{q})\delta\left(\omega-\omega_{L}(\mathbf{q})\right),\quad|\mathbf{q}|\gg q_{c}, \tag{98}\] which is exactly the anisotropic counterpart of Eq. (45). These effects are shown in Fig.
2(c)-(f), where we plot the spectral function \(\tilde{S}\) as given by Eq. (94) and normalized with respect to \(I_{L}\), in order to get rid of the overall \(\propto|\mathbf{q}|^{2}\) factor due to charge conservation. \(\tilde{S}/I_{L}\) is shown at the two values of the momentum \(q_{1}=3.0\ \mu\)m\({}^{-1}\) (below the crossover) and \(q_{2}=7.0\ \mu\)m\({}^{-1}\) (above the crossover) in (c), (e) and (d), (f) respectively: in the first case \(\omega_{+}\) carries a larger spectral weight than \(\omega_{-}\), in agreement with the previous discussion on the behaviour of \(I_{\pm}\) below \(q_{c}\); in the second case \(\omega_{-}\) has the largest spectral weight, as its spectral profile tends to that of the standard-RPA plasmon, while \(\omega_{+}\) has an overall vanishing peak intensity, as expected for a pure polariton. ## VI Conclusions In the present paper we provided a general derivation of the electromagnetic response functions of a layered electron gas in the long-wavelength limit that accounts for both instantaneous and retarded electromagnetic interactions. The starting point is the observation that the anisotropy of the electric current induced in response to a local electric field implies, already at the level of classical Maxwell's equations, a mixing between the longitudinal and the transverse components of the internal e.m. fields. The physical effect is the emergence of a transverse current in response to a longitudinal electric field, which in turn acts as a source for the magnetic field. The final outcome is a mixing between the so-called polariton-like and plasmon-like hybrid light-matter modes, which appear instead in the isotropic metal as purely transverse and longitudinal modes, respectively. To implement this effect within a general many-body formalism we used a path-integral approach where the electronic degrees of freedom are explicitly coupled not only to the external (source) fields but also to the e.m. fields mediating the interactions among them. This approach highlights how the different role of the electric and magnetic field within the context of the Maxwell equations manifests in the usual language of the RPA resummation of the bare electronic response functions, that represent the standard paradigm to study plasma modes. The results can be summarized in the case of the density response, which is the one probed by RIXS and EELS spectroscopy. In the isotropic system density fluctuations only couple to the scalar potential, and as a consequence at RPA level the density response is only dressed by Coulomb-like interactions. However, in the anisotropic system a transverse current is induced by density fluctuations, leading to an additional RPA dressing of the density response via the transverse e.m. propagator. This is shown in Eq. (78), which we report here for convenience: \[\tilde{\Pi}^{00}(q)=\frac{\Pi_{ret}^{00}(q)}{1-V_{C}(\mathbf{q})\Pi_{ret}^{00}(q)}. \tag{99}\] In Eq. (99) the standard RPA resummation with the Coulomb potential is carried out using as starting point a density response function \(\Pi_{ret}^{00}(q)\) that includes retardation effects, i.e. \[\Pi_{ret}^{00}(q)=\Pi_{0}^{00}(q)+\frac{\left(\Pi_{T}^{0J}(q)\right)^{2}}{D_{T}^{-1}(q)-\Pi_{T}^{JJ}(q)}.
\tag{100}\] Retardation effects appear as a "relativistic" contribution, since one can show that they are negligible above the momentum threshold \(q_{c}\sim\omega_{xy}/c\sim 5~\mu\)m\({}^{-1}\), vanishing for infinite light velocity. Figure 2: **Spectral features of the mixed longitudinal-transverse modes.** (a)-(b) Momentum dependence of the spectral weights \(I_{-}\) and \(I_{+}\), as given by Eq. (95), both normalized with respect to the \(I_{L}(\mathbf{q})\) defined in Eq. (97). In panel (a) the quantities are shown as functions of \(|\mathbf{q}|\) at three selected values of the angle \(\eta\), while in panel (b) they are shown as functions of \(q_{x}\) at three selected values of the out-of-plane momentum \(q_{z}\). Here solid and dotted lines are associated with \(I_{-}\) and \(I_{+}\), respectively, with same color associated with the same value of \(\eta\) or \(q_{z}\). (c)-(f) Frequency dependence of the density response \(\tilde{S}(q)\), as given by Eq. (94), at selected values of the momenta. In panels (c)-(d) we show the results at various angles \(\eta\) for the two values \(|\mathbf{q}|=q_{1}\) and \(q_{2}\) marked in panels (a) and (b) by a vertical dashed line, with the same color convention used in (a). In panel (e)-(f) we show the results at different \(q_{z}\) for the two values \(q_{x}=q_{x,1}\) and \(q_{x}=q_{x,2}\) marked in panel (b) by a vertical dashed line, with the same color convention used in (b). In analogy with Fig. 1 we set \(\omega_{xy}=1\) eV, \(\omega_{z}=0.05\) eV and \(\varepsilon=1\). In order to better visualize the peaks associated with \(\omega_{-}\) and \(\omega_{+}\), we broadened the delta distributions with a finite width \(\delta=0.01\) eV. Indeed, at \(q\gg q_{c}\) the second term of Eq. (100) vanishes, and one recovers the textbook result (99) with the standard bare density response \(\Pi_{0}^{00}\), leading to a layered version of the longitudinal plasma mode. In contrast, in the low-momentum regime \(q<q_{c}\) Eq. (99) admits two poles, which coincide formally with the generalized plasma waves derived previously in Ref. [41] in the superconducting state. The mixed longitudinal-transverse character of these modes manifests indeed as a finite projection of both modes in the density sector, leading to a double-peak structure of the density response. Such a prediction could be confirmed once EELS and RIXS experiments are able to push their resolution down to the crossover scale. Indeed, despite the rather low state-of-the-art momentum resolution of these protocols, with the lowest accessible momentum of about \(\sim 0.01\) Å\({}^{-1}=100~\mu\)m\({}^{-1}\), electron energy loss spectroscopy incorporated in a scanning transmission electron microscope (STEM-EELS) and equipped with a monochromator and aberration correctors has a high potential to combine high momentum and energy resolution [56; 9], and thus to explore plasma excitations around the crossover scale, where the standard RPA breaks down and both generalized plasma modes give a comparable contribution to the density response. The main advantage of the derivation presented in this manuscript, encoded in a very compact and elegant way into Eq. (100), is the possibility to provide a general framework to study charged plasmons in a layered system by making explicit the effect of all long-range e.m.
interactions, and leaving as a separate problem the inclusion of short-range interactions in the response functions \(\Pi^{\mu\nu}\) that appear as building blocks of the final observable. The latter has been instead the focus of recent reflection-EELS experiments [26; 27; 8] in cuprates. In these materials an anomalous damping of plasmons occurs already at low momenta where particle-hole excitations are not operative, according to the standard Fermi-liquid description. Such a result has been attributed to strange-metal physics [29], which is not captured by our approach, but can be in principle incorporated into the general expression (100) by means of a proper gauge-invariant renormalization of the bare response functions due to short-range interactions. An additional interesting open question is the possibility that short-range interactions affect also the crossover scale, making relativistic effects operative below the momenta estimated on the basis of a Fermi-liquid picture. Such a mechanism could help the spectroscopic detection of the predicted generalized plasma modes, adding an additional knob to the investigation of electronic excitations in correlated metals. **Acknowledgments** We acknowledge financial support by EU under project MORE-TEM ERC-SYN (grant agreement No 951215) and by Sapienza University under the program Ateneo (No 2021 RM12117A4A7FD11B and 2022 RP1221816662A977). ## Appendix A Classical electrodynamics of a layered metal In this appendix we rephrase the existence of mixed longitudinal-transverse e.m. modes in a layered metal within the classical framework of Maxwell's equations. A similar approach has been previously discussed for layered SC systems by three of us in Ref. [41]. For the sake of simplicity, we consider a metal in the absence of external sources, i.e. \(\rho_{ext}=0\) and \(\mathbf{J}_{ext}=\mathbf{0}\). The electron transport can be described, in the simplified case of long-wavelength propagation of the e.m. modes at very low scattering rate, by means of the undamped Drude equation for the internal current and electric field \(\mathbf{J}\) and \(\mathbf{E}\): \[\frac{\partial\mathbf{J}}{\partial t}=e^{2}n\hat{m}^{-1}\mathbf{E}. \tag{101}\] \(\hat{m}\) is the effective-mass tensor, which in isotropic systems trivially reduces to the scalar mass \(m\) along an arbitrary direction; on the other hand, in layered anisotropic systems it reads \[\hat{m}=\begin{pmatrix}m_{xy}&0&0\\ 0&m_{xy}&0\\ 0&0&m_{z}\end{pmatrix}, \tag{102}\] where \(m_{xy}\) and \(m_{z}\) are the in-plane and the out-of-plane effective masses, respectively. As is usually done to derive the wave equation from Maxwell's equations, one can take the curl of Faraday's law and then replace \(\mathbf{B}\) using the Ampere-Maxwell equation. This yields the following equation for the electric field [50]: \[\mathbf{\nabla}\left(\mathbf{\nabla}\cdot\mathbf{E}\right)-\nabla^{2}\mathbf{E}=-\frac{4\pi}{c^{2}}\frac{\partial\mathbf{J}}{\partial t}-\frac{\varepsilon}{c^{2}}\frac{\partial^{2}\mathbf{E}}{\partial t^{2}} \tag{103}\] By exploiting Eq. (101), we get rid of \(\mathbf{J}\) and obtain an equation for the electric field only. Let us introduce, as in the main text, the longitudinal \(\mathbf{E}_{L}=(\hat{\mathbf{q}}\cdot\mathbf{E})\hat{\mathbf{q}}\) and the transverse \(\mathbf{E}_{T}=\mathbf{E}-\mathbf{E}_{L}=(\hat{\mathbf{q}}\times\mathbf{E})\times\hat{\mathbf{q}}\) components of the electric field.
In the isotropic case the longitudinal-transverse decomposition \(\mathbf{E}=\mathbf{E}_{L}+\mathbf{E}_{T}\) of the total electric field leads to two decoupled equations, i.e. \[\frac{\partial^{2}\mathbf{E}_{L}}{\partial t^{2}}+\omega_{p}^{2}\mathbf{E}_{L}=\mathbf{0}, \tag{104}\] \[\frac{1}{\tilde{c}^{2}}\frac{\partial^{2}\mathbf{E}_{T}}{\partial t^{2}}-\nabla^{2}\mathbf{E}_{T}+\frac{\omega_{p}^{2}}{\tilde{c}^{2}}\mathbf{E}_{T}=\mathbf{0}, \tag{105}\] where the renormalized light velocity is defined as \(\tilde{c}=c/\sqrt{\varepsilon}\) as in the main text. They describe a longitudinal mode oscillating at \(\omega=\omega_{p}\) and two degenerate transverse modes propagating at \(\omega^{2}=\omega_{p}^{2}+\tilde{c}^{2}|\mathbf{q}|^{2}\), \(\omega_{p}\) being the isotropic plasma frequency defined in Eq. (42). In the anisotropic case such a decomposition for the electric field does not decouple the two equations. The main physical reason is that, due to the tensorial nature of the effective mass, the induced current \(\mathbf{J}\) in Eq. (101) is no longer parallel to the electric field. Let \(\hat{x}\) be, as in the main text, the versor parallel to the direction of the in-plane component of the momentum \(\mathbf{q}\). For an anisotropic system the wave equation (103) splits into three equations. One of them describes the in-plane pure transverse component \(\mathbf{E}_{T}^{y}=E_{T}^{y}\hat{\mathbf{y}}\) through \[\left(\omega^{2}-\omega_{xy}^{2}-\tilde{c}^{2}|\mathbf{q}|^{2}\right)E_{T}^{y}=0. \tag{103}\] Such a transverse mode, which is polarized along the \(xy\) plane, is not affected by the anisotropy along the out-of-plane direction, so it propagates at \(\omega^{2}=\omega_{xy}^{2}+\tilde{c}^{2}|\mathbf{q}|^{2}\) without coupling with the longitudinal degrees of freedom. This is the result we found above with Eq. (77). On the other hand, the two equations describing the longitudinal mode \(\mathbf{E}_{L}=E_{L}\hat{\mathbf{q}}\) and the transverse component \(\mathbf{E}_{T}^{xz}=E_{T}^{xz}(\hat{\mathbf{q}}\times\hat{\mathbf{y}})\) polarized along the \(xz\) plane are coupled. Such equations read, in Fourier space: \[\left\{\begin{array}{c}\left(\omega^{2}-\omega_{xy}^{2}\frac{q_{x}^{2}}{|\mathbf{q}|^{2}}-\omega_{z}^{2}\frac{q_{z}^{2}}{|\mathbf{q}|^{2}}\right)E_{L}+\frac{q_{x}q_{z}}{|\mathbf{q}|^{2}}\left(\omega_{xy}^{2}-\omega_{z}^{2}\right)E_{T}^{xz}=0,\\ \\ \left(\omega^{2}-\omega_{z}^{2}\frac{q_{x}^{2}}{|\mathbf{q}|^{2}}-\omega_{xy}^{2}\frac{q_{z}^{2}}{|\mathbf{q}|^{2}}-\tilde{c}^{2}|\mathbf{q}|^{2}\right)E_{T}^{xz}+\frac{q_{x}q_{z}}{|\mathbf{q}|^{2}}\left(\omega_{xy}^{2}-\omega_{z}^{2}\right)E_{L}=0.\end{array}\right. \tag{104}\] The non-trivial propagating solutions of the previous equations are found by solving the characteristic polynomial \[\left(\omega^{2}-\omega_{xy}^{2}\right)\left(\omega^{2}-\omega_{z}^{2}\right)-\tilde{c}^{2}q_{x}^{2}\left(\omega^{2}-\omega_{xy}^{2}\right)-\tilde{c}^{2}q_{z}^{2}\left(\omega^{2}-\omega_{z}^{2}\right)=0 \tag{105}\] that leads to the frequencies \(\omega_{\pm}\) introduced in Eq. (87). The electric fields \(\mathbf{E}_{\pm}\) associated with such modes can then be computed: they are given by Eq. (89). Notice that if the coupling term \(\frac{q_{x}q_{z}}{|\mathbf{q}|^{2}}(\omega_{xy}^{2}-\omega_{z}^{2})E_{T}^{xz}\) is neglected in the first equation in Eq. (104), the pure longitudinal standard RPA mode \(\omega_{L}=\sqrt{\left(\omega_{xy}^{2}q_{xy}^{2}+\omega_{z}^{2}q_{z}^{2}\right)/|\mathbf{q}|^{2}}\) is recovered.
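The equivalence between the coupled system in Eq. (104) just above and the closed-form modes of Eq. (87) can be checked numerically; the following sketch (with the same assumed units as before: frequencies in eV, momenta in \(\mu\)m\({}^{-1}\), \(\hbar c\simeq 0.1973\) eV \(\mu\)m, \(\varepsilon=1\)) diagonalizes the \(2\times 2\) matrix form of Eq. (104) and compares its eigenfrequencies with Eq. (87):

```python
import numpy as np

HBAR_C = 0.1973                      # eV*um, assumed unit conversion
w_xy, w_z = 1.0, 0.05                # plasma frequencies in eV
qx, qz = 3.0, 1.5                    # momentum components in 1/um
q2 = qx**2 + qz**2

# Matrix form of the coupled system Eq. (104): omega^2 * E = M @ E,
# with E = (E_L, E_T^xz).
mix = qx*qz/q2 * (w_xy**2 - w_z**2)
M = np.array([[(w_xy**2*qx**2 + w_z**2*qz**2)/q2, -mix],
              [-mix, (w_z**2*qx**2 + w_xy**2*qz**2)/q2 + HBAR_C**2*q2]])
w_num = np.sqrt(np.sort(np.linalg.eigvalsh(M)))   # (omega_-, omega_+)

# Closed form, Eq. (87)
s = w_xy**2 + w_z**2 + HBAR_C**2*q2
root = np.sqrt((w_xy**2 - w_z**2)**2 + (HBAR_C**2*q2)**2
               - 2*HBAR_C**2*(qx**2 - qz**2)*(w_xy**2 - w_z**2))
w_pm = np.sqrt(0.5*np.array([s - root, s + root]))

print(w_num, w_pm)                   # the two pairs should agree
assert np.allclose(w_num, w_pm)
```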
This is valid when the transverse component \(E_{T}^{xz}\) is negligible with respect to the longitudinal one \(E_{L}\), as expected when \(|\mathbf{q}|\gg q_{c}\). Indeed, from the second of Eq. (104) one can estimate their ratio, at generic frequency and momentum, as \[\frac{E_{T}^{xz}}{E_{L}}=-\frac{\omega_{xy}^{2}-\omega_{z}^{2}}{\omega^{2}-\omega_{T}^{2}(\mathbf{q})}\frac{q_{x}q_{z}}{|\mathbf{q}|^{2}}=-\frac{\tilde{c}^{2}q_{c}^{2}}{\omega^{2}-\omega_{T}^{2}}\frac{\sin\left(2\eta\right)}{2}, \tag{106}\] where \(\omega_{T}=\sqrt{\left(\omega_{xy}^{2}q_{z}^{2}+\omega_{z}^{2}q_{xy}^{2}\right)/|\mathbf{q}|^{2}+\tilde{c}^{2}|\mathbf{q}|^{2}}\) is the standard-RPA pure transverse mode. Eq. (106) is a more refined version of Eq. (8), where the displacement current was neglected, and is very similar to the one derived within the many-body formalism, i.e. Eq. (84). As for the latter, when the momentum lies outside the light cone, i.e. \(|\mathbf{q}|\gg\omega/\tilde{c}\), \(E_{T}^{xz}/E_{L}\simeq 0\), while the ratio stays finite for THz light propagating with a wavevector \(q=\omega_{z}/\tilde{c}\) that lies far below the crossover value \(q_{c}\sim\omega_{xy}/\tilde{c}\). ## Appendix B Relation between isotropic and anisotropic bare response functions In this appendix we discuss the mapping of an anisotropic free-electron gas model into an isotropic one, i.e., strictly speaking, how to obtain the layered bare Lindhard functions \(\Pi_{0}^{\mu\nu}\) from the knowledge of the isotropic ones \(\chi_{0}^{\mu\nu}\). The former ones are given by an anisotropic generalization of Eqs. (22) and (23) of the main text, i.e., after analytical continuation \(i\Omega_{m}\rightarrow\omega+i0^{+}\), \[\Pi_{0}^{\mu\nu}(\mathbf{q},\omega)=\frac{n}{m_{i}}\delta^{\mu i}\delta^{\mu\nu}\left(1-\delta^{\mu 0}\right) \tag{107}\] \[+ 2\int\frac{d^{3}k}{(2\pi)^{3}}\tilde{\gamma}^{\mu}\left(\mathbf{k},\mathbf{q}\right)\tilde{\gamma}^{\nu}\left(\mathbf{k},\mathbf{q}\right)\frac{f(\tilde{\xi}_{\mathbf{k}})-f(\tilde{\xi}_{\mathbf{k}+\mathbf{q}})}{\omega-\tilde{\xi}_{\mathbf{k}+\mathbf{q}}+\tilde{\xi}_{\mathbf{k}}+i0^{+}}\] where we took the limit of infinite volume \(V\rightarrow\infty\). In Eq. (107) \(\tilde{\xi}_{\mathbf{k}}\equiv k_{xy}^{2}/(2m_{xy})+k_{z}^{2}/(2m_{z})-\mu\), with \(k_{xy}\equiv\sqrt{k_{x}^{2}+k_{y}^{2}}\), is the anisotropic free-electron energy dispersion and \(\tilde{\gamma}^{\mu}\) is the anisotropic density-current vertex, with \(\tilde{\gamma}^{0}=1\) for \(\mu=0\) and \(\tilde{\gamma}^{i}=\left(k_{i}+q_{i}/2\right)/m_{i}\) for \(\mu=i\). As a first step, we perform a rescaling of the momentum, in order to link e.g. the anisotropic energy dispersion to an isotropic one. In order to do so, we introduce the effective mass \(m^{*}\), defined as \[m^{*}\equiv\left(m_{xy}^{2}m_{z}\right)^{\frac{1}{3}} \tag{108}\] and we perform the following rescaling of the momenta \({\bf k}\) and \({\bf q}\): \[k_{i}^{*}=\sqrt{\frac{m^{*}}{m_{i}}}k_{i},\quad q_{i}^{*}=\sqrt{\frac{m^{*}}{m_{i}}}q_{i}. \tag{101}\] Eq. (101) leaves the momentum integration measure invariant, i.e. \(d^{3}k^{*}=d^{3}k\).
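Both properties of the rescaling (101) can be verified directly; the following quick sketch (with arbitrary placeholder masses and momenta) checks that the anisotropic dispersion becomes isotropic, anticipating Eq. (102) below, and that the Jacobian of the transformation is unity:

```python
import numpy as np

# Illustrative check of the rescaling (101) with m* from Eq. (108);
# masses, chemical potential and momentum are arbitrary placeholders.
m_xy, m_z, mu = 1.0, 4.0, 0.3
m_star = (m_xy**2 * m_z)**(1/3)
m = np.array([m_xy, m_xy, m_z])

k = np.array([0.7, -0.2, 1.1])
k_star = np.sqrt(m_star/m) * k                 # k_i* = sqrt(m*/m_i) k_i

xi_aniso = (k[0]**2 + k[1]**2)/(2*m_xy) + k[2]**2/(2*m_z) - mu
xi_iso = (k_star @ k_star)/(2*m_star) - mu     # isotropic dispersion at k*
assert np.isclose(xi_aniso, xi_iso)

jacobian = np.prod(np.sqrt(m_star/m))          # d^3k* = jacobian * d^3k
assert np.isclose(jacobian, 1.0)               # measure invariance
print(xi_aniso, xi_iso, jacobian)
```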
It is straightforward to prove that the anisotropic energy dispersion and the current vertex can be rewritten, in terms of \(m^{*}\), as \[\tilde{\xi}_{\bf k}=\frac{|{\bf k}^{*}|^{2}}{2m^{*}}-\mu\equiv\xi_{\bf k^{*}}^{*} \tag{102}\] and \[\tilde{\gamma}^{i}({\bf k},{\bf q})=\sqrt{\frac{m^{*}}{m_{i}}}\frac{k_{i}^{*}+q_{i}^{*}/2}{m^{*}}\equiv\sqrt{\frac{m^{*}}{m_{i}}}\gamma_{*}^{i}({\bf k}^{*},{\bf q}^{*}), \tag{103}\] where \(\xi^{*}\) and \(\gamma^{*}\) are the energy dispersion and current vertex, as functions of the rescaled momenta (101), of a fictitious isotropic free-electron gas with effective electron mass \(m^{*}\). Let us consider, as an example, the anisotropic density-density response function as given by Eq. (107) for time-like indices \(\mu=\nu=0\). Taking advantage of Eq. (102) and of the invariance of the integration measure under the rescaling (101) we have that \[\Pi_{0}^{00}({\bf q},\omega) = 2\int\frac{d^{3}k}{(2\pi)^{3}}\frac{f(\tilde{\xi}_{\bf k})-f(\tilde{\xi}_{\bf k+q})}{\omega-\tilde{\xi}_{\bf k+q}+\tilde{\xi}_{\bf k}+i0^{+}} \tag{104}\] \[= 2\int\frac{d^{3}k^{*}}{(2\pi)^{3}}\frac{f(\xi_{\bf k^{*}}^{*})-f(\xi_{\bf k^{*}+q^{*}}^{*})}{\omega-\xi_{\bf k^{*}+q^{*}}^{*}+\xi_{\bf k^{*}}^{*}+i0^{+}}\] \[\equiv \chi_{0*}^{00}({\bf q}^{*},\omega).\] By means of similar calculations, one can show that the anisotropic density-current and current-current functions can be computed as \[\Pi_{0}^{0i}({\bf q},\omega) = \sqrt{\frac{m^{*}}{m_{i}}}\chi_{0*}^{0i}({\bf q}^{*},\omega) = \sqrt{\frac{m^{*}}{m_{i}}}\frac{\omega q_{i}^{*}}{|{\bf q}^{*}|^{2}}\chi_{0*}^{00}({\bf q}^{*},\omega) \tag{105}\] and \[\Pi_{0}^{ij}({\bf q},\omega) = \sqrt{\frac{m^{*}}{m_{i}}}\sqrt{\frac{m^{*}}{m_{j}}}\chi_{0*}^{ij}({\bf q}^{*},\omega)\] \[= \sqrt{\frac{m^{*}}{m_{i}}}\sqrt{\frac{m^{*}}{m_{j}}}\frac{\omega^{2}}{|{\bf q}^{*}|^{2}}\chi_{0*}^{00}({\bf q}^{*},\omega)\frac{q_{i}^{*}q_{j}^{*}}{|{\bf q}^{*}|^{2}}+\] \[+ \sqrt{\frac{m^{*}}{m_{i}}}\sqrt{\frac{m^{*}}{m_{j}}}\chi_{0*}^{T}({\bf q}^{*},\omega)\left(\delta_{ij}-\frac{q_{i}^{*}q_{j}^{*}}{|{\bf q}^{*}|^{2}}\right). \tag{106}\] The last three equations are the same as those quoted in the main text, see e.g. Eqs. (47), (48) and (49). \(\chi_{0*}^{00}\), \(\chi_{0*}^{0i}\) and \(\chi_{0*}^{ij}\) are, respectively, the bare density-density, density-current and current-current response functions of the isotropic free-electron gas with mass \(m^{*}\), as functions of the rescaled momentum \({\bf q}^{*}\) and of the frequency. Moreover, in the second equality of Eqs. (105) and (106) we took advantage of the longitudinal-transverse decomposition, with respect to the momentum \({\bf q}^{*}\), of \(\chi_{0*}^{0i}\) and \(\chi_{0*}^{ij}\), as prescribed for an isotropic metal by Eqs. (24) and (25) of the main text. From Eqs. (104), (105) and (106) one easily derives the long-wavelength expansions (81), (82) and (83) of the main text. Indeed, at leading order in \(v_{F}^{*}|{\bf q}^{*}|/\omega\) (\(v_{F}^{*}\equiv\sqrt{2\varepsilon_{F}/m^{*}}\)) it is \(\chi_{0*}^{00}\simeq n|{\bf q}^{*}|^{2}/(m^{*}\omega^{2})\) and \(\chi_{0*}^{T}\simeq n/m^{*}\), in analogy with Eqs. (39) and (40) of the main text. Once the definitions of \(m^{*}\) and \({\bf q}^{*}\) are substituted one finds that \[\Pi_{0}^{00}({\bf q},\omega)\simeq\frac{n}{\omega^{2}}\left(\frac{q_{xy}^{2}}{m_{xy}}+\frac{q_{z}^{2}}{m_{z}}\right), \tag{107}\] \[\Pi_{0}^{0i}({\bf q},\omega)\simeq\frac{n}{\omega}\frac{q_{i}}{m_{i}}, \tag{108}\] \[\Pi_{0}^{ij}({\bf q},\omega)\simeq\frac{n}{m_{i}}\delta_{ij}.
\tag{109}\] The first expansion is exactly Eq. (81) of the main text. If we substitute the last two into the definitions of \(\Pi_{T}^{0J}\) and \(\Pi_{T}^{JJ}\) (i.e. Eq. (80)) we get exactly Eqs. (82) and (83). ## Appendix C Derivation of Eq. (78) Let us consider once again the full density-density response function, i.e. Eq. (73) for time-like indices \(\mu=\nu=0\), that reads \[\tilde{\Pi}^{00}(q)=\Pi_{RPA}^{00}(q)-\Pi_{MIX}^{0i}(q)\Lambda_{ij}^{T}(q)\Pi_{MIX}^{0j}(q). \tag{110}\] The non-zero projection of \(\Pi_{MIX}^{0i}\) along the direction set by \(\hat{\mathbf{v}}_{T}^{xz}\), i.e. \[(\hat{\mathbf{v}}_{T}^{xz})_{i}\Pi_{MIX}^{0i}(q)=(\hat{\mathbf{v}}_{T}^{xz})_{i}\Pi_{RPA}^{0i}(q)=\frac{\Pi_{T}^{0J}(q)}{1-V_{C}({\bf q})\Pi_{0}^{00}(q)} \tag{111}\] has the crucial consequence that the mixing contribution \(\Pi_{MIX}^{0i}\Lambda_{ij}^{T}\Pi_{MIX}^{0j}\) in Eq. (110) is in general non-zero. Indeed, \[\Pi^{0i}_{MIX}(q)\Lambda^{T}_{ij}(q)\Pi^{0j}_{MIX}(q) = \left((\hat{\mathbf{v}}^{xz}_{T})_{i}\Pi^{0i}_{MIX}(q)\right)^{2}\Lambda^{T}_{xz}(q)\] \[= \frac{1}{1-V_{C}(\mathbf{q})\Pi^{00}_{0}(q)}\frac{\left(\Pi^{0J}_{T}(q)\right)^{2}}{\left(1-V_{C}(\mathbf{q})\Pi^{00}_{0}(q)\right)\left(\Pi^{JJ}_{T}(q)-D^{-1}_{T}(q)\right)+V_{C}(\mathbf{q})\left(\Pi^{0J}_{T}(q)\right)^{2}}, \tag{112}\] where we took into account the fact that the projections of \(\Pi^{0i}_{MIX}\) along the longitudinal and the transverse \(y\) direction are zero, as follows from \((\hat{\mathbf{v}}_{L})_{i}\Pi^{0i}_{MIX}=(\hat{\mathbf{v}}^{y}_{T})_{i}\Pi^{0i}_{MIX}=0\), while the one along the transverse \(xz\) direction is finite. In Eq. (112),
\(\Lambda^{T}_{xz}\equiv(\hat{\mathbf{v}}^{xz}_{T})_{i}(\hat{\mathbf{v}}^{xz}_{T})_{j}\Lambda^{ij}=1/\left(\Pi^{T,xz}_{RPA}-D^{-1}_{T}\right)\) is the transverse \(xz\) component of \(\Lambda\), with \(\Pi^{T,xz}_{RPA}\equiv(\hat{\mathbf{v}}^{xz}_{T})_{i}(\hat{\mathbf{v}}^{xz}_{T})_{j}\Pi^{ij}_{RPA}=\Pi^{JJ}_{T}+\left(\Pi^{0J}_{T}\right)^{2}/\left(1-V_{C}\Pi^{00}_{0}\right)\). \(\Pi^{0J}_{T}\) and \(\Pi^{JJ}_{T}\) are the bare functions defined in Eq. (80) of the main text, which, along with the bare density function \(\Pi^{00}_{0}\), allow for the following expression of the full density-density function: \[\tilde{\Pi}^{00}(q)=\frac{1}{V_{C}(\mathbf{q})}\left[\frac{D^{-1}_{T}(q)-\Pi^{JJ}_{T}(q)}{\left(1-V_{C}(\mathbf{q})\Pi^{00}_{0}(q)\right)\left(D^{-1}_{T}(q)-\Pi^{JJ}_{T}(q)\right)-V_{C}(\mathbf{q})\left(\Pi^{0J}_{T}(q)\right)^{2}}-1\right]. \tag{113}\] Eq. (113), which is valid at arbitrary momentum, is expressed in terms of the bare electronic susceptibilities \(\Pi^{00}_{0}\), \(\Pi^{0J}_{T}\) and \(\Pi^{JJ}_{T}\) and can be easily recast in terms of the retarded density function \(\Pi^{00}_{ret}\) defined in Eq. (79), see Eq. (78) of the main text. A last comment is in order about the undamped nature of the two modes \(\omega_{-}\) and \(\omega_{+}\) at \(|\mathbf{q}|\sim q_{c}\). To discuss it we plot the region identified by the values of \(\mathbf{q}\) and \(\omega\) for which \(\Pi^{00\ \prime\prime}_{ret}\neq 0\), where particle-hole (p-h) damping is operative, at three fixed values of the angle \(\eta\) in Fig. 3; we also show, in each panel, the corresponding frequency dispersions \(\omega_{-}(|\mathbf{q}|,\eta)\) and \(\omega_{+}(|\mathbf{q}|,\eta)\) as functions of \(|\mathbf{q}|\). Clearly, neither mode enters the continuum at \(|\mathbf{q}|\leq q_{c}\simeq 5\ \mu\)m\({}^{-1}\), i.e. they both propagate with zero damping within the momentum range where mixing effects are relevant. Above the crossover value, \(\omega_{+}\simeq\omega_{T}\sim\widetilde{c}|\mathbf{q}|\) disperses as a pure transverse mode and never undergoes dissipation; on the other hand \(\omega_{-}(|\mathbf{q}|,\eta)\) first saturates to its RPA value \(\omega_{L}(|\mathbf{q}|,\eta)\) and then falls into the continuum.
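Finally, the \(f\)-sum rule of Eqs. (95)-(96) can be verified numerically. The sketch below adopts reduced units, an assumed convention chosen only for this check, in which \(4\pi e^{2}/\varepsilon=1\) and \(n=1\), so that \(\omega_{xy}^{2}=1/m_{xy}\), \(\omega_{z}^{2}=1/m_{z}\) and \(V_{C}(\mathbf{q})=1/|\mathbf{q}|^{2}\):

```python
import numpy as np

# Numerical check of the f-sum rule, Eqs. (95)-(96), in reduced units
# (4*pi*e^2/eps = 1, n = 1). All parameter values are placeholders.
m_xy, m_z, c_t = 1.0, 400.0, 3.0         # c_t plays the role of c~
w_xy2, w_z2 = 1/m_xy, 1/m_z
qx, qz = 0.11, 0.07
q2 = qx**2 + qz**2

s = w_xy2 + w_z2 + c_t**2*q2
root = np.sqrt((w_xy2 - w_z2)**2 + (c_t**2*q2)**2
               - 2*c_t**2*(qx**2 - qz**2)*(w_xy2 - w_z2))
w_m2, w_p2 = 0.5*(s - root), 0.5*(s + root)      # omega_-^2, omega_+^2, Eq. (87)

w_T2 = (w_xy2*qz**2 + w_z2*qx**2)/q2 + c_t**2*q2 # standard-RPA omega_T^2
V_C = 1/q2
I_p = +np.sqrt(w_p2)/(2*V_C) * (w_p2 - w_T2)/(w_p2 - w_m2)   # Eq. (95)
I_m = -np.sqrt(w_m2)/(2*V_C) * (w_m2 - w_T2)/(w_p2 - w_m2)

lhs = np.sqrt(w_p2)*I_p + np.sqrt(w_m2)*I_m      # first moment, Eq. (96)
rhs = 0.5*(qx**2/m_xy + qz**2/m_z)               # (n/2)(qx^2/m_xy + qz^2/m_z)
print(lhs, rhs)
assert np.isclose(lhs, rhs)
```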
2303.12835
Compact and Quiescent Circumgalactic Medium and Ly$\alpha$ Halos around Extremely Red Quasars (ERQs)
Red quasars may represent a young stage of galaxy evolution that provide important feedback to their host galaxies. We are studying a population of extremely red quasars (ERQs) with exceptionally fast and powerful outflows, at median redshift $z$ = 2.6. We present Keck/KCWI integral field spectra of 11 ERQs, which have a median color $i-W3$ = 5.9~mag, median $\left\langle L_{\text{bol}} \right\rangle$ $\approx$ 5 $\times$ $10^{47}$ erg s$^{-1}$, Ly$\alpha$ halo luminosity $\left\langle L_{\text{halo}} \right\rangle$ $=$ 5 $\times$ $10^{43}$ erg s$^{-1}$, and maximum linear size $>128$ kpc. The ERQ halos are generally similar to those of blue quasars, following known trends with $L_{\text{bol}}$ in halo properties. ERQs have halo symmetries similar to Type-I blue quasars, suggesting Type-I spatial orientations. ERQ $\left\langle L_{\text{halo}} \right\rangle$ is $\sim$2 dex below blue quasars, which is marginal due to scatter, but consistent with obscuration lowering photon escape fractions. ERQ halos tend to have more compact and circularly symmetric inner regions than blue quasars, with median exponential scale lengths of $\sim$9 kpc, compared to $\sim$16 kpc for blue quasars. When we include the central regions not available in blue quasar studies (due to PSF problems), the true median ERQ halo scale length is just $\sim$6 kpc. ERQ halos are also kinematically quiet, with median velocity dispersion 293 km s$^{-1}$, consistent with expected virial speeds. Overall we find no evidence for feedback on circumgalactic scales, and the current episode of quasar activity, perhaps due to long outflow travel times, has not been around long enough to affect the circumgalactic medium. We confirm the narrow Ly$\alpha$ emission spikes found in ERQ aperture spectra are halo features, and are useful for systemic redshifts and measuring outflow speeds in other features.
Jarred Gillette, Marie Wingyee Lau, Fred Hamann, Serena Perrotta, David S. N. Rupke, Dominika Wylezalek, Nadia L. Zakamska, Andrey Vayner
2023-03-22T18:00:04Z
http://arxiv.org/abs/2303.12835v1
Compact and Quiescent Circumgalactic Medium and Ly\(\alpha\) Halos around Extremely Red Quasars (ERQs) ###### Abstract Red quasars may represent a young stage of galaxy evolution that provides important feedback to their host galaxies. We are studying a population of extremely red quasars (ERQs) with exceptionally fast and powerful outflows, at median redshift \(z\) = 2.6. We present Keck/KCWI integral field spectra of 11 ERQs, which have a median color \(i-W3\) = 5.9 mag, median (\(L_{\rm bol}\)) \(\approx\) 5 \(\times\) 10\({}^{47}\) erg s\({}^{-1}\), Ly\(\alpha\) halo luminosity (\(L_{\rm halo}\)) = 5 \(\times\) 10\({}^{43}\) erg s\({}^{-1}\), and maximum linear size \(>\) 128 kpc. The ERQ halos are generally similar to those of blue quasars, following known trends with \(L_{\rm bol}\) in halo properties. ERQs have halo symmetries similar to Type-I blue quasars, suggesting Type-I spatial orientations. ERQ (\(L_{\rm halo}\)) is \(\sim\)2 dex below blue quasars, which is marginal due to scatter, but consistent with obscuration lowering photon escape fractions. ERQ halos tend to have more compact and circularly symmetric inner regions than blue quasars, with median exponential scale lengths of \(\sim\)9 kpc, compared to \(\sim\)16 kpc for blue quasars. When we include the central regions not available in blue quasar studies (due to PSF problems), the true median ERQ halo scale length is just \(\sim\)6 kpc. ERQ halos are also kinematically quiet, with median velocity dispersion 293 km s\({}^{-1}\), consistent with expected virial speeds. Overall we find no evidence for feedback on circumgalactic scales, and the current episode of quasar activity, perhaps due to long outflow travel times, has not been around long enough to affect the circumgalactic medium. We confirm the narrow Ly\(\alpha\) emission spikes found in ERQ aperture spectra are halo features, and are useful for systemic redshifts and measuring outflow speeds in other features. keywords: galaxies: active - quasars: emission lines - galaxies: intergalactic medium - galaxies: halos - galaxies: evolution - galaxies: high-redshift ## 1 Introduction Quasars are supermassive black holes which grow as they rapidly accrete infalling material at the center of their host galaxy. At high redshift, accretion can coincide with galactic assembly via galaxy mergers, or streams of infalling gas. Major merger activity could trigger both star formation and growth of the central black hole (Hopkins et al., 2006, 2008; Somerville et al., 2008; Glikman et al., 2015). The process of infalling gas, cold-mode accretion, may also be a dominant mechanism for supplying matter for the formation of galaxies, triggering starbursts, and fueling quasars (Keres et al., 2009; Dekel et al., 2009; Faucher-Giguere & Keres, 2011; Fumagalli et al., 2014). Accretion by infall can coincide with outflows as the subsequent galaxy activity generates feedback, influencing the galaxy's formation (Costa et al., 2014; Nelson et al., 2015; Suresh et al., 2019). A possible evolutionary scheme is where the central black hole grows in obscurity until feedback generates outflows, which clear the obscuring interstellar medium, to reveal a luminous quasar (Sanders et al., 1988; Di Matteo et al., 2005; Hopkins et al., 2006, 2008, 2016; Rupke & Veilleux, 2011, 2013; Liu et al., 2013; Stacey et al., 2022). Red quasars are important for testing the hypothesis that obscured quasars may be in a young evolution phase.
Young quasars may be reddened by dust created in a major starburst inside the galaxies, triggered by a merger or cold-mode accretion. They may show signs of early evolution such as rapid accretion or more powerful outflows from the quasar, or more infall from the intergalactic medium, as they transition into blue quasars. Studies of red/obscured quasars have found many to be in mergers or high-accretion phases (Glikman et al., 2015; Wu et al., 2018; Zakamska et al., 2019). Extremely Red Quasars (ERQs) display powerful outflow signatures, and are interesting candidates for studying a young quasar phase in the galaxy/quasar evolution scheme (Hamann et al., 2017). ERQs were first discovered among the Baryon Oscillation Spectroscopic Survey (BOSS, Paris et al., 2017) in the Sloan Digital Sky Survey-III (SDSS, Eisenstein et al., 2011) and the ALLWISE data release (Cutri et al., 2011, 2013) of the Wide-field Infrared Survey Explorer (WISE, Wright et al., 2010). They were initially described in Ross et al. (2015), and were further refined in Hamann et al. (2017) to \(\sim\)200 objects, characterized by their extremely red colors (\(i-W3\)\(>\) 4.6 mag; Hamann et al., 2017). Currently known ERQs are at cosmic noon redshifts \(z\sim\) 2-4, and are on the high end of quasar bolometric luminosities \(L_{bol}>10^{47}\) erg s\({}^{-1}\). ERQs exhibit other extreme spectral properties that make them unique compared to other red quasar samples, beyond being extremely red. Many have unusually strong and/or blueshifted broad emission lines, with C IV rest equivalent widths \(>\) 100 Å, and peculiar wingless profiles with high kurtosis (Monadi & Bird, 2022). They also frequently have unusual emission line flux ratios (e.g. high N V \(\lambda\)1240/Ly\(\alpha\) or N V \(\lambda\)1240/C IV \(\lambda\)1549). These broad line features indicate outflow properties controlled by accretion, and reside on scales of tens of parsecs (Zhang et al., 2017; Alexandroff et al., 2018). ERQs also have the most blueshifted [O III] \(\lambda\)5007 emission lines ever reported, with speeds \(>\) 6000 km s\({}^{-1}\)(Zakamska et al., 2016; Perrotta et al., 2019). [O III] is significant because it traces gas on galactic scales, at tens of kpc, via low-density forbidden transitions (Hamann et al., 2011; Vayner et al., 2021). Outflows at these galactic scales carry energy that could generate feedback in the host galaxy. This suite of properties found in ERQ spectra indicates more extreme physical conditions than are found in typical quasar populations, beyond orientation effects. Understanding ERQs and their evolutionary nature has launched a multi-faceted study of their physical environments. ERQ host galaxies have been directly imaged with _Hubble Space Telescope_ to look for signs of merger activity (Zakamska et al., 2019). Major merger activity was not found in most of their objects, but observations of high redshift quasars have the added difficulty of subtracting the point-spread function in two-dimensional imaging. Another approach is to study the kinematics and physical extent of the outflowing gas, specifically, the [O III] emission (Perrotta et al. (2019); Vayner et al. (2021); Lau et al. in preparation). Past studies have found [O III] outflows to be spatially compact, not extended to circumgalactic regions, and confined to kpc scales in the nuclear regions of host galaxies (Vayner et al., 2021). A third method of investigating ERQ environments is through their halo emission.
The circumgalactic medium (CGM) has an important role in understanding the evolution of galaxies (Tumlinson et al., 2017), and is the site of interaction for outflows from galaxies and the inflows from the intergalactic medium. High redshift quasars have been found to have large gas reservoirs in the CGM, observed in fluorescent Ly\(\alpha\) emission, and have been interpreted as filaments or inflowing gas in cold-mode accretion (Borisova et al., 2016; Arrigoni Battaia et al., 2019; Cai et al., 2019). ERQs are a well defined and uniform sample from BOSS, and make good targets for studying their Ly\(\alpha\) halos for quasar evolution. In this paper, we aim to investigate whether ERQs represent an earlier evolutionary stage compared to regular luminous blue quasars, which are characterized by strong rest-UV emission lines and blue color across the rest-frame mid-UV to Near-IR spectral range. Within the embedded quasar evolution scheme, we investigate ERQs for evidence that they are different or unusual from normal blue quasars at the same redshifts and luminosities. This could be evidence for feedback, but also signatures of more intense infall or chaotic mergers, more/less asymmetries, and/or suppressed Ly\(\alpha\) halo emission due to the quasar obscuration. Detailed analysis of the Ly\(\alpha\) emission from ERQs can provide key insights into this question. Secondarily, we want to investigate the spatial origin of a peculiar, and narrow, Ly\(\alpha\) emission component frequently seen in spatially unresolved ERQ spectra. It often appears as a "spike" on top of the broad Ly\(\alpha\) emission, and its narrow width (FWHM \(<\) 1,000 km s\({}^{-1}\)) indicates the emission is far from the quasar's broad line region (BLR). The spike sometimes coincides with the rest frame of other narrow emission lines (e.g. He II and narrow [O III], Hamann et al. (2017) and Perrotta et al. (2019), respectively). We want to determine whether this sharp feature originates in the extended inner-halo region, as hypothesized in Hamann et al. (2017), and discussed or used in Perrotta et al. (2019) and Lau et al. (2022). This investigation is motivated by observations that many broad emission lines in ERQs are often blueshifted, and involved in outflow, features that would typically be at the systemic velocity in other quasar populations (Hamann et al., 2017; Perrotta et al., 2019). This blueshifting would systematically bias automated redshift estimates generated in BOSS, making outflows appear weaker. If the source of the Ly\(\alpha\) spike is far from the galactic center, where outflows would take place, and is extended in the galactic halo, then it would not be involved in strong outflows and would provide a better systemic redshift estimate. Confirming that the narrow Ly\(\alpha\) emission can be used as a better systemic redshift for ERQs also confirms the outflows hypothesized in the aforementioned discussions, and helps constrain velocities for future work on ERQ outflows (J. Gillette et al., 2023b in preparation). Our team already analyzed one Ly\(\alpha\) halo from our sample that is the reddest known ERQ (J0006+1215, in Lau et al. (2022)), which displays an extremely fast [O III] \(\lambda\)5007 outflow at \(\sim\)6000 km s\({}^{-1}\)(Perrotta et al., 2019). The Ly\(\alpha\) halo spans \(\sim\)100 kpc, and the narrow Ly\(\alpha\) emission spike in the quasar spectrum originates from the inner halo.
It is kinematically quiet, with velocity dispersion of \(\sim\)300 km s\({}^{-1}\) and no broadening above the dark matter circular velocity down to the spatially resolved limit \(\sim\)6 kpc from the quasar. Lau et al. (2022), hereafter L22, proposed that the He II \(\lambda\)1640/Ly\(\alpha\) ratio of the inner halo and the asymmetry level of the overall halo in J0006+1215 are dissimilar to Type-II quasars, suggesting unique physical conditions for J0006+1215 that are beyond orientation differences from other quasar populations. We note the Type-I/II quasar classification dichotomy because other studies find that Type-IIs tend to show more asymmetric halos, consistent with more edge-on views of the quasar. It also correlates strongly with the level of obscuration toward the nucleus (Greene et al., 2014), and we want to distinguish ERQs from obscured quasar populations explained by orientation effects. In this paper we analyze Ly\(\alpha\) halos of a sample of ERQs, and compare them to those around other quasars as a unique population. It is organized as follows. Section 2 describes the selection method for ERQ observations, their data reduction, calibration, and post-processing. Section 3 describes the measured halo properties of the ERQ sample, such as the extended line emission surrounding the quasar, and analysis results of size, morphology, surface brightness (SB), and kinematics of the extended emission. In Section 4 we discuss implications for quasar youth, feedback, and further for quasar studies. Section 5 concludes the paper. Throughout this paper we adopt a \(\Lambda\)-CDM cosmology with \(H_{0}\) = 69.6 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm M}\) = 0.286 and \(\Omega_{\Lambda}\) = 0.714, as adopted by the online cosmology calculator developed by Wright (2006). All magnitudes are on the AB system. Reported wavelengths are in vacuum and in the heliocentric frame. ## 2 Target selection, observations, and data reduction Here we summarize the KCWI observations, data reduction, and post-processing for the full sample of 12 ERQs, following the procedures described in detail by L22. Our aim is to test whether the most extreme properties of ERQs correlate in any way with particular or unusual Ly\(\alpha\) halo properties. Our second goal is to test the suggestion in Hamann et al. (2017) that the narrow Ly\(\alpha\) spikes in the BOSS spectra of some ERQs are halo emission. ### Sample Selection We select targets for our study from the sample of 205 ERQs in Hamann et al. (2017), in the redshift range \(2.0\leq z\leq 3.6\); all have coverage of the Ly\(\alpha\) feature at rest-frame 1215.7 Å. ERQs showing a narrow Ly\(\alpha\) emission component in the profile of their BOSS spectra, by visual inspection, were prioritized during observing to determine if the emission originates from an extended halo. We also prioritize ERQs from Perrotta et al. (2019) with [O III] measurements, which were shown to have color-correlated powerful outflows at tens-of-kpc scales, and may be more likely to show evidence of feedback in the halo. We also prefer the reddest available ERQs. We chose the best available targets subject to weather and scheduling constraints, and prioritize targets fitting several of the criteria above. Finally, after applying our selection criteria to our four nights we present a total sample of 12 ERQs, with redshifts \(2.31\leq z\leq 3.14\). Half of them have the narrow component in their Ly\(\alpha\) emission.
## 2 Target selection, observations, and data reduction

Here we summarize the KCWI observations, data reduction, and post-processing for the full sample of 12 ERQs, following the procedures described in detail by L22. Our aim is to test whether the most extreme properties of ERQs correlate in any way with particular or unusual Ly\(\alpha\) halo properties. Our second goal is to test the suggestion in Hamann et al. (2017) that the narrow Ly\(\alpha\) spikes in the BOSS spectra of some ERQs are halo emission.

### Sample Selection

We select targets for our study from the sample of 205 ERQs in Hamann et al. (2017), in the redshift range \(2.0\leq z\leq 3.6\); all have coverage of the Ly\(\alpha\) feature at rest-frame 1215.7 Å. ERQs showing narrow Ly\(\alpha\) emission in the profile of their BOSS spectra, identified by visual inspection, were prioritized during observing, to determine whether the emission originates from an extended halo. We also prioritize the ERQs in Perrotta et al. (2019) with [O III] measurements, which were shown to have color-correlated powerful outflows at tens-of-kpc scales and may be more likely to show evidence of feedback in the halo. We also prefer the reddest available ERQs. We chose the best available targets subject to weather and scheduling constraints, prioritizing targets that fit several of the criteria above. Finally, after applying our selection criteria over our four nights, we present a total sample of 12 ERQs with redshifts \(2.31\leq z\leq 3.14\). Half of them have the narrow component in their Ly\(\alpha\) emission. The median reddening of the sample is \(i-W3\approx 5.9\). Table 1 contains basic sample properties.

### Observations

We observed our final sample of 12 ERQs using the Keck Cosmic Web Imager (KCWI; Morrissey et al., 2018) on the Keck II telescope. KCWI is a wide-field integral field spectrograph optimized for observing low-surface-brightness targets. It provides both spatial and spectral information for resolved targets, allowing us to make pseudo-narrowband images and to obtain spectra from individual spatial pixels, or "spaxels." Observations were conducted over four nights from 2018 to 2020, with identical instrument configuration across the sample; observation details are noted in Table 1. Conditions often varied through a night, but our observations had typical seeing FWHM of about 0.8-1.4 arcsec. KCWI currently has only a blue channel, and we used the BL grating, which has the best efficiency and the widest wavelength bandpass (\(\Delta\lambda\simeq\) 2,000 Å). KCWI uses a "slicer" to slice the field of view into rows or columns before the light goes to the grating. Slicers come in three different sizes that trade off field of view, spectral resolution, and spatial sampling. We used the medium slicer to obtain a considerable field of view, spectral resolution comparable to SDSS, and sufficient spatial sampling. This configuration yields a field of view of 15.7 arcsec \(\times\) 18.9 arcsec, corresponding to a physical scale of approximately 128 kpc \(\times\) 154 kpc at our sample's median redshift (\(z_{em}\approx 2.61\)). With 24 slices, the configuration provides a spatial sampling of 0.68 arcsec across slices and is seeing-limited along them. Each exposure was dithered 0.35 arcsec across slices to sub-sample the long spatial dimension of the output spaxels, which have sizes of 0.68 arcsec \(\times\) 0.29 arcsec. The spectral resolution is \(R=1800\). The full spectral range is approximately 3500 to 5625 Å, which is ample for coverage of the Ly\(\alpha\) emission profile across a wide range of quasar redshifts, \(2.0\leq z\leq 3.6\). We used exposures of 20 minutes, optimally integrating for 2 to 3 hours in total, varied depending on target availability, observing conditions, and the priority of the object. We calibrated using arc lamps and spectroscopic standard stars at the beginning and end of each night.

### Data Reduction & Post Processing

We adopted the data reduction and post-processing approaches described in detail in L22; here we only summarize the major steps. We used the KCWI Data Extraction and Reduction Pipeline (KDERP, [https://github.com/Keck-DataReductionPipelines/KcWIDRP](https://github.com/Keck-DataReductionPipelines/KcWIDRP)), written in the Interactive Data Language (IDL), for flat-fielding, cosmic-ray removal, and the first-pass instrument noise removal and background subtraction. We made sure that no prominent skyline residuals are present near Ly\(\alpha\) in each spectrum. We then used CWITools (O'Sullivan et al., 2020) and the IDL library IFSFIT (Rupke & To, 2021) for the second-pass background subtraction and the removal of internally scattered light. Specifically, IFSFIT uses a spatially unresolved quasar emission template, generated from a 1-arcsec aperture around the object, which is scaled to subtract all emission from the quasar at every spaxel position. A key step that was customized for half of the sample was the quasar template, which is subtracted to leave only halo emission in the residual, as described in Section 2.4.
We also used CWITools' simplified algorithm to subtract any foreground continuum sources convolved with the point spread function.

### Narrow Halo Emission

L22 confirmed that the narrow Ly\(\alpha\) "spike" emission in ERQ spectra originates in the halo, and in Section 3.1 we further confirm this for the rest of our ERQ sample. This insight into the emission's origin allows us to treat the emission differently when processing the spectra and images. For objects with no spike, the PSF spectral template removes all emission within the seeing disk, resulting in a hollow at the quasar position in the SB map, as in blue quasar studies. We followed the procedure for generating quasar spectral templates described in L22, and made customisations to the quasar template for each ERQ identified to have narrow Ly\(\alpha\) emission. We isolated the feature within the 1-arcsec template by interpolating underneath the narrow line, effectively "clipping" it, which leaves the extended emission in the residual. Examples of this template interpolation are shown in Figure 1 (see also L22). In cases where the spectral location and profile shape of the spike were more ambiguous, we compared spectra extracted from gradually larger aperture sizes. We confirmed that the spike was the only emission that increases with larger aperture size, and interpolated underneath its narrow profile.
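The clipping step can be illustrated with a minimal sketch. Note that the paper's interpolation was hand-fit to handle Ly\(\alpha\)-forest absorption, whereas this simplified version (ours) interpolates linearly under a hypothetical spike window; `w_lo` and `w_hi` are illustrative parameters, not values from the paper:

```python
# Minimal sketch, assuming numpy; the actual templates were hand-fit (Fig. 1).
import numpy as np

def clip_spike(wave, flux, w_lo, w_hi):
    """wave, flux: 1D quasar-template spectrum; (w_lo, w_hi): spike window in Angstroms."""
    in_window = (wave > w_lo) & (wave < w_hi)
    clipped = flux.copy()
    # Replace the narrow-line flux by interpolating across the window, so the
    # spike survives template subtraction and remains in the residual halo cube.
    clipped[in_window] = np.interp(wave[in_window],
                                   wave[~in_window], flux[~in_window])
    return clipped
```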
### Line Variability

We inspected ERQ spectra for variability between our KCWI observations and the BOSS catalog. The median time between observations is two years in the rest frame of the median ERQ. In general, the characteristics of the line profiles do not change significantly between SDSS and our observations, in agreement with comments in L22. The similarity between these observations indicates that the unusual profile features distinguishing ERQs from typical quasars persist beyond two years in the quasar frame, and is consistent with other observations of ERQs from Hamann et al. (2017).

Table 1: ERQ properties in our program, with one row per ERQ listing coordinates (J2000), \(z_{\rm em}\), \(z_{\rm halo}\), C IV FWHM, [O III] blueshift, \(W3\) magnitude, \(i-W3\) color, bolometric luminosity, observation date, and exposure time. \(z_{\rm em}\) is from the SDSS DR12Q BOSS catalog emission-line measurements; \(z_{\rm halo}\) is computed from the Ly\(\alpha\) line emission of the extended halo. C IV FWHM is from the emission-line fitting done in Hamann et al. (2017). [O III] emission blueshift measurements, when available, are from Perrotta et al. (2019); blueshifts from CO emission measurements (F. Hamann et al. 2023, in preparation) will be discussed in Gillette et al. (2023b, in preparation). Bolometric luminosity is estimated from the \(W3\) magnitude. ERQ J0834+0159 was observed under cloudy conditions and is omitted from analysis of the halo.
Table 2: Measured Ly\(\alpha\) halo properties, with one row per ERQ listing the isophotal area (kpc\(^{2}\)), maximum linear size (kpc), halo luminosity (erg s\({}^{-1}\)), projected peak-to-quasar and centroid-to-quasar distances (kpc), and the flux-weighted and flux-unweighted eccentricities \(e_{\rm weight}\) and \(e_{\rm unweight}\). Sizes are lower limits when the halo extends beyond the field of view.

### Sample Properties

Table 1 lists basic properties of the sample, the measured redshift of the Ly\(\alpha\) emission, and details of the observations. We include the SDSS BOSS catalog emission-line redshift measurements \(z_{em}\), and \(z_{halo}\) computed from the extended Ly\(\alpha\) halo emission centroid, without the quasar emission. We include the C IV FWHM, measured from C IV \(\lambda\)1549 emission-line profile fitting in Hamann et al. (2017), and, when available, the [O III] \(\lambda\)5007 emission blueshift from Perrotta et al. (2019). We include catalog magnitudes and color (\(W3\) and \(i-W3\)), and our computed bolometric luminosities (\(L_{bol}\)). We apply Galactic extinction corrections to all luminosity and SB measurements. ERQs are heavily extincted in the visible and UV, but the amount of extinction is difficult to determine. Hamann et al. (2017) estimated that the median SED of ERQs is suppressed by three magnitudes in the rest-frame UV in comparison to normal blue quasars. The uncertain extinction in ERQs also makes bolometric luminosities difficult to determine. We therefore assume that ERQs have intrinsic SEDs like typical blue quasars, and use their measured WISE \(W3\) fluxes (assumed to be unaffected by extinction) to estimate the bolometric luminosities, similar to the ERQ luminosities computed by Perrotta et al. (2019). We compare the median quantities from our sample to the medians from other surveys of blue quasars; discussion of these comparisons is in Section 3.4. One quasar, J0834+0159, was observed under cloudy conditions and was omitted from the analysis of Ly\(\alpha\) halos. We do not detect other extended emission lines, such as C IV \(\lambda\)1549 or He II \(\lambda\)1640, except for J0006+1215, which is discussed in L22.

### Halo Detection

We used an optimal extraction algorithm for detecting diffuse Ly\(\alpha\) emission, instead of pseudo-narrowband images, to obtain the morphology and kinematics of the halo. The algorithm selects pixels of the three-dimensional data cube, or "voxels," that have good signal-to-noise (S/N > 2) and are connected to each other at sides, edges, or corners. Our algorithm is described in more detail in L22, and follows the algorithms in Borisova et al. (2016), Arrigoni Battaia et al. (2019), Cai et al. (2019), and Farina et al. (2019).
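The connectivity criterion can be expressed compactly with scipy. This is our own sketch of the described voxel-linking step, not the CWITools/L22 pipeline code; the function and argument names are illustrative:

```python
# Minimal sketch, assuming numpy and scipy, of S/N > 2 voxel detection with
# 26-connectivity (voxels touching at sides, edges, or corners are linked).
import numpy as np
from scipy import ndimage

def detect_halo(flux, noise, seed_xy, snr_cut=2.0):
    """flux, noise: 3D cubes (wavelength, y, x); seed_xy: (y, x) quasar position."""
    mask = (flux / noise) > snr_cut
    structure = np.ones((3, 3, 3), dtype=bool)   # 3x3x3 element -> 26-connectivity
    labels, _ = ndimage.label(mask, structure=structure)
    # keep the connected component(s) overlapping the quasar's spatial column
    seed_labels = np.unique(labels[:, seed_xy[0], seed_xy[1]])
    seed_labels = seed_labels[seed_labels > 0]
    return np.isin(labels, seed_labels)          # 3D boolean detection mask
```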
Our 1\(\sigma\) surface-brightness limit and root-mean-square value are measured in a 1-arcsec aperture for a single 1-Å channel. The surface brightness maps probe the halo \(\rm Ly\alpha\) emission down to a median 1\(\sigma\) SB limit of \(2.16\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\), across spatial scales from 40 kpc to \(>80\) kpc from the quasars. The surface-brightness root-mean-square has a median of \(2.26\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\), for a channel at rest-frame wavelengths between 1255 and 1275 Å, where there are no prominent quasar emission lines.

## 3 Measurements & Results

To investigate the environment of ERQs, we measure basic properties of the Ly\(\alpha\) halo emission, such as surface brightness, linear size, and velocity dispersion. Peculiarities in halo properties compared to blue quasars may be evidence that ERQs inhabit a different quasar evolutionary phase. The panels of Figure 2 show, from left to right, the halo surface brightness, the 1st velocity moment of the line flux distribution (i.e., velocity shift), the velocity dispersion, the circularly averaged surface brightness (SB) radial profile, and the spatially integrated spectrum, for 11 of the 12 observed ERQs. J0834+0159 was observed under cloudy conditions and is not included in Figure 2 or the subsequent tables. Tables 2-4 list a variety of properties measured from these maps.

### Morphology & Brightness Maps

Morphology measurements such as extent and asymmetry may give insights into the gaseous environment of the quasar, and could be influenced by large-scale structure, the presence of merger activity, and quasar luminosity. The morphology of ERQ Ly\(\alpha\) halos varies across the sample, with some halos extending to the edge of the FOV (\(>60\)-70 kpc) and others compact near the central quasar (\(\sim\)40 kpc). The Ly\(\alpha\) halos also show varying asymmetry. We present the Ly\(\alpha\) SB maps in the first column of Figure 2. One unique aspect of this project is showing continuous halo emission down to zero projected distance from the quasar: the six ERQs with Ly\(\alpha\) spikes do not show a hollow from our quasar PSF subtraction (Section 2.4). Table 2 presents the measured projected distance from the quasar to the Ly\(\alpha\) halo emission peak, and to the halo centroid. The median distance from quasar to peak in our sample is \(r_{\rm peak}=5.0\) kpc, and the median distance from quasar to halo centroid is \(r_{\rm centroid}=6.5\) kpc. The uncertainty in these projected distances is about half a pixel, or \(\sim\)1 kpc. The position of the Ly\(\alpha\) halo centroid is used in the analysis of spatial asymmetry in this section, and the position of the halo peak is used in the analysis of aperture kinematics in Section 3.2. Six of the halos extend beyond the FOV, to \(>55\) kpc from the central quasar. Our sample has a median maximum linear size \(>128\) kpc, and a median halo luminosity of \(5.40\times 10^{43}\) erg s\({}^{-1}\). We quantify the asymmetry level of the Ly\(\alpha\) halo morphology with the elliptical eccentricity parameters \(e_{\rm weight}\) and \(e_{\rm unweight}\) (identical to L22). We define \(e_{\rm weight}\) using flux-weighted second-order spatial moments with respect to the Ly\(\alpha\) halo centroid, following the formula in O'Sullivan & Chen (2020); \(e_{\rm unweight}\) uses flux-unweighted spatial moments with respect to the quasar position.
Values of \(e\approx 0\) correspond to circular morphologies, and values near 1 to more elliptical ones. The parameter \(e_{\rm weight}\) is defined as \(e_{\rm weight}=\sqrt{1-\alpha_{\rm weight}^{2}}\), where the flux-weighted eccentricity parameter \(\alpha_{\rm weight}\) is defined in Arrigoni Battaia et al. (2019) or Cai et al. (2019); \(e_{\rm weight}\) tends to characterize the central regions of the halo with high SB. Conversely, \(e_{\rm unweight}=\sqrt{1-\alpha_{\rm unweight}^{2}}\), where the flux-unweighted eccentricity parameter \(\alpha_{\rm unweight}\) is defined in den Brok et al. (2020) or Mackenzie et al. (2021); \(e_{\rm unweight}\) better describes the large-scale shape of the diffuse halo surrounding the quasar position. Large differences between \(e_{\rm weight}\) and \(e_{\rm unweight}\) could indicate significant changes from the inner halo to the extended regions, which could be affected by filamentary structures. For example, J1145+5742 has a luminous Ly\(\alpha\) halo that extends east from the quasar position and is visibly asymmetric about the halo centroid, with a measured \(e_{\rm weight}\approx 0.74\). The asymmetric extension of the halo away from the quasar position is also reflected in the large \(e_{\rm unweight}\approx 0.71\). An unusual case is J2323\(-\)0100, which displays an asymmetric halo with a patch of emission to the north-west that offsets the Ly\(\alpha\) halo centroid. This offset away from the luminous bulk of the halo emission results in a large \(e_{\rm weight}\approx 0.86\). Considering the more diffuse emission around the quasar position, the asymmetry is more moderate, with a lower \(e_{\rm unweight}\approx 0.65\). The median eccentricities for the optimally extracted halos (11 objects) are \(e_{\rm weight}=0.65\) and \(e_{\rm unweight}=0.66\). \(e_{\rm unweight}\) values larger than \(e_{\rm weight}\) could reflect the sample generally having a more circularly symmetric inner halo, where the flux is strongest, versus a slightly more asymmetric outer halo. These median eccentricity values are comparable to each other, and are given further context by comparison with other quasar samples in Section 3.4.
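For illustration, a sketch of one standard moments-based construction follows. The exact conventions of \(\alpha_{\rm weight}\) and \(\alpha_{\rm unweight}\) follow the cited papers; this simplified version (ours) is indicative rather than a reproduction of the published formulas:

```python
# Minimal sketch, assuming numpy: eccentricity from flux-weighted second-order
# spatial moments. For e_unweight, use unit weights inside the detection mask
# and center on the quasar position instead of the flux-weighted centroid.
import numpy as np

def weighted_eccentricity(sb_map, mask):
    y, x = np.indices(sb_map.shape)
    w = np.where(mask, sb_map, 0.0)
    w_sum = w.sum()
    xc, yc = (w * x).sum() / w_sum, (w * y).sum() / w_sum  # flux-weighted centroid
    # second-order moments about the centroid
    mxx = (w * (x - xc) ** 2).sum() / w_sum
    myy = (w * (y - yc) ** 2).sum() / w_sum
    mxy = (w * (x - xc) * (y - yc)).sum() / w_sum
    # eigenvalues of the moment matrix give the semi-axes of the equivalent ellipse
    lam1, lam2 = np.linalg.eigvalsh([[mxx, mxy], [mxy, myy]])
    alpha = np.sqrt(min(lam1, lam2) / max(lam1, lam2))  # axis ratio
    return np.sqrt(1.0 - alpha ** 2)                    # e = sqrt(1 - alpha^2)
```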
To characterize the radial extent of the Ly\(\alpha\) halo, we calculate circularly averaged SB radial profiles for the ERQ sample (Figure 2, fourth column). While the optimally extracted line maps yield measurements of morphology and kinematics, they do not allow direct comparison with other samples, because other surveys used pseudo-narrowband imaging to compute their radial profiles. We therefore generate a pseudo-narrowband image of fixed width to recover all possible flux in the extended regions. Using a similar method to other studies, we adopt a fixed wavelength width of \(\pm 1,000\) km s\({}^{-1}\) centered at the Ly\(\alpha\) rest wavelength. For the full sample of ERQs, the median SB in the innermost annulus (2\(-\)4 kpc) is 1.53 \(\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\), and 1.61 \(\times 10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\) in the most distant annulus (32\(-\)63 kpc). The six of the 11 ERQs with modified PSF subtraction, due to the presence of a Ly\(\alpha\) spike in their spectra, have an inner-annulus (2\(-\)4 kpc) median SB of 11.4 \(\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\) and a most-distant-annulus (32\(-\)63 kpc) median SB of 4.69 \(\times 10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\).

We calculate the averaged SB at each radial distance in annuli centered on the quasar position for each ERQ, as well as the full-sample median (Table 3). We compute a full-sample SB median omitting the inner-region data, in order to compare the radial profiles of ERQs with a spike to other samples, which cannot perform the modified PSF subtraction. To demonstrate the potential difference between our quasars and other samples, we also compute the median radial profile of the sub-sample of six ERQs where a Ly\(\alpha\) spike is present in the spectra and the modified PSF subtraction was performed. The SB of ERQs with detected Ly\(\alpha\) emission increases monotonically as the radial distance decreases, indicating that the innermost regions are the most luminous parts of these halos. ERQs with the Ly\(\alpha\) spike emission show the most centrally concentrated emission. We fit an exponential to the binned radial profiles, \(\mathrm{SB}_{\mathrm{Ly}\alpha}(r)=C_{e}\,\mathrm{exp}(-r/r_{h})\), where \(C_{e}\) is the normalization, \(r\) is the projected distance from the quasar, and \(r_{h}\) the scale length of the profile. Figure 2 shows the fits of this model in the fourth column; they appear to describe the radial SB profiles reasonably well. For individual ERQs, a steep decline in SB in the inner regions (\(<\)4 kpc) appears only for ERQs whose quasar PSF subtraction did not involve clipping the Ly\(\alpha\) spike. Offsets between the profile fit and the data in the outer regions (\(>\)32 kpc; e.g., J1451+0132 and J1652+1728) generally occur for ERQs with an asymmetric Ly\(\alpha\) halo shape at large scales. These profiles are better fit by exponential functions than by power laws, and show that the SB declines steeply at large distances from the quasar. Table 3 presents the averaged SB in each annular bin and the exponential fit parameters. Our full-sample median exponential scale length is \(r_{h}=9.4\) kpc, and the six ERQs with a Ly\(\alpha\) spike have a median \(r_{h}=8.7\) kpc. Generally, ERQ Ly\(\alpha\) halos are more centrally concentrated than those of blue quasar samples, and ERQs with the Ly\(\alpha\) spike are the most compact of our sample. Section 3.4 contains further discussion of quasar population comparisons.
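The exponential fit itself is straightforward. A minimal sketch (assuming scipy, with the sample-median annulus values from Table 3 standing in for a real profile, and annulus centers chosen by us for illustration) is:

```python
# Minimal sketch of the fit SB(r) = C_e * exp(-r / r_h) from Section 3.1.
import numpy as np
from scipy.optimize import curve_fit

def sb_model(r, c_e, r_h):
    return c_e * np.exp(-r / r_h)

# Illustrative annulus centers (kpc) and the sample-median SB values (Table 3):
r_bins = np.array([3.0, 6.0, 12.0, 24.0, 48.0])
sb = np.array([1.53e-14, 1.22e-14, 6.75e-15, 1.93e-15, 1.61e-16])

popt, pcov = curve_fit(sb_model, r_bins, sb, p0=(sb[0], 10.0))
c_e, r_h = popt
# r_h comes out of order 10 kpc, comparable to the 9.4 kpc sample median.
print(f"scale length r_h = {r_h:.1f} kpc")
```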
Figure 1: Example spectral templates extracted from the inner 1-arcsec aperture spectra near Ly\(\alpha\) of, from left to right, J1145+5742, J1451+2338, and J1652+1728. The spectrum is shown in black, the interpolation function in dashed green, and the "clipped" narrow feature in the spectrum is replaced by the thick red line segment. All emission lines are labelled in the rest frame of the extended Ly\(\alpha\) halo emission. Function-fitting procedures would be complicated by absorption in the Ly\(\alpha\) forest, so the interpolation function was hand-fit to account for smooth emission features and continuum on the red side of Ly\(\alpha\) and deep absorption on the blue side. In the first panel, J1145+5742 has a straightforward and distinct separation between the broad and narrow emission components. The second panel shows J1451+2338, which has blended broad emission lines from Ly\(\alpha\) and N V \(\lambda\)1240. The third panel, J1652+1728, also has unusual emission lines, and the blueshifted central emission of Ly\(\alpha\) caused uncertainties in clipping the narrow emission. We do not assume this interpolation function describes any characteristics of the broad emission; it simply isolates the emission of the narrow Ly\(\alpha\) component.

Table 3: Our sample's Ly\(\alpha\) halo circularly averaged surface-brightness radial profiles, with one row per ERQ listing the average SB in the 2\(-\)4, 4\(-\)8, 8\(-\)16, 16\(-\)32, and 32\(-\)63 kpc annuli (erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\)), followed by the exponential amplitude and scale length (kpc). The first five columns make up the data points for the radial profiles in Figure 2. For modeling the radial profiles, the inner annulus is not included when no inner halo was clipped from the quasar PSF spectral template (values shown in parentheses). The last two columns are the parameters of the exponential-profile fit to the radial profile data. The sample medians are \(1.53\times 10^{-14}\), \(1.22\times 10^{-14}\), \(6.75\times 10^{-15}\), \(1.93\times 10^{-15}\), and \(1.61\times 10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\) in the five annuli, with a median exponential amplitude of \((2.36\pm 0.14)\times 10^{-14}\) and a median scale length of \(9.4\pm 3.7\) kpc. Median errors shown are the standard deviations of the values adjacent to the median.
### Kinematics

Here we describe our kinematic measurements of the halos in our sample. Because integral field spectroscopy provides multi-dimensional measurements, we can present the kinematics both of the integrated halo emission and of spatially resolved spectra. Figure 2's fifth column shows the spatially integrated spectra and a single-Gaussian fit to the Ly\(\alpha\) halo emission. We define zero velocity by the emission centroid of the spatially integrated halo, without clipping the narrow-emission spike from the spectral template. For J2215\(-\)0056, we integrate the halo including the extended object to the north. Across the sample, the integrated-halo emission spectra generally have similar narrow shapes, with a median spatially-integrated velocity dispersion of 293 km s\({}^{-1}\). J1145+5742 has narrow absorption blueshifted from the centroid in its Ly\(\alpha\) halo spectrum; in spite of the absorption, the fit centroid is consistent with the Ly\(\alpha\) halo emission peak. For J1652+1728, the Gaussian fit captures the line width for approximating the dispersion, but the narrow Ly\(\alpha\) halo emission in the central region is blueshifted relative to the outer halo, blueshifting the total spatially-integrated spectrum. This centrally concentrated blueshifting, seen in J1652+1728's 1st-moment velocity map, also appears in the Ly\(\alpha\) emission profile from the central arcsec aperture in Figures 1 & 4. The blueshifted central emission causes uncertainty in what to clip as halo emission when subtracting the quasar.
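A sketch of the single-Gaussian dispersion measurement follows (our own illustration; the velocity grid, amplitude, and noise level are placeholders, not values from the data):

```python
# Minimal sketch, assuming scipy: fit a Gaussian to a spatially integrated halo
# spectrum to estimate the velocity dispersion (cf. Fig. 2, fifth column).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

velocity = np.linspace(-3000.0, 3000.0, 241)        # km/s, placeholder grid
flux = gaussian(velocity, 1.0, 0.0, 293.0)          # placeholder halo spectrum
flux += np.random.default_rng(0).normal(0.0, 0.02, velocity.size)

popt, _ = curve_fit(gaussian, velocity, flux, p0=(flux.max(), 0.0, 300.0))
print(f"integrated-halo dispersion: {popt[2]:.0f} km/s")  # sample median: 293 km/s
```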
Table 4's fifth column shows the resulting velocity dispersion from a Gaussian fit to the integrated halo emission. Figure 3 displays evidence of an extended Ly\(\alpha\) emitter (LAE) whose emission centroid coincides with absorption in the quasar spectrum. The centroid of the LAE is \(\sim\)30 kpc in projected distance from the halo peak, and spectroscopically \(\sim\)\(-\)500 km s\({}^{-1}\) from the redshift measured from the Ly\(\alpha\) halo. At higher redshifts (3 \(<z<\) 4.5), LAEs with similar velocity offsets and clustering have been suggested to be star-forming galaxies orbiting in the quasar halo potential (Fossati et al., 2021). It is uncertain whether the extended emitter is powered by quasar radiation. We describe the kinematics at each spatial position around the quasar with 2D maps. Figure 2's second column shows velocity centroid maps, which are the 1st moments in velocity space of the flux distribution at each spatial position. Our velocity maps use the same detection region and PSF as the SB maps. We measure relatively low velocity shifts in the halos, of hundreds of km s\({}^{-1}\), and the halos do not exhibit energetic outflows of thousands of km s\({}^{-1}\). Table 4 lists the spatial velocity-centroid standard deviation for each ERQ, other computed quantities of velocity dispersion, and their medians. The first column is the standard deviation of the 1st velocity-moment centroid, with a sample median of 288 km s\({}^{-1}\). Second is the median velocity dispersion for each object, from the Voronoi-binned velocity dispersion maps in Figure 2, with the respective standard deviation in the third column. The sample medians of the spatial velocity dispersion and its standard deviation are 374 and 114 km s\({}^{-1}\), respectively. Finally we show the dispersion of the Gaussian fit to the spatially integrated Ly\(\alpha\) halo emission, with a median of 293 km s\({}^{-1}\). Discussion and analysis of these quantities is in Section 4.4. Our data do not have sufficient signal-to-noise ratios to measure the halo velocity dispersion using second velocity moments (see L22 for more discussion). We instead use Gaussian fits to a Voronoi-binned map (see Rupke et al. (2019)). Figure 2's third column shows the resulting Ly\(\alpha\) Gaussian velocity dispersion maps. These maps generally cover smaller areas than the SB and velocity-shift maps, but as an independent detection method they confirm the general morphology of the extended halo. In the case of J2215\(-\)0056, the northern emitting region has unusually uniform dispersion across its area, unlike any other extended emission cloud we detect; it is not clear whether this Ly\(\alpha\) emitter is a distinct object. For J2215\(-\)0056 we determine the systemic redshift by extracting the extended halo excluding the extended object. The remaining ERQ Ly\(\alpha\) halos show relatively low dispersion across their maps, in the hundreds of km s\({}^{-1}\). Some ERQs show a velocity gradient of \(\sim\)1,000 km s\({}^{-1}\) from one edge of the halo to the other (e.g., J1232+0912 and J1451+0132), and the coherent transition from redshift to blueshift helps confirm that the measured velocities are valid. A rotating disc of \(\sim\)100 kpc size is not expected to have formed by \(z\sim 2\) (e.g., DeFelippis et al., 2020; Huscher et al., 2021). One ERQ, J0220+0137, has a kinematically distinct cloud at the eastern boundary of the FOV, \(\sim\)65 kpc from the quasar and shifted in velocity by about \(-\)1,500 km s\({}^{-1}\) from the Ly\(\alpha\) halo redshift.
The kinematic separation makes it unlikely to be part of the quasar's diffuse halo gas, and it is omitted when measuring the Ly\(\alpha\) halo emission. ERQs J1451+2338 and J1652+1728 have a noticeable circular patch at the center of their 1st velocity-moment maps, likely residuals from imperfect quasar subtraction caused by complexities in their Ly\(\alpha\) profiles.

### Blueshifted Absorption

Figure 4 shows aperture and annular spectra of selected ERQs, to further understand the spatial and spectral distribution of Ly\(\alpha\) emission and absorption features. One ERQ, J1145+5742, shows definite blueshifted absorption, similar to what L22 found in J0006+1215. J1145+5742 has strong absorption features in its integrated spectrum, and consistently shows deep, narrow absorption in apertures across the FOV, out to \(\sim\)50 kpc, blueshifted at \(-\)400 km s\({}^{-1}\). In its inner halo spectrum, within \(\sim\)25 kpc, there is additional blueshifted absorption at about \(-\)950 km s\({}^{-1}\). J1451+0132 has blueshifted absorption in the emission profile of its innermost, spatially unresolved inner halo, blueshifted by \(\sim\)800 and \(\sim\)1250 km s\({}^{-1}\), but not in the outer halo profile. These blueshifted absorption features resemble those of spatially resolved Ly\(\alpha\) halo spectra that require outflows (e.g., Li et al., 2021). J1652+1728 has more unusual profile features. It has primarily blueshifted central halo emission in apertures within 15\(-\)20 kpc of the quasar (about \(-\)900 km s\({}^{-1}\)), and gradually redshifted, more symmetric Ly\(\alpha\) halo emission at larger annular radii. Its aperture spectra reveal that the asymmetric emission profile of the integrated Ly\(\alpha\) halo is not produced by blueshifted absorption, but by the central halo emission being blueshifted relative to the extended halo. Our data show evidence of multi-component emission, likely also affected by absorption. Overall, we do not find evidence for strong absorption in the Ly\(\alpha\) halo emission profiles. The intrinsic Ly\(\alpha\) emission profile of these quasars is not known well enough to model the level of absorption (e.g., Wang et al., 2021), and analysis of the environment that may cause the absorption is beyond the scope of this work.

### Comparisons to Blue Quasars

We have a full sample of eleven ERQ Ly\(\alpha\) halos. One of the main goals of our study is to compare the Ly\(\alpha\) halos around ERQs to those around Type-I blue quasars roughly matched to the ERQs in redshift and luminosity (\(\geq 10^{47}\) erg s\({}^{-1}\), at cosmic noon). Our focus is on Type-I quasars, but we also include Type-IIs for some of the comparisons, as described below. Our ERQ sample, with median luminosity \(L_{\rm bol}=5\times 10^{47}\) erg s\({}^{-1}\) and median \(z=2.61\), is only slightly more luminous than the samples we compare against. We recalculate all of the blue quasar sample luminosities for consistency, in the same way as for the ERQs, using \(W3\) photometry to estimate bolometric luminosities (see Section 2.6). Combined, the four blue quasar samples have medians in the range \(L_{\rm bol}\approx 1.0-4.0\times 10^{47}\) erg s\({}^{-1}\) and \(z\approx 2.3-3.2\), totaling 108 quasars (Cai et al. 2019; Arrigoni Battaia et al. 2019; Borisova et al. 2016a; Mackenzie et al. 2021). Our comparison samples are the following. We take the quasar sample from Cai et al.
(2019), consisting of 16 blue quasars with median \(z=2.3\) and \(L_{\rm bol}=10^{47.2}\) erg s\({}^{-1}\). We take the 61 blue quasars from Arrigoni Battaia et al. (2019), with median \(z=3.2\) and \(L_{\rm bol}=10^{47.4}\) erg s\({}^{-1}\). The Borisova et al. (2016a) sample consists of 19 luminous blue quasars with median \(z=3.2\) and median bolometric luminosity \(10^{47.6}\) erg s\({}^{-1}\). Mackenzie et al. (2021) have 12 blue quasars with median \(z=3.2\); they are on the fainter side of these samples, with median \(L_{\rm bol}=10^{47.0}\) erg s\({}^{-1}\). The den Brok et al. (2020) sample consists of four Type-II quasars with median \(z=3.4\) and median \(L_{\rm bol}=10^{46.7}\) erg s\({}^{-1}\). We verify that the intrinsic bolometric luminosities estimated from mid-IR luminosities are similar to those estimated with X-ray luminosities and the correction factors modeled in Shen et al. (2020). A final Type-II quasar, from Sanderson et al. (2021), is at \(z=3.2\) and is a mid-IR-only source not detected in the optical.

Figure 5 shows the relationship between the halo luminosity and the bolometric luminosity of the quasar. To enable these comparisons, we verified that masking or not masking the inner 1 arcsec yields similar halo luminosities, within about 10 per cent of the total halo luminosity. We also assessed changes in the computed halo luminosity with and without Ly\(\alpha\) spike clipping. Four of the six ERQs with a spike showed an increase in computed halo luminosity of 20\(-\)40 per cent; the halo luminosities of J1451+0132 and J1652+1728 more than doubled, each increasing by a factor of \(\sim\)2.25. When subtracting the emission of the quasar, the shape of the subtracted spectral template is important for determining the residual halo emission. Halo emission is not as distinct in typical blue quasar spectra, and so does not change the spectral template shape there (ref. Sections 2.3 & 2.4, and Figure 1). Most of the ERQ Ly\(\alpha\) halo emission is concentrated in the inner halo regions near the quasar PSF, so modifications for the spike matter more for ERQ Ly\(\alpha\) halo luminosities. Our ERQ median is \(L_{\rm halo}=5.40\times 10^{43}\) erg s\({}^{-1}\), where the luminous blue quasar populations have medians in the range \(L_{\rm halo}\approx 5-10\times 10^{43}\) erg s\({}^{-1}\). L22 noted that the Ly\(\alpha\) halo luminosity around the reddest quasar in our sample, J0006+1215, is roughly three times lower than expected from the matched blue quasar samples. In our larger sample, the second-reddest ERQ, J2323\(-\)0100, stands out for having the lowest halo luminosity, roughly 10 times lower than expected from the blue quasars (see Table 2 and Fig. 5). However, the full range of ERQ halo luminosities in our study falls within the range of those for blue quasars. We also find that the ERQs follow a weak trend, noted previously in blue quasars by Mackenzie et al. (2021), of larger halo luminosity around quasars with larger bolometric luminosity.

Figure 6 shows the maximum linear size vs bolometric luminosity. Many of the ERQs are shown with only lower limits on their maximum halo size, because their halos exceed the field of view of KCWI (see also Table 2 and Section 3.1). We compare with samples from Borisova et al. (2016a), Arrigoni Battaia et al. (2019), Cai et al. (2019), and Mackenzie et al. (2021). For Arrigoni Battaia et al. (2019) we estimate a maximum linear size for each quasar as the sum of the maximal radial extent of the halo plus \(\sqrt{(\rm covered\ area)/\pi}\). Arrigoni Battaia et al. (2019) noted that the size of blue quasar nebulae did not increase significantly with longer exposure time, confirming that our size measurements are mostly insensitive to integration time. We report the maximum linear size as a lower limit when the halo SB map extends to the edge of the FOV. Similarly to the luminosity trend, ERQs populate the high-luminosity region of a positive trend between the maximum linear size of Ly\(\alpha\) halos and bolometric luminosity.

Figure 2: Full sample of ERQ Ly\(\alpha\) halos after subtracting the quasar, using the best available measurements; the Ly\(\alpha\) spike was isolated in six quasar spectra before subtraction. The first column shows the optimally extracted halo, the second the 1st-moment halo velocity map, the third the Voronoi-binned velocity dispersion, the fourth the circularly averaged surface brightness radial profile, and the fifth the total integrated Ly\(\alpha\) halo spectrum. For all 2D maps, the plus is the location of the quasar. For the surface brightness panels, the PSF size is indicated as a circle in the upper-left corner, the diamond is the centroid of the halo emission, and the cross symbol is the location of peak Ly\(\alpha\) halo emission when the Ly\(\alpha\) spike is left in the residual map. For the surface brightness radial profiles, hollow circles represent negative values that were included in the profile fitting, plotted as absolute values; they have also been corrected for cosmological dimming for comparison with other samples. The integrated halo spectrum shows the Gaussian fit used to determine the overall halo velocity dispersion. Zero velocity is defined as the centroid of the integrated halo emission, without removing the Ly\(\alpha\) spike in the quasar subtraction. J0220+0137 has a cloud at the eastern edge of the FOV which is likely a foreground source not physically related to the quasar, and which is omitted in measuring the Ly\(\alpha\) halo emission. J1652+1728's integrated halo centroid is offset from zero velocity because we define zero from the outer halo. J2215\(-\)0056 has a cloud to the north of the quasar which appears to be associated with the quasar as part of the same large-scale structure, and which blueshifts the spatially-integrated halo spectrum; that associated object is omitted in measuring the ERQ's total Ly\(\alpha\) halo emission.

Figure 3: Aperture spectra of J2215\(-\)0056's halo and the associated Ly\(\alpha\) emitter (LAE). The left panel displays the same SB map as in Fig. 2, over-plotted with the apertures used to generate emission spectra. The plus symbol is the location of the quasar, and the cross symbol is the location of peak brightness for the LAE. The aperture used to extract the halo emission includes the quasar position, but is offset to avoid the LAE. The second aperture, above the quasar, is used to generate the LAE spectrum. The right panel shows the quasar spectrum of J2215\(-\)0056, the extracted aperture spectra from the left panel, and a single Gaussian fit to the halo emission. In black is J2215\(-\)0056's quasar spectrum, generated using a 1-arcsec aperture on the non-PSF-subtracted data; blue is the LAE emission; orange is the aperture spectrum of the Ly\(\alpha\) halo emission; and the smooth black curve is the Gaussian fit to the halo emission. The Ly\(\alpha\) halo emission is scaled by a constant for visualization and comparison with the other spectra. There is sharp, narrow absorption in the quasar spectrum that visibly aligns with the centroid of the LAE spectrum; our extracted halo emission may also display absorption at the same spectral location. There is no strong emission detected near the boundary between these emitters, nor a smooth and continuous transition from the halo emission profile to the LAE profile, that would suggest a luminous arm directly connecting the objects.

Figure 4: Aperture spectra of Ly\(\alpha\) halo emission for J0220+0137, J1145+5742, J1652+1728, and J1705+2736. The left column displays the same 1st-moment velocity maps as in Fig. 2, over-plotted with the apertures used to generate Ly\(\alpha\) emission spectra. The plus symbol is the location of the quasar, and the cross symbol is the peak of the Ly\(\alpha\) halo emission. For J0220+0137 and J1145+5742, circular aperture boundaries are drawn to define three regions: an innermost region, a middle transition region, and an outer halo region that includes all emission outside the circular apertures. J1652+1728 has an additional subdivision to show a quiescent circular region of the middle halo (\(\sim\)20 kpc), which lacks clumpy inflows/outflows. For J1705+2736, the halo is dissected into a northern and a southern region for analysis. The right column shows each quasar's Ly\(\alpha\) emission spectra from each halo region; each spectrum is scaled by a constant for visualization and comparison with the other regions' emission profiles. J0220+0137 is an example of a complex profile that is revealed once it is divided into apertures, instead of the smoothed emission features of its integrated halo spectrum. A flat emission spectrum between 100\(-\)400 km s\({}^{-1}\) is seen in the innermost halo (\(\la\)20 kpc), perhaps from a combination of multi-component gas kinematics and/or absorption, which is otherwise lost in the total integrated halo spectrum shown in Fig. 2. J1145+5742 displays strong absorption at \(-\)400 km s\({}^{-1}\) across the \(\sim\)120 kpc span of the halo, and weaker absorption at \(-\)950 km s\({}^{-1}\) within \(\la\)50 kpc; the shape of its Ly\(\alpha\) emission profile remains consistent across the \(\sim\)120 kpc span of the halo. J1652+1728's Ly\(\alpha\) emission profile displays a rapid transition beyond \(\sim\)20 kpc from the quasar, becoming sharper peaked and symmetrically broader at its base. This symmetric broadening is likely from the patches of redshifted and blueshifted clumps in the outer regions of the velocity map. The inner halo region's peak emission is blueshifted to \(-\)900 km s\({}^{-1}\), and displays absorption at the redshift of the peak outer-halo emission. J1705+2736 shows moderate blueshifting in the southern lobe of the Ly\(\alpha\) halo, where the profile becomes narrower. The northern halo emission profile may be asymmetrically broadened on the red side by redshifted clumps at the outer boundary of the halo.
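For concreteness, one way to implement the maximum-linear-size measurement discussed above (the largest projected separation between detected spaxels) is sketched below. This is our own illustration, distinct from the radial-extent-plus-\(\sqrt{{\rm area}/\pi}\) proxy used for the Arrigoni Battaia et al. (2019) sample:

```python
# Minimal sketch, assuming numpy and scipy: the maximum pairwise separation of
# detected pixels, found efficiently via the convex hull of the detection mask.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist

def max_linear_size(mask, kpc_per_pix):
    """mask: 2D boolean detection map; kpc_per_pix: physical pixel scale."""
    pts = np.argwhere(mask).astype(float)    # needs >= 3 non-collinear pixels
    hull = ConvexHull(pts)                   # extreme points of the halo
    d_max = pdist(pts[hull.vertices]).max()  # largest vertex separation
    return d_max * kpc_per_pix               # a lower limit if the halo hits the FOV edge
```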
Figure 7 shows the surface-brightness radial profiles of the ERQ pseudo-narrowband images, the median radial profiles of the ERQs and of several blue quasar samples, and the median exponential scale length of each quasar sample. Each point on a radial curve shows the average SB of an annulus centered on the quasar. Table 3 presents the data for our sample. An important feature we take advantage of is that the obscuration behaves as a coronagraph, allowing us to probe the innermost regions of the halo. Six ERQs have had their quasar emission subtracted without the Ly\(\alpha\) spike, and for them we measure the SB at smaller projected distances from the quasar than the blue quasar samples. For comparison, we take the median SB radial profiles from Borisova et al. (2016a), Arrigoni Battaia et al. (2019), and Cai et al. (2019); the Borisova et al. (2016a) radial profile is obtained from Marino et al. (2019). All samples have a comparable point-spread-function size, corresponding to a FWHM of 1.4 arcsec or 12 kpc. For our ERQ sample we present two median radial profiles: one for the ERQs that can be probed to zero projected distance, and one omitting the central 1-arcsec region (\(\sim\)4 kpc radius) for comparison to other quasar studies that do not have the Ly\(\alpha\) spike. Along the top of Figure 7 we also present the exponential scale lengths fit to the median radial profiles. We fit the exponential function defined in Section 3.1 to the median SB radial profiles, and record the fit values in Table 3. The exponential scale length probes the brighter inner halo region, in contrast to the maximum linear size, which depends on more diffuse, low-SB emission. Our full sample's median radial profile, omitting the innermost bin (0\(-\)4 kpc) for comparison with the blue quasar samples, has a scale length of \(r_{h}=9.0\) kpc. The median-profile scale lengths for the other blue quasar samples are 13.5, 15.7, and 18.7 kpc for Borisova et al. (2016a), Arrigoni Battaia et al. (2019), and Cai et al. (2019), respectively. For ERQs with a Ly\(\alpha\) spike we can probe deeper into the inner halo regions, at projected distances of 0\(-\)4 kpc, because of the modified PSF subtraction (ref. Sections 2.3 & 2.4). Considering only the ERQs with a Ly\(\alpha\) spike, the scale length of their median profile is \(r_{h}=5.9\) kpc. ERQs are generally more luminous than the blue quasars, and there should be a natural tendency for more luminous quasars to have larger halos, since halo properties generally scale with quasar luminosity. Despite this intuitive tendency, ERQ Ly\(\alpha\) halos are characteristically more compact in their inner regions than those of luminous blue quasars.

Figure 8 compares the spatially-integrated velocity dispersions of ERQ Ly\(\alpha\) halos, versus bolometric luminosity, with those of blue quasars that have spatially integrated velocity dispersions measured in the literature. We did not have enough signal to compute 2nd-moment velocity dispersions from the halo maps. Our ERQ velocity dispersions for this direct comparison are computed from the Gaussian fits to the total integrated halo spectra, shown in the last column of Figure 2 and tabulated in the last column of Table 4. The other samples compute a 2nd-moment velocity dispersion, but Cai et al. (2019) also report a velocity dispersion measured from the spatially integrated halo spectrum, which we can compare directly. Line fitting of the integrated spectrum is a more robust dispersion measurement than moment maps (see the discussion in O'Sullivan et al. (2020)). Our ERQ sample's median integrated Ly\(\alpha\) halo dispersion is 293 km s\({}^{-1}\); Cai et al. (2019) measured a sample median of 269 km s\({}^{-1}\). The halo velocity dispersions of ERQs follow the distribution of the luminous blue quasars of Cai et al. (2019), and both distributions generally follow a trend of larger dispersion in more luminous quasars.

Finally, Figure 9 presents the morphology of the halos versus bolometric luminosity, using two different eccentricity parameters.
Different surveys used different parameters to characterize the morphology of their populations. We compute both of these parameters for the ERQs, to compare with as many samples as possible. We take \(e_{\rm unweight}\) values from Borisova et al. (2016a), den Brok et al. (2020), Mackenzie et al. (2021), and Sanderson et al. (2021), and the computed \(e_{\rm weight}\) from Arrigoni Battaia et al. (2019) and Cai et al. (2019). Their \(e_{\rm weight}\) is calculated with a 1-arcsec region centered on the quasar masked, to avoid residuals from the point-spread-function subtraction. All the samples we compare also mask the central 1-arcsec region for their computed \(e_{\rm weight}\) or \(e_{\rm unweight}\), except for den Brok et al. (2020) and Sanderson et al. (2021); those two samples are of quasars so obscured that they did not need point-spread-function subtraction to measure the halo. For our data in Figure 9, we likewise mask the inner 1 arcsec around the quasar in the plotted measurements, for fair comparison with the other surveys, which did not probe the inner halo as we have. Table 2 presents our best possible eccentricity measurements, without the inner 1 arcsec masked. Without masking, our ERQ sample has median weighted and unweighted eccentricities of \(e_{\rm weight}=0.65\) and \(e_{\rm unweight}=0.66\). We give these values context by comparing them to typical quasar values available in the literature. The flux-unweighted eccentricity is insensitive to strong central emission, and thus describes the morphology at large scales. Blue quasar samples have \(e_{\rm unweight}\approx 0.7\), comparable to the less luminous population in Mackenzie et al. (2021); our ERQs tend to have morphologies like the blue Type-I quasar samples. There is significant scatter in the \(e_{\rm weight}\) measurements within each sample, but the medians of these populations show that ERQs have more circular symmetry than the other blue quasar Ly\(\alpha\) halo surveys: blue quasar samples have \(e_{\rm weight}\approx 0.7-0.9\), and the luminous-quasar medians are larger than the ERQ median and than most individual ERQs (Table 2). We include Type-II quasars in this morphology comparison because other studies find that Type-IIs tend to show more asymmetric halos, consistent with more edge-on views of the asymmetric illumination patterns in the standard Type-I/II dichotomy (den Brok et al., 2020). Samples of Type-II quasars have unweighted eccentricities as high as \(e_{\rm unweight}\approx 0.8\) (den Brok et al., 2020; Sanderson et al., 2021). The ERQ in our sample with the narrowest C IV emission-line FWHM, J1705+2736, is similar to the other Type-IIs, and has the largest unweighted eccentricity in our sample, \(e_{\rm unweight}=0.78\). J1705+2736 also has the largest flux-weighted eccentricity of our sample, greater than that of most of the Type-I blue quasars, with \(e_{\rm weight}=0.91\).

Figure 5: Computed logarithmic bolometric luminosity vs logarithmic Ly\(\alpha\) halo luminosity for various literature samples and our ERQs. Upper and lower limits are shown with arrows. Brown pluses are the individual ERQs, and the large brown plus is the median of our sample. Blue quasar sample medians are shown as the large pink circle, dark blue star, light blue triangle, and dark green downward triangle; individual quasars from the other samples are shown as small gray symbols matching their median symbol. There is significant overlap between individual quasars in our sample and the luminous blue quasar samples. The medians show a positive trend between bolometric luminosity and Ly\(\alpha\) halo luminosity across all samples.

Figure 6: Computed logarithmic bolometric luminosity vs maximum linear size of the halo, using the same color and symbol legend as Figure 5. Lower limits are shown for medians computed from samples that contain halos extending to the edge of the field of view. There is overlap of individual quasars with other samples, and a weak positive luminosity trend can be seen across the sample medians.

Figure 7: Circularly averaged surface brightness radial profiles of our sample of ERQs in comparison with other samples of blue quasars at comparable redshift and luminosity. The bold brown line is the ERQ sample median at each surface-brightness bin for which there is detected halo emission; thin brown lines are the individual ERQs. The other colors correspond to the blue quasar samples: pink, blue, and light blue for Cai et al. (2019), Arrigoni Battaia et al. (2019), and Borisova et al. (2016b), respectively. Across the top of the figure are the exponential scale lengths, color-coded to match each sample. Notice that the scale lengths of the ERQs are shorter than those of the blue quasar samples, so their profiles fall off more rapidly at large distances. Also notice that ERQs with higher surface brightness tend to have detectable Ly\(\alpha\)-spike halo emission down to zero projected radius from the quasar. Negative surface-brightness values from Fig. 2 are not shown in this figure. All surface brightnesses are corrected for cosmological dimming.

Figure 8: Comparison of logarithmic bolometric luminosity vs spatially integrated Ly\(\alpha\) halo velocity dispersion for the samples in which it was measured, using the same legend as Figure 5. The halo velocity dispersion for this comparison is taken from a Gaussian fit to the spatially integrated halo spectrum, shown in the last column of Figure 2. Cai et al. (2019) is the only blue quasar survey considered in this paper that also computed an integrated halo velocity dispersion. There is a positive luminosity trend among individual quasars, but the trend is uncertain for the two sample medians available for this analysis.

Figure 9: Logarithmic bolometric luminosity vs Ly\(\alpha\) halo eccentricity, flux-weighted and flux-unweighted, for the samples that have the respective measurements to compare. In addition to the symbols of Figure 5, Type-II quasars from den Brok et al. (2020) are shown as box symbols with their median as the large light green box, and an upper limit for an individual Type-II from Sanderson et al. (2021) is shown as a purple star. An eccentricity of zero is circular, and values near one are more elliptical. The plotted ERQ eccentricity values are computed with the central 1 arcsec masked, to compare with the other samples that use standard PSF subtraction; Table 2 gives the eccentricity values calculated without masking the central 1 arcsec. The flux-unweighted eccentricity is a measure of the outer halo shape, whereas the flux-weighted eccentricity is more strongly influenced by the brighter inner halo regions. In the flux-unweighted panel there is some separation between the populations of Type-I and Type-II quasar samples, which is clearer in the median symbols.
## 4 Discussion

ERQs could represent a brief transition phase in galactic and quasar evolution, characterized by an obscured galactic/quasar environment and potentially extreme feedback via outflows (see Section 1). Our main goal is to determine whether ERQs differ from normal blue quasars in significant ways that might identify them as a distinct, and potentially more youthful, galactic/quasar population. We obtained Keck KCWI data for 12 ERQs and measured basic Ly\(\alpha\) halo properties for 11 of them. In this section we discuss our results for the full sample.

### Quasar Systemic Redshift & Future Work

Systemic redshifts are uniquely difficult to determine for ERQs because the emission lines typically used as redshift indicators, such as [O III] \(\lambda\)5007 and the broad emission lines, are frequently blueshifted or involved in fast outflows (Hamann et al., 2017; Perrotta et al., 2019; Gillette et al., 2023b in prep.). Our study confirms the result in L22 that the narrow Ly\(\alpha\) spike present in the aperture spectra of some ERQs has the same redshift and emission profile as the inner halo emission; the spike therefore also forms in the inner halo and is a good indicator of ERQ systemic redshifts. L22 used the Ly\(\alpha\) spike redshift in the reddest ERQ, J0006+1215, to show that the centroid of the broad C IV \(\lambda\)1549 emission line is blueshifted by 2240 km s\({}^{-1}\). Hamann et al. (2017) provide other examples of large broad emission-line blueshifts in ERQs based on the Ly\(\alpha\) spike, claiming that large blueshifts are more common in ERQs than in luminous blue quasars. We present a more complete discussion of blueshifts and their implications for ERQ outflows in a forthcoming paper, Gillette et al. (2023b in prep.).

### Circularly Symmetric and Compact Halos

Overall we find that ERQs and blue quasars both exhibit a wide range of Ly\(\alpha\) halo properties, with considerable overlap between the samples. The main differences identified by our study are in the halo morphologies. In particular, the ERQs tend to have 1) more compact and centrally concentrated SB profiles, and 2) more circularly symmetric central regions. Fig. 7 shows that the other samples with measured circularly averaged SB radial profiles have median exponential scale lengths between 13.5 and 18.7 kpc, whereas the scale length of our full sample's median profile is 9.0 kpc. Considering only the ERQs that exhibit a Ly\(\alpha\) spike, the median profile's exponential scale length is only 5.9 kpc. Fig. 9 shows that the flux-weighted elliptical eccentricities of our sample, which probe the inner halo, are much smaller, meaning the inner halos are more circularly symmetric. The reasons for ERQs having more compact and more circular morphologies are unclear. They are possibly related to a younger evolutionary stage than blue quasars: for example, the outflows from ERQs (e.g., Perrotta et al., 2019; Gillette et al., 2023b in prep.) may not yet have had time to distort and disperse the gas in their inner halos. The compact and circular morphology also supports the argument that ERQs do have a Type-I orientation in spite of their obscuration.

### Type-I versus Type-II Quasars

We can gain insights into the dust distribution around ERQs from their halo morphology being more similar to Type-I blue quasar samples than to Type-II samples.
### Type-I versus Type-II Quasars

We can gain insights into the dust distribution around ERQs from the halo morphology being more similar to Type-I blue quasar samples than to Type-II samples. Previous studies have shown that Type-IIs tend to have more asymmetric Ly\(\alpha\) halos, consistent with Type-IIs having a more edge-on view of the asymmetric (bipolar) radiation pattern of quasars in the standard Type-I/II dichotomy (Antonucci, 1993; Urry and Padovani, 1995; Netzer, 2015; den Brok et al., 2020). Gas distribution asymmetry is evident in the eccentricity parameters plotted in Fig. 9, although the sample sizes are small and there is considerable overlap in values between samples. ERQs tend to have more symmetric halo morphologies than Type-I blue quasars, making them even more different from the Type-IIs. One ERQ in our sample, J1705+2736, has considerable asymmetry in its outer Ly\(\alpha\) halo morphology, with an asymmetric and luminous arm that extends \(>\)70 kpc from the quasar. J1705+2736 has substantially narrower FWHM(C IV) than the other ERQs, similar to other Type-II quasars, although FWHM(C IV) alone is known not to be a good discriminator for Type-I versus Type-II classification (Greene et al., 2014; Hamann et al., 2017). Fig. 4 shows J1705+2736's extended emission, which is uniformly blueshifted from the central halo by about \(-\)100 km s\({}^{-1}\) out to \(>\)70 kpc in the southern direction. Thus it appears that the large line-of-sight extinctions toward ERQs cannot be attributed to viewing effects analogous to Type-IIs in the simplified picture of Type-IIs versus Type-I quasars. Our conclusion that ERQs are more similar to Type-Is is supported by the finding in L22 that the He II/Ly\(\alpha\) line ratio in the ERQ J0006+1215 (the only ERQ for which those lines are measured) is more similar to Type-I blue quasars than to Type-IIs (see section 4.5 in L22 for more discussion).

### No Evidence of Halo Feedback

Figure 8 reveals that ERQs appear to follow the same trend as blue quasars, with larger velocity dispersions around quasars of larger bolometric luminosity. Overall the halos are kinematically quiet, with integrated velocity dispersions in the range 225\(-\)526 km s\({}^{-1}\), similar to matched blue quasars. We compare these velocity dispersions with the emission-line broadening expected from the orbital motions of gas in the host galaxy's dark matter halo. Our sample is on the high end of quasar luminosities, and studies have found that obscured quasars and hyper-luminous quasars may on average reside in more massive halos, typically \(10^{13}h^{-1}M_{\odot}\) (DiPompeo et al., 2017; Geach et al., 2019). We assume a dark matter halo profile from Navarro et al. (1997) and a halo concentration parameter of 4, from the COLOSSUS software and references therein.1 We then use the halotools software to calculate the circular velocity as an upper limit for the projected 1D velocity.2 Halo gas with dispersion above 379 km s\({}^{-1}\) could be considered fast moving, and corresponds to a 1D circular velocity of 536 km s\({}^{-1}\). Table 4 presents our integrated Ly\(\alpha\) halo dispersions. The velocity dispersions of our sample have a median of 293 km s\({}^{-1}\), are within a few hundred km s\({}^{-1}\) of each other, and none are above the fast-moving circular velocity estimate. Quiet kinematics do not rule out the possibility of youth in ERQs, because they may have had less time for energetic outflows to extend outward to generate feedback in the inner halo.
Footnote 1: [https://bdiemer.bitbucket.io/colossus/halo_concentration.html](https://bdiemer.bitbucket.io/colossus/halo_concentration.html)

Across the sample we generally do not find Ly\(\alpha\) halo emission-line broadening above the expected dark matter halo velocities down to \(\sim\)0.7 arcsec or \(\sim\)6 kpc radius. Episodic lifetimes of quasars are typically \(10^{5}-10^{7}\) yrs (e.g., Khrykin et al., 2021). With most Ly\(\alpha\) halo outflow speeds \(\sim\)400 km s\({}^{-1}\), they can at most travel only to \(\sim\)4 kpc. Therefore any Ly\(\alpha\) emission on circumgalactic scales is not outflowing due to the present quasar episode. Spatially resolved velocity dispersion beyond 20\(-\)30 kpc is more commonly measured in ERQs with a Ly\(\alpha\) spike. These more extended halos show velocity dispersions that gradually decrease toward the edges, away from the quasar. Instances of extended clouds (e.g., J1145+5742 and J1451+0132) also show a decrease in dispersion farther along the extended arm, but J0006+1215 shows increasing dispersion with increasing distance along its extended arm. We do not find evidence for large velocity dispersion in the Ly\(\alpha\) halos that may be caused by outflows and feedback effects from the central quasar, but an absence of fast outflows in our sample has no implications for the quasar evolutionary stage. In conclusion, the quiet kinematics, together with the compact and circular symmetry discussed in Section 4.2, are evidence against feedback being present at CGM halo scales, now or in the past.
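The halo-gas comparison above can be approximated without the COLOSSUS/halotools machinery using the circular-velocity profile of an NFW halo (Navarro et al., 1997). The sketch below is a simplified stand-in under stated assumptions: we adopt \(R_{\rm vir}\approx 200\) kpc for a \(10^{13}h^{-1}\,M_{\odot}\) halo at \(z\approx 2.6\) (with \(h=0.7\)) rather than deriving it from a cosmology, and we assume \(\sigma_{\rm 1D}=v_{c}/\sqrt{2}\), the conversion that maps the quoted 536 km s\({}^{-1}\) circular velocity to the 379 km s\({}^{-1}\) dispersion threshold. The peak velocity it returns is of the same order as, but not identical to, the value quoted above.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def vcirc_nfw(r_kpc, m_vir, r_vir, c):
    """NFW circular velocity v_c(r) = sqrt(G M(<r) / r), masses in Msun."""
    m = lambda y: np.log(1.0 + y) - y / (1.0 + y)  # NFW mass function
    x = r_kpc / r_vir
    return np.sqrt(G * m_vir * m(c * x) / (m(c) * r_kpc))

# assumed halo: 1e13/h Msun (h = 0.7), c = 4, R_vir ~ 200 kpc at z ~ 2.6
m_vir, r_vir, c = 1e13 / 0.7, 200.0, 4.0
r = np.linspace(5.0, 200.0, 200)
v_max = vcirc_nfw(r, m_vir, r_vir, c).max()
sigma_1d = v_max / np.sqrt(2.0)  # assumed 1D projection of circular velocity
print(f"v_max ~ {v_max:.0f} km/s, sigma_1D ~ {sigma_1d:.0f} km/s")
```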
### General Halo Properties

If ERQs are in younger host galaxies, then they may have different ionizing escape fractions compared to typical blue quasars. If the galaxy is dusty, then the escape fraction could be lower; but if the dust distributions are clumpy, then they may have substantial ionization escape fractions. Luminosities of the Ly\(\alpha\) halos are comparable to typical luminous blue quasars of similar redshift in Fig. 5. We also see in Fig. 6 that ERQ halo linear size is within the size distributions of other samples of luminous quasars. The median ERQ halo luminosity is offset roughly two times lower than expected from the blue quasar data, but this does not appear significant given the small sample size and the width of the halo luminosity distributions, which span nearly a factor of 100. Thus the appearance of narrow Ly\(\alpha\) emission spikes in ERQ spectra can be attributed to extinction toward the central quasar, and not to unusually bright halo emission relative to the central quasars. If the Ly\(\alpha\) halo emissions are powered by hydrogen-ionizing radiation from the central quasars, the similar halo luminosities between ERQs and blue quasars might indicate that similar fractions of the H-ionizing photons emitted by the quasars escape to the circumgalactic medium in ERQs. This similarity is notable because of the much larger line-of-sight extinctions observed in ERQs, estimated to be typically \(\sim\)3 mags, or a factor of \(\sim\)16, in the near UV around 1500 Å to 2000 Å (Hamann et al., 2017). One possible explanation is that other lines of sight toward ERQs have lower extinctions than we observe, allowing their ionizing photons to escape in quantities similar to blue quasars. Another possibility is that the dust distributions are clumpy and inhomogeneous, which can permit larger UV photon escape fractions than expected from the extinctions along direct lines of sight to the central source, due to scattering.

The general notion that dust scattering plays an important role is consistent with the large UV polarizations found in some ERQs by Alexandroff et al. (2018); however, they attribute their results to a particular axisymmetric scattering geometry that might conflict with our finding that ERQs tend to have circularly symmetric halos, resembling Type-I blue quasars (Section 4.3). An important caveat to keep in mind for the \(L_{halo}\) comparisons is that Ly\(\alpha\) halos around quasars are generally believed to be at least partly matter bounded, meaning that the observed luminosities depend at least partly on the amount of halo gas available for ionization, not simply on the flux or escape fraction of ionizing photons emitted by the quasar. ERQs show a weaker than expected Ly\(\alpha\) halo luminosity for their bolometric luminosity, which could be evidence of matter-bounded halo emission (Dempsey & Zakamska, 2018, also Fig. 5). In an ionization-bounded scenario we expect the luminosity of the halo to scale linearly with bolometric luminosity, but when a halo is matter bounded there is no dependence. However, real quasars can show a weak dependence of the halo properties on the bolometric luminosity, and any relation between Ly\(\alpha\) halo luminosities and extinction toward the quasars will also be weak. A unique feature of our study is that extreme obscuration allows us to map the 2D halo emission all the way down to the quasar positions, and possibly down to galactic emission, in roughly half of our ERQ sample. ERQs that exhibit the Ly\(\alpha\) spike show similar inner halo characteristics as J0006+1215 in L22, but vary in their outer region symmetry and morphology beyond 20\(-\)30 kpc from the emission peak. ERQs without the spike are most luminous around the central region of the quasar, and tend to be less extended to outer regions beyond 30\(-\)40 kpc from the halo emission peak (see Fig. 2).

### Multi-component Emission

In several ERQs with spatially resolved measurements, a transition at \(\sim\)20\(-\)30 kpc from the quasar frequently occurs (e.g., a drop-off in SB, or a change in blueshift or velocity dispersion). This transition leads us to define a boundary of inner-halo Ly\(\alpha\) emission. This boundary could be evidence of a 2-component halo structure of compact and extended gas (e.g., J1145+5742 and J1652+1728). L22 showed a halo transition phase 20\(-\)30 kpc from the position of ERQ J0006+1215, defining an inner and outer halo component. The inner region is more coherent, circularly symmetric, and has quiet gas kinematics, while the outer region has more asymmetric and disrupted kinematics. One possibility is that the outer halo is inflowing CGM gas, while the inner halo is dominated by outflow (see section 4.2 in L22 for more discussion). This is further supported by multi-component JWST observations of J1652+1728, which found complex gas kinematics and outflows on kpc scales (Vayner et al., 2023). Figure 4 shows aperture spectra of ERQs with distinct inner and outer halo emission regions that show different kinematics from each other. Many of our ERQ Ly\(\alpha\) halos show a smooth gradient in their velocity shift from one side of the halo to the other, with zero velocity centered near the halo emission peak. None have extreme velocities consistent with powerful outflow. In cosmological radiation-hydrodynamic simulations from Costa et al.
(2022), quasar-driven outflows on circumgalactic scales move at \(\sim\)1,000 km s\({}^{-1}\), and are also less dense than most Ly\(\alpha\) emitting gas. In summary, our comparison of ERQ Ly\(\alpha\) halo properties to blue quasars has revealed many similarities, along with some contrasting characteristics that distinguish ERQs from simple orientation effects. The extended morphology of halos around ERQs appears similar to that of blue quasars, but the inner halo is more circularly compact. Future work with larger samples and/or deeper maps could help resolve the evolution/youth question. Further exploration into ERQs as an evolutionary stage (e.g., Perrotta et al., 2023 and Hamann et al., 2023) will investigate the host galaxies for comparison to blue quasars.

## 5 Conclusions

We present a sample of 11 ERQs observed with KCWI Integral Field Spectroscopy, which have median redshift \(z=2.6\), a median color of \(i-W3=5.9\) mag, and median bolometric luminosity \(L_{\rm bol}\approx 5\times 10^{47}\) erg s\({}^{-1}\). Except for one ERQ observed under cloudy conditions, all have detected Ly\(\alpha\) halos, with a median halo luminosity \(L_{\rm halo}=5.83\times 10^{43}\) erg s\({}^{-1}\). The median maximum linear size of the Ly\(\alpha\) emission is \(>\)128 kpc, and the exponential scale length of the median circularly averaged SB radial profile is 9.0 kpc. Morphology is generally circular around the inner halo regions, with a median flux-weighted eccentricity of \(e_{\rm weight}=0.65\) and an unweighted \(e_{\rm unweight}=0.66\). One ERQ in our sample, J1705+2736, which has substantially narrower emission line widths, has the most asymmetric outer-halo morphology with \(e_{\rm unweight}=0.78\) and, compared to the rest of the ERQ sample, the most asymmetric inner halo with \(e_{\rm weight}=0.91\). Kinematics of the halos are relatively calm, with velocity shifts of the Ly\(\alpha\) emission centroid in the hundreds of km s\({}^{-1}\), much weaker than the shifts in the thousands of km s\({}^{-1}\) found in [O III] for ERQs by Perrotta et al. (2019). Velocity maps are coherent, with some showing a gradual gradient from red to blue velocity shifts across the halo. The dispersion of the halo emission is also quiet, with a spatial median dispersion of 374 km s\({}^{-1}\) and a standard deviation of 114 km s\({}^{-1}\). Our measured quantities of size, luminosity, blueshift, and dispersion show that ERQs are mostly similar to those obtained in blue quasar surveys. ERQs generally follow the luminosity trends seen across faint to luminous blue quasar samples for halo linear size, luminosity, and velocity dispersion. However, ERQs do stand apart from similarly luminous blue quasars in that their halos are more circularly compact. Most of the statements and inferences about ERQ populations made in L22 are supported by this work with the addition of 10 other Ly\(\alpha\) halos, summarized below:

* We do not see a color correlation with Ly\(\alpha\) luminosity or kinematics across our sample.
* At circumgalactic scales we do not find clear evidence of feedback, based on the circularly symmetric inner halos and low velocity dispersions (see Sections 4.2 & 4.4).
* Ly\(\alpha\) halo velocity dispersions are mostly consistent with circular velocities of halo gas at a typical dark matter halo mass (see Section 4.4).
* Our sample's illumination patterns are similar to other blue quasars based on Ly\(\alpha\) halo morphologies, characterized by elliptical eccentricity parameters (see Section 3.4).
* ERQ halos are more circularly concentrated, which could mean they have had less time to be extended by outflows from the innermost unresolved regions (see Section 4.2).
* Obscuration acting as a coronagraph allows for measurements of narrow Ly\(\alpha\) emission lines, and these lines can be used as systemic redshift estimators for constraining broad-line emission blueshifts and outflows (see Section 4.1).

## Acknowledgements

JG, MWL, and FH acknowledge support from the USA National Science Foundation grant AST-1911066. NLZ and AV acknowledge support from the NASA Astrophysics Data Analysis Program Grant 80NSSC21K1569. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. Data presented herein were partially obtained using the California Institute of Technology Remote Observing Facility. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.

## Data Availability

The data are available upon request.
2305.07608
Torrent Driven (TD) Coin: A Crypto Coin with Built In Distributed Data Storage System
In recent years decentralized currencies developed through Blockchains are increasingly becoming popular because of their transparent nature and absence of a central controlling authority. Though a lot of computation power, disk space, and energy are being used to run this system, most of these resources are dedicated to just keeping the bad actors away by using Proof of Work, Proof of Stake, Proof of Space, etc., consensus. In this paper, we discuss a way to combine those consensus mechanism and modify the defense system to create actual values for the end-users by providing a solution for securely storing their data in a decentralized manner without compromising the integrity of the blockchain.
Anirudha Paul
2023-05-12T16:54:24Z
http://arxiv.org/abs/2305.07608v1
# Torrent Driven (TD) Coin: A Crypto Coin with Built In Distributed Data Storage System

###### Abstract
In recent years decentralized currencies developed through Blockchains are increasingly becoming popular because of their transparent nature and absence of a central controlling authority. Though a lot of computation power, disk space, and energy are being used to run this system, most of these resources are dedicated to just keeping the bad actors away by using Proof of Work, Proof of Stake, Proof of Space, etc., consensus. In this paper, we discuss a way to combine those consensus mechanisms and modify the defense system to create actual values for the end-users by providing a solution for securely storing their data in a decentralized manner without compromising the integrity of the blockchain.

## 1 Introduction

When Bitcoin [1] was first introduced, its Proof of Work consensus showed us a different way to combat adversaries. Rather than blocking the adversaries directly, it made an attack on the network extremely expensive. Though this system has prevented any major attack on the network, the tendency toward centralized mining is increasing exponentially. Nowadays, ordinary people cannot participate in Bitcoin mining. As a result, the system is not as democratized as initially thought. Another widespread consensus is Proof of Stake, adopted in coins such as Ethereum 2.0 [2] and Cardano [3], where you stake your own money to participate. Any bad behavior can cost the miners their staked money. Though it is power efficient, staking actual currency without doing any work can cause nothing-at-stake and whale problems, among other issues. To address this, we propose a consensus that uses this distributed network of miners to store user data. By securely storing this data and continuously providing proof of storage, the miners will earn seed points. Instead of staking the currency they hold, they will use these earned seed points to claim a spot in the mining round. In a nutshell, miners must provide some utility to earn points that enable them to mine further and get actual cryptocurrency. Spoofing the process of earning these seed points is hard enough that miners have no incentive to deviate from the intended flow. And as the underlying mining technology is essentially the already-tested proof of stake, the integrity of the network is as good as that of other similar currencies like Ethereum 2.0 and Cardano.

## 2 Previous whitepaper shortcomings

In the Bitcoin whitepaper [1], the author introduced the idea of Proof of Work, where miners need to continuously try different nonces to generate a hash with the required number of zero bits. But as time goes by, this challenge of generating the intended hash has become so complex that nowadays it is nearly impossible to mine effectively without dedicated ASICs, which is a huge waste of money and resources spent on calculations that have no purpose other than showing commitment. To address this, Ethereum is introducing Proof of Stake in its system. But it also introduces new problems: as the "work" part of showing commitment is removed, miners have the opportunity to sign multiple blocks from parallel chains, making it hard for forks to converge. Though Ethereum punishes this behavior by slashing the coins of bad actors, it is still not completely secure from manipulation by whales who control the majority of coins.
Another consensus mechanism is proof of space [4], where the miner fills up its disk space with garbage information generated by mathematical hash functions, and the verifier sends challenges from time to time to validate whether the miner is holding the data or not. The issue is that the disk space the data occupies is not meaningful; it is there to show commitment, just like Bitcoin's proof of work. We shall see later in this paper that it is possible to address some of the shortcomings mentioned above.

## 3 Architecture

The whole mechanism of this blockchain can be described in three sections:

1. Block structure
2. Consensus
3. Method of issuing the tokens

The block structure follows the standard Bitcoin block structure. The consensus follows the proof of stake mechanism, but it is different in the sense that it doesn't use the main coin as a stake; it instead uses a secondary Seed Bonus Token. The main deviation comes in the method of issuing the tokens. Though the main Torrent Driven Coin (TD Coin) is issued with a standard block reward, the secondary coin (Seed Bonus Token) needed for staking and mining has a different minting structure.

### Block structure

Transactions are arranged in the standard Bitcoin format, where every transaction is spent from the coin, not from the account. The coin owner transfers the coin by digitally signing the hash of the previous transaction and adding the next owner's public key at the end. The new owner can verify the signatures by following the chain of ownership. To avoid double-spending, only the oldest transaction of any coin is considered valid. In the chain, many of these transactions are arranged in a block, and their hash values link them. The size of the blocks can be dynamically adjusted, as in Ethereum, based on network congestion.

Figure 1: Transaction structure (Standard Bitcoin) [1]

Figure 2: Block structure (Standard Bitcoin) [1]

### Consensus

In this modified proof of stake consensus mechanism (Figure 3), miners agree to lock up the whole amount of their secondary coin, the Seed Bonus Token, to get the chance to validate new blocks of data to be added to the blockchain. The blockchain algorithm selects validators from the pool of queued miners based on how many seed bonus tokens their accounts have. The more seed bonus tokens a miner has, the better the chance of being chosen to mine and earn newly minted primary crypto - the Torrent Driven Coin - as a reward if the block gets added to the main chain. A portion of their seed bonus tokens is burnt to encourage future data seeding, and the rest is returned to their wallet. If a validator is caught cheating, they can be punished by burning all their seed bonus tokens, sending them to an unusable wallet address to which nobody has access and making them useless forever.

Figure 3: A high level architecture of Proof of Stake consensus [5]

### Method of issuing the tokens

There are three types of tokens present in this system.

1. Torrent Driven Coin: This is the standard-issue coin that can be exchanged between any parties present in the blockchain. It can only be minted by mining a block in this blockchain. Other than exchanging it as money, users can also burn an amount of this coin through a smart contract present in layer 1 of this blockchain to get the Leecher Token described below. The exchange rate between Torrent Driven Coin and Leecher Token will be adjusted dynamically based on supply and demand.

2. Leecher Token:
Leecher tokens grant the ability to upload your data to other users, or to host others' data on your machine and earn a seed bonus. The only way to get this token is to send an amount of Torrent Driven Coin to the predefined smart contract, which will burn the coin and give the sender an amount of Leecher Token in exchange. The exchange rate is controlled algorithmically to address the supply-demand issue. Each Leecher Token grants access to upload or host one MB of data. If a user wants to host 30 MB of data as one additional copy in the network, the process works as follows. For example, Alice wants to make a copy of her data on the peer-to-peer network, and Bob wants to host data for the seed bonus tokens. First, Alice sends an amount of Torrent Driven Coin to a specific smart contract to get 30 or more Leecher Tokens, as seen in Figure 4. Then she joins the pool of hosting requests to find possible hosts with the 30 Leecher Tokens in place, as in Figure 5.

Figure 4: Minting Leech Token

Figure 5: Hosting Request

On the other side, Bob also needs to exchange Torrent Driven Coins to show commitment as a seeder. So, for example, if he has 50 Leecher Tokens, he can request the smart contract to match him with 50 * 5 = 250 data blocks. Five is a constant here, representing that each data block will be saved by at least five seeders. The smart contract then matches each block Alice wants to host with five different host addresses. Alice will see all the public keys associated with each of her data blocks on the blockchain. The application on her side will make five copies of the data, add the public keys to those blocks, create a torrent tracker, and publish the tracker on the blockchain. When the tracker gets published, the group of hosts, including Bob, can see the tracker and use it to download only the data blocks tagged with their individual public keys in the header, as in Figure 6. Not only that, each payload is also appended with the seeder's public key and a random value before encryption, which prevents seeders from swapping payloads between themselves. Each seeder has his own unique version of the same payload
But if they can respond to the challenge properly along with able to send a small amount of data to show seed liveliness and passing the checksum test, the host will be rewarded with a small amount of new token - the "Seed Bonus Token." The download and checksum check is similar to a normal torrenting system [6]. And the zero-knowledge proof is similar to this proof of space and time described in this paper [7]. The whole process can be done in side chain and only publish the result in main chain to claim reward. Figure 6: Data Distribution Process Figure 7: Payload Structure Now there is a new token in the account for potential miners - which is really hard to spoof and also not transferable. That's why the reason to be a bad actor to earn this token is very slim. So the possibility of the host honestly seeding the data is much higher. But now the question is, what is the incentive for them to earn this reward? The answer is - they can use this reward to stake to get a chance to validate a block and get the block reward in TD coin, which is actually transferable and can be used as actual currency. From this point on, the consensus falls back on the proof of stake. The difference here is - instead of staking the main currency, the miners are staking their off-chain hosting work reward earned through proof of space. ## 4 Scalability, Security, and Resource-efficiency The system is scalable in a sense underlying everything; it is still using the proof of stake as its consensus, which is pretty scalable and secured under heavy transactions. One might think the whole process of downloading and uploading data can slow down the main blockchain - which is not true. Cause the upload, download, and zero-knowledge prove that part of the system can be considered off-chain work and side chain transactions. Only the result of each checkpoint is published on the main chain. So the main chain is not slowed down by all the bottlenecks associated with proof of space. The main network will continue to use proof of stake, but the staked coins are influenced by tokens earned in the independent torrenting mechanism and proof of space consensus. The whole system is resource-efficient because though miners have to do a lot more work, especially allocating more disk space than the traditional proof of stake, the silver lining is that those works are not wasted work. By using the storage, they are providing actual value to the users. There is no unnecessary computation power used like Proof of Work. Overall the security is also up to par with other traditional coins. The original main network is using proof of stake, which after many iterations and research, is now pretty secure to be relied upon. The staked point is going through the zero-knowledge proof of space and byzantine consensus, making it extremely hard to profit from being a bad actor. There is no way to replicate the data as the packets are encrypted with individual miner's public key, and the only way to list your public key as a valid host is by burning actual Torrent Driven coins. As the exchange rate is controlled algorithmically, it is not easy to inflate or deflate the staking pool or make a hostile takeover. So by combining all those techniques, the proposed blockchain is scalable, secured, and resource-efficient and provides actual value to users instead of the perceived value of traditional crypto coins. ## 5 Conclusion The primary goal of this white paper was to create intrinsic value for the whole distributed mining network of blockchain. 
## 4 Scalability, Security, and Resource-efficiency

The system is scalable because, underlying everything, it still uses proof of stake as its consensus, which is quite scalable and secure under heavy transaction loads. One might think the whole process of downloading and uploading data could slow down the main blockchain - which is not true, because the upload, download, and zero-knowledge-proof parts of the system can be considered off-chain work and side-chain transactions. Only the result of each checkpoint is published on the main chain, so the main chain is not slowed down by the bottlenecks associated with proof of space. The main network will continue to use proof of stake, but the staked coins are influenced by tokens earned in the independent torrenting mechanism and proof of space consensus. The whole system is resource-efficient because, though miners have to do a lot more work - especially allocating more disk space than in traditional proof of stake - the silver lining is that this work is not wasted. By providing storage, they deliver actual value to the users, and no unnecessary computation power is used as in Proof of Work. Overall, the security is also on par with other traditional coins. The original main network uses proof of stake, which, after many iterations and much research, is now secure enough to be relied upon. The staked points go through the zero-knowledge proof of space and Byzantine consensus, making it extremely hard to profit from being a bad actor. There is no way to replicate the data, as the packets are encrypted with each individual miner's public key, and the only way to list your public key as a valid host is by burning actual Torrent Driven Coins. As the exchange rate is controlled algorithmically, it is not easy to inflate or deflate the staking pool or stage a hostile takeover. So by combining all these techniques, the proposed blockchain is scalable, secure, and resource-efficient, and provides actual value to users instead of the perceived value of traditional crypto coins.

## 5 Conclusion

The primary goal of this white paper was to create intrinsic value for the whole distributed mining network of a blockchain. By incorporating torrenting mechanisms in the proof of space framework, we have introduced a solution for securely and reliably storing multiple copies of data on the internet. Moreover, as there are monetary exchanges and interests involved in the process, the chance of the seeder abandoning the torrent is really low, and it is also a great way to incentivize hosting private encrypted data. In the process, it replaces the redundant generated data of Proof of Space with actual meaningful information. By adding signatures, zero-knowledge proofs, and Byzantine consensus to track the seed status and facilitate a reward mechanism, we have minimized the risk of bad actors creating fake seed data and, in the process, attacking the staking pool. On the other hand, replacing staked coins with staked work has introduced a toned-down version of Proof of Work into the Proof of Stake mechanism, making the architecture secure like PoW and scalable like vanilla PoS. Finally, introducing a utility inside the network and regulating the exchange rate algorithmically has the potential to reduce the current deflationary nature of crypto coins, where no one wants to use them in the real world for anything other than speculative investment. Because seeding more data is beneficial for the miners in this architecture, competition among miners can reduce the cost of storing data for ordinary users.
2306.06046
Redeveloping a CLEAN Deconvolution Algorithm for Scatter-Broadened Radio Pulsar Signals
Broadband radio waves emitted from pulsars are distorted and delayed as they propagate toward the Earth due to interactions with the free electrons that compose the interstellar medium, with lower radio frequencies being more impacted than higher frequencies. Multipath propagation in the interstellar medium results in both later times of arrival for the lower frequencies and causes the observed pulse to arrive with a broadened tail described via the pulse broadening function. We employ the CLEAN deconvolution technique to recover both the intrinsic pulse shape and pulse broadening function. This work expands upon previous descriptions of CLEAN deconvolution used in pulse broadening analyses by parameterizing the efficacy on simulated data and developing a suite of tests to establish which of a set of figures of merit lead to an automatic and consistent determination of the scattering timescale and its uncertainty. We compare our algorithm to simulations performed on cyclic spectroscopy estimates of the scattering timescale. We test our improved algorithm on the highly scattered millisecond pulsar J1903+0327, showing the scattering timescale to change over years, consistent with estimates of the refractive timescale of the pulsar.
Olivia Young, Michael Lam
2023-06-09T17:14:27Z
http://arxiv.org/abs/2306.06046v1
# Redeveloping a CLEAN Deconvolution Algorithm for Scatter-Broadened Radio Pulsar Signals

###### Abstract
Broadband radio waves emitted from pulsars are distorted and delayed as they propagate toward the Earth due to interactions with the free electrons that compose the interstellar medium, with lower radio frequencies being more impacted than higher frequencies. Multipath propagation in the interstellar medium results in both later times of arrival for the lower frequencies and causes the observed pulse to arrive with a broadened tail described via the pulse broadening function. We employ the CLEAN deconvolution technique to recover both the intrinsic pulse shape and pulse broadening function. This work expands upon previous descriptions of CLEAN deconvolution used in pulse broadening analyses by parameterizing the efficacy on simulated data and developing a suite of tests to establish which of a set of figures of merit lead to an automatic and consistent determination of the scattering timescale and its uncertainty. We compare our algorithm to simulations performed on cyclic spectroscopy estimates of the scattering timescale. We test our improved algorithm on the highly scattered millisecond pulsar J1903+0327, showing the scattering timescale to change over years, consistent with estimates of the refractive timescale of the pulsar.

Olivia Young, Michael T. Lam

## 1 Introduction

Radio pulsars provide unique probes of the ionized interstellar medium (ISM) and allow us to gain insight into its structure and variability by modeling the effects of the delays and distortions on the emitted radio pulses as observed at the Earth (Lorimer & Kramer, 2004). While delays due to dispersion are routinely modeled in pulsar timing experiments (e.g., Verbiest et al., 2016), distortions due to multipath propagation are not, and it can be difficult to do so (Shannon & Cordes, 2017). Determining the distortion level is difficult due to both the intrinsic pulse shape and the underlying geometry and spectrum of the turbulent medium being unknown (Cordes et al., 1986; Cordes & Rickett, 1998), and due to the time- and path-dependent variations in the observed pulse broadening function (PBF; Williamson, 1972). Separating these effects can not only yield important insights into the nature of the ionized ISM but also provide proper pulse profile impact mitigation for pulsars used in precision timing experiments such as low-frequency gravitational wave detectors (Stinebring, 2013). CLEAN deconvolution, originally developed for radio interferometric imaging (Hogbom, 1974), was applied to radio pulses in Bhat et al. (2003) to recover both the pulse broadening (scattering) timescale \(\tau_{\rm d}\) and the intrinsic shape simultaneously via the use of an assumed PBF. Unlike in synthesis imaging, where the positions of the array elements are known while the sky brightness distribution is not, neither the analogous PBF nor the intrinsic pulse shape is known. Bhat et al. (2003) introduced figures of merit to iteratively test trial values of \(\tau_{\rm d}\) under an assumed PBF, demonstrating variation in the rebuilt intrinsic pulses for PSR J1852+0031 for different PBFs and application to several other pulsars. We expand upon the CLEAN deconvolution algorithm presented in Bhat et al. (2003) to prepare for _automated_ deployment on data sets of significantly more pulsars.
In this work, we primarily focus on the broadening effects of the ISM and recovering \(\tau_{\rm d}\), with the intention of applying the algorithm to the multi-frequency profiles of pulsars distributed throughout the galaxy to understand both the bulk properties of the turbulence in the ISM and specific unique lines of sight. Understanding these properties informs priors on pulsar timing arrays and other high-precision pulsar timing experiments in which scattering biases estimates of the arrival times (Lentati et al., 2017). This work is the first of several papers on robust method development and deployment on real data from a larger selection of pulsar observations. In §2, we describe the CLEAN deconvolution method as presented in Bhat et al. (2003) and expanded upon in this work. In §3, we perform systematic tests on simulated data, demonstrating the level of recall in the input \(\tau_{\rm d}\) values and quantifying our uncertainties in the estimates. We also compare our results with the cyclic spectroscopy (CS) deconvolution technique and discuss the tradeoff between limitations in our method and the extensive computational complexity of the CS method. Finally, we apply our method to PSR J1903+0327 in §4 and discuss our future directions in §5.

## 2 The CLEAN Deconvolution Algorithm

CLEAN deconvolution for radio pulsars exploits the one-dimensional nature of pulsar profiles and differs from traditional CLEAN approaches, where the instrumental response function is known. The analogous function in this work, the PBF, must be assumed from _a priori_ models. Bhat et al. (2003) developed a method that can both determine the pulse broadening timescale \(\tau_{\rm d}\) and recover the intrinsic pulse from observational pulsar profile data via the employment of a CLEAN deconvolution algorithm and figures of merit (FOMs). CLEAN can be applied using different models of the PBF of the ISM, making it a broadly encompassing method. In this work, we assumed the PBF for the commonly-used thin-screen approximation for the ISM's geometry. Bhat et al. (2003) described the CLEAN algorithm for use in the deconvolution of radio pulsar pulses, along with the development of five FOMs used to determine the correct broadening timescale from a set of test values. In this section, we discuss the algorithm both as originally described and as re-developed for this work.

### Modeling the Observed Pulse Profile

We assumed the observed pulse \(y(t)\) to result from the convolution of the intrinsic pulse \(x(t)\), the PBF \(g(t)\), and the instrumental response function \(r(t)\), given by \[y(t)=x(t)\otimes g(t)\otimes r(t). \tag{1}\] We simulated our intrinsic pulse \(x(t)\) as a normalized, single-peaked Gaussian shape, which minimizes the asymmetry of the rebuilt pulse and provides a baseline comparison against the use of the FOM \(\Gamma\) discussed later in Section 2.3.1. The PBF for the ISM is commonly modeled as a thin screen (Cordes & Rickett, 1998) for simplicity. The thin-screen approximation simplifies calculations, separating out the physical turbulent processes from the geometry of the intervening gas, and in the case of the PBF, simplifies the form as well; the thin-screen model works reasonably well for lines of sight with a single overdense region. We used this model in our work, given by \[g(t|\tau_{\rm d})=\frac{1}{\tau_{\rm d}}\exp\left(-\frac{t}{\tau_{\rm d}}\right) U(t), \tag{2}\] where \(U(t)\) is the Heaviside step function.
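To make Eqs. 1-2 concrete, the following minimal sketch simulates an observed profile by convolving a Gaussian intrinsic pulse with a discretized one-sided exponential PBF, treating \(r(t)\) as a one-bin delta function (as assumed below) and adding white noise. All numerical values (bin count, FWHM, \(\tau_{\rm d}\), S/N) are arbitrary illustrative choices, not parameters from this work.

```python
import numpy as np

def pbf(tau, nbins):
    """Discretized thin-screen PBF of Eq. 2, normalized to unit sum."""
    t = np.arange(nbins)
    g = np.exp(-t / tau) / tau
    return g / g.sum()

nbins, fwhm, tau_true = 1024, 100.0, 50.0    # all in phase bins (assumed)
t = np.arange(nbins)
intrinsic = np.exp(-0.5 * ((t - 300) / (fwhm / 2.355)) ** 2)  # Gaussian x(t)
observed = np.convolve(intrinsic, pbf(tau_true, nbins))[:nbins]  # x * g, r = delta
observed += np.random.normal(0.0, observed.max() / 100.0, nbins)  # S/N ~ 100
```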
Lastly, the instrumental response function, denoted \(r(t)\), determines the resolution of the observed data. We assumed a delta function1 as an approximation for the instrumental response function, with a width of one phase bin.

Footnote 1: For clarity, we use the digital signal processing definition of the unit-height sample function being \(\delta(t)=1\) if \(t=0\), otherwise \(0\), which allows us to multiply by a constant as in Eq 3.

### CLEAN Deconvolution

CLEAN iteratively subtracts replicated components from an observed pulse until the residual structure falls below the root mean squared (rms) of the off-pulse noise. As we do not know the value of \(\tau_{\rm d}\) _a priori_, this iterative subtraction process is repeated for a range of test \(\tau_{\rm d}\) values, with the assumed correct \(\tau_{\rm d}\) chosen using FOMs. For the purposes of the algorithm, we treat \(\tau_{\rm d}\) as measured in time-bin resolution units across the folded pulse's phase, with \(N_{\phi}\) total bins. We step through our CLEAN deconvolution process below.

1. **CLEAN Component Creation:** We first identify the location of the maximum of the de-constructed pulse after the \(i\)-th iteration, \(t_{i}\equiv\mbox{argmax}\left[y_{i}(t)\right]\); our first iteration begins with the originally observed pulse \(y_{0}(t)\). Each CLEAN component (CC) \(y_{c}(t|t_{i})\) starts with a delta function \(\delta(t-t_{i})\) at the location of the maximum of the observed pulse, \(\max\left[y_{i}(t)\right]\), multiplied by the loop gain value \(\gamma\), i.e., \[y_{c}(t|t_{i})=\gamma\left\{\max\left[y_{i}(t)\right]\right\}\delta(t-t_{i}) \equiv C_{i}\delta(t-t_{i}).\] (3) Smaller loop gains result in a greater number of iterations before the stopping criterion is met but allow for finer intrinsic features to be resolved (Hogbom, 1974); in this work, we used \(\gamma=0.05\).

2. **Iterative Subtraction off the Main Pulse:** After we construct \(y_{c}(t|t_{i})\), we convolve the CC with the instrumental response function \(r(t)\) and the PBF with a given test \(\tau_{\rm d}\), and then subtract this shape from the \(i\)-th iteration pulse. The change in the profile at each iteration is described as \[\Delta y_{i}(t)=y_{i}(t)-\left\{y_{c}(t|t_{i})\otimes\left[g(t|\tau_{\rm d}) \otimes r(t)\right]\right\}\] (4) with \(y_{i}(t)\) as the input pulse profile to the \(i\)-th iteration. The CCs are then iteratively subtracted off the main pulse, with the resulting subtracted profile becoming the pulse profile for the next CLEAN iteration, so that \[y_{i+1}(t)=\Delta y_{i}(t).\] (5)

3. **Termination of CLEAN Algorithm:** The CLEAN algorithm is terminated when the maximum of the input pulse profile falls below the rms of the off-pulse noise, i.e., \(\max\left[y_{i}(t)\right]\leq\sigma_{\rm off}\).

The CLEAN algorithm above will provide the list of CCs along with the residual noise. The CCs can be used to reconstruct the intrinsic pulse shape, but for the purposes of this work, our final goal was to determine \(\tau_{\rm d}\). The algorithm can run with any input value of \(\tau_{\rm d}\); therefore our iterative method is repeated with different trial \(\tau_{\rm d}\), from which we derived FOMs based on the reconstructed intrinsic pulse shape and the residual noise that resulted from each trial \(\tau_{\rm d}\).
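The three steps above can be condensed into a short loop. This is our own minimal re-sketch (the authors' released Python code is on Zenodo and may differ); it reuses the `pbf` helper from the previous snippet, and because that kernel has unit area, subtracting a scaled, shifted copy of it is equivalent to subtracting the CC convolved with \(g(t|\tau_{\rm d})\) and a one-bin \(r(t)\).

```python
def clean(profile, tau, sigma_off, gain=0.05, max_iter=100000):
    """Iteratively subtract re-convolved CLEAN components (Eqs. 3-5)
    until the residual peak falls below the off-pulse rms."""
    resid = profile.astype(float).copy()
    kernel = pbf(tau, len(resid))
    ccs = []                                # (location t_i, amplitude C_i)
    n_iter = 0
    while resid.max() > sigma_off and n_iter < max_iter:
        i = int(np.argmax(resid))           # t_i = argmax[y_i(t)]
        amp = gain * resid[i]               # C_i = gamma * max[y_i(t)]
        resid[i:] -= amp * kernel[: len(resid) - i]   # Eq. 4 subtraction
        ccs.append((i, amp))
        n_iter += 1
    return ccs, resid, n_iter

# usage with the simulated pulse from the previous sketch
sigma_off = observed[:150].std()            # off-pulse rms estimate
ccs, resid, n_iter = clean(observed, tau_true, sigma_off)
```

In practice this loop is repeated over an array of trial \(\tau_{\rm d}\) values, and the FOMs of the next subsection are evaluated on the returned components and residuals.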
### Figures of Merit

We employed six FOMs as follows: a measure of positivity of the residual noise (\(f_{\rm r}\)), a measure of skewness of the recovered intrinsic pulse (\(\Gamma\)), a count of the on-pulse-region residual points below the off-pulse noise level (\(N_{\rm f}/N_{\phi}\)), a measure of the ratio of the rms of the residual noise to the off-pulse noise rms (\(\sigma_{\rm offc}/\sigma_{\rm off}\)), a combined positivity and skewness measure (\(f_{\rm c}\)), and a count of the number of CLEAN components each test \(\tau_{\rm d}\) uses before the peak of the profile falls below the noise level (\(N_{\rm iter}\)). All except the last are described in Bhat et al. (2003). These six FOMs fall into three broad categories: figures based on the rebuilt intrinsic pulse, figures based on the residual noise after the CLEAN algorithm terminates, and a figure based on the number of CLEAN components generated before the algorithm terminates. We describe the FOMs grouped into these three categories in the sections below. In Figure 1, we see the ideal result of the use of the six FOMs and the methods for determining the "correct" \(\tau_{\rm d}\).

#### 2.3.1 FOM Measuring the Shape of the Rebuilt Intrinsic Pulse

We examine the CC amplitudes \(C_{i}\) and locations \(t_{i}\) found during the CLEAN process (e.g., see Eq. 3) to compute the \(\Gamma\) FOM. In our simulations, we created intrinsic pulses that are symmetric Gaussians, and therefore the correct rebuilt pulse should always be a perfectly symmetric Gaussian if the correct \(\tau_{\rm d}\) is used. In reality, intrinsic pulses may not be perfectly symmetric, and we discuss these implications in §5. The \(\Gamma\) of the rebuilt pulses is calculated for each test \(\tau_{\rm d}\) by computing the third standardized moment \[\Gamma=\frac{\left<t^{3}\right>}{\left<t^{2}\right>^{3/2}}, \tag{6}\] where \(\left<t^{n}\right>\) is \[\left<t^{n}\right>=\frac{\sum_{i=1}^{n_{c}}(t_{i}-\bar{t})^{n}C_{i}}{\sum_{i =1}^{n_{c}}C_{i}} \tag{7}\] and \(\bar{t}\) is \[\bar{t}=\frac{\sum_{i=1}^{n_{c}}t_{i}C_{i}}{\sum_{i=1}^{n_{c}}C_{i}}. \tag{8}\]

Figure 1: Summary of FOMs used in this work, for a pulse with simulated full-width at half maximum of 100 phase bin units, input \(\tau_{\rm d}=50\) phase bins, and \(\rm S/N=100\). We tested \(\tau_{\rm d}\) values ranging from 25 to 75 bin units with a step size of one. Panel 1 shows the number of data points within a \(3\sigma\) level of the noise FOM, panel 2 shows the root mean squared FOM, panel 3 shows the skewness FOM, panel 4 shows the positivity FOM, panel 5 shows the combined skewness and positivity FOM, and panel 6 shows the number of iterations FOM.

The resulting \(\Gamma\) is ideally represented by the example in panel 3 of Figure 1, where the sharp fall-off point represents the general location of the correct \(\tau_{\rm d}\).
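For concreteness, Eqs. 6-8 reduce to a few lines of numpy. The sketch below is our illustration (not the released Zenodo implementation) and assumes the list of \((t_{i},C_{i})\) pairs produced by a CLEAN run such as the `clean` sketch above.

```python
import numpy as np

def skewness_fom(ccs):
    """Gamma, the third standardized moment of the CC distribution (Eqs. 6-8)."""
    t = np.array([loc for loc, _ in ccs], dtype=float)
    C = np.array([amp for _, amp in ccs])
    tbar = np.sum(t * C) / np.sum(C)                             # Eq. 8
    moment = lambda n: np.sum((t - tbar) ** n * C) / np.sum(C)   # Eq. 7
    return moment(3) / moment(2) ** 1.5                          # Eq. 6
```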
#### 2.3.2 FOMs Based on the Residual Noise

Three of our FOMs are built from measures of the residual noise after the completion of the CLEAN algorithm. We will also discuss a FOM that combines one of these FOMs (positivity) with the \(\Gamma\) FOM discussed previously - this is an important FOM as described in Bhat et al. (2003). The residual noise is one of the end products of the CLEAN deconvolution process. A test \(\tau_{\rm d}\) that is larger than the correct value of \(\tau_{\rm d}\) results in a progressively larger over-subtraction, as shown in Figure 2. If the test \(\tau_{\rm d}\) is smaller than the correct value, it results in an unremoved noise floor in the baseline (again see Figure 2). We can first calculate the rms of the residual noise \[\sigma_{\rm offc}=\left\{\frac{1}{N_{\phi}}\sum_{j=1}^{N_{\phi}}[\Delta y_{i}(t_{j}| \tau_{\rm d})]^{2}\right\}^{1/2}, \tag{9}\] in comparison to the rms of the off-pulse region, \(\sigma_{\rm off}\), where this ratio \(\sigma_{\rm offc}/\sigma_{\rm off}\) will grow whenever over- or under-subtraction is performed and should otherwise approach a value of 1 for the appropriate subtraction. This is roughly equivalent to the single metric used to automatically determine \(\tau_{\rm d}\) in Tsai et al. (2017) for multi-frequency data from 347 pulsars. Beyond the rms, we can count the total number of residual noise points \(N_{\rm f}\) within a certain threshold level (we chose \(3\sigma_{\rm off}\)) of the noise that satisfies the condition \[|y_{i}-y_{\rm off}|\leq 3\sigma_{\rm off}. \tag{10}\] As seen in Figure 2, for under-subtraction we expect all of the points to satisfy the condition and so the ratio \(N_{\rm f}/N_{\phi}=1\), but the ratio will drop as over-subtraction occurs. Besides these two metrics, which measure deviations away from the rms noise, we also wish to enforce non-negativity of the residual profile, since we know pulsar signals must be above the baseline noise. An \(f_{\rm r}\) FOM was defined2 by Bhat et al. (2003) in terms of a sum over the \(N_{\phi}\) bins of the residual noise, Footnote 2: Bhat et al. (2003) introduced a multiplicative weight of order unity but did not specify the value. Here we take that weight to be 1 and so ignore introducing it in the main text. They also include a Heaviside step function, \(U_{\Delta y}\). As this only changes the overall normalization of our FOM in our simulation runs, we ignore this in our work. \[f_{\rm r}=\frac{1}{N_{\phi}\sigma_{\rm off}^{2}}\sum_{j=1}^{N_{\phi}}[\Delta y _{i}(t_{j}|\tau_{\rm d})]^{2}. \tag{11}\] If \(\Delta y_{i}(t)\) is Gaussian white noise with rms equal to \(\sigma_{\rm off}\), then as with the previous FOM, we would expect \(f_{\rm r}\approx 1\), while over-subtraction would force the sum to increase well beyond 1. Bhat et al. (2003) defined the \(f_{\rm c}\) FOM, equally weighting the rebuilt intrinsic pulse shape and the residual noise, by \[f_{\rm c}=\frac{\Gamma+f_{\rm r}}{2}, \tag{12}\] thus providing higher confidence in test \(\tau_{\rm d}\) values with favorable values of both skewness and positivity. The typical shape of this FOM is shown in panel 5 of Figure 1.

#### 2.3.3 FOM Measuring the Number of Iterations Performed

We developed this FOM to more directly measure the fit of the re-convolved CCs' broadening tails to the broadening of the observed pulse. As the amplitude of re-convolved CCs with larger broadening tails is smaller than that of CCs with smaller broadening tails (due to the normalization in Eq. 2), we expect a general increase in the number of iterations needed to deconvolve the observed pulse when the test \(\tau_{\rm d}\) is too large. Similarly, when re-convolved CCs with smaller broadening tails are subtracted from a pulse with a larger true broadening tail, more iterations will be required. However, when the CCs are convolved with the correct value of \(\tau_{\rm d}\), neither under- nor over-subtraction occurs, resulting in fewer iterations being needed. Therefore, we expect a dip in our FOM around the correct value of \(\tau_{\rm d}\).
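The residual-based and combined FOMs are equally compact. In this sketch, `resid` is the residual array left by a CLEAN run; we assume a zero-mean off-pulse baseline when applying the condition of Eq. 10, and the function name `residual_foms` is our own.

```python
import numpy as np

def residual_foms(resid, sigma_off, gamma_fom):
    """Positivity f_r (Eq. 11), rms ratio, 3-sigma fraction (Eq. 10),
    and the combined FOM f_c (Eq. 12)."""
    nphi = len(resid)
    f_r = np.sum(resid ** 2) / (nphi * sigma_off ** 2)
    rms_ratio = np.sqrt(np.mean(resid ** 2)) / sigma_off
    n_f_frac = np.mean(np.abs(resid) <= 3.0 * sigma_off)   # N_f / N_phi
    f_c = 0.5 * (gamma_fom + f_r)
    return f_r, rms_ratio, n_f_frac, f_c
```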
Figure 2: The residual noise left over after the CLEAN algorithm terminates for three test \(\tau_{\rm d}\) values, 10, 20, and 30 bins, where 20 is the simulated value. These time series are representative of the residuals used to calculate multiple FOMs. We can see the under- and over-subtraction for test \(\tau_{\rm d}\) values that are smaller than or larger than the true \(\tau_{\rm d}\), respectively.

### Automating the Choice of the Correct \(\tau_{\rm d}\) Value

These FOMs were originally constructed to pinpoint the correct value of \(\tau_{\rm d}\) by eye. This approach is impractical for large data sets, so we automated this process. We found that the simple approach of computing the numerical third derivative of each function with respect to \(\tau_{\rm d}\) and finding the maximum has yielded good results, though the exact recall depends on both the value of \(\tau_{\rm d}\) and the pulse S/N. More complicated algorithms will be employed in future works, but the systematic error introduced by this choice is small in comparison to other noise sources, as shown next, so we opted to use it.

## 3 Automated Algorithm Performance

In this section, we will discuss the performance of our automated CLEAN method. An in-depth description of our redeveloped CLEAN algorithm in Python, as well as notes on how to use the open source versions available at [https://zenodo.org/badge/latestdoi/524167339](https://zenodo.org/badge/latestdoi/524167339), can be found in Young (2022). We wished to robustly quantify the "correctness" of our \(\tau_{\rm d}\) estimates in simulated data so that we could automatically assign uncertainties to our estimates on real data. To that end, we simulated multiple data sets with different input parameters to determine how these affect the recall. Ideally, as in Dolch et al. (2021) for the cyclic spectroscopy (CS) algorithm, only the S/N and \(\tau_{\rm d}\) of a profile should affect the recall accuracy of our CLEAN deconvolution, though we tested several other parameters as well. To quantify the algorithm's performance, we computed a measure of the fractional average error bar. Within this work, the values returned by each FOM were given the same weight when calculating our error bars, which are defined as the fraction of the returned \(\tau_{\rm d}\) to the correct injected \(\tau_{\rm d}\). Our fractional average error bars are defined as \[\epsilon_{\rm ave}=1-\frac{1}{N_{\rm runs}}\sum_{i=1}^{N_{\rm runs}}\left( \frac{\epsilon_{i}}{\tau_{\rm d}N_{\rm FOM}}\right) \tag{13}\] where \(N_{\rm runs}\) is the number of simulations for a given data set, \(N_{\rm FOM}=6\) is the number of FOMs used, and \(\epsilon_{i}\), for readability, is defined as an unweighted sum of the \(\tau_{\rm d}\) values returned by our FOMs for each run (\(i=1\ldots N_{\rm runs}\)), \[\epsilon_{i}=\tau_{N_{\rm f}/N_{\phi}}+\tau_{\sigma_{\rm offc}/\sigma_{\rm off }}+\tau_{\Gamma}+\tau_{f_{\rm r}}+\tau_{f_{\rm c}}+\tau_{N_{\rm iter}}. \tag{14}\]

### Testing the Impact of S/N and \(\tau_{\rm d}\) on Recall

We first tested how CLEAN performs based on different injected pulse S/N and \(\tau_{\rm d}\) combinations. We simulated data sets using several of the characteristics of PSR B1937+21 (the first-known millisecond pulsar and a known scattered source) as follows; these parameters are also shown in Table 1. PSR B1937+21 has a spin period of 1.557 ms and a full-width at half maximum (FWHM) of 38.2 \(\mu\)s (Manchester et al., 2013).
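As a concrete version of the automation heuristic described above (the numerical third derivative of a FOM curve, maximized over the trial \(\tau_{\rm d}\) grid), one minimal sketch is below; the function name `pick_tau` is ours, and the released Zenodo code may locate the fall-off differently.

```python
import numpy as np

def pick_tau(taus, fom_values):
    """Estimate the correct tau_d as the location of the sharpest change
    in a FOM curve, via the maximum of its numerical third derivative."""
    d1 = np.gradient(fom_values, taus)
    d3 = np.gradient(np.gradient(d1, taus), taus)
    return taus[np.argmax(d3)]
```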
To reduce computing time, we used different numbers of phase bins depending on the injected \(\tau_{\rm d}\) value, as shown in Table 1; we show in the next subsection that there is minimal impact on the recovery of \(\tau_{\rm d}\) from the phase resolution of the pulses so long as the scattering tails are resolved. For each of our runs, we tested across 100 equally-spaced \(\tau_{\rm d}\) steps between 0.5 and 1.5 times the injected \(\tau_{\rm d,correct}\). For each S/N-\(\tau_{\rm d}\) pair, we simulated and ran CLEAN on 60 pulse shapes.

\begin{table}
\begin{tabular}{l c}
\hline \hline
Parameter & Value \\
\hline
Spin period & 1.557 ms \\
Pulse FWHM & 38.2 \(\mu\)s \\
\(N_{\phi}\) for \(\tau_{\rm d}=1,2,4\) \(\mu\)s & 2048 \\
\(N_{\phi}\) for \(\tau_{\rm d}=8,16,32\) \(\mu\)s & 1024 \\
\(N_{\phi}\) for \(\tau_{\rm d}=64,128,256\) \(\mu\)s & 512 \\
Test \(\tau_{\rm d}\) range & \(0.5\tau_{\rm d,correct}\) – \(1.5\tau_{\rm d,correct}\) \\
Number of steps in test \(\tau_{\rm d}\) array & 100 \\
\hline
\end{tabular}
\end{table}
Table 1: Automated Algorithm Simulation Parameters

We chose our S/N-\(\tau_{\rm d}\) values in the same style as Dolch et al. (2021) to more directly compare our CLEAN deconvolution method with the CS algorithm. While CLEAN works on an averaged pulse profile, CS uses raw voltage data prior to folding to recover the full impulse response function, making the latter more computationally intensive (though assuming no specific PBF). These methods are therefore difficult to compare directly, but we can expect to see improved performance for both methods as either S/N or \(\tau_{\rm d}\) increases. Indeed, after running our simulations, we see this expected behavior in Figure 3, where darker colors indicate better recovery of \(\tau_{\rm d}\), which matches what is seen in Dolch et al. (2021) for CS. The numerical values shown in Figure 3 are in terms of the percentage of the correct \(\tau_{\rm d}\), given by \(\epsilon_{\rm ave}\). In Figure 4, we show how well each individual FOM performs, with each panel showing the recovery over the full range of S/N-\(\tau_{\rm d}\) pairs. Smaller dots indicate smaller error bars, and thus more accurate performance. Poor performance from one FOM will impact our averaged recall, as an unweighted average is currently employed. We see that, in general, the performance of each FOM improved with higher S/N or \(\tau_{\rm d}\), like the average, though not all behave equally. For example, the \(N_{\rm f}/N_{\phi}\) and \(\sigma_{\rm offc}/\sigma_{\rm off}\) FOMs appear to perform better at somewhat lower \(\tau_{\rm d}\) than the other FOMs. While the skewness \(\Gamma\) does not perform as well at the high S/N-\(\tau_{\rm d}\) end, it does perform marginally better than the previously mentioned two FOMs otherwise. In future works, we will explore developing weights for each FOM in constructing the average \(\epsilon_{\rm ave}\) to improve the average accuracy of the algorithm.

### Testing Secondary Parameter Contributions to Recall Error

While we assumed the main contributors to the effectiveness of our algorithm to be our primary parameters, S/N and \(\tau_{\rm d}\), we wanted to ensure that secondary parameters were not significant contributors to our recall error. We created a small-scale parameterization set via simulation of a base pulse profile with \(\tau_{\rm d}=256\) bins and \(\rm S/N=2600\).
We chose very large values for both \(\tau_{\rm d}\) and S/N, as the method was able to reliably recall the correct \(\tau_{\rm d}\) for large \(\tau_{\rm d}\) and S/N values (see Section 3.2). This set was used to determine how the number of bins in our observation, the FWHM of the intrinsic pulse, and the user-defined step size and range of the test \(\tau_{\rm d}\) array affected the algorithm's performance. Additionally, our previous data set assumed an intrinsic pulse with parameters similar to B1937+21 only. Therefore, an additional motivation for probing these secondary parameters was to determine whether we can extrapolate our results to observations of other pulsars with varying FWHMs and numbers of phase bins. For these parameterization runs, we used the base values shown in the second column of Table 2 and iterated over the values shown in the third column. We ran 20 simulations for each variation, which gave insight into these parameters' contributions to our recall and allowed for exploration of the expected larger contributions of \(\tau_{\rm d}\) and S/N to the recall error. As many pulse profiles are recorded with different numbers of phase bins (see e.g., Lorimer et al., 1997), we tested how the phase resolution of the observation affected our recall. We simulated data sets with \(N_{\phi}\) ranging from 128 to 2048. In Figure 5, we see good agreement between the individual recall values for each run and the averages, which vary within 10%. The minor variations in the average recalls could thus be explained by our limited number of runs resulting in incomplete coverage of the algorithm's performance, and we therefore assumed that the number of bins in the observed pulse profile was not a significant contributor to our total recall. Results of testing how the FWHM of the intrinsic pulse affected our recall are shown in Figure 6, and reveal a large, though not unexpected, range in \(\epsilon_{\rm ave}\) between the FWHMs tested. As the FWHM increases, the pulse takes up an increasing fraction of the observation window, making the CLEAN cutoff criterion of falling below the off-pulse noise level less effective. This result is corroborated by the findings of Jones et al. (2013), who found the CS method to be less effective on wider pulses. In Figure 7, we see the contribution of the number of steps, or interchangeably the step size, of the test \(\tau_{\rm d}\) array to our recall error. We included in this analysis a correction factor of \(\Delta\tau_{\rm d}/2\), the largest base error induced when large step sizes result in the correct \(\tau_{\rm d}\) not being directly tested. For example, if the correct \(\tau_{\rm d}\) is 10.5 \(\mu\)s and our test \(\tau_{\rm d}\) array only samples every \(\Delta\tau_{\rm d}=1\) \(\mu\)s, an error of 0.5 \(\mu\)s will be introduced; thus, we added this factor of \(\Delta\tau_{\rm d}/2\) to our \(\epsilon_{\rm ave}\) to more conservatively estimate our uncertainties. The fractional average error bars returned vary within 5%; therefore, we concluded that the number of steps in the test \(\tau_{\rm d}\) array was not a large contributor to the overall recall error. Finally, we parameterized the contribution of the range of test \(\tau_{\rm d}\) values iterated over to our recall.

Figure 3: Average recall error bars for CLEAN deconvolution, in the same style as Figure 5 of Dolch et al. (2021). This plot gives an encompassing overview of the performance of the CLEAN algorithm by returning the average size of the error bars for each S/N and \(\tau_{\rm d}\) pair. As smaller error bars indicate better performance, CLEAN performs better on simulated data with larger values of both S/N and \(\tau_{\rm d}\).
In Figure 8, we see that ranges that barely included the correct \(\tau_{\rm d}\) (second bar) result in poor performance, as expected. This results from the shapes of the FOMs not being fully covered over the correct injected \(\tau_{\rm d}\). Other ranges that include the correct \(\tau_{\rm d}\) have recalls within about 5% of each other, even when the ranges iterated over are much larger. Therefore, while more computationally intensive, we recommend running CLEAN over a large range of \(\tau_{\rm d}\) values to ensure the best estimate is chosen. With the results of these runs, we see that these secondary effects have some small impact at large S/N and \(\tau_{\rm d}\), but otherwise the most prominent influences on the recall of CLEAN deconvolution are the S/N and \(\tau_{\rm d}\) of the data. While there are some variations in the average recall for each of the parameters we tested, the average recalls varied within 10% or less for most tests, with the notable exceptions of large FWHMs and a test \(\tau_{\rm d}\) range that barely included the correct value of \(\tau_{\rm d}\), both of which were expected. We can also conclude that the results using one simulated pulsar can be extrapolated to both simulations of other pulsars and real observational data.

## 4 Applying CLEAN to PSR J1903+0327

To demonstrate the efficacy of our algorithm, we tested CLEAN on real data from the pulsar J1903+0327. PSR J1903+0327 is a millisecond pulsar that has been monitored by pulsar timing array collaborations such as the North American Nanohertz Observatory for Gravitational Waves (NANOGrav; Arzoumanian et al., 2021) in the effort to detect low-frequency gravitational waves. While these collaborations self-select for pulsars with low amounts of pulse broadening (narrower pulses have higher timing precision), PSR J1903+0327 has some of the most prominent scattering in these data sets, with the broadening tail visible by eye. With over a decade of timing data on this pulsar, we analyzed the lowest-radio-frequency pulses in the NANOGrav 12.5-year data set over time, where broadening is the strongest, to investigate whether variations in \(\tau_{\rm d}\) are detectable by our algorithm. We created six summed profiles on which to deploy our CLEAN algorithm, with one profile corresponding to each year from 2012-2017 in our data set. We restricted the frequency band for each observation to 10 MHz centered at 1200 MHz to mitigate the additional broadening that would result from frequency-averaging the pulses together to boost S/N.

Figure 4: Overview of the relative performance for each FOM. Smaller circles indicate smaller error bars, or better performance of the FOMs on simulated data. These fractional averages were computed as described in Equation 13, with \(\epsilon_{i}\) being composed of the \(\tau_{\rm d}\) returned by only one FOM instead of an unweighted sum of all returned \(\tau_{\rm d}\) values. These values are labeled as \(\epsilon_{\rm FOM}\) in this plot. In general, we see better performance for higher values of \(\tau_{\rm d}\) and S/N. Interestingly, the \(N_{\rm f}/N_{\phi}\) and \(\sigma_{\rm offc}/\sigma_{\rm off}\) FOMs appear to perform better than the \(f_{\rm r}\) and \(\Gamma\) FOMs, which were highlighted in the Bhat et al. (2003) paper.
Each summed profile consists of twelve monthly observations summed via cross-correlation. Cross-correlation is used to ensure the peaks of our profiles are properly aligned in time before they are summed, resulting in the highest possible S/N of the summed profile. This process was performed iteratively, with each new profile being cross-correlated with and then added to the summed profile. The refractive timescale of PSR J1903+0327 is estimated to be between 1 and 2 years (Geiger and Lam, 2022). Therefore, summing across one year of observations is consistent with the PBF remaining unchanged across this time span. To further increase the S/N values for each profile, we used different Savitzky-Golay filters to smooth the resulting summed profile to the desired S/N level. In Figure 9, we see an example of this summed and smoothed pulse profile. For our time series analysis, we used two different filtering techniques, both employing a Savitzky-Golay filter using a polynomial of order zero to fit the samples: using a filter window size necessary to achieve an S/N of 70, and using a filter window of 5% of the observation length to achieve a higher S/N. We chose to create time series at two different levels of S/N to showcase the dependence of the algorithm's performance on S/N. We iterated through test \(\tau_{\rm d}\) values ranging from 100 to 500 bins for each run, with a step size of one bin. We see the results of these runs in Figures 10 and 11, where we converted our returned \(\tau_{\rm d}\) values into units of microseconds. The benefit of the higher S/N is seen most explicitly in the \(\sigma_{\rm offc}/\sigma_{\rm off}\) FOM (panel 2), where there is a noticeable location at which the slope begins increasing in our larger S/N FOMs, versus our lower S/N FOMs, where there is a more gradual increase in the slope of the \(\sigma_{\rm offc}/\sigma_{\rm off}\) FOM, making the correct \(\tau_{\rm d}\) more difficult to pinpoint.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Parameter & Base Value & Values \\
\hline
Number of phase bins & 256 & \(\{128,256,512,1024,2048\}\) \\
FWHM & \(\frac{1}{8}N_{\phi}\) & \(\{\frac{1}{64},\frac{1}{32},\frac{1}{16},\frac{1}{8},\frac{1}{4},\frac{1}{2}\}N_{\phi}\) \\
Range of \(\tau_{\rm d}\) & (0.5\(\tau_{\rm d,correct}\) – 1.5\(\tau_{\rm d,correct}\)) & (0.1\(\tau_{\rm d,correct}\) – \(\tau_{\rm d,correct}\)), (0.1\(\tau_{\rm d,correct}\) – 2.0\(\tau_{\rm d,correct}\)), (0.4\(\tau_{\rm d,correct}\) – 1.6\(\tau_{\rm d,correct}\)), (0.5\(\tau_{\rm d,correct}\) – 1.5\(\tau_{\rm d,correct}\)) \\
Number of steps in \(\tau_{\rm d}\) array & 100 & \(\{10, 20, 50, 100, 200\}\) \\
\hline
\end{tabular}
\end{table}
Table 2: Secondary Parameters Tested

Figure 5: Results of parameterization runs with changing numbers of phase bins. The y-axis shows the fractional average error bar size across 20 simulations for each bin value. The average recall error is denoted by the fully opaque green squares connected by the dashed line. The lighter circles indicate the recall error from each run. The average error bars range within 10%.

Figure 6: Results of parameterization runs with changing values of the FWHM of the intrinsic pulse, as a fraction of the observation length. We see that for FWHMs less than 1/4, the fractional average error bar changes by less than 10%, with the \(\epsilon_{\rm ave}\) change still under 30% for an FWHM of 1/2 the observation window. This effect has been seen with the CS approach as well.
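A minimal sketch of the alignment-and-summing procedure described above is shown below, using circular cross-correlation via the FFT and a zeroth-order Savitzky-Golay filter (equivalent to a moving average) for the final smoothing; window sizes and variable names are illustrative, not the exact values used in our analysis.

```python
import numpy as np
from scipy.signal import savgol_filter

def align_and_sum(profiles):
    """Iteratively cross-correlate each profile with the running sum and add it."""
    total = np.array(profiles[0], dtype=float)
    for prof in profiles[1:]:
        # Circular cross-correlation via the FFT; the argmax gives the shift
        # that best aligns this profile with the running summed profile.
        xcorr = np.fft.irfft(np.fft.rfft(total) * np.conj(np.fft.rfft(prof)),
                             n=total.size)
        shift = int(np.argmax(xcorr))
        total += np.roll(prof, shift)
    return total

# e.g., twelve monthly profiles -> one yearly profile, then smooth it:
# summed = align_and_sum(monthly_profiles)
# smoothed = savgol_filter(summed, window_length=51, polyorder=0)
```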
This increased sharpness of the points of change of our FOMs translated into greater accuracy and better agreement across our FOMs, reflected in the tighter clusters around the average returned \(\tau_{\rm d}\) values in our time series. We also note some interesting results of this time series analysis, particularly the dip in 2015, followed by a drastic increase the following year. This behavior, coupled with the unusual scattering indices measured in Geiger & Lam (2022), makes it clear that an exponential PBF is not supported along this line of sight and that a more complex model is necessary (Geiger et al. in prep). Nonetheless, we have shown via this analysis not only that our CLEAN algorithm performs as expected on observational radio pulsar data given our set of assumptions, but also that deployment of this algorithm holds potential for scientific insight into the ever-changing ISM.

## 5 Future Work and Conclusions

Within this work, we discussed our motivations, introduced CLEAN deconvolution as presented in Bhat et al. (2003), presented the results and products of our implementation of CLEAN and of our parameterization work, and reported results on observational data of PSR J1903+0327. Through our parameterization work, we have concluded that our replicated CLEAN algorithm works as expected: the main factors that influence the recall of the algorithm are the S/N and \(\tau_{\rm d}\) of the pulse profile, with higher values of S/N and \(\tau_{\rm d}\) resulting in better recall. We have produced an algorithm that we can confidently deploy on larger sets of observational data. To that end, we have presented a brief analysis of PSR J1903+0327 at two S/N levels and discussed our findings, showing that our methods prove to be effective on observational data from radio pulsars and can thus provide insight into the time-dependence of pulse-broadening timescales for many pulsars after automatic deployment. Moving forward, we aim to further develop our CLEAN algorithm into a broadly applicable tool, focusing on improving upon or removing the need for a number of simplifications used within this methods paper. We will also deploy our algorithm on the data set used in Bhat et al. (2004), the follow-up to the original CLEAN method introductory paper, and on additional large-scale data sets (e.g., Stovall et al., 2015; Bilous et al., 2020). Using these data sets, we will use our CLEAN algorithm to provide measurements of \(\tau_{\rm d}\) across multiple frequencies along many lines of sight. This will give us greater insight into both the composition of the ISM and the intrinsic emission of radio pulsars. Within this work, we have extensively tested our algorithm's performance on simulated pulses broadened using a thin-screen model of the ISM for our PBF. Future work will entail testing the effects of different pulse broadening functions, namely PBFs based on thick and uniform medium ISM models, on the performance of our algorithm.

Figure 11: Time series for PSR J1903+0327 from 2012 to 2017 using NANOGrav data. This time series is constructed using a Savitzky-Golay filter with a filter window size of 5% of the length of the observation.

Figure 12: Normalized FOMs for PSR J1903+0327 at 1200 MHz from 2016 at S/N = 70. We see a tight grouping of the returned \(\tau_{\rm d}\) values for each FOM, with a mean value of \(\tau_{\rm d}=360.3\) \(\mu\)s and an error of 10% based on our simulation runs.
In addition, while our third derivative method for determining the intrinsic \(\tau_{\rm d}\) from our FOMs works well given high levels of S/N and large \(\tau_{\rm d}\) values, this may not hold for low \(\tau_{\rm d}\) values and low levels of S/N, as the FOMs are not as smooth. Therefore, we will work on improving our automation efforts via the implementation of machine learning, thus allowing our recall rates to better reflect the performance of the algorithm. We have greatly simplified radio pulsar emission by assuming symmetric Gaussian intrinsic pulses. However, perfectly symmetric pulses are uncommon in radio pulsars (e.g., Bilous et al., 2016). Should the intrinsic pulse be non-symmetric, our \(\Gamma\) FOM will either be completely ineffective or lead to incorrect values of \(\tau_{\rm d}\) being chosen. Therefore, we must further probe the effects of non-symmetry on this FOM, and develop new FOMs that do not rely on assumed symmetry. We have developed a Python-based CLEAN algorithm that behaves as expected, have parameterized the performance of this rebuilt algorithm on a variety of simulated test data sets, have developed a method to automate the process of choosing the correct \(\tau_{\rm d}\) from our FOMs, and have proven the efficacy of our algorithm on observational data. We have also identified areas of improvement for our algorithm, and look forward to continuing to develop a well-rounded method for probing the ISM using radio pulsars. OY is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2139292. We graciously acknowledge support received from NSF AAG award number 2009468, and NSF Physics Frontiers Center award number 2020265, which supports the NANOGrav project. We acknowledge Research Computing at the Rochester Institute of Technology for providing computational resources and support that have contributed to the research results reported in this publication.
2304.07521
What if we have Meta GPT? From Content Singularity to Human-Metaverse Interaction in AIGC Era
The global metaverse development is facing a "cooldown moment", while academic and industry attention has moved drastically from the Metaverse to AI-Generated Content (AIGC) in 2023. Nonetheless, the current discussion rarely considers the connection between AIGCs and the Metaverse. We can imagine the Metaverse, i.e., immersive cyberspace, as the black void of space, where AIGCs can simultaneously offer content and facilitate diverse user needs. As such, this article argues that AIGCs can be a vital technological enabler for the Metaverse. The article first provides a retrospect of the major pitfalls of metaverse applications in 2022. Second, we discuss from a user-centric perspective how metaverse development will accelerate with AIGCs. Next, the article conjectures future scenarios concatenating the Metaverse and AIGCs. Accordingly, we advocate for an AI-Generated Metaverse (AIGM) framework for energizing the creation of metaverse content in the AIGC era.
Lik-Hang Lee, Pengyuan Zhou, Chaoning Zhang, Simo Hosio
2023-04-15T10:09:27Z
http://arxiv.org/abs/2304.07521v2
# What if we have Meta GPT? Content Singularity and Human-Metaverse Interaction in AIGC Era

###### Abstract

The global metaverse development is facing a "cooldown moment", while academic and industry attention has moved drastically from the Metaverse to AI-Generated Content (AIGC) in 2023. Nonetheless, the current discussion rarely considers the connection between AIGCs and the Metaverse. We can imagine the Metaverse, i.e., immersive cyberspace, as the black void of space, where AIGCs can simultaneously offer content and facilitate diverse user needs. As such, this article argues that AIGCs can be a vital technological enabler for the Metaverse. The article first provides a retrospect of the major pitfalls of metaverse applications in 2022. Second, we discuss from a user-centric perspective how metaverse development will accelerate with AIGCs. Next, the article conjectures future scenarios concatenating the Metaverse and AIGCs. Accordingly, we advocate for an AI-Generated Metaverse (AIGM) framework for energizing the creation of metaverse content in the AIGC era.

## 1 Retrospect: Experimental Metaverse

We have witnessed a surge of investment and rigorous discussion regarding the Metaverse since 2021. Many believe a fully realized metaverse is not far off, so tech firms, e.g., Meta, Niantic, Roblox, Sandbox, just to name a few, have started creating their immersive cyberspaces with diversified visions and business agendas. After the metaverse heat wave of 2022, all of us remain vague about what the Metaverse is. At the same time, the hype surrounding the metaverse shows signs of slowing down, primarily due to multiple metrics reflecting persistently low numbers of daily active users, a decreasing volume of projects, and high uncertainty around return on investment. When the tech giants dipped their toes into the experimentation pool in 2022, they brought a few playful tasks to their self-defined virtual venues, giving users something to do. The fascinating difficulty is that the metaverse is already fundamentally split among the forward-thinking firms establishing their metaverse realms. Due to limited time and resources, these firms tried hard to resolve technical issues that shape their immersive cyberspaces, such as developing efficient infrastructure that supports unlimited numbers of users in the same virtual venues or offering a decentralized transaction ecosystem driven by blockchain technology. Nonetheless, content development is delegated to third parties and thus goes beyond the firms' core concerns. Tech firms commonly leave content creation to designers and creators, with the unattainable hope that designers and creators can fill up the rest of the metaverse. As a result, one can argue that the current virtual spaces have become aimless, primarily due to the lack of content and, therefore, of activities, leaving users without good reasons to spend time in such venues daily. Moreover, the experimental metaverses of 2022 often neglect usability issues, leading to user experiences far from satisfactory. A prominent example is that first-time users struggle to understand the interaction techniques for their avatars in 3D virtual environments. Even worse, after hours of practice, these unskillful users still cannot master such interaction techniques, resulting in outright poor usability. Without addressing the gaps in content and usability, the firms' ambition exceeds what is practically feasible.
Their ambition refers to the mass use of the Metaverse, i.e., the immersive cyberspace [1]. The core values surrounding the users are not there yet to make the Metaverse a reality. We can briefly look back at the transition from the static web (Web 1.0) to its interactive counterpart (Web 2.0) in the 2D-UIs era, characterized by the empowerment of content creation. With the static webpages of Web 1.0, only a limited number of people with the relevant skills could publish information online. At the same time, users could only read the information, with no way to interact two-way. Accordingly, Web 2.0, also known as social networks (SNS), offers participatory and dynamic methods and empowers two-way user interaction, i.e., reading and writing information in 2D UIs. The critical transition from Web 1.0 to 2.0 is that users, regardless of their technology literacy, can freely contribute content on SNS, such as text and images, and then put the content online. We must note that we are good at writing a message on a (soft-)keyboard and taking photos or videos with cameras. Also, most 2D UIs follow certain design paradigms, requiring only simple yet intuitive interactions like clicks, taps, swipes, drags, etc., to accomplish new content creation. In contrast, although the metaverse supposedly allows everyone to access many different virtual worlds, three unprecedented barriers arise.

First, the current users have extensive experience with 2D UIs but not their 3D counterparts. As the entire experience proceeds through 3D UIs, users in the Metaverse have to deal with unfamiliar virtual worlds of increasing complexity. More importantly, 3D objects do not explicitly show user interaction cues. Because the Metaverse claims to offer digital twins of our physical environment [2], a user who encounters a virtual chair employs analogies between the virtual and physical worlds. A simple question could be: Can the user's virtual hands lift or push the chair? As such, users, in general, may not be aware of how virtual objects interact in the Metaverse and thus rely on educated guesses and trial-and-error approaches. The above question can be generalized into sub-questions, including but not limited to: What are the available interaction techniques? When is the user-object interaction activated? How does the user understand the related functions mapped to the object? How can we manage the user's expectations after a particular click? Which visual and audio effects impact the user's task performance?

Second, the current interaction techniques allow users to manipulate a virtual object, such as selecting, rotating, translating, etc. Still, the user effort required for object manipulation is a big concern. Commercial input hardware for headsets (e.g., controllers or joysticks) or even hand gestural inputs are barely sufficient for simple point-and-select operations on 2D UIs in virtual environments [3] but largely insufficient for 3D models, especially those with irregular shapes, which cause intolerably long editing times and high dissimilarity with the intended shape [4]. Therefore, users with the current techniques, primarily point-and-select or drag-and-drop, can only manipulate objects with low granularity. However, content creation involves careful manipulation of a 3D object, i.e., modifying the vertex positions in great detail. Even though users nowadays engage in immersive 3D environments, most can only create 2D text and select standard 3D objects from an asset library.
The creation of metaverse content is not fully supported by the current authoring tools and the existing techniques for user interaction with the Metaverse. In the past two decades, the human-computer interaction community has attempted to improve the ease of user interaction in diversified virtual environments. Nonetheless, usability gaps still exist, resulting in low efficiency and user frustration [5]. We see that such gaps will not be overcome if we rely purely on investigating user behaviours with alternative interfaces and interaction techniques, especially since the tasks inside virtual 3D spaces grow more complicated.

Third, creating large objects, e.g., a dragon floating in mid-air, requires a relatively large spatial environment. Users unavoidably encounter many distal operations between the user position and the virtual creation. It is worth mentioning that users are prone to errors during such distal operations. A prior work [6] provides evidence that users with headsets achieve lower pointing accuracy on distal targets. Considering such complicated operations in content creation, typical metaverse users cannot immediately create objects except those already in the asset library. In other words, metaverse users have no appropriate approaches to unleash the full potential of creating content in the endless canvas of the Metaverse. Instead, they hire professionals to draw and mould virtual instances on traditional desktops. For virtual space owners, a team of professionals, e.g., Unity developers, may spend days or weeks creating virtual environments. Further change requests (e.g., adding a new 3D model) for such environments may take additional hours or days. Without time or skills, general users can only experience the content built by virtual space owners. As shown in Figure 1, this rigid circumstance is analogous to the 'read mode' of Web 1.0. Creating unique metaverse content has become highly inconvenient and demanding. We will likely face the circumstance of 'Web 1.0' in 3D virtual worlds, with some features inherited from Web 2.0, such as writing new text and uploading photos.

To alleviate the barriers mentioned above, this article argues for using AI-generated content (AIGCs) in both content generation and AI-mediated user interaction in the metaverse. The article envisions that GPT-like models can trigger a content singularity in the Metaverse and assist interaction between human users and virtual objects in the Metaverse. Before we move on to the main discussion, we provide some background information regarding the Metaverse and AIGCs, as follows.

**Metaverse**: The Metaverse refers to the NEXT Internet, featuring diversified virtual spaces and immersive experiences [1]. Similar to existing cyberspace, we can regard the Metaverse as a gigantic application that simultaneously accommodates countless users of diverse types.

Figure 1: AIGCs can prevent us from falling into another 'Web 1.0' in the metaverse era, where the layman end-users suffer from the missing capability of creating unique content. We are natively skilful at texting and photo-taking in social networks but not at editing 3D content in virtual 3D spaces. AIGCs may serve as a saviour to enable general users to freely express themselves, while owners of the platforms or virtual spaces can still delegate the content creation tasks to their peer users.
The application comprises computer-mediated worlds under the Extended Reality (XR) spectrum and emerging derivatives like Diminished Reality (DR). Ideally, users will create content and engage in activities surrounding such content. Multitudinous underlying technologies serve as the backbone of the Metaverse, including AI, IoT, mobile networks, edge and cloud servers, etc. Among these technologies, we can view AI as the fuel to support the automation of various tasks and content creation. Our discussion in this article goes beyond the well-known applications, including creating avatars, virtual buildings, virtual computer characters and 3D objects, automatic digital twins, and personalized content presentation [2].

**AI-Generated Content (AIGC)**: Apart from the analytical AI focusing on traditional problems like classification, AIGC can leverage high-dimensional data, such as text, images, audio, and video, to generate new content. For instance, OpenAI announced its conversational agent, ChatGPT [7], whose underlying GPT-3 and GPT-4 models can create text and images, respectively. Moreover, the generated content can support the generation of metaverse objects, such as speech for in-game agents, 3D objects, artistic artefacts, and background scenes in many virtual worlds. The most popular techniques, including GANs, diffusion models, and transformer architectures, support the challenging context-to-content task. It is important to note that generative AI and AIGC differ subtly [8]. AIGC focuses on content production problems, whereas generative AI concerns the underlying technologies that facilitate the development of multiple AIGC activities.

## 2 Content Singularity

The most widely used metaverse applications have appeared in industrial settings over the past two decades [9]. Firms have the resources to build up proprietary systems and prepare content for their domains of interest. The work content drives the adoption of AR/VR applications in industrial sectors, with the following two examples. First, labour at warehouse docks and assembly lines can obtain helpful information (e.g., the next step) through the lens of AR [10]. Second, personnel at elderly care centres can nurture compassion through perspective-taking scenarios in virtual reality (VR) [11]. Content is one of the incentives, and end-users achieve enhanced abilities or knowledge, perhaps resulting in better productivity.

As we discussed with the three main barriers in _Retrospect_, users have limited ability and resources to create unique content in the Metaverse. General users can only draw a simple, rough sketch to indicate an object in Extended Reality. Nonetheless, such expressiveness is insufficient for daily communication or on-site discussion for specific work tasks. We may expect the content on AR devices to be no worse than what we have in Web 2.0. To alleviate the issue, AIGCs can play an indispensable role in lowering the barriers and democratizing content creation. Figure 2 illustrates a potential scenario where users can effectively create content in virtual-physical environments. For instance, a user with an AR device is situated in a tourist spot and attempts to show the iconic vessels to explain the cultural heritage of Hong Kong's Victoria Harbour. First, the AIGC model can understand the user's situation and context through sensors on the AR device, for instance, depth cameras.

Figure 2: Generating a vessel that fits the context of Victoria Harbour, Hong Kong. As a result, a junk boat appears: original view (left), sketching (middle), and the generated vessel on top of the physical world (right).
Second, the user can make a quick, rough sketch to indicate the shape and position of the generated object. In addition, a prompt containing the user's description, e.g., 'a vessel that fits this view', is sent to the AIGC model through methods like speech recognition. It is important to note that our speech often involves 'this' or 'that' to indicate a particular scene or object; the AIGC model can employ the user's situation and context in such a scenario. Finally, a junk boat appears in Victoria Harbour through the lens of AR.

Singularity can refer to a point in time or a condition at which something undergoes a significant and irreversible change, depending on the context of such changes. It is frequently used in technology and artificial intelligence (AI) to describe the hypothetical moment when robots or AI transcend human intellect and become self-improving or perhaps independent [12]. This notion is also known as technological singularity or AI singularity. It becomes a contentious issue for the Metaverse once AIGCs are widely adopted by end users. We believe the occurrence of AI-generated content might have far-reaching consequences for cyberspace. Next, the concept of content singularity refers to the belief that we are reaching a time when there will be abundant virtual material available on the Internet that people will consume as part of their daily routine. This is owing to the demand for immersive cyberspace and the related technological ability, perhaps AIGCs, to pave the path towards the exponential proliferation of virtual 3D content. This is similar to social networks, in which people contribute and consume content. Since the launch of ChatGPT1, pioneering prototypes shed light on the daily uses of GPT-driven intelligence on AR wearables, such as generating simple 3D content using WebAR (a.frame) by entering prompts2 and providing suggested answers for conversations during dates and job interviews3. These examples go beyond the industrial scenarios, implying that AIGC-driven conversational interfaces can open new opportunities for enriching virtual-physical blended environments [13]. Generative AI models can recognise the user context using the sensors on mobile devices (e.g., cameras on AR headsets or smartphones) to generate appropriate objects according to given prompts. In this decade, general users will treat generative AI models as utilities like water, electricity, and mobile networks. Meanwhile, the metaverse is an endless container to display AI-generated content, so users can read and interact with the AI utility in mid-air. Users can make speech prompts to generative AI models to create characters, objects, backdrop scenes, buildings, and even audio feedback or speeches in virtual 3D environments. Such content generation should not pose any hurdles or technical difficulties to general users. It will be as simple as posting a new photo on Instagram, typing a tweet on Twitter, or uploading a new video on TikTok. The lowered barrier will encourage people to create content, and more content consumers will follow, eventually leading to a metaverse community. In addition, reward schemes should be established when the content singularity arrives to sustain the content creation ecosystem. AIs and the data owners behind them become the primary enablers, while users become the principal actors.
The way of splitting the reward among them is still unknown, and ongoing debates will continue.

Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)

Footnote 2: [https://www.youtube.com/watch?v=J6bSCVAxCoDs&ab_channel=ARMRXR](https://www.youtube.com/watch?v=J6bSCVAxCoDs&ab_channel=ARMRXR)

Footnote 3: [https://twitter.com/bryanhpchiang/status/](https://twitter.com/bryanhpchiang/status/) 16398303836164874265xxt=HHwWhMDTftbC7MEtAAAA

Generative AI models are obviously drivers of content generation. But we should not neglect their potential for removing content, primarily physical counterparts, through the lens of XR devices, also known as Diminished Reality (DR). It is important to note that the naive approach of overlaying digital content on top of the physical world may hurt the user experience. A virtual instance may not match the environmental context, and it may be necessary to change the context to improve perception when the metaverse application strongly relates to daily functions. We may accept a virtual Pokemon appearing on top of a physical rubbish bin. However, it feels strange when a virtual table overlaps a physical table that is being disposed of. Therefore, AIGCs may serve as a critical step of DR to smoothen the subsequent addition of digital overlays (AR). In this sense, the demands for AIGCs will penetrate the entire process of metaverse content generation. More importantly, diminished items should not violate safety or ethical requirements. Hiding a warning sign may put users in danger. Also, removing a person's clothes may reveal inappropriate content, i.e., a naked body. It is essential to reinforce regulation and compliance when generative AI models are widely adopted in the content-generation pipeline.

On the other hand, content singularity can also refer to the challenges of information overload in a virtual-physical blended environment, in which people are assaulted with so much information that it is impossible to digest and make sense of it all [14]. The sheer volume of online information, including text, photos, videos, and music, is already daunting and rapidly increasing. As such, the virtual-physical blended environment may cause a lot of disturbance to users if we neglect this exponential proliferation of 3D content. Information or knowledge in the tangible world can indeed be thought of as limitless, whereas augmentation within the relatively limited field of view of headsets is challenging. Consequently, we must optimise the presentation of digital content. Typically, metaverse users with a naive approach to virtual content delivery will experience information inundation, thereby requiring additional time to consume the augmentation. Context awareness, covering the users, their environments, and social dynamics, is a prominent strategy for managing the information display. AIGCs at the periphery, with the assistance of recommendation systems, can interpret user context and provide the most pertinent augmentation [14]. Although we foresee a rise in content volume when AIGCs are fully engaged as a utility in the Metaverse, two significant issues should be addressed. First, content uniqueness raises concerns about the quality and relevance of the material provided. With so much material accessible, users are finding it increasingly difficult to find what they seek and to discern high-quality from low-quality content.
To address the issues of content singularity, additional research is needed to create new tools and methodologies that assist users in filtering, prioritizing, and personalizing the material they consume. Current solutions in Web 2.0 include search engines, recommendation algorithms, and content curation tools. Yet, the issue of content singularity remains a complicated and continuing one that will undoubtedly need further innovation and adaptation as the volume and diversity of digital information increase in the Metaverse. Second, contemporary conversational interfaces have long been criticized for lacking transparency as a 'black box' [15]. In other words, conversational AIs do not expose a complete list of their abilities, while general users usually have no clue about what the AI can achieve. Significantly, users with low AI literacy cannot quickly master the interaction with GPT-like AI agents through a conversational interface. Exploring the perfect fit between generative AI models and the XR environment is necessary. For instance, the AI models can suggest potential actions to users by putting digital overlays on top of the user's surroundings. As such, the user can understand the AI's abilities and will not make ineffective enquiries or wasted interactions with the generative AI model. In addition, more intuitive cues should be prepared, according to the user context, to inform the user about 'what cannot be done' with a generative AI model.

## 3 Human-Metaverse Interaction

Besides generating virtual content, AIGC can be considered an assistive tool for user interaction in the metaverse. From other users' perspectives, a user's movements and interaction with virtual objects can be a part of the content in virtual worlds. The difficulties of controlling an avatar's movements and interacting with virtual objects can negatively impact an individual's workload and the group's perceptions of a metaverse application. For example, a group may have to wait for an individual to finish a task, causing frustration. Before discussing how the prompt should be extended in the Metaverse for easier interaction between users and metaverse instances, some fundamentals of human-computer interaction (HCI) and prompt engineering [16] are considered. Prompts have different concerns in HCI and NLP. From the HCI perspective, effective prompts are clear, concise, and intuitive. Designers have to craft prompts for an interactive system that lead users to take specific actions or provide relevant input while keeping users' workloads manageable. Once the user's needs and goals have been identified, the next step is to craft effective prompts that guide the user towards achieving those goals. The AI-generated results then provide users with the information they need to take action in a particular context. Therefore, prompt engineering is an essential aspect of designing interactive systems that are easy to use and achieve high levels of user satisfaction. Prompt engineering, in NLP and particularly with large language models (LLMs), refers to methods for communicating with LLMs to steer their behavior towards desired outcomes. The traditional chatbot (e.g., ChatGPT) considers primarily text prompts. In contrast, the prompts from metaverse users can become more diverse by considering both the context discussed above and multiple user modalities, including gaze, body movements, and psychological and physiological factors.
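To make the idea of a metaverse-ready prompt concrete, the sketch below shows one purely hypothetical way such a payload could bundle text with sensed context and modalities before being rendered for an LLM; every field name and the rendering format are assumptions for illustration, not an existing API.

```python
from dataclasses import dataclass, field

@dataclass
class MetaversePrompt:
    text: str                                  # e.g., "a vessel that fits this view"
    scene_labels: list[str] = field(default_factory=list)  # from depth/RGB sensing
    gaze_target: str | None = None             # object or region the user looks at
    gesture: str | None = None                 # e.g., a rough mid-air sketch

    def render(self) -> str:
        """Flatten the multimodal payload into a single text prompt for an LLM."""
        parts = [f"User request: {self.text}"]
        if self.scene_labels:
            parts.append("Scene: " + ", ".join(self.scene_labels))
        if self.gaze_target:
            parts.append(f"User is looking at: {self.gaze_target}")
        if self.gesture:
            parts.append(f"Gesture input: {self.gesture}")
        return "\n".join(parts)

prompt = MetaversePrompt("a vessel that fits this view",
                         scene_labels=["harbour", "skyline", "water"],
                         gaze_target="waterfront")
print(prompt.render())
```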
In addition, perhaps employing certain personalization techniques, prompts should be tested and refined iteratively to ensure that they effectively guide LLMs towards the desired output. As such, metaverse-centric prompt engineering requires a new understanding of the user's needs and goals, as well as their cognitive abilities and limitations. This information can be gathered through user testing, A/B testing, user surveys, and usability testing in many virtual worlds. The prompt design can be extended to the subtle interaction between virtual objects and users. VR sculpting is a popular application where users can freely mould their virtual objects in virtual spaces. A usability issue of VR, inaccurate pointing at vertices, remains a hurdle [4]. It is still far away from being the main tool of creativity due to its low efficiency. A hybrid model can be considered: generative AI models like LLMs can first generate a model of 3D content, and then we customize the model with manual editing in VR. In this sense, an important issue arises: we cannot get rid of manual operations with virtual instances.

Figure 3: An example pipeline of content creation and human-metaverse interaction supported by AIGCs: (a) brainstorming with conversational agents (collecting requirements simultaneously); (b) auto-generation of the contents; (c) start of manual editing, but huge pointing errors exist; (d) following (c), AI-assisted pointing for selecting vertices; (e) following (d), AI-assisted vertex editing; (f) manual editing of subtle parts; (g) AI-assigned panel and user interaction on the virtual objects; (h) user reviews of the objects while AIGCs attempt to understand the user perceptions; (i) content sharing, e.g., for educational purposes in a classroom. Photos are extracted and modified from [4] for illustration purposes.

AIGCs, in the future, should assist human users in virtual tasks that are inherently complex and clumsy under hardware constraints, such as a limited field of view (FOV). AIGCs can parse user actions in virtual environments, for instance, limb movements and gazes towards a virtual object, to take over appropriate parts of the manual editing. As such, AIGCs can serve as assistants for metaverse users. It is important to note that AI-assisted tasks already happen on everyday ubiquitous devices, i.e., smartphones. A prevalent example in 2D UIs is typing text on soft keyboards. Users tap on keys repetitively and make typos when adjacent keys are triggered. Such an erroneous task can be assisted by auto-correction. Users can tap the mistyped word and select the correct spelling from the suggested words. To achieve this, an AI model learns the words in the English dictionary and then understands user habits by recording the user's word choices. Typing on a soft keyboard is a good example of an AI-assisted task. In virtual environments, interaction tasks, including dragging an object to a precise position and editing an object of irregular shape, can be challenging to users. AIGCs open opportunities to help human users accomplish such tasks. Nonetheless, the typing tasks on soft keyboards are manageable because the dictionary is a reasonable search space. In contrast, AIGC-driven assistance can encounter a much larger search space. In the editing task, a user can first select a vertex at a rabbit's tail. The next action can be changing the vertex property and then moving to another vertex. The next vertex could be on the head, the bottom, etc.
With the current technology, predicting the user's next action with high accuracy is very unlikely. However, if available, AIGCs may leverage prior users' behaviours from a dataset containing user interaction footprints, and accordingly recommend several 'next' edits to facilitate the process. Eventually, the user can choose one of them and accomplish the task without huge burdens. In a broader sense, diversified items exist in many virtual worlds, and a virtual item can have many possible relationships with another. As such, user interaction with AIGCs' predictions becomes complicated. For instance, a user picks up an apple and then lifts a tool to cut it. Other possible actions include putting down the apple, grabbing an orange, etc. It is also important to note that building an ontology for unlimited items in the Metaverse is nearly impossible. One potential tactic is to leverage the user's in-situ actions. Generative AI models can read the user's head and hand movements to predict the user's regions of interest and, thus, upcoming activities. Ideally, a user may give a rough pointing location for a particular item. Then, generative AI models can make personalized and in-situ suggestions for the user's subsequent interactions with virtual objects, with sufficient visualization to ensure intuitiveness. We believe that the above examples are only the tip of the iceberg but sufficient to illustrate the necessity of re-engineering the ways of making metaverse-ready prompts for generative AI models.

Then, there is the issue of how natural people will feel in the metaverse environments built, or in some cases hallucinated, with AIGCs. Urban designers and architects are now looking into which factors of our ordinary environments matter most when attempting to translate those environments into digital ones, beyond mere 3D replication. Here, issues such as _subjective presence_ (comfort, feeling, safety, senses) or _active involvement_ (activities taking place, other people's presence), in addition to the traditionally considered _structural aspects_ (colour, furnishing, scale, textures), will play a pivotal role in how the metaverse experience will feel for its users (see, e.g., [17]). The questions to solve will include to what degree we want generative AI to be able to spawn experiences that feel safe, and whether the spaces should more closely reflect the world as we know it outside the metaverse, where different, even adjacent, spaces have very different perceived human characteristics. The technical capability of AIGCs only opens a landscape of generating metaverse content, whether adding backdrops (AR) or removing objects that cause strong emotions (DR). However, we know very little about the user aspects once AIGCs are scaled up. As the metaverse moves beyond the sole digital interfaces, i.e., 2D UIs, AIGC can be embedded in the physical world and alter the user's situated environment to fulfil users' _subjective presence_, which can be abstract. It can vary greatly due to the user's beliefs (norms, customs, ego, and so on) and their environment. A machine may not truly interpret the meaning of 'calm', especially if multiple subjective presences are underlying, e.g., 'safe and calm'. Suppose a user makes a simple prompt of 'calm' to an AIGC model. The results may be unsatisfactory, as the user does not make effective prompts, for example, by adding words like 'meditation, wellness and sleep' if the user is inside a bedroom.
It is worth noting that users with headsets may expect quick and accurate feedback rather than having to ask the generative AI models to revise the content over multiple iterations. In addition, _subjective presence_ is not limited to a single user. Multiple users will interact with metaverse content in a shared space, potentially causing co-perception and communication issues. Generating the right content at the right time thus poses a challenge that goes beyond technical aspects. AIGC in the Metaverse will lead to a novel niche of understanding the dynamics among metaverse content, physical space, and users.

## 4 Towards AIGM Framework

We argue that AIGM is a must if we aim to unleash all of the latent potential in the metaverse concept. Regardless of who the leading developer is, the metaverse must be built for humans, and as humans, everything we do is embodied in the space around us [17]. The leading developers do not have the authority to dictate what content we should have on the Next Internet, as we have seen in the Metaverse of 2022, in which the virtual spaces are office-like environments. We usually spend eight work hours at the physical office, and it is insane to spend another eight hours in a virtual office. Ironically, except for the standard items given in asset libraries, we don't have the right to decorate such office space with our unique creations. Ultimately, the popular trends in the Metaverse are the users' call. Google Image searches conducted since Q3 2021 make it evident that creators have largely defined the metaverse with blue, dark, and purple colours. We believe the trend of popular content is ever-changing. Driven by the vital role of AIGCs in democratizing content creation, everyone in the Metaverse can decide, (co-)create, and promote their unique content.

Figure 4: AIGM framework showing the relationship between human users, AIGCs and virtual-physical cyberspace (i.e., the Metaverse).

To scale up the use of AIGCs, we propose a framework for an AI-Generated Metaverse (AIGM) that depicts the relationships among AIGCs, virtual-physical blended worlds, and human users (see Figure 4). AIGC is the fuel to spark the content singularity, and Metaverse content is expected to surround everyone like the atmosphere. This creates an entire creation pipeline in which AIGCs are the key actors. First, the users can talk to generative AI models to obtain inspiration during human-AI conversations (Human-AI collaboration). Consequently, generative AI models provide the very first edition of the generated content (AI-Generation). The AI then supports subtle editing during content creation (AI-Assistance). Some precise details can be done manually (Human users); if necessary, multiple users can be involved in the task (Multi-user collaboration). In addition, it is important to note that AIGCs can assign properties governing how users and virtual instances will interact, e.g., through a tap on a panel, and accordingly, AIGC-driven evaluations can be performed to understand user performance and cognitive load [18]. Eventually, content sharing and the corresponding user interaction can be backed by AIGCs.

## 5 Concluding Remarks

During a deceleration of global metaverse development, we contend that AIGCs can be a critical facilitator for the Metaverse. This article shares some perspectives and visions of when AIGCs meet the Metaverse. Our discussion started with a look back at the key flaws of metaverse applications in 2022.
We also highlight the fundamental difficulties the metaverse encountered. Accordingly, we examine how AIGCs will speed up metaverse development from a user standpoint. The article eventually speculates on future possibilities that combine the Metaverse with AIGCs. We call for a conceptual framework of AIGM that facilitates content singularity and human-metaverse interaction in the AIGC era. We also hope to provide a more expansive discussion within the HCI and AI communities.
2308.11096
MosaiQ: Quantum Generative Adversarial Networks for Image Generation on NISQ Computers
Quantum machine learning and vision have come to the fore recently, with hardware advances enabling rapid advancement in the capabilities of quantum machines. Recently, quantum image generation has been explored with many potential advantages over non-quantum techniques; however, previous techniques have suffered from poor quality and robustness. To address these problems, we introduce MosaiQ, a high-quality quantum image generation GAN framework that can be executed on today's Near-term Intermediate Scale Quantum (NISQ) computers.
Daniel Silver, Tirthak Patel, William Cutler, Aditya Ranjan, Harshitta Gandhi, Devesh Tiwari
2023-08-22T00:40:37Z
http://arxiv.org/abs/2308.11096v1
# MosaiQ: Quantum Generative Adversarial Networks for Image Generation on NISQ Computers

###### Abstract

Quantum machine learning and vision have come to the fore recently, with hardware advances enabling rapid advancement in the capabilities of quantum machines. Recently, quantum image generation has been explored with many potential advantages over non-quantum techniques; however, previous techniques have suffered from poor quality and robustness. To address these problems, we introduce MosaiQ, a high-quality quantum image generation GAN framework that can be executed on today's Near-term Intermediate Scale Quantum (NISQ) computers.1

Footnote 1: Accepted to appear in the proceedings of International Conference on Computer Vision (ICCV), 2023. This is authors' pre-print copy.

## 1 Introduction

Generative Adversarial Networks, or GANs, are a type of neural network architecture used in machine learning and computer vision for generative modeling [8, 22, 2, 9]. A classical GAN consists of two neural networks, a generator and a discriminator, that are trained simultaneously in a competitive process. The generator generates fake data samples, while the discriminator tries to distinguish them from real ones found in the training set, hence serving as an "adversarial entity". Classical GANs have received significant attention for generating high-quality images, among other purposes including text generation, data augmentation, and anomaly detection [20, 1, 5]. Naturally, this has spawned interest in the quantum information science community to develop corresponding quantum GANs that run on quantum computers. While recent efforts toward developing quantum GANs have been instrumental and early results have been encouraging, we discovered that existing approaches have severe scalability bottlenecks and significant room for improvement. This is especially true in the context of high-quality image generation on real-system quantum computers.

### Opportunity Gap for Quantum GANs.

A recent work by Huang et al. [11], referred to as QGPatch in this paper, is the state-of-the-art demonstration of quantum GANs on real quantum computers. As our paper also demonstrates, QGPatch can learn different shapes and produce recognizable images in some cases, but can often yield low-quality images. First, it suffers from a scalability challenge because it breaks the image into "patches" and performs pixel-by-pixel learning. Second, it is not effective at generating a variety of images within the same class; this problem is known as "mode collapse" [7]. It is non-trivial to achieve high-quality image generation while also maintaining variety. Motivated by these limitations, MosaiQ's design pushes the state of the art by achieving higher scalability, image quality, and variety.

### Contributions of MosaiQ

**I.** MosaiQ introduces the design and implementation of a novel _quantum generative adversarial network for image generation on quantum computers_. MosaiQ's approach is a hybrid classical-quantum generation network where a network of low-circuit-depth variational quantum circuits is leveraged to learn and train the model. Upon acceptance, MosaiQ will be available as an open-source contribution.

**II.** MosaiQ's design demonstrates how the extraction of principal components of images enables us to learn and generate higher-quality images, compared to the state-of-the-art approach, which is limited in its scalability due to pixel-by-pixel learning [11].
However, exploiting information in principal components to its full potential is non-trivial, and MosaiQ proposes to mitigate those challenges using feature redistribution. Furthermore, MosaiQ introduces a novel adaptive input noise generation technique to improve both the quality and variety of generated images, mitigating the common risk of mode collapse in generative networks.

**III.** Our evaluation demonstrates that MosaiQ significantly outperforms the state-of-the-art methods [11] on both simulation and real quantum computers with hardware error. MosaiQ is evaluated on the MNIST [6] and Fashion MNIST [21] datasets, which are widely used for QML evaluation on Near-term Intermediate Scale Quantum (NISQ) machines [15, 11]. MosaiQ outperforms the state-of-the-art methods [11] both visually and quantitatively; _for example, over 100 points of improvement in the quality of images generated on the IBM Jakarta quantum computer using the FID (Frechet Inception Distance) score [10]_, which is popularly used for comparing image quality.

## 2 MosaiQ: Challenges and Solution

In this section, we present the design and implementation of MosaiQ, a quantum image generative network. To provide better context for understanding, we first briefly describe classical generative adversarial networks (GANs) (depicted in Fig. 1(a)), describe the limitations of state-of-the-art quantum GANs, and then provide the details of MosaiQ's quantum GAN design (depicted in Fig. 1(b)).

### Generative Adversarial Networks (GANs)

Generative Adversarial Networks, or GANs, are a type of neural network architecture used in machine learning for generative modeling [8]. The basic architecture and workflow of a classical (non-quantum) GAN are shown in Fig. 1 (a). A classical GAN consists of two neural networks, a generator and a discriminator, that are trained simultaneously in a competitive process. The generator generates fake data samples, while the discriminator tries to distinguish them from real ones found in the training set, hence serving as the "adversary". Through training, the generator learns to create increasingly realistic samples that mimic the distribution of the real data in order to better fool an increasingly effective discriminator, eventually attempting to reach an equilibrium, with the value function expressed as \(\min_{G}\max_{D}\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}(\mathbf{x})}[\log D(\mathbf{x})]+\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}(\mathbf{z})}[\log(1-D(G(\mathbf{z})))]\). This value function has two main identifiable components that correspond to the respective optimization objectives of the generator and the discriminator. The use of \(\log\) provides more numerical stability because it converts the multiplication of multiple small probabilities into addition, and it also allows us to calculate the derivatives more easily during the optimization process. _Once fully trained, the generator component of the GAN is capable of converting random noise into new data samples (e.g., boots) that conform to the original distribution of the training data._ Essentially, the generator, without the discriminator, can be used at inference time to obtain new data samples. GANs have been shown to be useful in a variety of applications such as image and text generation, data augmentation, and anomaly detection [20, 1, 5]. Naturally, this has spawned interest in the quantum information science community to develop corresponding quantum GANs.
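The sketch below illustrates the minimax objective above in a minimal PyTorch training step, using the standard practice of optimizing the two log terms via binary cross-entropy (and the common non-saturating form of the generator loss); the tiny MLP shapes and hyperparameters are illustrative and unrelated to MosaiQ's actual networks.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    # Discriminator step: maximize log D(x) + log(1 - D(G(z))).
    fake = G(torch.randn(n, latent_dim)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: the non-saturating form maximizes log D(G(z)).
    fake = G(torch.randn(n, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```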
_Quantum GANs follow a similar generator and discriminator structure as classical GANs but use quantum principles to train and infer from the model._ Typically, the discriminator functions on classical resources, and the generator is trained on quantum resources. This is expected, as this structure allows quantum GANs to leverage quantum resources for the generation task - the component that persists beyond training; recall that the discriminator is essentially a quality inspector that can be decoupled after training and is not required during inference. While this structure is intuitive and has been demonstrated to be somewhat promising [11], there are multiple open challenges that need to be overcome to achieve higher quality.

### Limitations of Existing Quantum GANs

A recent work by Huang et al. [11], referred to as QGPatch, is the most complete, state-of-the-art demonstration of quantum GANs on real quantum computers. QGPatch follows the quantum GAN architecture described earlier but achieves limited effectiveness - QGPatch can learn different shapes and produce recognizable images in some cases, but often suffers from low quality. Fig. 2 shows that digit '0' is a reasonable approximation of the ground truth, but the generated boot image is far less recognizable. Despite the imperfect generations, we can see in Fig. 2 how a classical GAN with \(25\times\) the parameters of QGPatch, trained for \(25\times\) as many iterations, is worse than or similar to the quantum technique - supporting the potential of quantum machine learning for image generation tasks as compared to the classical approach with much higher resources and complexity. This independently provides experimental evidence for why the quantum information science community is motivated to accelerate the progress of quantum GANs, despite their current limitations.

Figure 1: Classical generative adversarial network (GAN) and MosaiQ's hybrid quantum-classical GAN architecture.

Figure 2: State-of-the-art quantum image generator, QGPatch [11], generates low-quality images for different datasets. However, its quality is still similar to a classical GAN with \(25\times\) more parameters and iterations.

_The reasons for the limited effectiveness of quantum GANs are multi-fold._ The quantum generator component runs on the quantum hardware and requires many qubits to produce high-quality images from random input noise (a qubit is the fundamental unit of computation and information storage on quantum computers). QGPatch addresses this challenge by breaking the image into "patches" and employing a generator for different patches. While this is reasonable for smaller-resolution images, the "patch-based" approach suffers from poor scalability due to its fundamental nature of learning pixel by pixel. For example, a total of 245 qubits are required for images in the full-resolution 784-pixel MNIST dataset. _The second challenge is performing efficient learning from random input noise - it is critical for the quantum generator to effectively utilize the random input noise to generate a variety of images for the same class._ The inability to generate a variety of images within the same class - which even classical GANs suffer from - is popularly known as the "mode collapse" problem [7]. Mode collapse is a side-effect of the generator learning to produce only one type of image for a given class, because the generator has learned to "fool" the discriminator with this image and saturates in its learning.
It is non-trivial to achieve high-quality generation while also maintaining variety. Motivated by these limitations, MosaiQ's design pushes the state of the art.

### Overview of MosaiQ and Key Ideas

**Hybrid Quantum-Classical Architecture.** MosaiQ uses a hybrid quantum-classical architecture for image generation, as summarized in Fig. 1(b). The generator is quantum in nature and trained using quantum simulation, and the discriminator is classical - similar to the construction in [15, 11]. However, there are two key novel architectural changes: (1) to address the scalability and quantum resource bottlenecks for generating high-quality images, MosaiQ applies a transformation on the input image dataset, and (2) MosaiQ employs adaptive random input noise to mitigate the variety (mode-collapse) challenge. These two key features are described after a brief description of MosaiQ's generator and discriminator.

**MosaiQ's Quantum Generator Network.** MosaiQ's quantum generator component is a network of multiple sub-generators. Each sub-generator is a _variational quantum circuit_ that is iteratively optimized to train a model. Variational quantum circuits are specific types of quantum circuits in which some components of the circuit are tunable parameters, which are iteratively optimized. As background, a quantum circuit is essentially a sequence of quantum gates applied to qubits. The quantum gates are fundamental operations that manipulate the qubit state, and any gate can be represented as a unitary matrix. One-qubit gates, such as the Pauli gates (\(\sigma_{x},\sigma_{y},\sigma_{z}\)), apply a rotation to just one qubit and are categorized by the axis around which the rotation takes place, and by how much. Multi-qubit gates, such as the CNOT gate, allow the creation of entanglement. Variational quantum circuits in MosaiQ are composed of two types of sections. The first type, \(X\), has fixed gates that entangle the qubits. The second type, \(\theta\), has tunable \(U_{3}\) gates that are optimized via training. The overall circuit \(V\) is therefore an optimization function built of unitary transformations such that \(V=U(X,\theta)\). One key feature of variational quantum circuits is the ability to leverage many off-the-shelf classical deep-learning techniques for training and optimization. This includes learning through common loss functions such as the \(L_{2}\) loss and leveraging existing optimizers such as Adam [13], which MosaiQ leverages.

All sub-generator circuits in MosaiQ share an identical five-qubit architecture. Fig. 3 shows an example of the sub-generator circuit. The circuit begins by encoding the input noise as angles using \(Rx\) and \(Ry\) gates. Following the embedding of the noise, the parameterized weights are encoded on each quantum layer, alongside \(CZ\) gates used to entangle the qubits at each layer. These weights contain the portion of the circuit that is optimized. Following these repeated layers, the \(PauliX\) expected value is measured for each qubit. We note that the simple design of MosaiQ's generator circuits is intentional; limiting the number of tunable parameters and the depth of the circuits enables MosaiQ to mitigate the impact of hardware error on real quantum machines and maintain high quality, as confirmed by our evaluation (Sec. 3).
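For illustration, a minimal PennyLane sketch of one such five-qubit sub-generator is given below. The layer count and parameter shapes are assumptions for the example, not necessarily MosaiQ's exact ansatz of Fig. 3:

```python
import pennylane as qml
from pennylane import numpy as np

N_QUBITS, N_LAYERS = 5, 4  # five-qubit sub-generator; layer count assumed
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def sub_generator(noise, weights):
    """Angle-embed the input noise, then apply trainable rotation
    layers with CZ entanglement, as described above."""
    for q in range(N_QUBITS):            # noise embedding via Rx/Ry angles
        qml.RX(noise[q], wires=q)
        qml.RY(noise[q], wires=q)
    for layer in range(N_LAYERS):        # trainable layers
        for q in range(N_QUBITS):
            qml.RY(weights[layer, q], wires=q)
        for q in range(N_QUBITS - 1):    # entangle neighboring qubits
            qml.CZ(wires=[q, q + 1])
    # one output feature per qubit: Pauli-X expectation values
    return [qml.expval(qml.PauliX(q)) for q in range(N_QUBITS)]

weights = np.random.uniform(0, np.pi, (N_LAYERS, N_QUBITS))
features = sub_generator(np.random.uniform(0, np.pi / 2, N_QUBITS), weights)
```

Because the circuit is differentiable, `weights` can be optimized with standard gradient-based training, exactly the property of variational circuits highlighted above.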
**Scalable Learning by Extracting Principal Components.** Unlike previous approaches, which learn to construct the pixels of an image directly and, consequently, suffer from the scaling bottleneck, MosaiQ demonstrates that extracting and learning rich features is effective. The principal component analysis (PCA) method enables efficient learning by focusing on the important features that comprise unique images, as opposed to having to learn to distinguish the important features from the redundant features in an image via the pixel-by-pixel method. PCA maximizes the entropy for a given number of components, concentrating the information of a dataset efficiently.

Figure 3: Circuit ansatz design of MosaiQ's generators. The RY gates in each layer are optimized during training.

The first step in learning principal components is normalizing the input data. Once the images are normalized, MosaiQ decomposes the images into principal components and scales these components to \([0,1]\) so that the quantum sub-generators can generate throughout the entire space of inputs. As quantum machines rely only on unitary transformations, it is not possible to obtain an output with an absolute value greater than one in measurement. These scaled components are fed into the discriminator as the "real labels", and the generator learns how to mimic this distribution throughout the training process. After learning, the outputs are scaled back to the original principal component range, and are then inverse transformed to form a unique image that is within the distribution of the original images. This method helps us achieve higher quality in a scalable way. However, experimentally we found that this alone is not sufficient. MosaiQ performs intelligent distribution of features to make the learning more effective and robust.
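Before turning to that distribution mechanism, the preprocessing pipeline just described can be sketched as follows (a simplified illustration assuming scikit-learn and min-max scaling; the 40-component setting matches Sec. 3):

```python
import numpy as np
from sklearn.decomposition import PCA

N_COMPONENTS = 40  # matches the experimental setup in Sec. 3

def to_components(images):
    """Flatten 28x28 images, extract principal components, and scale
    each component to [0, 1] for the quantum sub-generators."""
    X = images.reshape(len(images), -1) / 255.0   # assumed pixel normalization
    pca = PCA(n_components=N_COMPONENTS).fit(X)
    Z = pca.transform(X)
    z_min, z_max = Z.min(axis=0), Z.max(axis=0)
    Z01 = (Z - z_min) / (z_max - z_min)           # components in [0, 1]
    return Z01, (pca, z_min, z_max)

def to_image(z01, state):
    """Invert the scaling and the PCA transform to recover a 28x28 image."""
    pca, z_min, z_max = state
    z = z01 * (z_max - z_min) + z_min
    return pca.inverse_transform(z).reshape(28, 28)
```

The generator then only needs to learn the 40-dimensional component distribution; `to_image` performs the inverse transformation used at inference time.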
**Feature Distribution Among Learners.** In a conventional implementation of the previously proposed idea, all generated features are distributed among several sub-generators before being concatenated. Recall that MosaiQ's quantum generator is an ensemble of multiple sub-generators, where each sub-generator is a variational quantum circuit. In the default case, where PCA features are assigned one after another to the sub-generators, an unbalanced distribution of explained variance may emerge, where some sub-generators are responsible for significantly more important generations than others. This is because, by definition, the explained variance of PCA features is heavily concentrated on the first few features, and there is often a significant drop-off in explained variance from one PCA feature to the next. Recall that PCA features are entangled within sub-generators during training to form rich connections for image generation; when some sub-generators do not contain much useful information in any feature, the entire generator does not provide much utility. Therefore, while effective in achieving scalability, performing PCA alone may not achieve the full potential of MosaiQ's quantum image generation, as this leads to unbalanced training, which is likely to quickly discover gradients that ignore many sub-generators and plateau in training. To mitigate this challenge, we propose a new PCA feature distribution mechanism to counteract the unbalanced nature of assigning principal components to sub-generators (Fig. 4).

We begin by assigning the first principal component to the first sub-generator, the second principal component to the second, and so on until each sub-generator has one top principal component. We follow this up by assigning the last \(n-1\) principal components to the first generator, where \(n\) is the size of each sub-generator. This is followed by assigning the second-to-last \(n-1\) features to the second sub-generator, and the process is repeated on the third generator and so on until all features have been assigned. This creates a much more balanced distribution, which enables us to ensure all sub-generators hold utility during the training process and the learning is not skewed.

Figure 4: _MosaiQ divides the PCA features among the eight generator circuits in a manner that the total explained variance is close to equal across all generators._

As an important side note, MosaiQ's distribution also mitigates the pathological side-effects of hardware errors on NISQ quantum machines, where the majority of critical principal component features could be concentrated on a single sub-generator. This sub-generator could be mapped to qubits with higher hardware error rates, and this can potentially make the training less effective. Our evaluation confirms the effectiveness of MosaiQ's principal component feature distribution among learners.
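One plausible reading of this assignment order, using the eight five-qubit sub-generators and 40 features from the experimental setup (Sec. 3), is sketched below:

```python
def distribute_features(n_generators=8, n_per_generator=5, n_features=40):
    """Give each sub-generator one top component (round-robin), then
    back-fill from the weakest components, so explained variance is
    roughly balanced across sub-generators (cf. Fig. 4)."""
    assert n_generators * n_per_generator == n_features
    groups = [[g] for g in range(n_generators)]               # top components
    tail = list(range(n_features - 1, n_generators - 1, -1))  # weakest first
    k = n_per_generator - 1
    for g in range(n_generators):
        groups[g] += tail[g * k:(g + 1) * k]
    return groups

# groups[0] == [0, 39, 38, 37, 36], groups[1] == [1, 35, 34, 33, 32], ...
print(distribute_features())
```

Each sub-generator thus holds one high-variance component plus several low-variance ones, rather than a contiguous block of features.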
Finally, we describe how MosaiQ utilizes adaptive noise during image generation to increase the variety of generated images and avoid mode collapse.

**Adaptive Input Noise to Improve Variety of Generated Images.** Recall that it is critical for the quantum generator to effectively utilize the random input noise to generate a variety of images for the same class. The inability to generate a variety of images within the same class leads to the "mode collapse" problem. Therefore, while MosaiQ's previous methods help us achieve high quality, it is critical to achieve high quality while also maintaining variety. To address this challenge, MosaiQ introduces an adaptive noise range based on the ratio of training loss between the quantum generator and the classical discriminator. The noise range is determined by the current progress of the generator-discriminator mini-max game, instead of the traditional fixed range of \([0,\frac{\pi}{2}]\) as employed by QGPatch. Formally, the adaptive noise (\(\text{Noise}_{\text{adaptive}}\)) is defined below in terms of the generator loss \(G_{L}\) and the discriminator loss \(D_{L}\), where \(\frac{D_{L_{0}}}{G_{L_{0}}}\) is the discriminator-to-generator loss ratio observed after the first epoch during training.

\[\text{Noise}_{\text{adaptive}}=\frac{\pi}{8}+\frac{5\pi}{8}\,\text{ReLU}\Big{(}\text{Tanh}\Big{(}\frac{D_{L}}{G_{L}}-\frac{D_{L_{0}}}{G_{L_{0}}}\Big{)}\Big{)} \tag{1}\]

As the above formula indicates, this adaptive bound enforces a minimum noise range of \(\frac{\pi}{8}\) and a maximum range of \(\frac{3\pi}{4}\) for the noise. While the bounds are chosen to be specific constants, the noise is always distributed within this range, and the upper bound automatically adjusts itself based on the relative effectiveness of the generator and the discriminator. MosaiQ does not require tuning the thresholds of the range for different classes; it automatically adjusts itself to adapt to different conditions during training. MosaiQ embeds the adaptive noise using angle embedding on the quantum circuit. The \(Tanh\) function is used to scale the adaptive noise to asymptotically approach the range \([-1,1]\). MosaiQ chose \(Tanh\) because it is a widely-used activation function in deep learning applications which transforms unbounded data into data bound by \([-1,1]\); \(Tanh\) is defined as \(Tanh(x)=\frac{e^{2x}-1}{e^{2x}+1}\). In practice, we observed that it is not effective to have a noise range lower than \(\frac{\pi}{8}\), so we leverage \(ReLU\) to force negative values to zero. \(ReLU\) is also a commonly used activation function; here it provides a non-linear transform that makes all negative values 0 (to ensure that the noise range is never less than \(\frac{\pi}{8}\)) and keeps all positive values the same. While the non-linearity of \(Tanh\) and \(ReLU\) is important when they are used as activation functions, non-linearity is not their main role in this application. If the generator is not able to catch up with the discriminator for some iterations at this small range, it should keep learning at this smaller range instead of shrinking it even further. We set the minimum as \(\frac{\pi}{8}\) because we discovered that a value lower than \(\frac{\pi}{8}\) results in mode collapse, with almost no distinction between images generated for a given class. Leveraging adaptive input noise ensures more consistent training, as the generator can focus on quality when it is doing relatively worse compared to the classical discriminator, and expand to generating more variety at high quality as it begins to fool the discriminator more on an easier objective. Overall, the adaptive noise mechanism helps us increase the variety of images for a given class while ensuring that the image quality also remains high. This is further confirmed by our evaluation (Sec. 3).
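Eq. (1) itself is a one-liner; a direct transcription follows (illustrative only - how the bound is consumed, e.g., drawing noise uniformly from \([0,\text{bound}]\) before angle embedding, is our assumption):

```python
import math

def adaptive_noise_bound(loss_D, loss_G, initial_ratio):
    """Upper bound of the input-noise range per eq. (1).

    initial_ratio is the discriminator-to-generator loss ratio observed
    after the first epoch; the result always lies in [pi/8, 3*pi/4]."""
    relu = lambda v: max(v, 0.0)
    return math.pi / 8 + (5 * math.pi / 8) * relu(
        math.tanh(loss_D / loss_G - initial_ratio))

# Input noise would then be sampled, e.g., uniformly from [0, bound]
# (an assumption) and angle-embedded into the sub-generator circuits.
```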
**MosaiQ's Classical Discriminator.** MosaiQ's discriminator is a classical deep-learning network, designed to aid in the training of the quantum generator. This network is essentially a set of training wheels to guide a difficult-to-train quantum GAN, and it can be discarded after training. The discriminator is much larger than the quantum generator, as there is only a single discriminator that must compete in the adversarial game with multiple quantum sub-generators. As with its classical counterparts, MosaiQ's discriminator has multiple linear layers with \(ReLU\) activation (e.g., 64 layers). MosaiQ has a terminal layer with \(Sigmoid\) activation, which introduces additional non-linearity into the network and enables it to learn more complex structures. The single value produced at the end of the discriminator network allows it to act as a classifier that distinguishes between real and generated data.

**Putting It All Together.** The overall workflows for training and inference in MosaiQ are visually summarized in Fig. 5 and Fig. 6, respectively. Fig. 5 depicts the continuous feedback-driven process where the discriminator and generator participate in a non-cooperative game to improve the overall quality of MosaiQ. Depending upon the relative losses of the discriminator and generator networks, the random input noise is automatically adjusted to help MosaiQ's generators avoid the mode collapse problem and enable them to generate variety. During the training process, the input images are transformed using PCA to encode the critical information in a compact manner for resource-limited NISQ quantum computers, instead of learning over the input images pixel by pixel. Finally, the five-step inference procedure shown in Fig. 6 highlights that the PCA transformation is _inverted_ when running the fully-trained MosaiQ generators on real quantum computers to generate new images.

Figure 5: MosaiQ's process for training and optimizing the quantum generator circuits using PCA transformation, feature distribution, and adaptive noise generation.

Figure 6: MosaiQ's inference process to generate new images via running the generators on quantum machines.

## 3 Experimental Methodology

**Datasets.** MosaiQ is evaluated on the MNIST [6] and Fashion-MNIST [21] datasets, as they have been widely used for QML evaluation on NISQ-era quantum machines [15, 11]. MNIST consists of 28x28 gray-scale images of handwritten digits from 0 to 9. Fashion-MNIST consists of 28x28 gray-scale images of clothing and accessories. Fashion-MNIST provides more challenges for image generation and is chosen to explore more domains of quantum image generation. For both datasets, we split the dataset by image label and train individual models for each data type. This technique was used in [11] and allows us to interpret the generation process and its difficulties in isolation for each class of data.

**Experimental Framework and Training Details.** The environment for MosaiQ consists of PyTorch [16] acting as a wrapper for PennyLane [3]. MosaiQ is trained on IBM's quantum simulator for speed, but inference is performed on both the quantum simulator and a real quantum machine. For real quantum machine experiments, PennyLane compiles the circuits into a backend compatible with IBM Qiskit [18]. All real machine runs were performed on the IBM QX Jakarta machine [18]. Images used in training are selected from the training set and decomposed into 40 principal components. These components are divided across eight five-qubit sub-generators. The discriminator learns to differentiate at the level of principal components and does not need to utilize a full image. Our final FID metrics are reported at the end of training after 500 iterations (where all methods achieve near-final stability). MosaiQ uses PyTorch's Stochastic Gradient Descent (SGD) optimizer for both the generator and discriminator, and Binary Cross Entropy loss as the shared loss of the generator and discriminator. The adaptive noise range is simple to calculate from the generator and discriminator losses, and MosaiQ automatically adjusts it every training iteration. The generator learning rate is \(0.3\) and the discriminator learning rate is \(0.05\), with a batch size of \(8\).

**Framework for Competing Techniques.** We set up QGPatch, the state-of-the-art technique [11], using the popular PennyLane framework and choose the parameters used in the original paper [11]. Our QGPatch training is performed with a batch size of eight, and it trains until the quality stabilizes (500 iterations). We use the same network size mentioned in the original paper, using 4 sub-generators with 5 features each. Evaluating the probabilities in measurement yields a 64-pixel (8x8) image. As MosaiQ's results are based on the original dataset size of 784 pixels, we upscale the results of QGPatch using bilinear interpolation to allow for direct comparison. We rigorously explored multiple interpolation techniques to provide QGPatch with as much of an advantage as possible and experimentally determined that bilinear interpolation yielded the highest-quality upscaling.
For QGPatch, similar to MosaiQ, all metric scores shown are calculated based on 500 images generated at the end of training, compared to the entire distribution of the respective image category. We additionally set up other classical experiments to act as ablations for the efficacy of MosaiQ. We introduce two techniques: (1) PCAInverse (using random inputs and the inverse PCA transformation to generate images), and (2) ClassicalPCA (a purely classical GAN using the same number of parameters as MosaiQ, but using MosaiQ's PCA technique for feature compression). PCAInverse applies the same inverse PCA transformation as MosaiQ to random noise of size 40. This technique is designed to explore the isolated effects of the inverse PCA transformation procedure, since it does not perform any learning. ClassicalPCA trains a purely classical GAN with an identical number of parameters as MosaiQ and applies the same inverse PCA transformation workflow.

**Figures of Merit.** To capture the quality of image generation, our evaluation figures of merit are both qualitative and quantitative. Our primary quantitative figure of merit is the FID (Frechet Inception Distance) score [10]. FID evaluates the distance between two distributions, as defined below for a Gaussian with mean \(m\) and covariance \(C\) and a second Gaussian with mean \(m_{w}\) and covariance \(C_{w}\): \(\text{FID}=\left\|m-m_{w}\right\|_{2}^{2}+\mathrm{Tr}\big{(}C+C_{w}-2(CC_{w})^{1/2}\big{)}\). A lower FID score between two distributions indicates higher similarity - and hence, lower FID scores imply higher quality. The FID score has been shown to provide a more meaningful comparison than alternative metrics (e.g., the Inception Score) for image GANs [10]. We also evaluate the variance of our images, by evaluating the variance of the pixel values relative to the mean. As defined below, the variance metric gives insight into the distinctness of the images generated by each method. For each pixel at row \(r\) and column \(c\) of an image, we sum the squared difference from the mean value of that respective pixel to obtain our variance score \(V\). We then take the cumulative density function (CDF) of these variance scores for each generated image \(G(z)\), given uniformly distributed noise \(z\) from \([0,\pi/2]\): \(V=\sum_{r}\sum_{c}(\mu_{rc}-G(z)_{rc})^{2}\). A higher variance indicates higher variety in the generated images for a given class - which is desirable; however, it is also critical that a method attains a lower FID score before demonstrating higher variety.

Figure 7: _MosaiQ produces visually higher-quality images compared to the state-of-the-art technique, QGPatch, for different classes of the MNIST and Fashion-MNIST datasets._
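For reference, a sketch of the FID computation between two sets of feature vectors follows (in practice, FID is computed on Inception-network features, which we elide here for brevity):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats, fake_feats):
    """Frechet distance between Gaussians fitted to two feature sets:
    ||m - m_w||^2 + Tr(C + C_w - 2 (C C_w)^(1/2))."""
    m, C = real_feats.mean(axis=0), np.cov(real_feats, rowvar=False)
    m_w, C_w = fake_feats.mean(axis=0), np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(C @ C_w)
    if np.iscomplexobj(covmean):    # trim numerical imaginary residue
        covmean = covmean.real
    return float(np.sum((m - m_w) ** 2) + np.trace(C + C_w - 2 * covmean))
```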
## 4 Evaluation and Analysis

In this section, we go over the results of MosaiQ and provide an analysis of the key elements of its design.

**MosaiQ: Key Results and SOTA Comparison.** MosaiQ yields significantly better image quality, both visually and quantitatively, compared to the state-of-the-art (SOTA) method, QGPatch [11]. As discussed earlier, MosaiQ is evaluated on two different datasets to cover diversity across datasets and within a dataset, with different shapes and styles (digits and clothing). First, we highlight that MosaiQ provides a significant visual enhancement in the images produced across different classes compared to the SOTA method. In fact, Fig. 7 shows that the SOTA-produced images are almost unrecognizable for sophisticated shapes ('0' vs. '5' and different types of shoes). In comparison, MosaiQ is effective across different class types - this is further substantiated by the lower FID scores of the images generated by MosaiQ (Fig. 8 and 9). As expected, the FID score obtained by MosaiQ varies across different classes for both datasets; this is because of the varying degrees of difficulty and shapes of different images. However, the most notable observation is that MosaiQ consistently outperforms QGPatch across all class types by a significant margin (by more than 10 points in many cases). In fact, MNIST digit '7' is quite challenging for QGPatch (its FID score is 60), and MosaiQ improves the FID score by approximately 20 points. Similarly, the most challenging class in the Fashion-MNIST dataset (Boot) receives a 45-point improvement from MosaiQ. The key reason MosaiQ outperforms QGPatch is the efficiency of principal components in capturing information. Instead of having to learn pixels one by one, MosaiQ scales more easily by learning a distribution of features that better organizes redundancy. For example, most of the background of MNIST and Fashion-MNIST images is black; in the case of QGPatch, each such pixel must be synthesized, whereas MosaiQ may be able to build these features with only a few principal components.

Figure 8: _MosaiQ consistently produces lower FID score images (higher quality images) compared to QGPatch across different classes of the MNIST dataset._

Figure 9: _MosaiQ consistently produces lower FID score images (higher quality images) compared to QGPatch across different classes of the Fashion-MNIST dataset._

We also compare MosaiQ to other methods, including the PCAInverse technique, the ClassicalPCA technique, and a recent work on hybrid quantum GANs, which we refer to as HQCGAN [19]. We sample generated images in Fig. 10 and find that the images produced by all methods tested are of lower quality than MosaiQ's, being far less human-recognizable. We show the corresponding FID scores for MNIST in Fig. 11 and Fashion-MNIST in Fig. 12. We find that MosaiQ produces higher-quality images and significantly improved FID scores in all cases tested, compared to the classical methods (PCAInverse and ClassicalPCA) and HQCGAN. Importantly, MosaiQ outperforms ClassicalPCA, demonstrating the power of quantum networks in image generation when compared to equal-sized classical networks. Our results show that while HQCGAN is promising and useful, its final quality may not be as high as MosaiQ's - this is because HQCGAN requires significantly more parameters and training resources, lacks noise mitigation, and learns from noise in a fixed range (\([0,1)\)), instead of using MosaiQ's adaptive noise range technique. Next, we dissect the key reasons behind MosaiQ's effectiveness in more detail.

Figure 10: _MosaiQ produces higher quality images on the MNIST and Fashion-MNIST datasets as compared to HQCGAN [19]. MosaiQ produces higher quality images than the classical methods: PCAInverse and ClassicalPCA._

Figure 11: _MosaiQ produces higher quality images on every class of the MNIST dataset as compared to HQCGAN [19], in addition to the classical methods tested._

**Why Does MosaiQ Work Effectively?**_Effect of careful PCA feature distribution among sub-generators._ Recall that MosaiQ employs an intelligent PCA feature distribution among sub-generators in the generation phase.
The goal is to equalize the explained PCA feature variance across learners - in an effort to make all learners comparably capable, instead of having weaker learners that do not effectively contribute toward the overall quality. To better understand and demonstrate the effectiveness of this mechanism, Fig. 13 shows the FID score over multiple training iterations with and without this mechanism, while keeping all other design elements intact. For easier interpretation, Fig. 13 shows this for the digit class '5' of the MNIST dataset. We observe that PCA feature distribution allows MosaiQ to achieve a lower (and hence better) FID score over training compared to not employing this mechanism. A side benefit of this mechanism is that MosaiQ can also mitigate hardware errors on real quantum computers, as discussed later.

_Effect of adaptive noise generation during training._ Fig. 14 shows the effect of the adaptive noise generation used by the MosaiQ generators, instead of the constant noise threshold range (i.e., \([0,\frac{\pi}{2}]\)) used by the QGPatch method. Recall that GANs often suffer from intra-class mode-collapse challenges, where they can produce a high-quality image for a given class, but all generated images for that class may appear similar. Therefore, it is critical to ensure that the generated images for a given class have sufficient variety. Fig. 14 confirms that adaptive noise improves variety (digit class '5' used as an example) compared to fixed noise ranges. This is because adaptive noise enables the generators to learn over different distributions more effectively and generate variations. Having an adaptive range allows the generator to improve variety when it is doing comparatively well, and to focus on stability, with a smaller range of inputs, when it is performing poorly. Taking advantage of this allows us to have high stability in training and high variety over time.

**MosaiQ on NISQ Quantum Machines.** Our experimental campaign on real superconducting-based IBM NISQ quantum computers confirms that MosaiQ produces higher-quality images across different datasets with diverse classes of images. Fig. 15 and Table 1 summarize the image quality results for QGPatch and MosaiQ. While MosaiQ's quality remains consistent with simulations, QGPatch's quality degrades considerably on real computers. In fact, our results revealed that QGPatch produces lower-quality and sometimes unrecognizable images because QGPatch is more prone to the side-effects of prevalent quantum gate errors on real machines, while MosaiQ's simple design successfully mitigates those effects, as discussed next.

First, we observe that MosaiQ produces competitive-quality images despite hardware errors on real quantum computers.

Figure 13: _MosaiQ's PCA feature distribution among learners improves the quality of generated images. The figure shows a distribution of 200 FID scores (comparing 8 images each to the entire data distribution) for the cases with and without PCA feature redistribution._

Figure 14: _Adaptive noise during generation helps MosaiQ achieve variety (higher variance across images for the same class to mitigate the mode-collapse challenge), while achieving a lower FID score at the same time, as shown earlier._

Figure 15: _Visual image quality of images produced by MosaiQ on a real IBM quantum machine (Jakarta) for six representative classes of the MNIST and Fashion-MNIST datasets._
Due to high queue wait times and the limited availability of quantum resources, we present results only for selected classes (three from each dataset) which cover a diverse range of intricacies in terms of shape and information. In fact, via visual inspection of different images produced for a given class, we observe that the variety of the images produced by MosaiQ is also high on the quantum computer - effectively mitigating the mode-collapse pitfall of GANs. This is because of MosaiQ's adaptive noise mechanism during the generation phase, which improves image variety and avoids mode collapse.

Second, we observe that the FID scores of MosaiQ on a real quantum computer are similar to the error-free simulation results, across different classes. This result demonstrates that MosaiQ's design is effective even when hardware errors are present. In fact, the IBM Jakarta computer has a considerably low quantum volume of 16 (higher is better). Quantum volume is a useful metric in determining the error rate and capabilities of a quantum computer. For perspective, at the time of writing, IBM has computers available with quantum volumes of 128. We chose a relatively less capable quantum computer for our experiments to demonstrate the portability and robustness of MosaiQ. The reason for MosaiQ's effectiveness is twofold: (1) its simple design for the generator learners, which avoids a large number of two-qubit gates and circuits with high depths, which are more prone to hardware noise, and (2) its PCA feature distribution mapping among the ensemble of learners in the generator - this careful redistribution ensures that the most critical PCA features are not concentrated on a few qubits, which in the pathological case could be most severely impacted by hardware errors. MosaiQ's PCA redistribution ensures that the learners do not need to know the error rates of individual qubits on the computer or make assumptions about different error types.

## 5 Related Work

**Demonstration of the Speedup Achieved by Quantum GANs.** Quantum GANs were first introduced by Lloyd et al. [14]. This work theoretically proved the potential for exponential superiority of quantum GANs over their classical counterparts. This was later followed by the work of Dallaire-Demers et al. [4], which established a way to design and train variational quantum circuits as the generator component of a quantum GAN model. Nakaji et al. [15] use a hybrid configuration with a classical discriminator and a quantum generator for classification. Their results demonstrate the potential for quantum GANs to solve problems that are intractable with classical GANs.

**Enhancing Classical GANs using Quantum Components.** Approaches such as the work by Rudolph et al. [17] use a quantum enhancer for large-scale, fully-classical generators to enhance the quality of image generation. While the images generated are high quality, this is mostly attributable to the large classical generators. The visual results in these works seem appealing because the classical GAN-based baseline is already high quality, with a quantum preamble that only improves the image quality slightly. This is in contrast to MosaiQ, which uses a quantum generator to generate images without a classical generator.

**Fully-Quantum Generators.** The work of Huang et al. [12] uses quantum gradients to train quantum GANs to replicate logic gates such as XOR with high fidelity, and also to generate images from the MNIST dataset.
The technique relies on using quantum-based loss functions for its generator and discriminator. This work also shows that quantum GANs can achieve performance similar to classical GANs while using 94.98% fewer parameters. A succeeding work with higher-quality MNIST image generation can be found in QGPatch [11], with a generator split among several sub-generators, each responsible for generating a part of the full image. QGPatch trains a quantum generator against a classical discriminator. Its quality of generation is the highest among pre-MosaiQ quantum GANs with a purely quantum generative component, at the time of writing. Therefore, we compare MosaiQ against QGPatch in this paper and show that MosaiQ outperforms QGPatch in terms of image quality and variety.

## 6 Conclusion

MosaiQ is the first quantum GAN to successfully generate high-quality images on real quantum computers. MosaiQ incorporates PCA feature redistribution and adaptive noise to facilitate improved interaction between the quantum generator and the classical elements of the model. This integration enhances the quality and variety, respectively, of the generator's output.

**Acknowledgements** We thank the reviewers for constructive feedback. This work was supported by NSF Award 2144540, Northeastern University, and Rice University.

\begin{table}
\begin{tabular}{|c|c c c|c c c|}
\multicolumn{7}{c}{**(a) QGPatch**} \\
\hline
Environment & Digit 0 & Digit 5 & Digit 9 & T-Shirt & Pants & Shoes \\
\hline \hline
Quantum Simulator & 52 & 52 & 48 & 45 & 42 & 82 \\
\hline
IBM Jakarta Machine & 145 & 134 & 131 & 125 & 144 & 124 \\
\hline
\multicolumn{7}{c}{**(b) MosaiQ**} \\
\hline
Environment & Digit 0 & Digit 5 & Digit 9 & T-Shirt & Pants & Shoes \\
\hline \hline
Quantum Simulator & 42 & 42 & 33 & 34 & 22 & 38 \\
\hline
IBM Jakarta Machine & 45 & 44 & 35 & 38 & 23 & 38 \\
\hline
\end{tabular}
\end{table}
Table 1: _QGPatch and MosaiQ's FID scores on a real quantum computer for the MNIST & Fashion-MNIST datasets._
2306.16255
Theory and applications of the Sum-Of-Squares technique
The Sum-of-Squares (SOS) approximation method is a technique used in optimization problems to derive lower bounds on the optimal value of an objective function. By representing the objective function as a sum of squares in a feature space, the SOS method transforms non-convex global optimization problems into solvable semidefinite programs. This note presents an overview of the SOS method. We start with its application in finite-dimensional feature spaces and, subsequently, we extend it to infinite-dimensional feature spaces using reproducing kernels (k-SOS). Additionally, we highlight the utilization of SOS for estimating some relevant quantities in information theory, including the log-partition function.
Francis Bach, Elisabetta Cornacchia, Luca Pesce, Giovanni Piccioli
2023-06-28T14:29:17Z
http://arxiv.org/abs/2306.16255v3
# Theory and applications of the Sum-Of-Squares technique

###### Abstract

The Sum-of-Squares (SOS) approximation method is a technique used in optimization problems to derive lower bounds on the optimal value of an objective function. By representing the objective function as a sum of squares in a feature space, the SOS method transforms non-convex global optimization problems into solvable semidefinite programs. This note presents an overview of the SOS method. We start with its application in finite-dimensional feature spaces and, subsequently, we extend it to infinite-dimensional feature spaces using reproducing kernels (k-SOS). Additionally, we highlight the utilization of SOS for estimating some relevant quantities in information theory, including the log-partition function.

## All problems are convex!

Let us consider a global optimization problem where we want to minimize a function \(h:\mathcal{X}\to\mathbb{R}\). At first we only look for the infimum \(h(\hat{x})\), and not where it is attained. Moreover, we want to work under general assumptions on \(h(x)\); in particular, we do not want to impose convexity constraints on the objective. However, a very important fact we exploit in the following is that one can always rephrase a non-convex problem as a convex one (see Fig. 1 for intuition):

\[\inf_{x\in\mathcal{X}}h(x)=\sup_{c\in\mathbb{R}}\;\;c\;\text{ such that }\forall x\in\mathcal{X},\;h(x)-c\geqslant 0. \tag{1}\]

Indeed, the right-hand side is clearly a convex problem, i.e., the maximization of a linear objective under constraints that are linear in \(c\), while on the left-hand side we have a non-convex one in the most general setting. The catch is that we are imposing infinitely many constraints, one for each \(x\in\mathcal{X}\) (if we do not make any assumption on \(\mathcal{X}\))!

Figure 1: All problems are convex. In order to find the global minimum of \(h(x)\) one needs to find the greatest \(c\) which satisfies \(h(x)-c\geq 0,\ \forall x\in\mathcal{X}\).

After this mapping, the main technical challenge becomes dealing with non-negativity constraints over the set \(\mathcal{X}\), i.e., \(h(x)-c\geq 0\), \(\forall x\in\mathcal{X}\). A crucial point that we must learn from the equivalence between the two problems above is that we need a computationally efficient (essentially linear) way to manipulate non-negative functions. Convex duality leads to another natural formulation:

\[\inf_{x\in\mathcal{X}}h(x) =\sup_{c\in\mathbb{R}}\;c\;\text{such that}\;\forall x\in\mathcal{X},h(x)-c\geqslant 0 \tag{2}\]
\[=\sup_{c\in\mathbb{R}}\inf_{\mu\in\mathcal{P}_{+}^{<\infty}(\mathcal{X})}\left\{c+\int_{\mathcal{X}}(h(x)-c)d\mu(x)\right\} \tag{3}\]
\[=\inf_{\mu\in\mathcal{P}_{+}^{<\infty}(\mathcal{X})}\sup_{c\in\mathbb{R}}\left\{c+\int_{\mathcal{X}}(h(x)-c)d\mu(x)\right\} \tag{4}\]
\[=\inf_{\mu\in\mathcal{P}_{+}^{<\infty}(\mathcal{X})}\left\{\int_{\mathcal{X}}h(x)d\mu(x)\right\}\;\text{such that}\;\int_{\mathcal{X}}d\mu(x)=1, \tag{5}\]

where we denoted the space of finite positive measures on \(\mathcal{X}\) as \(\mathcal{P}_{+}^{<\infty}(\mathcal{X})\). We rewrite the global optimization problem of eq. (1) thanks to the introduction of the Lagrangian in eq. (3); due to the positivity of the constraint, the multiplier must be a positive measure, and we take the infimum over \(\mathcal{P}_{+}^{<\infty}(\mathcal{X})\). By exploiting the convexity of the problem (strong duality) we can swap \(\inf\) and \(\sup\) in eq. (3) to obtain the dual formulation of the optimization problem in eq. (4). The interpretation of the measure for the optimization problem is given in Fig. 2.
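Before moving on, eq. (1) can be sanity-checked numerically on a grid (a toy illustration with an arbitrary non-convex \(h\)):

```python
import numpy as np

h = lambda x: 0.5 * x**4 - x**2 + 0.3 * np.sin(5 * x)   # non-convex objective
xs = np.linspace(-2.0, 2.0, 2001)                        # grid over X

direct_min = h(xs).min()
cs = np.linspace(-2.0, 2.0, 4001)                        # candidate values of c
feasible = [c for c in cs if np.all(h(xs) - c >= 0)]     # h - c >= 0 on the grid
print(direct_min, max(feasible))   # the two agree up to the grid resolution
```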
By solving the dual problem we can access both the minimum \(h(\hat{x})\) and the minimizer \(\hat{x}\), thanks to the knowledge of the measure attaining the infimum in eq. (5).

### Sum-of-squares representation of non-negative functions

We need a computationally efficient (essentially linear) way to manipulate non-negative functions. In this section we introduce one idea that goes in this direction, originally introduced by [1; 2]: represent non-negative functions as "sums of squares" (SOS). Let us consider a feature map \(\varphi:\mathcal{X}\rightarrow\mathbb{C}^{d}\); the goal is to represent functions as quadratic forms:

\[h(x)=\varphi(x)^{*}H\varphi(x), \tag{6}\]

where \(H\in\mathbb{H}_{d}\), i.e., the set of Hermitian matrices in \(\mathbb{C}^{d\times d}\). Remember that we want to deal with a real objective function \(h\), and a sufficient condition to achieve that is to assume \(H\) to be Hermitian. Many functions admit a representation of this form:

* Polynomials: \(\varphi(x)=(1,x,x^{2},\dots)\).
* Trigonometric polynomials: \(\varphi(x)=(e^{i\pi x},e^{-i\pi x},e^{i2\pi x},e^{-i2\pi x},\dots)\).

It is important to stress that the representation of \(h\) associated with \(H\) is not unique. Consider the linear span of the set \(\{\varphi(x)\varphi(x)^{*}\,:\,x\in\mathcal{X}\}\), and denote it as \(\mathcal{V}\). If in eq. (6) we substitute \(H\) with \(H+H^{\prime}\), where \(H^{\prime}\in\mathcal{V}^{\perp}\)[3], the left-hand side would be left unchanged. A key assumption we make is that the constant function can be represented through a certain (non-unique) \(U\in\mathbb{H}_{d}\):

\[\varphi(x)^{*}U\varphi(x)=1,\,\,\,\forall x\in\mathcal{X}. \tag{7}\]

We call a function a Sum of Squares (SOS) if it can be expanded in the following form:

\[h(x)=\sum_{i\in\mathcal{I}}g_{i}(x)^{2}, \tag{8}\]

where we introduced a generic set of functions \(\{g_{i}:\mathcal{X}\to\mathbb{R}\}_{i\in\mathcal{I}}\). We can connect this definition with the representation as a quadratic form in eq. (6) by characterizing the SOS functions using the following proposition:

**Proposition 1**: _The objective function \(h\), represented as \(h(x)=\varphi(x)^{*}H\varphi(x)\), is a SOS if and only if \(H\succcurlyeq 0\) and \(H\in\mathbb{H}_{d}\)._

The statement is easily understood by performing the spectral decomposition of the matrix \(H\). Indeed, let \(H=\sum_{i\in I}\lambda_{i}u_{i}u_{i}^{*}\); then eq. (6) can be rewritten as:

\[h(x)=\sum_{i\in I}\big{|}\sqrt{\lambda_{i}}\,u_{i}^{*}\varphi(x)\big{|}^{2}, \tag{9}\]

which is a "sum of squares" (SOS). It is interesting to note that the number of squares in the SOS decomposition is equal to the rank of the matrix \(H\). A rather simple statement, which is pivotal for the following discussion, is:

**Proposition 2**: _If the objective function \(h\) is a SOS, then \(h\) is non-negative._

The converse of Prop. 2 is not true, and not all non-negative functions are SOS! In the following, we focus our attention on understanding when the characterization is tight.
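A quick numerical illustration of Prop. 1 and eq. (9), with a toy real feature map \(\varphi(x)=(1,x)\) and an arbitrary PSD matrix \(H\) chosen for the example:

```python
import numpy as np

# h(x) = (1, x) H (1, x)^T with PSD H, i.e. h(x) = 1 + 2x + 2x^2 >= 0.
H = np.array([[1.0, 1.0],
              [1.0, 2.0]])
lam, U = np.linalg.eigh(H)          # H = sum_i lam_i u_i u_i^T, lam_i >= 0

def h(x):
    phi = np.array([1.0, x])
    return phi @ H @ phi

def h_sos(x):
    """Evaluate h through the sum-of-squares expansion of eq. (9)."""
    phi = np.array([1.0, x])
    return sum(l * (u @ phi) ** 2 for l, u in zip(lam, U.T))

x = 0.7
print(h(x), h_sos(x))   # identical: the PSD quadratic form is a sum of squares
```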
However, an important point is that, if we are given a representation of a function \(h\) as a quadratic form as in eq. (6), checking that \(h\) is a SOS is a convex feasibility problem:

**Proposition 3**: \(h(x)=\varphi(x)^{*}H\varphi(x)\) _is a SOS if and only if there exists \(H^{\prime}\in\mathcal{V}^{\perp}\) such that \(H-H^{\prime}\succcurlyeq 0\)._

Indeed, if the space \(\mathcal{V}\) is known - and for most applications we will study it will be - the problem in Prop. 3 is a well-defined SDP. From the discussion above we understand that the linear span \(\mathcal{V}\) plays a key role in the SOS construction. We can analyze its form in the simple examples we already introduced:

* Polynomials: The space \(\mathcal{V}\) is the set of Hankel matrices; indeed \(\left[\varphi(x)\varphi(x)^{*}\right]_{i,j}=x^{i+j}\).
* Trigonometric polynomials: The space \(\mathcal{V}\) is the set of Toeplitz matrices; indeed \(\left[\varphi(x)\varphi(x)^{*}\right]_{\omega,\omega^{\prime}}=e^{i\pi(\omega-\omega^{\prime})x}\).

Using the propositions we introduced above, we can reformulate the optimization problem as:

\[\inf_{x\in\mathcal{X}}h(x) =\sup_{c\in\mathbb{R}}\,\,c\,\,\text{such that}\,\,\forall x\in\mathcal{X},h(x)-c\geqslant 0 \tag{10}\]
\[\geqslant\sup_{c\in\mathbb{R},\,\,A\succcurlyeq 0}\,\,\,c\,\,\,\text{such that}\,\,\forall x\in\mathcal{X},h(x)-c=\varphi(x)^{*}A\varphi(x) \tag{11}\]
\[=\sup_{c\in\mathbb{R},\,\,A\succcurlyeq 0}\,\,\,c\,\,\,\text{such that}\,\,H-cU-A\in\mathcal{V}^{\perp}, \tag{12}\]

where we used Prop. 2, knowing that the SOS functions are included in the set of non-negative functions (hence the relaxation in the cost), and that the constant function can be represented by \(U\). Finally, we assumed that \(h(x)=\varphi(x)^{*}H\varphi(x)\), i.e., that \(h\) admits a quadratic-form representation. The problem above is a _semidefinite program_! Therefore we can solve it numerically on a computer.
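For instance, for a univariate quartic with \(\varphi(x)=(1,x,x^{2})\), the program of eq. (12) can be written in a few lines of cvxpy (a sketch: \(\mathcal{V}^{\perp}\) is handled implicitly by matching monomial coefficients, and by the one-dimensional tightness result discussed below the bound equals the true minimum):

```python
import numpy as np
import cvxpy as cp

h_coeffs = np.array([2.0, 0.0, 1.0, -3.0, 1.0])  # h(x) = 2 + x^2 - 3x^3 + x^4

A = cp.Variable((3, 3), PSD=True)   # Gram matrix for phi(x) = (1, x, x^2)
c = cp.Variable()

# Match coefficients of x^k in h(x) - c = phi(x)^T A phi(x).
constraints = []
for k in range(5):
    gram_coeff = sum(A[i, k - i] for i in range(3) if 0 <= k - i <= 2)
    constraints.append(gram_coeff == h_coeffs[k] - (c if k == 0 else 0.0))

cp.Problem(cp.Maximize(c), constraints).solve()
print(c.value)   # certified lower bound on inf_x h(x)
```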
The SOS relaxation also modifies the dual problem:

\[\inf_{x\in\mathcal{X}}h(x) =\sup_{c\in\mathbb{R}}\;c\text{ such that }\forall x\in\mathcal{X},h(x)-c\geqslant 0 \tag{13}\]
\[\geqslant\sup_{c\in\mathbb{R},\;A\succcurlyeq 0}\;c\text{ such that }\forall x\in\mathcal{X},h(x)-c=\varphi(x)^{*}A\varphi(x) \tag{14}\]
\[=\sup_{c\in\mathbb{R},\;A\succcurlyeq 0}\inf_{\mu\in\mathcal{P}^{<\infty}(\mathcal{X})}c+\int_{\mathcal{X}}(h(x)-c-\varphi(x)^{*}A\varphi(x))d\mu(x) \tag{15}\]
\[=\inf_{\mu\in\mathcal{P}^{<\infty}(\mathcal{X})}\sup_{c\in\mathbb{R},\;A\succcurlyeq 0}c+\int_{\mathcal{X}}(h(x)-c-\varphi(x)^{*}A\varphi(x))d\mu(x) \tag{16}\]
\[=\inf_{\mu\in\mathcal{P}^{<\infty}(\mathcal{X})}\int_{\mathcal{X}}h(x)d\mu(x)\text{ such that }\int_{\mathcal{X}}d\mu(x)=1\text{ and }\int_{\mathcal{X}}\varphi(x)\varphi(x)^{*}d\mu(x)\succcurlyeq 0. \tag{17}\]

We write the Lagrangian in eq. (15), and we exploit strong duality in eq. (16) to exchange inf and sup. While the constraint imposed by \(c\) again simply yields the normalization of the measure, as in eq. (5), the maximization over \(A\) is more involved. The optimization concerning \(A\) is \(\sup_{A\succcurlyeq 0}-\int_{\mathcal{X}}\varphi(x)^{*}A\varphi(x)d\mu(x)=-\inf_{A\succcurlyeq 0}\int_{\mathcal{X}}\varphi(x)^{*}A\varphi(x)d\mu(x)=-\inf_{A\succcurlyeq 0}\operatorname{tr}[AB]\), with \(B=\int_{\mathcal{X}}\varphi(x)\varphi(x)^{*}d\mu(x)\). The solution is

\[-\inf_{A\succcurlyeq 0}\operatorname{tr}[AB]=\begin{cases}0&\text{ if }B\succcurlyeq 0\\ +\infty&\text{ otherwise}\end{cases} \tag{18}\]

in fact, if \(B\succcurlyeq 0\) then \(\operatorname{tr}[AB]\geq 0\) for all \(A\succcurlyeq 0\), hence a minimizer is \(A=0\). Suppose instead that \(B\) has a negative eigenvalue with eigenvector \(v_{-}\); then picking \(A=\lambda v_{-}v_{-}^{T}\), one can make \(\operatorname{tr}[AB]\) arbitrarily negative by increasing \(\lambda\).

Assuming the objective admits a representation as in eq. (6), \(h(x)=\varphi(x)^{*}H\varphi(x)\), a simple reformulation of the problem in eq. (1) is:

\[\inf_{x\in\mathcal{X}}h(x) =\inf_{x\in\mathcal{X}}\operatorname{tr}\Big{[}H\varphi(x)\varphi(x)^{*}\Big{]} \tag{19}\]
\[=\inf_{\Sigma\in\mathcal{K}}\operatorname{tr}[\Sigma H], \tag{20}\]

where we exploited the linearity of the problem in order to optimize over \(\mathcal{K}\), defined as the closure of the convex hull of \(\{\varphi(x)\varphi(x)^{*}\,:\,x\in\mathcal{X}\}\); indeed, whenever we optimize a linear objective over a convex hull, the optimum is attained at an extremal point. An illustration of the basic geometry involved is given in Fig. 3. By looking at eq. (20) we see that the relaxation in eq. (17) is equivalent to replacing \(\mathcal{K}\) with the outer approximation \(\widehat{\mathcal{K}}=\big{\{}\Sigma\in\mathcal{V},\;\operatorname{tr}[U\Sigma]=1,\;\Sigma\succcurlyeq 0\big{\}}\), with \(\Sigma=\int_{\mathcal{X}}\varphi(x)\varphi(x)^{*}d\mu(x)\). Notice that \(\mathcal{K}\subseteq\widehat{\mathcal{K}}\). Indeed, the final goal of optimization is to find tractable outer approximations of \(\mathcal{K}\).

### Tightness of the approximation

We saw in the previous section that we can approach a SOS relaxation from two points of view:

* _Primal point of view_: Relax the non-negativity constraint to being a SOS (eq. (12)).
* _Dual (moment) point of view_: Relax the strict assumption on the positive sign of the measure \(\mu\), requiring only positivity of the moments. This is equivalent to replacing the convex hull \(\mathcal{K}\) with the affine hull of \(\varphi(x)\varphi(x)^{*}\) intersected with the SDP cone, which we called \(\widehat{\mathcal{K}}\) (eq. (17)).

The two characterizations are completely equivalent. If \(\mathcal{K}=\widehat{\mathcal{K}}\), then the non-negativity constraint can be expressed as a SOS, and vice versa. A natural question we investigate is: when is the SOS approximation tight? In the following, we analyze a series of (simple) examples in which we can give details on the tightness of the approximation:

* **Finite set with injective embedding:** Let \(\mathcal{X}\) be a finite set \(\mathcal{X}=\{1,2,\ldots,n\}\). Consider a _one-hot encoding_ mapping \(\varphi(i)=e_{i}\), with \(e_{i}\) the \(i\)-th element of the canonical basis of \(\mathbb{R}^{n}\). The elements of \(\mathcal{V}\) are given by diagonal matrices, as one can easily obtain from \(\varphi(i)\varphi(i)^{\top}=\operatorname{diag}(e_{i})\). Moreover, the representation of the constant function is achieved by \(U=\mathbb{I}_{n}\). Exploiting the moment point of view, one can easily check that the affine hull \(\widehat{\mathcal{K}}\) is equivalent to the convex hull \(\mathcal{K}\), hence the SOS relaxation is tight. Proving that the approximation is tight using the primal formulation is less obvious. Let \(K\in\mathbb{R}^{|\mathcal{X}|\times|\mathcal{X}|}\) be the Gram matrix of all dot-products. If \(K\) is invertible, then there exists \(A\in\mathbb{R}^{|\mathcal{X}|\times d}\) such that \(A\varphi\) sends all points to an element of the canonical basis of \(\mathbb{R}^{|\mathcal{X}|}\).
We can use this property to expand a non-negative \(h:\mathcal{X}\to\mathbb{R}\) as a SOS: \(h(x)=\sum_{x^{\prime}\in\mathcal{X}}1_{x^{\prime}=x}h(x^{\prime})=\sum_{x^{\prime}\in\mathcal{X}}\big{[}\varphi(x)^{*}A^{*}A\varphi(x^{\prime})\big{]}^{2}h(x^{\prime})\).

* **Quadratic functions in \(\mathcal{X}=\mathbb{R}^{d}\):** Let \(\varphi(x)=\begin{pmatrix}1\\ x\end{pmatrix}\in\mathbb{R}^{d+1}\) and consider \(h(x)=\dfrac{1}{2}\begin{pmatrix}1\\ x\end{pmatrix}^{\top}\begin{pmatrix}c&b^{\top}\\ b&A\end{pmatrix}\begin{pmatrix}1\\ x\end{pmatrix}\). The goal is to understand whether \(h\) being non-negative is equivalent to \(h\) being a SOS; we use the primal point of view. One can easily show that we need \(A\succcurlyeq 0\), otherwise \(\inf_{x\in\mathbb{R}^{d}}h(x)=-\infty\). By differentiating \(h\) one obtains: \(\inf_{x\in\mathbb{R}^{d}}h(x)=\frac{1}{2}(c-b^{\top}A^{-1}b)\). Hence \(h\) is non-negative if the Schur complement \(c-b^{\top}A^{-1}b\) is non-negative. It follows from a simple algebraic fact that \(A\) being PSD jointly with the Schur complement being non-negative guarantees the condition \(\begin{pmatrix}c&b^{\top}\\ b&A\end{pmatrix}\succcurlyeq 0\); therefore, using Prop. 1, we can rewrite \(h\) as a SOS.

* **Polynomials in \(\mathbb{R}\).** A polynomial \(p\) of even degree in \(\mathbb{R}\) is non-negative if and only if it is a sum of squares. One can show this by factoring \(p\) in terms of its roots: \[p(x)=\alpha\prod_{i\in I}(x-r_{i})^{m_{i}}\prod_{j\in J}\big{[}(x-a_{j})^{2}+b_{j}^{2}\big{]};\] indeed, it follows easily that \(\alpha\) must be positive, the multiplicities of the real roots \(\{r_{i}\}_{i\in I}\) must be even (otherwise the sign would change when crossing them), and by expanding the complex-conjugate root terms we obtain a sum of \(2^{|J|}\) squares.

* **Order \(r\) polynomials in dimension \(d\):** All non-negative polynomials are SOS when \(r=2\), or when \(d=1\). The only other case is \(d=2\) and \(r=4\) (Hilbert, 1888). A counter-example is given by Motzkin (1965) for \((d=2,r=6)\): \(1+x_{1}^{2}x_{2}^{4}+x_{1}^{4}x_{2}^{2}-3x_{1}^{2}x_{2}^{2}\).

* **Trigonometric polynomials on \([-1,1]\).** We consider \(\mathcal{X}=[-1,1]\) and \(\varphi(x)\in\mathbb{C}^{2r+1}\), with \([\varphi(x)]_{\omega}=e^{i\pi\omega x}\) for \(\omega\in\{-r,\ldots,r\}\). As discussed before, the outer product of the \(\varphi\) is given by \((\varphi(x)\varphi(x)^{*})_{\omega\omega^{\prime}}=e^{i\pi(\omega-\omega^{\prime})x}\), and thus \(\mathcal{V}\) is the set of Hermitian Toeplitz matrices, and we can take \(U=\frac{1}{2r+1}I\). For trigonometric polynomials in one dimension, non-negativity is equivalent to being SOS (Fejér–Riesz theorem): any PSD Toeplitz matrix is of the form \(\hat{p}(\omega-\omega^{\prime})\), where \(p\) is a positive measure.

Figure 3: Cartoon plot of the convex hull in the linear span \(\mathcal{V}\).

## Lecture 2

### Introduction

In the last lecture we introduced a method to optimize a function \(h(x)\) under the condition that it can be expressed as \(h(x)=\varphi^{*}(x)H\varphi(x)\), for a feature map \(\varphi:\mathcal{X}\mapsto\mathbb{R}^{p}\). Two equivalent relaxations can be made:

* **Primal point of view:** \[\inf_{x\in\mathcal{X}}h(x) =\sup_{c\in\mathbb{R}}\ c\ \text{such that}\ \forall x\in\mathcal{X},h(x)-c\geqslant 0 \tag{21}\] \[\geqslant\sup_{c\in\mathbb{R},\ A\succcurlyeq 0}\ c\ \ \text{such that}\ \forall x\in\mathcal{X},h(x)-c=\varphi(x)^{*}A\varphi(x), \tag{22}\] where we relaxed the constraint \(h(x)-c\geqslant 0\) to \(h(x)-c=\varphi(x)^{*}A\varphi(x)\).
* **Dual point of view:** The dual problem (5) gives \[\inf_{x\in\mathcal{X}}h(x)=\inf_{\mu\in\mathcal{P}^{1}(\mathcal{X})}\int_{\mathcal{X}}h(x)d\mu(x)=\inf_{\Sigma\in\mathcal{K}}\mathrm{tr}[H\Sigma]\geq\inf_{\Sigma\in\widehat{\mathcal{K}}}\mathrm{tr}[H\Sigma],\] (23) with \(\mathcal{K}=\text{convex hull}\left(\{\varphi(x)\varphi(x)^{*},\,x\in\mathcal{X}\}\right)\) and \(\widehat{\mathcal{K}}=\big{\{}\Sigma\in\mathcal{V},\ \mathrm{tr}[U\Sigma]=1,\ \Sigma\succcurlyeq 0\big{\}}\).

This relaxation scheme is, however, affected by some limitations. First, we need \(h\) to be representable as \(h(x)=\varphi^{*}(x)H\varphi(x)\). Moreover, the inequalities are tight only for some classes of functions (e.g., polynomials in one dimension, trigonometric polynomials on \([-1,1]\)). One way to fix this is to use hierarchies of feature spaces, i.e., to use progressively more expressive feature spaces to approach tightness. To demonstrate this method we start with a couple of examples.

* **Trigonometric polynomials on \([-1,1]^{d}\):** Suppose \(h(x)=\varphi(x)^{*}H\varphi(x)\) on \(\mathcal{X}=[-1,1]^{d}\) with \([\varphi(x)]_{\omega}=e^{i\pi\omega^{\top}x}\) for \(\omega\in\Omega\subset\mathbb{Z}^{d}\). \(\Omega\) basically acts as a constraint on the spectrum of \(h\). For this choice of \(\varphi\) we have \[\mathcal{V}=\mathrm{Span}\left(\{\varphi(x)\varphi(x)^{*},\,x\in\mathcal{X}\}\right)=\mathrm{Span}\left(\left\{B(x)\in\mathbb{C}^{|\Omega|\times|\Omega|}\,:\,B_{\omega\omega^{\prime}}(x)=e^{i\pi(\omega-\omega^{\prime})^{\top}x},\ x\in\mathcal{X}\right\}\right).\] This shows that \(\mathcal{V}\) is defined by a set of linear constraints in the space of \(|\Omega|\times|\Omega|\) Hermitian matrices. The relaxation (12) is not tight in general; however, by embedding \(\Omega\) in a larger set we can make the relaxation as tight as desired, at the price of using larger and larger embeddings. Consider for example the hierarchy of sets \(\Theta_{r}=\{\omega\in\mathbb{Z}^{d}\,:\,\left\|\omega\right\|_{\infty}\leq r\}\). We define new features \(\psi_{r}:\mathcal{X}\mapsto\mathbb{C}^{|\Theta_{r}|}\), analogously to \(\varphi\). Suppose that, for sufficiently large \(r\), \(\Omega\subseteq\Theta_{r}\). Then we can write \(h\) in terms of the new features, i.e., \(h(x)=\psi_{r}(x)^{*}H^{\prime}\psi_{r}(x)\), with \(H^{\prime}\) a block matrix of the form \[H^{\prime}=\left(\begin{array}{c|c}H&0\\ \hline 0&0\end{array}\right)\] (24) in a basis in which the first \(|\Omega|\) coordinates are the frequencies in \(\Omega\) and the remaining \(|\Theta_{r}|-|\Omega|\) are the frequencies in \(\Theta_{r}\setminus\Omega\). Increasing \(r\), the relaxation of \(h(x)-c\geq 0\) to \(h(x)-c=\psi_{r}(x)^{*}A\psi_{r}(x)\) becomes tighter, since the feature space becomes more expressive. What is the price to pay? Increasing the number of features means that the dimensionality of the semidefinite problem (12) increases as well, so it becomes more expensive to solve computationally. In the current example, \(|\Theta_{r}|=(2r+1)^{d}\).

* **Boolean hypercube:** Let \(\mathcal{X}=\{-1,1\}^{d}\). Consider the set-indexed features \(\varphi_{A}(x)=\prod_{i\in A}x_{i}\). Recall that each function \(f:\mathcal{X}\mapsto\mathbb{R}\) can be uniquely written as \(f(x)=\sum_{A\subseteq[d]}\hat{f}(A)\varphi_{A}(x)\); this is just the Fourier transform for Boolean functions. We call \(|A|\) the order of \(\varphi_{A}\). Notice moreover that \(\varphi_{A}(x)\varphi_{B}(x)=\varphi_{A\triangle B}(x)\), where \(\triangle\) is the symmetric difference operator.
Consider a collection of sets \(\mathcal{A}\subseteq\mathfrak{P}([d])\) [4]. Then \(\mathcal{V}=\mathrm{Span}\left(\{\varphi(x)\varphi(x)^{*},\,x\in\mathcal{X}\}\right)=\mathrm{Span}(\{M(x)\,:\,M_{AB}=\varphi_{A\triangle B}(x),\ x\in\mathcal{X}\})\), which is defined by a set of linear constraints in the space of \(|\mathcal{A}|\times|\mathcal{A}|\) matrices. One can then introduce hierarchies based on the order of the features, for example the sets \(\Theta_{r}=\{A\in\mathfrak{P}([d])\,:\,|A|<r\}\). As \(r\) increases, the relaxation becomes tighter and tighter.
* **General polynomial case (Putinar's Positivstellensatz [5]):**

**Theorem 1**: _Consider \(K=\{x\in\mathbb{R}^{d}\ :\ \forall j\in[m],g_{j}(x)\geq 0\}\) for \(g_{j}:\mathbb{R}^{d}\mapsto\mathbb{R}\) some multivariate polynomials, and assume that for at least one index \(i\in[m]\) the set \(\{x\in\mathbb{R}^{d}\ :\ g_{i}(x)\geq 0\}\) is compact. **Then**, if a polynomial \(f\) is strictly positive on \(K\), there exist sum-of-squares polynomials \(\{f_{j}\}_{j=0}^{m}\), such that_ \[f=f_{0}+\sum_{j=1}^{m}f_{j}g_{j}. \tag{25}\]

Previous results tell us that not every non-negative polynomial is SOS; instead, (25) is the next best representation that we can obtain, involving multiple sums of squares. Notice that the degrees of the SOS polynomials are not known in advance, hence hierarchies are needed. In practice one considers the representations \(f=f_{0}+\sum_{j}g_{j}f_{j}\), with \(\deg(f_{j})<D\) for all \(j\). As \(D\) increases, the approximation becomes better and better. It is also possible to have a moment view of this result.
* **Max-cut [6]:** Consider a weighted graph \(G=(V,w)\), with \(d=|V|\) vertices and positive weights \(w\in\mathbb{R}_{+}^{d\times d}\). A cut is a partition of the vertices of \(G\) into two groups. Each partition can be indicated by a vector \(x\in\{-1,+1\}^{d}\) assigning each vertex to either the group \(+1\) or \(-1\). In max-cut the objective is to find the partition that maximizes the total cross-group weight. In terms of \(x\) the objective is \[C(x)=\frac{1}{2}\sum_{i,j=1}^{d}w_{ij}\left(1-x_{i}x_{j}\right)=\frac{1}{2}\sum_{i,j=1}^{d}w_{ij}-\frac{1}{2}x^{T}Wx=\frac{1}{2}\sum_{i,j=1}^{d}w_{ij}-\frac{1}{2}\operatorname{tr}[Wxx^{T}].\] (26) The optimization can then be expressed as \[\text{OPT}=\max_{x}C(x)=\frac{1}{2}\sum_{i,j=1}^{d}w_{ij}-\frac{1}{2}\min_{\begin{subarray}{c}X:\,\text{rank}(X)=1,\\ X\succcurlyeq 0,\ X_{ii}=1\ \forall i\in[d]\end{subarray}}\operatorname{tr}[WX],\] (27) where \(X\in\mathbb{R}^{d\times d}\), and the constraint \(X_{ii}=1\,\forall i\in[d]\) enforces that \(x\) is on the hypercube. The SDP relaxation consists of removing the rank-\(1\) constraint, giving the problem \[\text{SDP}=\frac{1}{2}\sum_{i,j=1}^{d}w_{ij}-\frac{1}{2}\min_{X:\,X\succcurlyeq 0,\ X_{ii}=1\,\forall i\in[d]}\operatorname{tr}[WX].\] (28) We have of course \(\text{SDP}\geq\text{OPT}\). At this point we can ask two questions: * How loose is the SDP approximation? * Given \(\hat{X}\), a solution of (28), how do we obtain a proper cut \(x\)? To answer both questions we devise the following procedure: sample \(\tilde{x}\sim\mathcal{N}(0,\hat{X})\) and estimate the max-weight cut as \(x=\text{sign}(\tilde{x})\). This rounding achieves \(\mathbb{E}[C(x)]\geq\alpha\,\text{SDP}\) with \(\alpha\approx 0.878\), as we now show. **Proof** Denote by \(\mathbb{E}\) the expectation with respect to \(\tilde{x}\), and write \(\hat{X}_{ij}=u_{i}^{T}u_{j}\) with \(u_{i}\in\mathbb{R}^{d}\). Then we have \(x_{i}=\text{sign}(u_{i}^{T}z)\) with \(z\sim\mathcal{N}(0,\mathbb{I}_{d})\).
\[\text{OPT} \geq\mathbb{E}\left[\frac{1}{2}\sum_{i,j=1}^{d}w_{ij}\left(1-x_{i}x_{j}\right)\right]=\sum_{ij}w_{ij}\,\mathbb{P}(x_{i}x_{j}<0)= \tag{29}\] \[=\sum_{ij}w_{ij}\,\mathbb{P}\big{(}(u_{i}^{T}z)(u_{j}^{T}z)<0\big{)}=\sum_{ij}w_{ij}\,\frac{1}{\pi}\arccos\left(u_{i}^{T}u_{j}\right)= \tag{30}\] \[=\sum_{ij}w_{ij}\frac{1}{2}\left(\frac{2}{\pi}\frac{\arccos\left(u_{i}^{T}u_{j}\right)}{1-u_{i}^{T}u_{j}}\right)\left(1-u_{i}^{T}u_{j}\right)\overset{(a)}{\geq}\alpha\frac{1}{2}\sum_{ij}w_{ij}(1-\hat{X}_{ij})=\alpha\,\text{SDP}, \tag{31}\] with \(\alpha=\frac{2}{\pi}\min_{z\in[-1,1]}\frac{\arccos\left(z\right)}{1-z}\approx 0.87856\). The second equality in (30) uses the fact that the probability of \(z\) having scalar products of opposite signs with \(u_{i}\) and \(u_{j}\) is \(\theta_{ij}/\pi\), with \(\theta_{ij}=\arccos(u_{i}^{T}u_{j})\) the angle between \(u_{i}\) and \(u_{j}\) (equivalently, the probability of equal signs is \(1-\theta_{ij}/\pi\)); \((a)\) then follows from the definition of \(\alpha\) as a minimum. Figure 4 further explains this point.
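The relaxation (28) and the rounding step are easy to run numerically. Here is a minimal sketch using cvxpy (an illustration added to these notes; the random weighted graph is an arbitrary test instance):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Arbitrary test instance: a random weighted graph on d vertices
d = 12
W = rng.uniform(0, 1, (d, d))
W = np.triu(W, 1) + np.triu(W, 1).T  # symmetric with zero diagonal

# SDP relaxation (28): minimize tr(WX) over X >= 0 with unit diagonal
X = cp.Variable((d, d), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(W @ X)), [cp.diag(X) == 1])
prob.solve()
sdp = 0.5 * W.sum() - 0.5 * prob.value

# Randomized rounding: sample x_tilde ~ N(0, X_hat) and keep the signs
X_hat = X.value + 1e-9 * np.eye(d)  # tiny jitter for numerical PSD-ness
cut = lambda x: 0.5 * W.sum() - 0.5 * x @ W @ x  # the objective C(x) from (26)
best = max(cut(np.sign(rng.multivariate_normal(np.zeros(d), X_hat)))
           for _ in range(100))
print(f"SDP = {sdp:.3f}, best rounded cut = {best:.3f}")
# In expectation a single rounded cut already satisfies C(x) >= 0.878 * SDP
```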
### Kernel methods

One way to increase the expressivity of the feature space is to make it infinite dimensional. This is possible through kernel methods. Consider a map \(\varphi:\mathcal{X}\mapsto\mathcal{F}\), with \(\mathcal{F}\) some Hilbert space with inner product \(\langle\cdot,\cdot\rangle\). We can then define a new Hilbert space whose elements are the functions which are linear in \(\varphi\): let \(\mathcal{F}^{\prime}=\{g:\mathcal{X}\mapsto\mathbb{R}\,:\,g(x)=\langle f,\varphi(x)\rangle\ \text{for some}\ f\in\mathcal{F}\}\). In the following we will assume that \(\mathcal{F}\) is a reproducing kernel Hilbert space (RKHS). In this case, with \(k(x,y)=\langle\varphi(x),\varphi(y)\rangle\) the reproducing kernel, we can identify \(\mathcal{F}^{\prime}\) and \(\mathcal{F}\): indeed, by the reproducing property every function in \(\mathcal{F}\) can be expressed in the form \(f(x)=\langle f,\varphi(x)\rangle\). When minimizing functions over \(\mathcal{F}\) we have the following result.

**Theorem 2**: _Representer theorem [7]: Let \(\mathcal{F}\) be an RKHS, with reproducing kernel \(k(x,y)=\langle\varphi(x),\varphi(y)\rangle\). Also let \(L:\mathbb{R}^{n}\mapsto\mathbb{R}\) be an arbitrary error function and \(\{x_{1},x_{2},\ldots,x_{n}\}\), \(x_{i}\in\mathcal{X}\), a set of points. Consider the problem_ \[\min_{f\in\mathcal{F}}L\left(\langle f,\varphi(x_{1})\rangle,\ldots,\langle f,\varphi(x_{n})\rangle\right)+g(\|f\|) \tag{32}\] _with \(g\) an increasing function. **Then** any minimizer \(\hat{f}\) of (32) can be written as \(\hat{f}=\sum_{i=1}^{n}\alpha_{i}\varphi(x_{i})\). Making the function explicit gives_ \[\hat{f}(x)=\sum_{i=1}^{n}\alpha_{i}\langle\varphi(x_{i}),\varphi(x)\rangle=\sum_{i=1}^{n}\alpha_{i}k(x,x_{i}). \tag{33}\]

In the setting of Theorem 2 we have \[\left\|\hat{f}\right\|^{2}=\sum_{ij=1}^{n}\alpha_{i}\alpha_{j}\langle k(\cdot,x_{i}),k(\cdot,x_{j})\rangle=\sum_{ij=1}^{n}\alpha_{i}\alpha_{j}k(x_{i},x_{j})=\alpha^{T}K\alpha, \tag{34}\] where we introduced the kernel matrix \(K\in\mathbb{R}^{n\times n}\), \(K_{ij}=k(x_{i},x_{j})\). The importance of the representer theorem lies in the fact that the possibly infinite dimensional problem (32) admits a solution in an \(n\)-dimensional linear function space (moreover, for particular forms of \(L\) and \(g\) the solution is explicit).
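To make the representer theorem concrete, here is a minimal kernel ridge regression sketch (an illustration added to these notes; the data and the choice of the kernel \(e^{-|x-y|}\) are arbitrary): for the squared loss \(\frac{1}{n}\sum_{i}(f(x_{i})-y_{i})^{2}\) with regularizer \(\lambda\|f\|^{2}\), the optimality condition gives \(\alpha=(K+n\lambda I)^{-1}y\), and the fitted function has exactly the form (33).

```python
import numpy as np

def laplace_kernel(X, Y):
    # k(x, y) = exp(-|x - y|); this is the Laplace kernel discussed below
    return np.exp(-np.abs(X[:, None] - Y[None, :]))

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 30)
y_train = np.sin(3 * x_train) + 0.1 * rng.standard_normal(30)

# Solve min_f (1/n) sum_i (f(x_i) - y_i)^2 + lam * ||f||^2 over the RKHS;
# by the representer theorem the solution is f_hat = sum_i alpha_i k(., x_i)
n, lam = len(x_train), 1e-3
K = laplace_kernel(x_train, x_train)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y_train)

f_hat = lambda x: laplace_kernel(np.atleast_1d(np.asarray(x, float)), x_train) @ alpha
print(f_hat(0.5))  # prediction at a new point, evaluated via (33)
```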
We have seen how an RKHS gives a reproducing kernel \(k\). In the other direction, we can prove that given a positive semidefinite kernel \(k:\mathcal{X}\times\mathcal{X}\mapsto\mathbb{R}\), there exists a unique associated RKHS.

**Theorem 3**: _Moore-Aronszajn [8]: Let \(k:\mathcal{X}\times\mathcal{X}\mapsto\mathbb{R}\) be a positive semidefinite kernel, in the sense that for every \(\{x_{1},\ldots,x_{n}\}\), the kernel matrix \(K_{ij}=k(x_{i},x_{j})\) is positive semidefinite. Then there exists a Hilbert space \(\mathcal{F}\) with inner product \(\langle\cdot,\cdot\rangle\), and a feature map \(\varphi:\mathcal{X}\mapsto\mathcal{F}\), such that \(k(x,y)=\langle\varphi(x),\varphi(y)\rangle\)._

One example of RKHSs are Sobolev spaces in \(\mathbb{R}^{d}\). The Sobolev space \(\mathcal{H}_{s}\) of order \(s\) is the space of functions whose derivatives up to order \(s\) are in \(L^{2}(\mathbb{R}^{d})\). If \(s>d/2\), \(\mathcal{H}_{s}\) can be represented as an RKHS, with kernel \(k(x,y)=q(x-y)\) and \[\hat{q}(\omega)\propto\frac{1}{\left(1+\left\|\omega\right\|_{2}^{2}\right)^{s}}, \tag{35}\] with \(\hat{q}\) being the Fourier transform of \(q\). The RKHS norm can then be written as \[\left\|f\right\|_{q}^{2}=\int\,|\hat{f}(\omega)|^{2}\left(1+\left\|\omega\right\|_{2}^{2}\right)^{s}d\omega. \tag{36}\] The larger \(s\), the more high frequencies are penalized. For \(s=d/2+1/2\) the kernel corresponding to \(q\) is the Laplace kernel (see [9], p. 165) \[k(x,y)=e^{-\left\|x-y\right\|_{2}}. \tag{37}\]

### k-SOS relaxation [10][11][12]

Consider the SOS relaxation with \(\varphi:\mathcal{X}\mapsto\mathcal{F}\), with \(\mathcal{F}\) the feature space associated to a Sobolev kernel of order \(s\) on \(\mathbb{R}^{d}\). We restrict the optimization to \(\mathcal{X}\subset\mathbb{R}^{d}\), e.g., \(\mathcal{X}=[-1,1]^{d}\). Analogously to (11) we have \[\inf_{x\in\mathcal{X}}h(x) =\sup_{c\in\mathbb{R}}\,\,c\,\,\text{such that}\,\,\forall x\in\mathcal{X},h(x)-c\geqslant 0 \tag{38}\] \[\geqslant\sup_{c\in\mathbb{R},\,\,A\succcurlyeq 0}\,\,c\,\,\,\,\text{such that}\,\,\forall x\in\mathcal{X},h(x)-c=\varphi(x)^{*}A\varphi(x), \tag{39}\] where this time \(A:\mathcal{F}\mapsto\mathcal{F}\) is a positive semidefinite self-adjoint operator. Several questions arise:
* Is the problem feasible? In other words, can \(h(x)-c\) be represented as \(\varphi(x)^{*}A\varphi(x)\)? Yes, if \(h\) is sufficiently differentiable.
* Is the relaxation tight? Yes, if the minimizer satisfies a local Hessian condition.
* Is the problem solvable in polynomial time? Recall that in this case the optimization over \(A\) is infinite dimensional, so we cannot solve it directly like in (12). We will see that we can still solve the problem in polynomial time, through sampling, regularization, and a new representer theorem.

### Representation of a non-negative function as a sum-of-squares

To answer the first question, suppose \(h:\mathbb{R}^{d}\mapsto[0,\infty)\) is a non-negative function in \(C^{m}\) (i.e., \(h\) has continuous derivatives up to order \(m\)). Can we represent \(h\) as a sum of squared functions, each of which is in \(C^{m^{\prime}}\), with \(m^{\prime}\leq m\)? One (naive) solution would be to take \(h(x)=(\sqrt{h(x)})^{2}\). This works with \(m^{\prime}=m\) as long as \(h\) is strictly positive, in order to preserve differentiability. If \(h=0\) somewhere, then the square root only guarantees \(m^{\prime}=0\). In principle we would like to find solutions in smaller classes of functions (ideally \(m^{\prime}=m\)), so the square root is not satisfactory.
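A two-line symbolic check (an illustration added to these notes) of why the naive square root fails at zeros of \(h\): take \(h(x)=x^{2}\), which is \(C^{\infty}\) and non-negative; its square root is \(|x|\), whose one-sided derivatives at \(0\) disagree.

```python
import sympy as sp

x = sp.symbols("x", real=True)
g = sp.sqrt(x**2)  # square root of the C-infinity non-negative function h(x) = x^2
left = sp.limit(sp.diff(g, x), x, 0, dir="-")
right = sp.limit(sp.diff(g, x), x, 0, dir="+")
print(left, right)  # -1 and 1: sqrt(h) = |x| is not even C^1 at the zero of h
```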
We will now state a solution which achieves \(m^{\prime}=m-2\), under some extra conditions.

**Proposition 4**: _Let \(g:\mathcal{X}\mapsto[0,\infty)\) be a \(C^{m}\) (\(m\geq 2\)) function with \(\mathcal{X}\subset\mathbb{R}^{d}\) a compact set. Also suppose \(\hat{x}\) is the unique global optimum of \(g\), located in the interior of \(\mathcal{X}\), and with \(g(\hat{x})=0\). Finally suppose that the Hessian of \(g\) at \(\hat{x}\) is positive definite. Then \(g\) admits an SOS representation with \(m^{\prime}=m-2\) and at most \(d+1\) factors._

**Proof** Without loss of generality assume \(\hat{x}=0\). We can write \(g\)'s Taylor expansion at \(\hat{x}\) with integral remainder as \[g(x)=g(0)+\nabla g(0)^{T}x+\int_{0}^{1}(1-t)x^{T}\nabla^{2}g(tx)\,x\,dt. \tag{40}\] Since the Hessian is positive definite at \(0\) and continuous, we have \(\nabla^{2}g(tx)\succcurlyeq\lambda I\) for some \(\lambda>0\) and \(\|x\|\) small enough. Using the fact that at zero the gradient and the function vanish, we have \[g(x)=x^{T}R(x)x,\ \text{with}\ \,R(x)=\int_{0}^{1}(1-t)\nabla^{2}g(tx)dt\succcurlyeq\frac{\lambda}{2}I. \tag{41}\] \(g\) can then be written in the form \[g(x)=x^{T}R(x)^{1/2}R(x)^{1/2}x=\sum_{i=1}^{d}\left(R(x)^{1/2}x\right)_{i}^{2}, \tag{42}\] where the matrix square root is well defined and smooth because \(R(x)\succcurlyeq\frac{\lambda}{2}I\) [13], with factors that are \(C^{m-2}\), i.e., with the regularity of the Hessian. When instead \(\|x\|\) is not sufficiently small, the assumptions of a unique global optimum and positive definite Hessian imply that \(g(x)-g(0)\) is bounded away from zero, hence \(\sqrt{g(x)}\) is \(C^{m}\) there. Using a partition of unity one can bridge the two expressions, one close to and the other far from zero. This yields an SOS representation with \(d+1\) terms. This result can also be generalized to manifolds of global minima.

As a consequence of Proposition 4, if \(h\) is a \(C^{m}\) function with isolated global minima, and we pick \(\mathcal{F}\) to be the RKHS with kernel (35) and \(s\leq m-2\), then the relaxation (39) is tight. In other words, there exists \(A^{\star}\) such that \(h(x)-\inf_{x\in\mathcal{X}}h(x)=\varphi(x)^{*}A^{\star}\varphi(x)\) for all \(x\in\mathcal{X}\).

### Controlled approximation through subsampling

The relaxation \[\inf_{x\in\mathcal{X}}h(x)\geqslant\sup_{c\in\mathbb{R},\ A\succcurlyeq 0}\ c\ \text{ such that }\forall x\in\mathcal{X},h(x)-c=\varphi(x)^{*}A\varphi(x), \tag{43}\] with \(A:\mathcal{F}\mapsto\mathcal{F}\), involves an infinite number of constraints (one for every \(x\in\mathcal{X}\)) and an optimization over operators in infinite dimensions. These facts make the problem impossible to solve on a computer as is. To circumvent the difficulties we use the subsampling technique: we replace \(\forall x\in\mathcal{X},h(x)-c=\varphi(x)^{*}A\varphi(x)\) with \(\forall i\in[n],\ h(x_{i})-c=\varphi(x_{i})^{*}A\varphi(x_{i})\), where the \(x_{i}\) are sampled uniformly in \(\mathcal{X}\). To avoid overfitting we regularize using \(\operatorname{tr}[A]\), hence penalizing operators with large trace. Any other function providing control over the eigenvalues would also work. The new problem is \[\sup_{c\in\mathbb{R},\ A\succcurlyeq 0}c-\lambda\operatorname{tr}[A]\text{ s.t. }\forall i\in[n],\ h(x_{i})-c=\varphi(x_{i})^{*}A\varphi(x_{i}). \tag{44}\]
Notice that the solution to this problem is a lower bound on the right-hand side of (43).

**Proposition 5**: _In the setting of (44), let \(h\) be \(C^{m}\), \(\lambda>0\), \(A:\mathcal{F}\mapsto\mathcal{F}\), with \(\mathcal{F}\) a Sobolev space of order \(s\leq m-2\). Let \(n\geq C(m,d)\,\epsilon^{-\frac{d}{m-3}}\). **Then** with high probability with respect to sampling_ \[\left|\left[\inf_{x\in\mathcal{X}}h(x)\right]-\left[\sup_{c\in\mathbb{R},\ A\succcurlyeq 0}c-\lambda\operatorname{tr}[A]\text{ s.t. }\forall i\in[n],\ h(x_{i})-c=\varphi(x_{i})^{*}A\varphi(x_{i})\right]\right|<\epsilon. \tag{45}\]

The statistical lower bound for the number of queries is \(n\geq C^{\prime}(d)\epsilon^{-\frac{d}{m}}\); however, exponential computation (in \(n\)) might be required to achieve it. Notice that setting \(m=0\) we recover the curse of dimensionality \(n\geq\epsilon^{-d}\). The regularization is necessary to exploit the differentiable structure of \(h\); in fact, setting \(\lambda=0\) one would get \(c^{\star}=\min_{i}h(x_{i})\), requiring \(\epsilon^{-d}\) samples regardless of \(m\). One drawback of the subsampling approach is the fact that \(C\) can depend exponentially on \(d\). The previous proposition solves the problem of the infinite number of constraints, but we are still dealing with an optimization problem over the space of infinite dimensional operators. The following representer theorem tells us that we can equivalently solve a finite dimensional problem.

**Theorem 4**: _(Representer theorem for SOS) For any \(x_{1},\ldots,x_{n}\), any solution of (44) is of the form_ \[A=\sum_{i,j=1}^{n}B_{ij}\,\varphi(x_{i})\varphi(x_{j})^{*},\quad B\in\mathbb{R}^{n\times n},\ B\succcurlyeq 0. \tag{46}\]

For a proof see [10]. The theorem can be generalized to the case where the regularizer (here \(\operatorname{tr}[A]\)) is an arbitrary spectral norm. Letting \(K\) be the kernel matrix, this immediately implies \[\varphi(x_{i})^{*}A\varphi(x_{i})=\sum_{l,j}K(x_{i},x_{l})B_{lj}K(x_{j},x_{i})=(KBK)_{ii}, \tag{47}\] and similarly \(\operatorname{tr}[A]=\sum_{i,j}B_{ij}K(x_{j},x_{i})=\operatorname{tr}[BK]\), so that the final problem becomes \[\sup_{c\in\mathbb{R},\ B\succcurlyeq 0}c-\lambda\operatorname{tr}[BK]\ \text{s.t.}\ \forall i\in[n],\ h(x_{i})-c=(KBK)_{ii}, \tag{48}\] which is now an \(n\)-dimensional semidefinite programming problem, solvable in \(O(n^{3.5})\). From a practical standpoint, (48) is solvable up to \(n\approx 10^{4}\). Moreover, it is crucial that the kernel matrix \(K\) be invertible.

## II Lecture 3: From optimization to information theory

### Introduction

Recall that our goal is to compute \(\min_{x\in\mathcal{X}}h(x)\), where \(\mathcal{X}\) is a generic set of inputs. To this end, we reformulated the minimization problem using the sum-of-squares relaxation: \[\min_{x\in\mathcal{X}}h(x)=\sup_{c,A\succcurlyeq 0}\ c\quad\text{s.t.}\quad h(x)=c+\langle\phi(x),A\phi(x)\rangle\qquad\forall x\in\mathcal{X}. \tag{49}\] In other words, we replaced the positivity of a function by its being a sum of squares of some feature vector \(\phi:\mathcal{X}\to\mathcal{F}\). In particular, we assumed the feature map \(\phi\) to be associated with a kernel \(k\). We showed that although typically there is a gap between the LHS and RHS in (49), with mild assumptions on \(h\) there is tightness. Moreover, we replaced the continuous set of equalities in (49) by the following: \[\sup_{c,A\succcurlyeq 0}c-\lambda\operatorname{tr}A\quad\text{s.t.}\quad\forall i\in\{1,...,n\}\quad h(x_{i})=c+\langle\phi(x_{i}),A\phi(x_{i})\rangle. \tag{50}\] We explained that to solve (50) in practice, one can use the representer theorem for SOS (Theorem 4), which allows us to write \(\langle\phi(x_{i}),A\phi(x_{i})\rangle=(KBK)_{ii}\), with \(B\in\mathbb{R}^{n\times n}\), \(B\succcurlyeq 0\), and \(K\) the kernel matrix, and similarly \(\operatorname{tr}A=\operatorname{tr}(BK)\); therefore one can solve (50) in polynomial time.
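As an illustration, the following cvxpy sketch solves (48) for a one-dimensional toy problem (an addition to these notes; the test function, kernel bandwidth, \(\lambda\), and \(n\) are all arbitrary choices): the optimal \(c\) certifies a lower bound on \(\min_{x}h(x)\).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Arbitrary smooth test function on X = [-1, 1], with several local minima
h = lambda x: np.cos(5 * x) + 0.5 * x**2

# Subsample points and build a Laplace kernel matrix, cf. (37)
n, lam, sigma = 80, 1e-6, 0.2
xs = rng.uniform(-1, 1, n)
K = np.exp(-np.abs(xs[:, None] - xs[None, :]) / sigma)
K += 1e-9 * np.eye(n)  # jitter: (48) needs the kernel matrix to be invertible

# Problem (48): sup c - lam * tr(BK)  s.t.  h(x_i) - c = (KBK)_ii, B >= 0
B = cp.Variable((n, n), PSD=True)
c = cp.Variable()
prob = cp.Problem(cp.Maximize(c - lam * cp.trace(B @ K)),
                  [h(xs) - c == cp.diag(K @ B @ K)])
prob.solve()
print("certified lower bound on min h:", c.value)
print("min over the samples alone    :", h(xs).min())
```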
In this lecture, we move beyond optimization and extend the sum-of-squares technique to other problems: e.g., optimal control, and the approximation of entropies and relative entropies in information theory.

### Extension to other infinite dimensional problems

Recall the two equivalent problems defined in the previous lectures:
* Primal problem \[\min_{x\in\mathcal{X}}\;h(x)=\sup_{c\in\mathbb{R}}\;c\quad\text{ such that}\quad\forall x\in\mathcal{X},\;h(x)-c\geqslant 0.\] (51)
* Dual problem on probability measures \[\inf_{\mu:\mathcal{X}\to\mathbb{R}}\int_{\mathcal{X}}\mu(x)h(x)dx\quad\text{ such that}\quad\int_{\mathcal{X}}\mu(x)dx=1,\;\forall x\in\mathcal{X},\;\mu(x)\geqslant 0.\] (52)

Both are formulated in terms of non-negative functions, and we could a priori apply the sum-of-squares relaxation to either of the two inequalities. However, note that in (52) we expect the optimal \(\mu\) to be a Dirac delta distribution at the optimum, which implies that the sum-of-squares relaxation will not work. On the other hand, in (51) we expect \(h\) to be well behaved (e.g. smooth), so the sum-of-squares relaxation will work well. We now give a generic formulation of a constrained optimization problem, and then specialize to optimal control. The generic problem can be expressed in the following form: \[\inf_{\theta\in\Theta}\;F(\theta)\quad\text{such that}\quad\forall x\in\mathcal{X},\;g(\theta,x)\geqslant 0, \tag{53}\] with \(F\) convex and \(g\) linear in its first argument \(\theta\), and where \(\Theta\) denotes a generic vector space. The sum-of-squares reformulation of (53) is given by \[\inf_{\theta\in\Theta,\;A\succcurlyeq 0}F(\theta)\quad\text{such that}\quad\forall x\in\mathcal{X},\;g(\theta,x)=\langle\phi(x),A\phi(x)\rangle. \tag{54}\] Solving (54) requires penalizing the objective by \(\operatorname{tr}(A)\) and performing subsampling. If we expect \(g(\theta,x)\) to be smooth at the optimum \(\theta^{*}\), the representation as a sum of squares allows us to benefit from this intrinsic smoothness and obtain guarantees. The sum-of-squares relaxation can be performed in the primal and in the dual problem; depending on the smoothness of these problems, either the primal or the dual might be the better choice. Let us look at the special case of optimal control (or reinforcement learning) as an example. Another relevant example is optimal transport, which we do not cover here and refer to [14].

_Example: Optimal control/Reinforcement learning [15]._ Consider the dynamical system \[\dot{X}(t)=f(t,X(t),u(t))\qquad\forall t\in[t_{0},T], \tag{55}\] \[X(t_{0})=x_{0}, \tag{56}\] where \(X(\cdot)\) denotes the state, \(u(\cdot)\) denotes the control that we want to impose, and \(t_{0},x_{0}\) are the initial time and the initial state respectively. In the optimal control formulation [16], we choose the control \(u(\cdot)\) such that it minimizes a cost, i.e., for fixed \(T\) we want to solve \[V^{*}(t_{0},x_{0})=\inf_{u:[t_{0},T]\rightarrow\mathcal{U}}\int_{t_{0}}^{T}L(t,X(t),u(t))\mathrm{d}t+M(X(T)), \tag{57}\] where we can think of the first term as the cost along the way and the second term as the terminal cost.
The solution of (57) can be characterized via subsolutions of the PDE defined by the Hamilton-Jacobi-Bellman (HJB) equation [17]. In particular, consider \[\sup_{V:[0,T]\times\mathcal{X}\rightarrow\mathbb{R}}\int V(0,x_{0})\mathrm{d}\mu_{0}(x_{0})\] \[\text{s.t.}\quad\frac{\partial V}{\partial t}(t,x)+L(t,x,u)+\nabla V(t,x)^{\top}f(t,x,u)\ \geqslant 0\quad\forall(t,x,u), \tag{58}\] \[V(T,x)=M(x)\quad\forall x;\] any \(V\) satisfying these constraints is a subsolution of the HJB equation, and the supremum recovers the value of (57) at time \(t_{0}=0\), averaged over the initial distribution \(\mu_{0}\). Among the works that tackled the above formulation, we mention [18], which approaches (58) through a sum-of-squares relaxation, and [15] for the extension to kernel sums-of-squares.

### Connection to information theory: log partition function [19; 20]

In this section we show how the sum-of-squares relaxation can be applied to compute relevant information-theoretic quantities. For simplicity, we will consider the problem of maximizing a function \(h(x)\), instead of minimizing it. Specifically, we consider the log partition function [19; 20] of the function, defined as \[\varepsilon\log\int_{\mathcal{X}}\exp\left(\frac{h(x)}{\varepsilon}\right)dq(x), \tag{59}\] where we call \(q(\cdot)\in\mathcal{P}(\mathcal{X})\) the base measure and \(\varepsilon>0\) the temperature. There are several reasons that motivate the study of the log partition function: for instance, it is used as a tool for smoothing the maximum, and for computational purposes in optimization; moreover, it is used in probabilistic inference [21]. Our goal is to understand how to compute (59). Recall the definition of the KL divergence.

**Definition 1** (Kullback-Leibler (KL) divergence): _Let \(p,q\) be in \(\mathcal{P}^{1}(\mathcal{X})\) (i.e. probability measures on a measurable space \(\mathcal{X}\)), such that \(p\) is absolutely continuous with respect to \(q\). Then, the KL divergence between \(p\) and \(q\) is defined as_ \[D(p\|q):=\int_{\mathcal{X}}\log\left(\frac{dp(x)}{dq(x)}\right)dp(x). \tag{60}\]

Note that (59) can be rewritten as \[\varepsilon\log\int_{\mathcal{X}}\exp\left(\frac{h(x)}{\varepsilon}\right)dq(x) =\sup_{p\in\mathcal{P}^{1}(\mathcal{X})}\int_{\mathcal{X}}h(x)dp(x)-\varepsilon\int_{\mathcal{X}}\log\frac{dp(x)}{dq(x)}dp(x) \tag{61}\] \[=\sup_{p\in\mathcal{P}^{1}(\mathcal{X})}\int_{\mathcal{X}}h(x)dp(x)-\varepsilon D(p\|q). \tag{62}\] The first equality is a classical result of convex duality between maximum likelihood and relative entropy; one can verify it by differentiating the right-hand side. Let us look at the two terms in (62). If \(h\) is represented as \(h(x):=\phi(x)^{T}H\phi(x)\), then the term \(\int_{\mathcal{X}}h(x)dp(x)\) can be expressed in terms of moment matrices. We therefore ask the following question for the second term: can we upper or lower bound \(D(p\|q)\) in terms of the moments of \(\phi(x)\), i.e., in terms of \[\Sigma_{p}=\int_{\mathcal{X}}\phi(x)\phi(x)^{T}dp(x)\qquad\text{and}\qquad\Sigma_{q}=\int_{\mathcal{X}}\phi(x)\phi(x)^{T}dq(x)\quad? \tag{63}\] This would lead to a convex optimization problem on the set of moment matrices, which we can relax using sums of squares and solve efficiently. A first attempt to answer this question appears in [22], and consists of considering \(\Sigma_{p}\) and \(\Sigma_{q}\) to be covariances of Gaussian distributions \(p\) and \(q\) of dimension \(d\). The KL divergence between these Gaussians then reads \[-\frac{1}{2}\log\det(\Sigma_{p}\Sigma_{q}^{-1})+\frac{1}{2}\operatorname{tr}\Sigma_{p}\Sigma_{q}^{-1}-\frac{d}{2}. \tag{64}\]
This has some nice properties: in fact it is zero if and only if \(\Sigma_{p}=\Sigma_{q}\), and it suggests a link with information theory. However, some other properties of \(D(p\|q)\) are not preserved: the expression is no longer jointly convex in \(p\) and \(q\), it diverges in infinite dimensions, and there is no direct link with the true KL divergence. A more powerful approach uses the kernel KL divergence.

### Kernel KL divergence

**Definition 2** (Kernel KL divergence, or Von Neumann divergence [20]): _Let \(p\) and \(q\) be in \(\mathcal{P}^{1}(\mathcal{X})\), and let \(\Sigma_{p}\) and \(\Sigma_{q}\) be defined as in (63) for some feature map \(\phi:\mathcal{X}\to\mathcal{F}\). We define the kernel KL divergence (or Von Neumann divergence) as_ \[D(\Sigma_{p}\|\Sigma_{q})=\operatorname{tr}\big{[}\Sigma_{p}(\log\Sigma_{p}-\log\Sigma_{q})\big{]}. \tag{65}\]

We will assume that \(\phi:\mathcal{X}\to\mathcal{F}\) is obtained from a positive definite kernel. We focus mostly on \(\mathbb{R}^{d}\); however, we remark that one can extend this technique to structured objects beyond finite sets and \(\mathbb{R}^{d}\). The kernel KL divergence has several nice properties, which we list here.

**Proposition 1** (Properties of kernel KL divergence.): \(D(\Sigma_{p}\|\Sigma_{q})\) _satisfies the following properties:_
1. \(D(\Sigma_{p}\|\Sigma_{q})\) _is jointly convex in_ \(\Sigma_{p}\) _and_ \(\Sigma_{q}\)_. Because_ \(\Sigma_{p}\) _and_ \(\Sigma_{q}\) _are linear in_ \(p\) _and_ \(q\)_,_ \(D(\Sigma_{p}\|\Sigma_{q})\) _is jointly convex in_ \(p\) _and_ \(q\)_;_
2. \(D(\Sigma_{p}\|\Sigma_{q})\geq 0\) _and_ \(D(\Sigma_{p}\|\Sigma_{q})=0\) _if_ \(p=q\)_;_
3. _If the kernel that generates_ \(\phi\) _is universal, then_ \(D(\Sigma_{p}\|\Sigma_{q})=0\) _if and only if_ \(p=q\)_;_
4. _If_ \(p\) _is absolutely continuous with respect to_ \(q\)_, with_ \(\|\frac{dp}{dq}\|_{\infty}\leq\alpha\)_, then_ \(D(\Sigma_{p}\|\Sigma_{q})\leq\log\alpha\cdot\operatorname{tr}\Sigma_{p}\)_._

**Proof** 1. is not trivial, and we refer to the appendices of [20] for a formal proof. 2. holds since \(D(\Sigma_{p}\|\Sigma_{q})\) is the Bregman divergence of \(\Sigma\mapsto\operatorname{tr}(\Sigma\log\Sigma)\). 3. follows from the injectivity of the map \(p\mapsto\Sigma_{p}\) when the kernel is universal: in fact, if \(\Sigma_{p}=\Sigma_{q}\), the universality of the kernel implies that \(\int_{\mathcal{X}}f(x)[dp(x)-dq(x)]=0\) for all continuous functions \(f\), hence \(p=q\); that is, the map \(p\mapsto\Sigma_{p}\) is injective. 4. Assume without loss of generality that \(\alpha\geq 1\). We then have \(\Sigma_{p}\preccurlyeq\alpha\Sigma_{q}\), which leads to \[D(\Sigma_{p}\|\Sigma_{q})=\operatorname{tr}\big{[}\Sigma_{p}(\log\Sigma_{p}-\log\Sigma_{q})\big{]}\leq\operatorname{tr}\big{[}\Sigma_{p}(\log(\alpha\Sigma_{q})-\log\Sigma_{q})\big{]}=\log\alpha\cdot\operatorname{tr}\Sigma_{p}. \tag{66}\]

**Estimation from data.** We may ask whether \(D(\Sigma_{p}\|\Sigma_{q})\) can be estimated efficiently from data. To this purpose, assume that \(\Sigma_{q}\) is known and that we observe \(x_{1},...,x_{n}\) i.i.d. samples from \(p\). One can define the empirical moment \(\hat{\Sigma}_{p}=\frac{1}{n}\sum_{i=1}^{n}\phi(x_{i})\phi(x_{i})^{T}\), and the plug-in estimator \(D(\hat{\Sigma}_{p}\|\Sigma_{q})\). It turns out (Proposition 7 in [20]) that, under further conditions on the decay of the eigenvalues of the kernel, \[D(\hat{\Sigma}_{p}\|\Sigma_{q})-D(\Sigma_{p}\|\Sigma_{q})=O\left(\frac{\log n}{\sqrt{n}}\right). \tag{67}\] We remark that, surprisingly, no extra regularization of the plug-in estimator is required for achieving the above rate.
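When the feature space is finite dimensional, the plug-in estimator can be computed directly with a matrix logarithm. The sketch below is an illustration added to these notes: the random Fourier features standing in for \(\phi\) (scaled so that \(\|\phi(x)\|\leq 1\)), the small jitter making the matrix logarithms well defined, and the Gaussian toy data are all arbitrary choices, and here both moment matrices are estimated from samples.

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)

# Random Fourier features as a finite-dimensional stand-in for phi
p_dim, d = 50, 1
Wf = rng.standard_normal((p_dim, d))
bf = rng.uniform(0, 2 * np.pi, p_dim)
phi = lambda X: np.sqrt(1.0 / p_dim) * np.cos(X @ Wf.T + bf)  # ||phi(x)|| <= 1

def sigma_hat(X, eps=1e-8):
    F = phi(X)
    return F.T @ F / len(F) + eps * np.eye(p_dim)  # jitter so logm is defined

Xp = rng.normal(0.0, 1.0, (4000, d))  # samples from p
Xq = rng.normal(0.5, 1.0, (4000, d))  # samples from q (shifted mean)

Sp, Sq = sigma_hat(Xp), sigma_hat(Xq)
D = np.trace(Sp @ (logm(Sp) - logm(Sq))).real
print(D)  # plug-in estimate; compare with the true KL(p||q) = 0.125 here
```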
It is interesting to compare \(D(\Sigma_{p}\|\Sigma_{q})\) with the Maximum Mean Discrepancy (MMD) between \(p\) and \(q\), defined as follows [23]: \[\text{MMD}(p,q):=\|\mu_{p}-\mu_{q}\|, \tag{68}\] where \(\mu_{p}=\int_{\mathcal{X}}\phi(x)dp(x)\) and similarly \(\mu_{q}=\int_{\mathcal{X}}\phi(x)dq(x)\). Indeed, all the properties of \(D(\Sigma_{p}\|\Sigma_{q})\) in Proposition 1, as well as estimation at the \(O(\log n/\sqrt{n})\) rate, hold for MMD as well. The main difference between \(D(\Sigma_{p}\|\Sigma_{q})\) and MMD is that MMD has no link with the classical notions of divergences from information theory, whereas \(D(\Sigma_{p}\|\Sigma_{q})\) has a direct link with the KL divergence, which we analyse in the following section.

### Link with KL divergence

If we assume that \(\mathcal{X}\) is a finite set with orthonormal embeddings (i.e. such that \(\langle\phi(x),\phi(y)\rangle=1_{x=y}\)), then all covariance operators are jointly diagonalizable with the probability mass values as eigenvalues, and we recover the KL divergence _exactly_: \[D(\Sigma_{p}\|\Sigma_{q}) =\operatorname{tr}\left[\Sigma_{p}\left(\log\Sigma_{p}-\log\Sigma_{q}\right)\right] \tag{69}\] \[\stackrel{{(a)}}{{=}}\sum_{x\in\mathcal{X}}p(x)\left(\log p(x)-\log q(x)\right)=D(p\|q), \tag{70}\] where \((a)\) holds because both \(\Sigma_{p}\) and \(\Sigma_{q}\) are diagonal, with the probability mass values as diagonal elements. However, beyond finite sets we cannot recover the KL divergence exactly. If we assume that for all \(x\in\mathcal{X}\), \(\|\phi(x)\|\leq 1\), then we can find the following lower bound on the KL divergence: \[D(\Sigma_{p}\|\Sigma_{q}) =D\Big{(}\int_{\mathcal{X}}\phi(x)\phi(x)^{T}dp(x)\Big{\|}\int_{\mathcal{X}}\frac{dq(x)}{dp(x)}\phi(x)\phi(x)^{T}dp(x)\Big{)} \tag{71}\] \[\stackrel{{(a)}}{{\leq}}\int_{\mathcal{X}}D\Big{(}\phi(x)\phi(x)^{T}\Big{\|}\frac{dq(x)}{dp(x)}\phi(x)\phi(x)^{T}\Big{)}dp(x) \tag{72}\] \[\stackrel{{(b)}}{{=}}\int_{\mathcal{X}}\|\phi(x)\|^{2}D\Big{(}1\Big{\|}\frac{dq(x)}{dp(x)}\Big{)}dp(x) \tag{73}\] \[\stackrel{{(c)}}{{\leq}}\int_{\mathcal{X}}\log\Big{(}\frac{dp}{dq}(x)\Big{)}dp(x)=D(p\|q), \tag{74}\] where \((a)\) follows by the _joint_ convexity of \(D(\cdot\|\cdot)\) (note that both integrals in the arguments are in \(dp(x)\)) and by Jensen's inequality; \((b)\) holds because \(\phi(x)\phi(x)^{T}\) and \(\phi(x)\phi(x)^{T}\frac{dq(x)}{dp(x)}\) are rank-\(1\) operators with eigenvectors proportional to \(\phi(x)\) and eigenvalues equal to either zero or \(\|\phi(x)\|^{2}\) (for \(\phi(x)\phi(x)^{T}\)) and to either zero or \(\|\phi(x)\|^{2}\frac{dq(x)}{dp(x)}\) (for \(\phi(x)\phi(x)^{T}\frac{dq(x)}{dp(x)}\)); and \((c)\) follows by the assumption that \(\|\phi(x)\|\leq 1\) and by the definition of the KL divergence. We may ask whether the above inequality is tight. One way to estimate the tightness of (74) uses quantum measurements. We outline here the main ideas, and refer to Section 4.1 in [20] for details. For all \(y\in\mathcal{X}\), a _quantum measurement_ for \(p\) is defined as \[\tilde{p}(y)=\operatorname{tr}\left(\Sigma_{p}D(y)\right), \tag{75}\] where \(D(y)\) is called the _measurement operator_, and it is such that \(D(y)\succcurlyeq 0\) and \(\int_{\mathcal{X}}D(y)d\tau(y)=I\), with \(\tau\) being the uniform measure. One can define the analogous quantum measurement for \(q\) as \[\tilde{q}(y)=\operatorname{tr}\left(\Sigma_{q}D(y)\right). \tag{76}\]
By the data processing inequality (see Appendix A.2 of [20]), we have \[D(\tilde{p}\|\tilde{q})\leq D(\Sigma_{p}\|\Sigma_{q}). \tag{77}\] One can then choose \(D(\cdot)\) such that \(\tilde{p}\) and \(\tilde{q}\) are smoothed versions of \(p\) and \(q\). Specifically, for all \(y\in\mathcal{X}\) let \[D(y)=\Sigma^{-1/2}(\phi(y)\phi(y)^{T})\Sigma^{-1/2},\qquad\Sigma:=\int_{\mathcal{X}}\phi(y)\phi(y)^{T}d\tau(y), \tag{78}\] which satisfies the above properties. We then have \[\tilde{p}(y)=\operatorname{tr}\left[D(y)\Sigma_{p}\right]=\int_{\mathcal{X}}\langle\phi(x),\Sigma^{-1/2}\phi(y)\rangle^{2}dp(x)=\int_{\mathcal{X}}h(x,y)dp(x), \tag{79}\] where \(h(x,y):=\langle\phi(x),\Sigma^{-1/2}\phi(y)\rangle^{2}\) is such that \(\int_{\mathcal{X}}h(x,y)d\tau(x)=1\) and can be seen as a smoothing kernel. Overall, we get the following 'sandwich' inequalities: \[D(\tilde{p}\|\tilde{q})\leq D(\Sigma_{p}\|\Sigma_{q})\leq D(p\|q), \tag{80}\] which can lead to quantitative bounds between \(D(\Sigma_{p}\|\Sigma_{q})\) and \(D(p\|q)\), specifically when the smoothing function \(h\) puts most of its mass on pairs \((x,y)\) where \(x\) is close to \(y\). For instance, one can see that if \(h(x,y)=\exp\left(-\frac{\|x-y\|_{2}}{\sigma}\right)\), then \(D(p\|q)-D(\Sigma_{p}\|\Sigma_{q})=O(\sigma^{2})\) (see Section 4.2 in [20]). However, note that the smaller \(\sigma\) is, the larger the sample size needed to estimate the kernel KL divergence. Let us now look at a simple example.

_Example._ Let \(\mathcal{X}=[0,1]\) and \([\phi(x)]_{\omega}=\exp\left(2i\pi\omega x\right)\hat{q}(\omega)^{1/2}\) for \(\omega\in\mathbb{Z}\), where \(\hat{q}(\omega)\) is the Fourier transform of a kernel. Then, \[\langle\phi(x),\phi(y)\rangle=\sum_{\omega\in\mathbb{Z}}\hat{q}(\omega)\exp\left(2i\pi\omega(x-y)\right)=q(x-y), \tag{81}\] that is, the translation-invariant kernel on the torus. \(\Sigma_{p}\) is then an infinite dimensional matrix with elements given by \[\forall\omega,\omega^{\prime}\in\mathbb{Z},\qquad(\Sigma_{p})_{\omega\omega^{\prime}}=\int_{x\in\mathcal{X}}dp(x)\exp\left(2i\pi x(\omega-\omega^{\prime})\right)\hat{q}(\omega)^{1/2}\hat{q}(\omega^{\prime})^{1/2} \tag{82}\] \[\overset{(a)}{=}\hat{p}(\omega-\omega^{\prime})\hat{q}(\omega)^{1/2}\hat{q}(\omega^{\prime})^{1/2}, \tag{83}\] where \((a)\) holds by definition of the characteristic function of \(p\). Thus, \[\Sigma_{p}=\text{diag}(\hat{q})^{1/2}\text{TM}(\hat{p})\text{diag}(\hat{q})^{1/2}, \tag{84}\] where we denoted by \(\text{TM}(\hat{p})\) the Toeplitz matrix whose elements are given by \[(\text{TM}(\hat{p}))_{\omega,\omega^{\prime}}=\hat{p}(\omega-\omega^{\prime}). \tag{85}\] The previous discussion implies that the quantity \(\operatorname{tr}[\Sigma_{p}\log\Sigma_{p}]\) is related to entropy.

### Estimation of the log-partition function

Recall the definition of the log-partition function and its equivalent formulation in (62): \[\varepsilon\log\int_{\mathcal{X}}e^{h(x)/\varepsilon}dq(x)=\sup_{p\in\mathcal{P}^{1}(\mathcal{X})}\int_{\mathcal{X}}h(x)dp(x)-\varepsilon D(p\|q). \tag{86}\] Assuming \(\|\phi(x)\|\leq 1\) for all \(x\in\mathcal{X}\), the discussion of the previous section allows us to upper-bound the above by \[\varepsilon\log\int_{\mathcal{X}}e^{h(x)/\varepsilon}dq(x)\leq\sup_{p\in\mathcal{P}^{1}(\mathcal{X})}\int_{\mathcal{X}}h(x)dp(x)-\varepsilon D(\Sigma_{p}\|\Sigma_{q}). \tag{87}\] Moreover, if \(h\) is representable, i.e.
if \(h(x)=\langle\phi(x),H\phi(x)\rangle\) for some \(H\), then \[\int_{\mathcal{X}}h(x)dp(x) =\operatorname{tr}\left(H\int_{\mathcal{X}}\phi(x)\phi(x)^{T}dp(x)\right) \tag{88}\] \[=\operatorname{tr}\left(H\Sigma_{p}\right), \tag{89}\] and we can further relax (87), with the standard SOS relaxation seen before, by replacing the \(\sup_{p}\) by the \(\sup_{\Sigma_{p}\in\hat{\mathcal{K}}}\), where we recall \[\hat{\mathcal{K}}=\{\Sigma\in\operatorname{span}\{\phi(x)\phi(x)^{T}:x\in\mathcal{X}\}:\Sigma\succcurlyeq 0,\operatorname{tr}[U\Sigma]=1\}. \tag{90}\] Overall, we obtain \[\varepsilon\log\int_{\mathcal{X}}e^{h(x)/\varepsilon}dq(x)\leq\sup_{\Sigma_{p}\in\hat{\mathcal{K}}}\operatorname{tr}[H\Sigma_{p}]-\varepsilon D(\Sigma_{p}\|\Sigma_{q}). \tag{91}\] This can be computed in polynomial time by semidefinite programming. We remark that adding the relative entropy as a regularizer is a standard technique used to solve SDP problems such as (50); here we show that this technique also has an information-theoretic meaning.

### Extensions

We highlight the following extensions to the contents presented in this lecture.
1. For two probability distributions \(p\) and \(q\) such that \(p\) is absolutely continuous with respect to \(q\), and for a convex function \(f\) such that \(f(x)\) is finite for all \(x>0\) and \(f(1)=0\), we can define the _f-divergence_ as \[D_{f}(p\|q)=\int_{\mathcal{X}}f\left(\frac{dp(x)}{dq(x)}\right)dq(x).\] (92) This definition includes several well-known divergences, e.g. the KL, Hellinger, and Pearson \(\chi^{2}\) divergences. We note that all the properties discussed above apply to any well-defined f-divergence.
2. \(D(\Sigma_{p}\|\Sigma_{q})\) is concave in the kernel, which means that if we replace for instance \(\phi(x)\) by \(\Lambda^{1/2}\phi(x)\), with \(\Lambda\succcurlyeq 0\), and consider the kernel \(K_{\Lambda}(x,y)=\phi(x)^{T}\Lambda\phi(y)\), then \(D(\Sigma_{p}\|\Sigma_{q})\) is concave in \(\Lambda\). This property implies that the kernel can be optimized to obtain better bounds.
3. One can use other notions of quantum divergences, which lead to better bounds. E.g., [24] showed that \[\operatorname{tr}\left[A(\log A-\log B)\right]\leq\operatorname{tr}\left[A\log(B^{-1/2}AB^{-1/2})\right],\] (93) which implies that using the right-hand side as the divergence gives an improvement in the bounds in (74) and (91).
4. We showed that \(D(p\|q)\geq D(A\|B)\) if \(A=\Sigma_{p}\) and \(B=\Sigma_{q}\). We may ask what is the best lower bound on \(D(p\|q)\) based on \(\Sigma_{p}\) and \(\Sigma_{q}\): \[D^{*}(A\|B):=\inf_{p,q\in\mathcal{P}^{1}(\mathcal{X})}\ D(p\|q)\ \text{ such that }\ \Sigma_{p}=A\text{ and }\Sigma_{q}=B.\] (94) Indeed, \(D^{*}(A\|B)\) is computable by sum-of-squares relaxation (see [19] for details).

## III Acknowledgements

These are notes from the lectures of Francis Bach given at the summer school "Statistical Physics & Machine Learning", which took place at the Les Houches School of Physics in France from the 4th to the 29th of July 2022. The school was organized by Florent Krzakala and Lenka Zdeborova from EPFL.
2304.01150
Algebraic and Geometric Models for Space Networking
In this paper we introduce some new algebraic and geometric perspectives on networked space communications. Our main contribution is a novel definition of a time-varying graph (TVG), defined in terms of a matrix with values in subsets of the real line P(R). We leverage semi-ring properties of P(R) to model multi-hop communication in a TVG using matrix multiplication and a truncated Kleene star. This leads to novel statistics on the communication capacity of TVGs called lifetime curves, which we generate for large samples of randomly chosen STARLINK satellites, whose connectivity is modeled over day-long simulations. Determining when a large subsample of STARLINK is temporally strongly connected is further analyzed using novel metrics introduced here that are inspired by topological data analysis (TDA). To better model networking scenarios between the Earth and Mars, we introduce various semi-rings capable of modeling propagation delay as well as protocols common to Delay Tolerant Networking (DTN), such as store-and-forward. Finally, we illustrate the applicability of zigzag persistence for featurizing different space networks and demonstrate the efficacy of K-Nearest Neighbors (KNN) classification for distinguishing Earth-Mars and Earth-Moon satellite systems using time-varying topology alone.
William Bernardoni, Robert Cardona, Jacob Cleveland, Justin Curry, Robert Green, Brian Heller, Alan Hylton, Tung Lam, Robert Kassouf-Short
2023-04-03T17:14:19Z
http://arxiv.org/abs/2304.01150v2
# Algebraic and Geometric Models for Space Networking

###### Abstract

In this paper we introduce some new algebraic and geometric perspectives on networked space communications. Our main contribution is a novel definition of a time-varying graph (TVG), defined in terms of a matrix with values in subsets of the real line \(\mathcal{P}(\mathbb{R})\). We leverage semi-ring properties of \(\mathcal{P}(\mathbb{R})\) to model multi-hop communication in a TVG using matrix multiplication and a truncated Kleene star. This leads to novel statistics on the communication capacity of TVGs called lifetime curves, which we generate for large samples of randomly chosen STARLINK satellites, whose connectivity is modeled over day-long simulations. Determining when a large subsample of STARLINK is temporally strongly connected is further analyzed using novel metrics introduced here that are inspired by topological data analysis (TDA). To better model networking scenarios between the Earth and Mars, we introduce various semi-rings capable of modeling propagation delay as well as protocols common to Delay Tolerant Networking (DTN), such as store-and-forward. Finally, we illustrate the applicability of zigzag persistence for featurizing different space networks and demonstrate the efficacy of K-Nearest Neighbors (KNN) classification for distinguishing Earth-Mars and Earth-Moon satellite systems using time-varying topology alone.

###### Contents

* 1 Introduction and Outline
* 1.1 Outline for the Applications-First Reader
* 1.2 Outline for the Applied Topology Reader
* 1.3 Outline for the Applied Algebra Reader
* 2 Algebraic Models for Time-Varying Graphs (TVGs)
* 2.1 Semi-Ring and Matrix Perspectives on Graphs and TVGs
* 2.2 Measuring Communication Capacity of STARLINK via the Kleene Star
* 2.3 Non-Convergence of the Kleene Star: Semi-Rings for Propagation Delay
* 2.4 Review of Prior Semi-Rings for Graph Optimization Problems
* 2.4.1 Tropical/Min-Plus Semi-Ring
* 2.4.2 Tropical Endomorphism Semi-Ring for Time-Varying Networks
* 2.5 The Universal Contact Semi-Ring (UCS) for Time-Variate Routing
* 3 Geometric and Topological Models for Time-Varying Graphs
* 3.1 Distances on Time-Varying Graphs
* 3.1.1 Distances with Fixed Node Correspondence
* 3.1.2 Distances with Unknown Node Correspondence
* 3.1.3 Connections with the Interleaving Distance via Summary Cosheaves
* 3.2 Topological Summaries of TVGs via Barcodes for Machine Learning
* 3.2.1 KNN on Earth-Moon vs. Earth-Mars Satellite Systems
* 4 Future Directions
* A Definitions and Technical Results on Semi-Rings
* B Basic Category Theory
* C Barcodes and Distances on These
* D Proof of Isometry Theorems

**Funding Acknowledgement:** This work is a product of NASA Contract 80GRC020C0016.

## 1 Introduction and Outline

As humanity embarks on its next steps in space exploration, with international and commercial actors providing a huge influx of new space assets, the need to automate and scale space communications has become increasingly pressing. Current communication in space--between, say, a rover on Mars and a particular building on Earth--is handled by teams of engineers manually scheduling which point-to-point links are to be used at what times to provide end-to-end transmission of data. Once a message reaches a ground station on Earth, traditional terrestrial networking theory, such as TCP/IP, takes over; this theory relies heavily on low latency and a largely static network architecture. However, neither of these requirements holds in space.
Special relativity dictates that all communication is constrained by the speed of light, which means the fastest possible one-way communication between Earth and Mars takes between 3 and 22 minutes, depending on their relative positions in orbit. Networking topology can also evolve rapidly, as illustrated by satellites in low-Earth orbit (LEO), whose orbital periods are typically 90 minutes and where a random pair of satellites may have a line-of-sight connection for only a few minutes. In all these scenarios, occlusions typically force a break in point-to-point communication, which is handled by either local storage or the use of an alternate route that circumvents the occlusion. Motivated by these problems, we introduce here a novel set of tools for modeling time-varying networks that appear in actual space networking situations. Our main contribution is a clear definition of a time-varying graph (TVG), which we view as a matrix whose entries are populated by subsets of time. We exploit the fact that subsets of time, viewed as elements of \(\mathcal{P}(\mathbb{R})\), have a natural notion of addition and multiplication given by union and intersection, thereby making \(\mathcal{P}(\mathbb{R})\) into a semi-ring; see Definition 2.6. With this observation in hand, we model end-to-end communication capacity more accurately by considering the Kleene star (Definition 2.20) of our TVG adjacency matrix: \[A^{*}:=I+A+A^{2}+\cdots+A^{k}+\cdots\] Since addition in the above equation represents entry-wise union of times of connectivity, the partial series sum \(C_{k}(A):=I+\cdots+A^{k}\) provides an interesting filtration parameter that accumulates windows of opportunity along walks of length \(k\) or less. By considering the average measure of each entry \(C_{k}(A)_{i,j}\), measured over some fixed window of time \(W\subset\mathbb{R}\), we obtain a novel statistic that we call the _lifetime curve_ in Lemma 2.22, which we use as a proxy to measure how close a TVG is to a strongly connected one1. Footnote 1: An ordinary directed graph is strongly connected if you can go from any node to any other node. A TVG is then strongly connected if you can go from any node to any other node at any time. We implement these ideas in code--available at github.com/TheaMAS/sat-parser--and simulate various networking scenarios with the help of the Satellite Orbital Analysis Program (SOAP), which is a tool for accurately simulating orbital mechanics and calculating windows of opportunity for line-of-sight communication2. By using the large database of STARLINK satellites available on celestrak.org, we simulate random LEO networks by taking samples of different sizes from the STARLINK network. As illustrated in Figure 5, these lifetime curves can have radically different shapes, but they seem to undergo a clear phase transition when the number of nodes exceeds \(n=40\), suggesting that more than 40 nodes are needed to ensure strong connectivity in a LEO network. The shapes of these curves are currently not well understood and motivate further mathematical research into their structure, cf. Conjecture 2.27. Footnote 2: For our analysis, all possible line-of-sight opportunities are considered simultaneously valid. In practice, a given node might be able to only establish one link at a time, meaning choosing one contact necessarily precludes others. This, among other considerations, should lead to interesting and well-defined future research topics.
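Before the reader-specific outlines, here is a minimal Python sketch of the \(\mathcal{P}(\mathbb{R})\) semi-ring machinery just described (our own illustration, independent of the sat-parser code linked above): lifetimes are stored as sorted lists of disjoint intervals, matrix addition is entry-wise union, matrix multiplication intersects lifetimes along walks, and the partial sums \(C_{k}(A)\) accumulate multi-hop windows of opportunity. As in Footnote 2, this models walks whose links are simultaneously available; propagation delay and store-and-forward are deferred to the semi-rings of Sections 2.3 and 2.5.

```python
from itertools import chain

# A lifetime is a sorted list of disjoint closed intervals [(a, b), ...]
def union(u, v):  # semi-ring addition in P(R)
    out = []
    for a, b in sorted(chain(u, v)):
        if out and a <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

def intersect(u, v):  # semi-ring multiplication in P(R)
    return [(max(a, c), min(b, d)) for a, b in u for c, d in v
            if max(a, c) <= min(b, d)]

def mat_add(M, N):
    n = len(M)
    return [[union(M[i][j], N[i][j]) for j in range(n)] for i in range(n)]

def mat_mul(M, N):
    n = len(M)
    P = [[[] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                P[i][j] = union(P[i][j], intersect(M[i][k], N[k][j]))
    return P

def kleene_partial(A, k):
    # C_k(A) = I + A + ... + A^k, where I carries all of R on the diagonal
    n = len(A)
    I = [[[(float("-inf"), float("inf"))] if i == j else []
          for j in range(n)] for i in range(n)]
    C, P = I, I
    for _ in range(k):
        P = mat_mul(P, A)
        C = mat_add(C, P)
    return C

# Toy 3-node TVG: contact 0 -> 1 during [0, 2] and contact 1 -> 2 during [1, 3]
A = [[[], [(0, 2)], []], [[], [], [(1, 3)]], [[], [], []]]
print(kleene_partial(A, 2)[0][2])  # [(1, 2)]: when the 2-hop relay 0->1->2 is open
```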
### Outline for the Applications-First Reader

For the reader who is primarily interested in the immediate application of our methods to space networking, and in particular how to measure the difference between near-Earth and deep space communication networks, we advise that they proceed directly from the beginning of Section 2 (including Sections 2.1 and 2.2) to the start of Section 3. There we continue the question of how close a given TVG is to a strongly connected one, by introducing a bona fide distance on TVGs that is inspired by topological data analysis (TDA). Here we leverage the perspective that the times when an edge exists (or does not exist) can be represented as a collection of intervals, assuming the TVG is not too pathological in its connectivity. Such intervals can be represented using a persistence diagram, and there are many well-defined distances on persistence diagrams. To emphasize that the distances in Sections 3.1.1 and 3.1.2 are not being used on the output of a traditional persistent homology pipeline--because no homology is being taken--we call these distances the _disconnect distances_ in Definition 3.5. We then use these distances to measure more carefully the connectivity properties of sub-samples of STARLINK, shown in Figure 15, which suggests that closer to 100 nodes are needed to establish strong connectivity. Moving beyond STARLINK, we illustrate how to compare Earth-Moon and Earth-Mars systems using zigzag persistence [13, 14], which is another tool borrowed from TDA that summarizes how network topology varies over time, and is agnostic to node labels. We show how, when our TVGs are featurized using \(H_{1}\) zigzag persistence barcodes, a K-Nearest Neighbor (KNN) classifier can be used to automatically distinguish types of space networks. As outlined in Section 4, we take this as the first step in creating an automatic recommendation and segmentation protocol for space internet that uses machine learning.

### Outline for the Applied Topology Reader

For the reader who is interested in how our work expands the theory of TDA, we recommend proceeding directly from Section 2.2 and reading the entirety of Section 3. Our definition of TVGs leads naturally to a summary cosheaf (Definition 3.13) that refines the Reeb cosheaf of [1] by tracking the entire graph structure and not just the connected components. However, similar to [1], we adapt the interleaving distance construction to this setting and prove some novel isometry results (Theorem 3.16) that are very close in spirit to the results on merge trees proved in [1]. Our summary cosheaf also provides a novel pipeline for proving stability of the zigzag barcodes under the aforementioned metrics, see Proposition 3.21.

**Prior Work in TDA on Time-Varying Graphs.** Zigzag persistence [13, 14], which is a common tool from TDA, has been used to study time-evolving topology in dynamic networks for several years. At a high level, zigzag persistence reveals the formation, disappearance and duration of topological features of a space \(X\) that is parameterized by \(\mathbb{R}\), i.e. by using a function \(f:X\to\mathbb{R}\). To understand how the topology at one time \(t_{0}\) is related to the topology at another time \(t_{1}\), one typically considers the alternating inclusions \(f^{-1}(t_{0})\hookrightarrow f^{-1}[t_{0},t_{1}]\hookleftarrow f^{-1}(t_{1})\) and takes homology, which is what gives zigzag persistence its name.
In order to analyze temporal topological features via zigzag persistence, a collection of snapshots of a temporal network is considered together with their zigzag unions (or intersections) to form a sequence of simplicial complexes. Network snapshots can be obtained via a sliding window or temporal partitioning construction [15]. This approach is used in [11] to analyze the Great Britain transportation network, for example. A similar approach, which is used in [11] to analyze the social networks and cyber data of various communities, considers situations where the network snapshots are hypergraphs. In the context of space communication, the study [14] uses zigzag persistence on simulated space networks to identify and extract subnetwork structures in order to reduce the complexity of the network. A more theoretical approach to dynamic graphs, carried out by Kim and Mémoli [15], uses the Möbius inversion perspective on persistence to define a novel summary of time-evolving clusters called the _persistence clustergram_.

### Outline for the Applied Algebra Reader

Finally, for the reader most interested in the semi-ring aspects of our work, we encourage them to read Section 2 in its entirety. Even the more applied reader will benefit from seeing how our semi-ring model for TVGs allows us to model propagation delay in Section 2.3. However, the most theoretically-minded should turn their attention to Section 2.5, as this provides a sort of "universal" semi-ring that allows us to model all known aspects of graph optimization problems, including tropical geometry. From the applied perspective, this section is most significant for its ability to model store-and-forward routing behavior, which is one of the most popular protocols in delay tolerant networking.

**Prior Work on Algebraic Path Problems.** To our knowledge, the semi-ring models of time-varying graphs advanced here have not been considered before. In particular, we believe that the matrix TVG semi-ring, the propagation delay semi-ring, and the universal contact semi-ring are all new, with no clear presence in the literature. We say this with much trepidation, as the semi-ring perspective on networks has at least a 50-year history, with Carré's pioneering work [13] as one of the first major papers. This perspective has endured, with textbook-length treatments in [1, 2] and [15]. In all these works, one wants to view any network characterization or optimization problem as a matrix with entries in a carefully chosen semi-ring. Solutions to these characterization/optimization problems are usually calculated via the transitive closure of that matrix, i.e. the Kleene star. For instance, and this is expanded on in more detail in Section 2.4, the shortest path problem on a network can be formulated using the tropical semi-ring \(([0,\infty],\min,+)\). One compares the costs along all paths using the \(\min\) (addition) operation, while each path cost is the result of aggregating arc weights along the path using the \(+\) (multiplication) operation. The Kleene star for this matrix then consists of the lengths of pairwise shortest paths between nodes in networks, thus solving the all-pairs shortest-path problem with remarkable algebraic efficiency [12]. As Carré observed in [13], these matrix-theoretic methods provably generalize direct methods of solving routing problems such as the Bellman-Ford and Floyd-Warshall algorithms.
These observations were continued by [14], who showed that generalized path algebras can be used to obtain the well-known pathfinding algorithms of Moore, Ford, and Dijkstra; see [14, 15] for more on this. Surprisingly, our application of semi-rings to the internet-at-large is not new, as the works [12, 13, 14, 15, 16, 17] study internet routing protocols from this perspective. However, each of these works is concerned with so-called "closed networks," where the nodes and edges are fixed from the outset. More recent work, coming out of the Applied Category Theory (ACT) community, has developed a more flexible theory for networks that allows for node discovery and for routing between other, unknown networks; see the works of Jade Master [13, 14, 15], whose thesis [14] made major headway in the theory of these so-called "open networks." We anticipate that this theory will be important for future work on developing generalized, composable algorithms for routing in space.

## 2 Algebraic Models for Time-Varying Graphs (TVGs)

Time-varying graphs (TVGs) have been studied extensively by many different groups of people, but there is no single agreed-upon definition of what a TVG should be. One model of a TVG is that a time-varying graph is simply a graph sequence \(G_{0},\ldots,G_{n}\), but this perspective obscures relationships across time. Another model for a TVG (or temporal graph) [1], which addresses this criticism, argues that a TVG is a graph \(G=(V,E)\) equipped with a function \(\tau:E\to 2^{\mathbb{N}}\) that specifies the indices in a graph sequence at which an edge is alive. Both of these perspectives are inappropriate for the purposes of space networking, where we are primarily interested in when two assets (ground stations, rovers, satellites, etc.) have a clear line of sight for communication; see Figure 1.

Figure 1: A screenshot from the Satellite Orbital Analysis Program (SOAP) illustrates lines of sight between ground stations and satellites around Earth and Mars.

Due to orbital mechanics, each line of sight in a space network starts at a _sunrise_ time and ends with a _sunset_ time, which together mark the boundary of a single interval of connectivity. Since the metric properties of these intervals--_when_ and _for how long_?--are crucial for determining when and how much data can be routed across our network, we introduce a framework for talking about lifetimes of connections. It will be useful to organize the collection of connection lifetimes into a poset structure organized by inclusion:

**Definition 2.1** (Poset of Lifetimes).: _Let \(\mathcal{L}(\mathbb{R})\) be the **poset of lifetimes**. A non-trivial element \(a\in\mathcal{L}(\mathbb{R})\) is a finite union of disjoint closed intervals_ \[a=[x_{0},y_{0}]\cup\cdots\cup[x_{n},y_{n}],\] _where \(x_{0}\in\mathbb{R}\cup\{-\infty\}\) and \(y_{n}\in\mathbb{R}\cup\{+\infty\}\), with \(x_{i}\leq y_{i}\leq x_{i+1}\) for all \(i\). The partial order on lifetimes is given by inclusion of subsets, i.e. \(a\preceq b\) if \(a\subseteq b\)._

Our definition of a time-varying graph is a graph that is equipped with an order-reversing map to the poset of lifetimes \(\mathcal{L}(\mathbb{R})\). More precisely, we have the following definition.

**Definition 2.2** (Tvg).: _Let \(G\) be a graph, i.e. a set of edges \(E\) and a set of vertices \(V\) with an incidence relation \(v<e\) that indicates when a vertex \(v\) belongs to an edge \(e\)._
_A **time-varying graph (TVG)**\(\mathcal{G}=(G,\ell_{M})\) is a graph \(G\) along with an order-reversing **lifetime function**_ \[\ell_{M}:(G,<)^{op}\to\mathcal{L}(\mathbb{R}),\quad\text{i.e.}\quad\text{if}\quad v<e\quad\Rightarrow\quad\ell_{M}(e)\subseteq\ell_{M}(v).\] _The assumption that \(\ell_{M}(e)\subseteq\ell_{M}(v)\) is the **containment axiom**, as it requires that a vertex be alive whenever an edge is alive._

_Remark 2.3_.: We remark that any adjective for graphs descends to a modifier on time-varying graphs. For example, Definition 2.2 generalizes to hypergraphs as it only requires the notion of incidence (or containment) of vertices into hyperedges; see [11], which uses temporal attribution for hypergraph applications. Additionally, if \(G\) is a directed multigraph, then we can define \(v<e\) iff \(v\) is the head or tail of the edge \(e\); the directedness of an edge \(e\) that goes from \(i\) to \(j\) is then recorded separately, as in the matrix formulation below. Finally, recall that a (directed) graph is **simple** if there exists at most one (directed) edge between any two vertices. For simplicity, we will work with an alternative matrix formulation for TVGs, which can be viewed as the primary definition for this paper. Note that in this approach, the poset is specialized to a total order.

Figure 2: Sunrise and sunset times for line of sight communication in Figure 1 define intervals of connectivity, which is what our definition of a TVG emphasizes.

**Definition 2.4** (Lifetime Matrices and General Matrix TVGs).: _Every simple directed TVG \(\mathcal{G}=(G,\ell_{M})\) that is equipped with a total order on its vertex set \(V\) has a representative **matrix of lifetimes**:_ \[M:V\times V\to\mathcal{L}(\mathbb{R})\qquad(i,j)\mapsto M(i,j)=\ell_{M}(i,j)\subseteq\mathbb{R}.\] _The collection of all lifetime matrices, written \(\operatorname{Mat}_{n}(\mathcal{L}(\mathbb{R}))\), includes every possible matrix with entries in \(\mathcal{L}(\mathbb{R})\), without any containment axiom. We call a matrix whose entries are arbitrary subsets of \(\mathbb{R}\), i.e. \(M\in\operatorname{Mat}_{n}(\mathcal{P}(\mathbb{R}))\), a **matrix TVG**._

Definition 2.4 should remind the reader of the adjacency matrix in graph theory, which for a simple graph has \(0\)s along the diagonal and \(1\)s whenever an edge from \(i\) to \(j\) exists. One important difference is that we typically assume that a lifetime matrix \(M\) has \(\mathbb{R}\)s along the diagonal, which, by analogy, would say that every vertex has a self-loop sitting over it. This is a reasonable interpretation in the setting of message passing across a time-evolving graph, but has some drawbacks when trying to emulate (or generalize) more traditional constructions in graph theory. To allow ourselves to work with both conventions, we introduce the following definition.

**Definition 2.5** (Adjacency Matrix of a TVG).: _The **adjacency matrix** of a simple directed TVG \(\mathcal{G}=(G,\ell_{M})\), written \(A\), is identical to the associated matrix TVG \(M\) except that \(A_{ii}=\varnothing\)._

### 2.1 Semi-Ring and Matrix Perspectives on Graphs and TVGs

One of the important reasons for working with matrices is that certain operations--such as addition and multiplication--are available on matrices, which are not obvious for graphs. In this section we recall the interpretation of simple directed graphs as a matrix of Booleans and how the higher powers of the matrix of Booleans model walks of corresponding length.
This then clears the way for defining and interpreting these matrix operations for TVGs. At the heart of both of these perspectives is the observation that addition and multiplication are very general operations, formally unified in the language of semi-rings, which are rings with negatives removed.

**Definition 2.6**.: _A **semi-ring** \((S,\oplus,\otimes,\mathbf{n},\mathbf{e})\) consists of a set \(S\) with an addition operation \(\oplus\) and a multiplication operation \(\otimes\), along with neutral elements \(\mathbf{n}\) (the "0" element) and \(\mathbf{e}\) (the "1" element) for these operations. These operations must further satisfy the following four collections of identities:_

1. \((S,\oplus,\mathbf{n})\) _is a commutative monoid (see Definition 1.1) with identity element \(\mathbf{n}\). This means:_
 * \((a\oplus b)\oplus c=a\oplus(b\oplus c)\)
 * \(a\oplus b=b\oplus a\)
 * \(\mathbf{n}\oplus a=a=a\oplus\mathbf{n}\)
2. \((S,\otimes)\) _is a (not necessarily commutative) monoid with identity element \(\mathbf{e}\). This means:_
 * \((a\otimes b)\otimes c=a\otimes(b\otimes c)\)
 * \(\mathbf{e}\otimes a=a=a\otimes\mathbf{e}\)
3. _Multiplication distributes over addition on both the left and right:_
 * \(a\otimes(b\oplus c)=(a\otimes b)\oplus(a\otimes c)\)
 * \((a\oplus b)\otimes c=(a\otimes c)\oplus(b\otimes c)\)
4. _Multiplication by \(\mathbf{n}\) (the "\(0\)" element) annihilates \(S\):_
 * \(\mathbf{n}\otimes a=\mathbf{n}=a\otimes\mathbf{n}\)

_Remark 2.7_.: Since semi-rings are rings with negatives removed, some authors prefer to call semi-rings **rigs**--because they are "rings" with the "\(\mathbf{n}\)" (as in "negatives") removed.

The prototypical example of a semi-ring is the set of natural numbers \(\mathbb{N}=\{0,1,2,...\}\) with ordinary addition and multiplication. We will work primarily with the following semi-rings, which are pertinent to the study of TVGs.

**Definition 2.8** (Boolean Semi-Ring).: _Consider the set \(\operatorname{Bool}=\{\bot,\top\}\) with \(\mathbf{n}=\bot=0\) corresponding to FALSE and \(\mathbf{e}=\top=1\) corresponding to TRUE. If we define \(\oplus=\vee\) to be the logical OR operation and \(\otimes=\wedge\) to be the logical AND operation, then \((\operatorname{Bool},\vee,\wedge,\bot,\top)\) defines the **Boolean semi-ring**._

**Definition 2.9** (Path Semi-Ring).: _Suppose \(G=(V,E)\) is a simple directed graph. Let \(\mathsf{Path}(G)\) be the set of formal combinations of paths (or **walks**) in \(G\) of arbitrary length, i.e._

\[\mathsf{Path}(G)=\{\sum_{i}\gamma_{i}\mid\gamma_{i}=[v_{i_{0}},\ldots,v_{i_{n}}],\quad\text{where}\quad\forall\,i,j\,(v_{i_{j}},v_{i_{j+1}})\in E\}.\]

_If \(a=\sum_{i}\gamma_{i}\) and \(b=\sum_{j}\gamma_{j}\), then \(a+b\) is the Boolean sum (i.e. set-theoretic union) of the paths in \(a\) with the paths in \(b\). The additive neutral element \(\mathbf{n}\) is the empty collection of paths \(\varnothing\). For multiplication, we first define the concatenation of paths \(\gamma_{i}*\gamma_{j}\) to be the extension of the path \(\gamma_{i}\) by \(\gamma_{j}\) if the last vertex in \(\gamma_{i}\) is the first vertex in \(\gamma_{j}\), and \(\varnothing\) otherwise; notice this multiplication is not commutative._
_Extending linearly allows us to define_

\[a*b=(\sum_{i}\gamma_{i})*(\sum_{j}\gamma_{j})=\sum_{ij}\gamma_{i}*\gamma_{j},\quad\text{where}\quad\mathbf{e}=1=\sum_{v_{i}\in V}[v_{i}]\]

_is the multiplicative neutral element._

**Definition 2.10** (Lifetime and Powerset Semi-Ring).: _The poset of lifetimes \(\mathcal{L}(\mathbb{R})\) (Definition 2.1) is a commutative semi-ring. This is actually a sub-semi-ring of the bigger semi-ring consisting of the powerset of \(\mathbb{R}\), written \(\mathcal{P}(\mathbb{R})\). Addition and multiplication are defined as_

* \(a+b:=a\cup b\), _with neutral element \(\mathbf{n}=\varnothing\) being the empty set, and_
* \(a\cdot b:=a\cap b\), _with neutral element \(\mathbf{e}=\mathbb{R}\) being the whole set \(\mathbb{R}\)._

_More generally, the powerset \(\mathcal{P}(X)\) of any set \(X\) forms a commutative semi-ring with the same operations and analogous neutral elements._

_Remark 2.11_ (Idempotency and Partial Orders).: Semi-rings such as \(\mathsf{Bool}\) and \(\mathcal{P}(X)\) have the additional property of being **idempotent**, i.e. for all elements \(a\) we have \(a+a=a\). Idempotent semi-rings give rise to a natural partial order, where

\[a\preceq b\iff\exists c\text{ s.t. }a+c=b.\]

**Definition 2.12** (Function Semi-Ring).: _Given any set \(X\) and semi-ring \((S,\oplus,\otimes,\mathbf{n},\mathbf{e})\), the set of functions from \(X\) to \(S\), written \(\mathsf{Fun}(X,S)\), inherits the structure of a semi-ring. Given \(m,n:X\to S\),_

* _we have \(m\oplus n:X\to S\), where \((m\oplus n)(x):=m(x)\oplus n(x)\), and_
* \(m\otimes n:X\to S\), _where \((m\otimes n)(x):=m(x)\otimes n(x)\)._

_The "\(0\)" function \(0:X\to S\) with constant value \(\mathbf{n}\) and the "\(1\)" function \(1:X\to S\) with constant value \(\mathbf{e}\) are the corresponding neutral elements._

_Remark 2.13_ (Boolean Functions and Subsets).: There is a close relationship between the semi-ring of subsets \(\mathcal{P}(X)\) and the semi-ring of Boolean functions \(\mathsf{Fun}(X,\mathsf{Bool})\). In fact, there is a map \(\Phi\) that takes each subset \(A\subseteq X\) to the indicator function \(1_{A}:X\to\mathsf{Bool}\); this being the function that is \(\top\) on points in \(A\) and \(\bot\) on points in \(X-A\). This map also preserves addition and multiplication, i.e.

* \(\Phi(A\cup B)=1_{A\cup B}=1_{A}\oplus 1_{B}=\Phi(A)\oplus\Phi(B)\), and
* \(\Phi(A\cap B)=1_{A\cap B}=1_{A}\otimes 1_{B}=\Phi(A)\otimes\Phi(B)\).

This map \(\Phi\) also preserves the neutral elements of both semi-rings, i.e. \(\Phi(\varnothing)=0\) and \(\Phi(X)=1\). All of this makes \(\Phi\) a bijective **semi-ring homomorphism** (Definition A.2), i.e. a **semi-ring isomorphism**.

Finally, matrices valued in a semi-ring define another semi-ring.

**Definition 2.14** (Matrix Semi-Ring).: _If \((S,\oplus,\otimes,\mathbf{n},\mathbf{e})\) is a semi-ring, then the collection of \(n\times n\) matrices with entries in \(S\), written \(\mathsf{Mat}_{n}(S)\), is a semi-ring as well, where \(M+N\) is defined entry-wise as \((M+N)_{ij}:=M_{ij}\oplus N_{ij}\) and \((MN)_{ij}=\bigoplus_{k}M_{ik}\otimes N_{kj}\)._

Interpretations of Boolean matrix addition and multiplication in graph theory are classical and well-understood. If \(A\) and \(B\) are two \(n\times n\) Boolean matrices--equivalently viewed as graphs \(G\) and \(H\)--then \(A+B\) can be interpreted as the union of the edge sets of \(G\) and \(H\). The matrix product \(AB\) can be viewed as the matrix of length \(2\) walks, where the first edge is traversed in \(G\) and the second edge is traversed in \(H\). When \(A=B\), the matrix \(A^{2}\) has a \(\top\) in entry \((i,j)\) if and only if there is a length \(2\) walk from \(i\) to \(j\). However, if there are multiple length \(2\) walks between \(i\) and \(j\), the matrix \(A^{2}\) cannot detect that. For this purpose it is better to use adjacency matrices valued in the natural numbers \(\mathbb{N}\) or even \(\mathsf{Path}(G)\), as these will encode the number and exact names of routes between nodes, respectively; see Figure 3.

Figure 3: Simple directed graphs are equivalent to matrices valued in the Boolean semi-ring, Definition 2.8. Taking powers of this matrix reveals longer walks between nodes. The \(k\)-cumulant aggregates walks of length \(k\) or less. Using a matrix with the path semi-ring, Definition 2.9, encodes different paths between the same pair of nodes.
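The Boolean story is easy to make executable. The following sketch (ours, on a hypothetical 4-node directed cycle) instantiates the matrix semi-ring of Definition 2.14 over \((\mathsf{Bool},\vee,\wedge)\) and uses matrix powers to detect walks.

```python
def bool_matmul(A, B):
    """Matrix product over the Boolean semi-ring: OR plays addition and AND
    plays multiplication, so (AB)[i][j] is True iff some k gives an edge
    i -> k in A and an edge k -> j in B."""
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0.
A = [[False, True,  False, False],
     [False, False, True,  False],
     [False, False, False, True ],
     [True,  False, False, False]]

A2 = bool_matmul(A, A)        # length-2 walks
A3 = bool_matmul(A2, A)       # length-3 walks
assert A2[0][2] and A3[0][3]  # 0 -> 1 -> 2 and 0 -> 1 -> 2 -> 3 exist
assert not A2[0][3]           # but there is no length-2 walk from 0 to 3
```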
The rest of this section is devoted to understanding the higher powers of a matrix TVG \(M\). As we will see, the matrix \(M^{k}\) will encode the intervals of time in which an instantaneous length \(k\) walk exists between two nodes. This will require proving an isomorphism between matrix TVGs and functions from \(\mathbb{R}\) to \(\mathsf{Mat}_{n}(\mathsf{Bool})\). This in turn depends on the following "snapshot" construction.

**Definition 2.15** (Snapshot of a matrix TVG, cf. [14]).: _Let \(tI\) be the matrix with \((tI)_{ii}=\{t\}\) along the diagonal and \((tI)_{ij}=\varnothing\) for off-diagonal entries. If \(M\in\mathsf{Mat}_{n}(\mathcal{P}(\mathbb{R}))\) is a matrix TVG, then the **snapshot** of \(M\) **at** \(t\) is the matrix \((tIM)\), viewed as a matrix of Booleans \(\mathcal{S}_{t}(M)\), i.e._

\[\mathcal{S}_{t}(M)_{ij}=\top\iff(tIM)_{ij}=\{t\}\quad\text{and}\quad\mathcal{S}_{t}(M)_{ij}=\bot\iff(tIM)_{ij}=\varnothing.\]

_This defines a semi-ring homomorphism \(\mathcal{S}_{t}:\mathsf{Mat}_{n}(\mathcal{P}(\mathbb{R}))\to\mathsf{Mat}_{n}(\mathsf{Bool})\)._

The following theorem is fundamental to our paper. Its proof is somewhat lengthy and deferred to the Appendix, where it appears under Theorem A.4.

**Theorem 2.16**.: _The semi-ring of matrix TVGs \(\mathsf{Mat}_{n}(\mathcal{P}(\mathbb{R}))\) is isomorphic to the semi-ring of functions from \(\mathbb{R}\) to \(\mathsf{Mat}_{n}(\mathsf{Bool})\). This isomorphism is witnessed by the homomorphism_

\[\Psi:\mathsf{Mat}_{n}(\mathcal{P}(\mathbb{R}))\to\mathsf{Fun}(\mathbb{R},\mathsf{Mat}_{n}(\mathsf{Bool})),\quad\text{where}\quad\Psi(M)(t)=\mathcal{S}_{t}(M)\]

_is the snapshot of \(M\) at time \(t\)._

Although Theorem 2.16 allows us to interpret a matrix TVG as an \(\mathbb{R}\)-indexed family of Boolean matrices, for computational purposes it is better to work intrinsically with matrices of lifetimes, where each entry is a finite union of closed intervals. To wit, if \(M\) is a matrix TVG with at most \(L\) closed intervals in each entry, then finding \(M_{ij}\cap M_{jk}\) requires at most \(L^{2}\) intersection operations, but working with the Boolean perspective, i.e. \((m_{ij}\wedge m_{jk})(t)\), requires checking every \(t\in\mathbb{R}\) to see if the resulting matrix entry evaluates true at \(t\)--a computational impossibility. As such, we finish this section with a few more TVG-centric constructions and conclude with a corollary that interprets these results through the Boolean lens of Theorem 2.16.
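The interval-based computation just described is straightforward to sketch in code. Below is a minimal illustration (ours, with a hypothetical 3-node contact plan), where each matrix entry is a finite list of disjoint closed intervals and the semi-ring product of Definition 2.14 is instantiated with union as addition and intersection as multiplication.

```python
def intersect(a, b):
    """Intersection of two finite unions of closed intervals."""
    out = [(max(x0, x1), min(y0, y1)) for x0, y0 in a for x1, y1 in b]
    return [(lo, hi) for lo, hi in out if lo <= hi]

def union(a, b):
    """Union, merged back into disjoint closed intervals."""
    merged = []
    for x, y in sorted(a + b):
        if merged and x <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], y))
        else:
            merged.append((x, y))
    return merged

def tvg_matmul(M, N):
    """(MN)_ij = Union_k (M_ik intersect N_kj), i.e. Definition 2.14 over the
    lifetime semi-ring; with M = N this realizes Corollary 2.19."""
    n = len(M)
    P = [[[] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = []
            for k in range(n):
                acc = union(acc, intersect(M[i][k], N[k][j]))
            P[i][j] = acc
    return P

R = [(float("-inf"), float("inf"))]   # a vertex alive forever
E = []                                # no contact at all
M = [[R, [(0, 10)], E],               # hypothetical plan: edge 0 -> 1 on [0,10],
     [E, R, [(5, 20)]],               # edge 1 -> 2 on [5,20], diagonal = R
     [E, E, R]]
M2 = tvg_matmul(M, M)
print(M2[0][2])   # [(5, 10)]: a length-2 walk 0 -> 1 -> 2 exists only on [5,10]
```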
**Definition 2.17** (Lifetime of a Walk).: _Assume \(\mathcal{G}=(G,\ell_{M})\) is a TVG, as in Definition 2.2, and \(\gamma=[v_{i_{0}},\ldots,v_{i_{k}}]\) is a list of \(k+1\) nodes defining a \(k\)-walk (length \(k\) path) in \(G\). The **walk lifetime** is the intersection of the lifetimes of each edge appearing in the walk, i.e._

\[\ell_{M}(\gamma)=\ell_{M}([v_{i_{0}},v_{i_{1}}])\cdot\ell_{M}([v_{i_{1}},v_{i_{2}}])\cdots\ell_{M}([v_{i_{k-1}},v_{i_{k}}]).\]

_For a walk of length 0, i.e. \(\gamma=[v_{i}]\), we set \(\ell_{M}([v_{i}])=\ell_{M}(v_{i})\), the lifetime of that single vertex._

Definition 2.17 connects Definition 2.9 with Definition 2.10 as follows:

**Lemma 2.18**.: _Assume \(\mathcal{G}=(G,\ell_{M})\) is a TVG, but where \(\ell_{M}(v_{i})=\mathbb{R}\) for every vertex \(v_{i}\). Then the lifetime function \(\ell_{M}\) extends to a semi-ring homomorphism (Definition A.2)_

\[\ell_{M}:\mathsf{Path}(G)\to\mathcal{L}(\mathbb{R}),\quad\text{where}\quad\sum_{i}\gamma_{i}\mapsto\bigcup_{i}\ell_{M}(\gamma_{i}).\]

Proof.: The assumption that each vertex lives forever, i.e. \(\ell_{M}(v_{i})=\mathbb{R}\), guarantees that \(\ell_{M}(1)=\ell_{M}(\sum[v_{i}])=\mathbb{R}\). The additive and multiplicative properties then hold by definition.

**Corollary 2.19** (Higher Powers of a Matrix TVG).: _If \(M\in\mathsf{Mat}_{n}(\mathcal{P}(\mathbb{R}))\) is a matrix TVG, then \(M^{k}_{ij}\) is the union of lifetimes over which there exists a length \(k\) walk from node \(i\) to node \(j\)._

Figure 4: A time-varying graph can be viewed as a matrix valued in the lifetime semi-ring, Definition 2.10. Higher powers of this matrix model times during which longer walks can occur. The \(k\)-cumulant aggregates windows of opportunity over which walks of length \(k\) or less can occur. Notice that the \(k\)-cumulant, for increasing \(k\), defines a filtration of lifetimes over each edge.

### Measuring Communication Capacity of STARLINK via the Kleene Star

As Theorem 2.16 and Corollary 2.19 indicate, higher powers of a matrix TVG model when in time instantaneous walks can occur between nodes. In this section we show how the Kleene star and its truncation, called the cumulant, lead to a particular growth filtration of the temporal capacity for communicating between two nodes in a TVG. This provides a summary statistic for measuring the proximity of a TVG to a strongly connected one, which we illustrate using increasing-size samples from STARLINK--a SpaceX-operated internet service that features over 3,000 satellites in low Earth orbit. Our findings show that temporal capacity exhibits a phase transition around 30 satellites, with larger subnetworks being strongly connected in a sense defined here.

**Definition 2.20** (Kleene Star and k-Cumulant).: _Let \(A\) be an arbitrary matrix TVG. The **Kleene star** of \(A\), written_

\[A^{*}:=I+A+A^{2}+A^{3}+\cdots,\]

_is the matrix whose \((i,j)\) entry is the subset of \(\mathbb{R}\) where any communication from node \(i\) to node \(j\) can occur, perhaps using a walk of arbitrary length. The **k-cumulant** is the sum of the first \(k+1\) terms in the Kleene star and is written \(C_{k}(A)\) or \(C_{k}\), when the matrix \(A\) is clear from context. Finally, we say the **Kleene star converges** at radius \(r\in\mathbb{N}\) if \(A^{*}=C_{r}(A)\)._
_See Figure 4 for an example TVG and its cumulants, whose Kleene star converges at \(r=3\)._

Figure 5: Samples of \(n\) nodes from STARLINK are simulated using SOAP for one day or 86,400 seconds. For each of these simulations, the average lifetime curve (defined in Lemma 2.22) is computed across all nodes \(i,j\) for each power \(k\) of the TVG matrix. There appears to be a jump in average connectivity above \(n=30\) nodes.

_Remark 2.21_.: The Kleene star construction helps illustrate why we separated out the notion of the lifetime matrix \(M\) (Definition 2.4) associated to a simple TVG \(\mathcal{G}=(G,\ell_{M})\) from its adjacency matrix (Definition 2.5). In particular, if \(\ell_{M}(v)=\mathbb{R}\) for every vertex \(v\), then \(M=I+A\) and we can use the idempotency of \(\mathcal{P}(\mathbb{R})\) (Remark 2.11) to show that

\[C_{k}(A)=I+A+\cdots+A^{k}=(I+A)^{k}=M^{k}.\]

To see this, note that \(A+A=A\) and

\[M^{2}=(I+A)^{2}=(I+A)(I+A)=I^{2}+IA+AI+A^{2}=I+A+A^{2}=C_{2}(A).\]

The general result then follows by induction. Since addition of matrix TVGs is the union of lifetimes, the \(k\)-cumulant defines over each edge a chain of nested subsets of \(\mathbb{R}\).

**Lemma 2.22** (Lifetime Curves).: _Suppose \(\mathcal{G}=(G,\ell_{M})\) is a simple TVG with \(\ell_{M}(v)=\mathbb{R}\), so that the corresponding matrix of lifetimes satisfies \(M=I+A\). For such a TVG we have a **lifetime filtration**, which associates to each edge \([i,j]\) of \(G\) the chain of subsets_

\[M^{0}_{ij}=\varnothing\subseteq M^{1}_{ij}\subseteq\cdots\subseteq M^{k}_{ij}\subseteq\cdots,\]

_whose \(k\)th entry is the union of lifetimes of walks of length \(k\) or less; see Definition 2.17. Moreover, since closed intervals in \(\mathbb{R}\) are Lebesgue measurable, we can associate to each edge \([i,j]\) of \(\mathcal{G}=(G,\ell_{M})\) its **lifetime curve**, which is the non-decreasing function_

\[L(M)(i,j):\mathbb{N}\to\mathbb{R},\quad\text{where}\quad L(M)(i,j)(k)=\mu(M^{k}_{ij})\]

_and \(\mu\) denotes Lebesgue measure, so that \(\mu(M^{k}_{ij})\) is the sum of the lengths of the intervals in entry \((i,j)\) of \(M^{k}\). This is finite, assuming all off-diagonal entries of \(M\) have finite measure. See Figure 5 for some examples._

The lifetime curves are meant to provide a summary statistic for measuring how close a TVG is to a strongly connected one, which we define now.

**Definition 2.23** (Strongly Connected TVGs).: _A matrix TVG \(M\in\operatorname{Mat}_{n}(\mathcal{P}(\mathbb{R}))\) is **strongly connected** if \(M^{*}=I+M+M^{2}+\cdots\) is equal to the constant matrix where every entry has value \(\mathbb{R}\). More generally, if \(W\subseteq\mathbb{R}\) is some subset of \(\mathbb{R}\), then \(M\in\operatorname{Mat}_{n}(\mathcal{P}(W))\) is **strongly connected over \(W\)** if \(M^{*}\) is the constant matrix with value \(W\)._

Footnote 3: Typically, we will assume \(W=[0,86400]\) for simulations that occur over one day, measured in seconds.

_Remark 2.24_ (Strong Connectivity for Directed Graphs).: As a reminder, a finite directed graph is strongly connected if one can go from any node to any other node using a directed path. This is equivalent to the Kleene star being the constant \(\top\) matrix, which justifies our definition above.
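Strong connectivity over a finite window \(W\) is easy to test numerically by discretizing the snapshot perspective of Theorem 2.16. The sketch below (ours; the contact data is hypothetical and randomly generated) stores a matrix TVG as a Boolean array indexed by \((i,j,t)\) and iterates the cumulants \(M,M^{2},M^{3},\ldots\) until they stabilize, then checks Definition 2.23.

```python
import numpy as np

def tvg_matmul(M, N):
    """(MN)[i,j,t] = OR_k (M[i,k,t] AND N[k,j,t]); each time slice t is the
    Boolean matrix product of the two snapshots, per Theorem 2.16."""
    return np.einsum("ikt,kjt->ijt", M.astype(np.int64), N.astype(np.int64)) > 0

n, T = 4, 1440                              # 4 nodes, one day at 60 s resolution
M = np.zeros((n, n, T), dtype=bool)
M[np.arange(n), np.arange(n), :] = True     # vertices alive forever, so M = I + A
rng = np.random.default_rng(0)
for i in range(n):                          # hypothetical random contact windows
    for j in range(n):
        if i != j and rng.random() < 0.5:
            a = int(rng.integers(0, T - 100))
            M[i, j, a:a + 100] = True

P = M.copy()
while True:                                 # stabilizes by Theorem 2.25
    Q = tvg_matmul(P, M)                    # M^{k+1} = C_{k+1}(A) by Remark 2.21
    if np.array_equal(P, Q):
        break
    P = Q
print("strongly connected over W:", bool(P.all()))   # Definition 2.23, discretized
```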
As Figure 5 indicates, the lifetime curves of a satellite system sampled from STARLINK can exhibit radically different behaviors depending on the number of nodes in the system. The systems with 40 or more nodes exhibit almost exponential growth in the average lifetime (averaged over all pairs of nodes) as a function of the walk-length \(k\), whereas systems with 30 or fewer nodes have growth more akin to a square root function. However, all curves reach a horizontal asymptote, as the Kleene star converges for finite values of \(r\). We review this result in the most general setting, which was first established by Carre in [1, Thm. 3.1] and then re-formulated in Baras [1, p. 19], before providing our own improvement on this result for the semi-ring \(\mathcal{P}(\mathbb{R})\).

**Theorem 2.25** (Kleene Star Convergence, cf. [11] and [15]).: _Suppose \(G=(V,E)\) is a simple directed graph that is **weighted** in a semi-ring \((S,\oplus,\otimes,\mathbf{n},\mathbf{e})\), i.e. a map \(w:V\times V\to S\) is given. If for every cycle, i.e. a path \(\gamma=[v_{i_{0}},\ldots,v_{i_{n}}]\) where \(v_{i_{0}}=v_{i_{n}}\), the weight_

\[w(\gamma)=w(v_{i_{0}},v_{i_{1}})\otimes\cdots\otimes w(v_{i_{n-1}},v_{i_{n}})\quad\text{satisfies}\quad w(\gamma)\oplus\mathbf{e}=\mathbf{e},\]

_then the Kleene star of the weighted adjacency matrix \(A_{ij}=w(i,j)\) converges at some radius \(r\leq|V|-1\)._

The hypotheses of Theorem 2.25 hold for the lifetime and powerset semi-rings, as every cycle has a lifetime \(w(\gamma)\) such that \(w(\gamma)\cup\mathbb{R}=\mathbb{R}\). This means that the lifetime curves of Figure 5 will provably stabilize at \(k=n-1\). Since some of our simulations take up to \(n=100\) nodes in a single simulation, it is possible that the \(k\)-cumulant will need to be computed up to and including \(r=99\). However, the next proposition, whose proof is deferred to the appendix under Proposition A.5, and our simulation results indicate that the convergence radius of the Kleene star for STARLINK simulations is typically much smaller.

**Proposition 2.26** (Convergence at the Temporal Diameter).: _Let \(A\) be the adjacency matrix for a simple TVG \(\mathcal{G}=(G,\ell_{M})\). The \((i,j)^{th}\) entry of the \(k\)-cumulant \(C_{k}(A)\) stabilizes after \(d_{ij}\), where_

\[d_{ij}=\max_{t\,\mid\,\exists\,\gamma:i\rightsquigarrow j\text{ at }t}\ \min_{\gamma}|\gamma|\]

_is the length of the longest shortest path from \(i\) to \(j\), disregarding times where no such path exists. We set \(d_{ii}=0\), by convention. We define the **temporal diameter** of a TVG \(\mathcal{G}\) to be_

\[\operatorname{diam}(\mathcal{G})=\max_{ij}d_{ij}.\]

_Consequently, the Kleene star \(A^{*}\) converges for \(r\geq\operatorname{diam}(\mathcal{G})\)._

The upshot of Proposition 2.26 is that one need only understand the maximum diameter of snapshots of \(M\), measured across all possible values of \(t\in\mathbb{R}\). Based on the simulation described in Figure 6, the temporal diameter for an 80 node STARLINK simulation is actually \(15\).

Figure 6: A histogram of diameters from slices of a one-day STARLINK simulation with 80 nodes. The maximum observed diameter was \(15\), but the typical diameter was around \(9\).

However, further simulation indicates that for TVGs coming from STARLINK, the convergence radius for the Kleene star is actually much smaller than \(r=15\), which would be the prediction of Proposition 2.26 and Figure 6. In fact, Figure 7 indicates that for a 10 node system, the Kleene star converges at \(r=5\). For more nodes, e.g. \(n=100\), this convergence radius is even smaller and is conjecturally \(r=3\) for STARLINK systems with 100 or more nodes.
This conjecture is based on purely geometric considerations, depicted in Figure 9, which appear obvious but require careful proof. Part of a putative proof should include the observation that the conjecture essentially asserts that the Vietoris-Rips complex of \(n\) nodes sampled uniformly from the unit sphere with connectivity radius \(r>\pi/3\) should have a \(1\)-skeleton with diameter tending to \(3\) as \(n\to\infty\); however, proving that such connections persist across time is a central difficulty in this problem.

**Conjecture 2.27** (Convergence Radius for STARLINK TVGs).: _With high probability as \(n\to\infty\) a randomly sampled sub-TVG of STARLINK will have a Kleene star that converges at \(r=3\)._

Figure 7: 30 simulations of \(n=10\) (left) and \(n=100\) (right) randomly sampled nodes from STARLINK over the course of one day, i.e. 86400 seconds. Confidence intervals are shown below.

### Non-Convergence of the Kleene Star: Semi-Rings for Propagation Delay

In all of the previous sections, we have focused on TVGs coming from space networking scenarios, such as STARLINK, where light travel time is negligible. In what follows we are going to investigate satellite systems with assets around the Earth's moon (Luna) and Mars. In both of these settings, line-of-sight communication may require significant time due to propagation delay, even though messages are transmitted at the speed of light. As a reminder, one-way light travel to the Moon takes about 1.2 seconds; the resulting round trip time (RTT) begins to preclude standard or traditional feedback mechanisms for reliability in communications, though reactivity is still possible. Messages sent to Mars can take anywhere between 3 and 22 minutes, depending on the relative location of Earth and Mars in their respective orbits. When the RTT is measured in minutes, reliability in communications tends to be based on being proactive rather than reactive. All of this necessitates a new semi-ring capable of modelling propagation delay.

In this section we introduce the Propagation Delay semi-ring, which is a combination of the Lifetime semi-ring of Definition 2.10 with a delay parameter. For this semi-ring Theorem 2.16 fails to hold, which further illustrates why considering time-varying graphs as an \(\mathbb{R}\)-indexed family of simple directed graphs (or a graph sequence, for that matter) fails to capture systems with non-trivial propagation delay. Moreover, the classical Kleene star convergence result of Carre (Theorem 2.25) does not apply to the Propagation Delay Semi-Ring. This is illustrated experimentally using a \(17\)-node simulation of an Earth-Mars-Moon system, where a ping originating at one node "echoes" ad infinitum throughout the network. This lack of convergence of the Kleene star may be interpreted as an algebraic characterization of the difficulty of deep space routing and communication.

Figure 8: Average lifetime curves across 30 simulations for node systems ranging from \(n=5\) to \(n=40\), in increments of size five. Node systems larger than \(n=30\) appear to be strongly connected over a one-day simulation, i.e. the Kleene star appears to converge to a constant matrix with value 86400—the number of seconds in a day.

The starting point for defining the Propagation Delay Semi-Ring is the observation that the collection of additive endomorphisms of a semi-ring \(S\) forms a semi-ring as well; see Lemma A.3.
If one--necessarily--takes a relativistic perspective on space communication, then a message transmitted by one observer during an interval of time \(I\subseteq\mathbb{R}\) may only be received later, during the time interval

\[\varphi^{\epsilon}(I):=I^{\epsilon}=\{x+\epsilon\mid x\in I\}.\]

Composing such delay operators leads to another semi-ring structure, which we now specify.

**Definition 2.28** (Propagation Delay Semi-Ring).: _Consider the set \(\mathcal{L}(\mathbb{R})\times[0,\infty)\) of lifetimes (Definition 2.1) along with possible delays. This set can be equipped with addition and multiplication operations, as follows:_

* \((I,s)+(J,t):=(I\cup J,\max\{s,t\})\), _and_
* \((I,s)\otimes(J,t):=(I\cap(J-s),s+t)\)._

_The neutral elements are \(\mathbf{n}=(\varnothing,0)\) and \(\mathbf{e}=(\mathbb{R},0)\), respectively._

Figure 9: Starlink satellites typically orbit the Earth at an altitude of \(\sim 340\) miles. Using line of sight considerations, each satellite should have approximately a \(73^{\circ}\) view angle, which implies that any two antipodal satellites can be connected by a walk of length at most 3, assuming a sufficient number of satellites.

_Remark 2.29_ (Three Semi-Rings and Their Relationships).: The lifetime semi-ring (Definition 2.10) injects into the propagation delay semi-ring, which in turn injects into the endomorphism semi-ring of the lifetime semi-ring \(\operatorname{End}(\mathcal{L}(\mathbb{R}))\), defined generally in Lemma A.3. To see this last injection, note that for each \(I\in\mathcal{L}(\mathbb{R})\) and delay \(s\geq 0\) we consider an "intersect and shift" endomorphism \(\varphi_{I}^{s}\colon\mathcal{L}(\mathbb{R})\to\mathcal{L}(\mathbb{R})\), defined as

\[\varphi_{I}^{s}(K)=\{x+s\mid x\in I\cap K\}.\]

Under this assignment the neutral element \(\mathbf{n}=(\varnothing,0)\) is sent to the "everything maps to \(\varnothing\)" operator and \(\mathbf{e}=(\mathbb{R},0)\) is sent to the identity transformation.

_Example 2.30_ (4 Node Cartoon Example).: In Figure 10 we consider an augmentation of a TVG with delays. The path \(A\to B\to D\) leads to a composite

\[([0,10],1)\otimes([9,15],3)=([0,10]\cap[9-1,15-1],1+3)=([8,10],4).\]

The path \(A\to C\to D\) leads to a composite

\[([0,10],3)\otimes([9,10],2)=([0,10]\cap[9-3,10-3],3+2)=([6,7],5).\]

The entry of the \(2\)-walk matrix corresponding to \(A\to D\) communications is \(A_{14}^{2}\), whose value is then

\[([8,10],4)+([6,7],5)=([6,7]\cup[8,10],5).\]

As this example shows, the addition operation assumes a worst-case scenario where--perhaps--a message is fragmented and sent along the two different routes, and an operator at node D needs to wait until both messages arrive to re-assemble/decode the message. This makes implicit assumptions on the storage capacity at the node D, which we assume is arbitrarily large.

Figure 10: Time-varying network of 4 nodes: each edge represents a connection, decorated with its available time and delay time.
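The arithmetic in Example 2.30 is mechanical enough to check in a few lines of code. Here is a minimal sketch (ours) of the two operations of Definition 2.28, with lifetimes represented as lists of closed intervals, reproducing the \(A\to D\) computation above.

```python
def intersect(a, b):
    out = [(max(x0, x1), min(y0, y1)) for x0, y0 in a for x1, y1 in b]
    return [(lo, hi) for lo, hi in out if lo <= hi]

def union(a, b):
    merged = []
    for x, y in sorted(a + b):
        if merged and x <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], y))
        else:
            merged.append((x, y))
    return merged

def add(p, q):
    """(I,s) + (J,t) = (I union J, max(s,t)), per Definition 2.28."""
    (I, s), (J, t) = p, q
    return (union(I, J), max(s, t))

def mul(p, q):
    """(I,s) x (J,t) = (I intersect (J - s), s + t), per Definition 2.28."""
    (I, s), (J, t) = p, q
    J_shift = [(x - s, y - s) for x, y in J]
    return (intersect(I, J_shift), s + t)

ABD = mul(([(0, 10)], 1), ([(9, 15)], 3))   # ([(8, 10)], 4)
ACD = mul(([(0, 10)], 3), ([(9, 10)], 2))   # ([(6, 7)], 5)
print(add(ABD, ACD))                        # ([(6, 7), (8, 10)], 5)
```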
Finally, we illustrate a simulated example using 14 assets:

* One ground station in Sydney, Australia, which is the source of the "ping" at \(t=300\).
* Four STARLINK satellites, which are out of view when the ping is first transmitted.
* Five (IOAG) Lunar satellites receive Sydney's ping less than \(2\) seconds later.
* Four Martian satellites receive the message between 13 and 14 minutes later.

Whenever an asset receives the ping, it automatically repeats the ping to all connected assets, where it is received later according to the number of light seconds separating each asset. STARLINK satellites move in and out of view and are connected by different length walks as the simulation evolves. This sort of "bent pipe" communication is modelled via row-vector multiplication

\[v^{t}(I+A+A^{2}+A^{3}+\cdots+A^{8}+\cdots)\]

with the vector \(v\) having the singleton set \(\{300\}\) in the Sydney entry and \(\varnothing\) in every other entry. The vector of arrival times is depicted in Figure 11, color-coded by the length \(k\) of the walk traversed in the walk matrix \(A^{k}\).

Figure 11: A ping is emitted at time \(t=300\) from a ground station in Sydney.

### Review of Prior Semi-Rings for Graph Optimization Problems

In this section we review some prior work on semi-rings for solving the all-pairs shortest-path problem for a weighted graph and the shortest path problem with time-inhomogeneous edges, with and without capacity constraints.

#### 2.4.1 Tropical/Min-Plus Semi-Ring

As mentioned in Example 2.30, the propagation delay semi-ring uses a "max-plus" semi-ring in the second coordinate. This semi-ring is dual to a more popular semi-ring for graph optimization problems.

**Definition 2.31** (Tropical Semi-Ring).: _Consider the set \(\mathbb{T}=\mathbb{R}\cup\{\infty\}\). If we define \(a\oplus b=\min\{a,b\}\) and \(a\odot b=a+b\), then the set \(\mathbb{T}\) with \(\mathbf{n}=\infty\) and \(\mathbf{e}=0\) defines the **tropical** or **min-plus semi-ring**._

_Remark 2.32_ (All Pairs Shortest Path Problem).: If \(G\) is a simple directed graph with lengths assigned to each edge, then viewing this graph as weighted in \(\mathbb{T}\) and calculating the Kleene star of this weighted adjacency matrix solves the **all pairs shortest path problem**, i.e. \(A_{ij}^{*}\) will have the length of the shortest path from \(i\) to \(j\) in this entry. We note that if a cycle has negative total weight, then the hypotheses of Theorem 2.25 fail, as one can create an arbitrarily short (negative) length path by traversing this cycle repeatedly.
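As a quick illustration of Remark 2.32, the following sketch (ours, with a hypothetical weighted digraph) computes the Kleene star over the min-plus semi-ring by repeated matrix "multiplication," recovering all-pairs shortest path lengths.

```python
INF = float("inf")

def tropical_matmul(A, B):
    """(AB)[i][j] = min_k (A[i][k] + B[k][j]): min plays addition and + plays
    multiplication in the tropical semi-ring of Definition 2.31."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical 4-node weighted digraph; the zero diagonal is the unit e, so
# powers of this matrix are exactly the cumulants C_k.
M = [[0,   3,   INF, 7  ],
     [INF, 0,   1,   INF],
     [INF, INF, 0,   2  ],
     [1,   INF, INF, 0  ]]

star = M
while True:                   # converges by Theorem 2.25: no negative cycles here
    nxt = tropical_matmul(star, M)
    if nxt == star:
        break
    star = nxt
print(star[0][3])             # 6: the shortest route is 0 -> 1 -> 2 -> 3
```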
#### 2.4.2 Tropical Endomorphism Semi-Ring for Time-Varying Networks

As Remark 2.32 demonstrates, the shortest path problem can be solved by computing the Kleene star of the adjacency matrix weighted in the tropical semi-ring \(\mathbb{T}\). A model which generalizes this perspective, but also allows time-varying edge lengths, uses a certain sub-semi-ring of \(\operatorname{End}(\mathbb{T})\), which we now define.

**Definition 2.33** (Non-Decreasing Endomorphisms).: _Let \(\mathbb{W}\) be the set of all endomorphisms \(w:\mathbb{T}\to\mathbb{T}\) such that \(w\) is non-decreasing and \(\lim_{t\to\infty}w(t)=\infty\). We define the operations \(\oplus\) and \(\otimes\) on \(\mathbb{W}\) as follows: for all \(t\in\mathbb{T}\),_

* \((w\oplus v)(t)=\min\{w(t),v(t)\}\), _and_
* \((w\otimes v)(t)=w(v(t))\)._

_Remark 2.34_.: The intuition behind Definition 2.33 is that a function \(w_{ij}\in\mathbb{W}\) dictates when a message transmitted at time \(t\) from node \(i\) will arrive at node \(j\)--this is the time \(w_{ij}(t)\). Computing the Kleene star of such a matrix will also solve the earliest-time-of-arrival problem for a traffic network, where the density of traffic can vary over time. The non-decreasing condition in Definition 2.33 is also clear in the traffic example, as departing later in the day will never result in an earlier time of arrival.

_Remark 2.35_ (Connections to Delay Tolerant Networking).: [10, p. 33] observes that Definition 2.33 can also be used to model opportunistic networks and delay tolerant networking, which is the current paradigm for space networking. Indeed, one can embed our lifetime semi-ring into this model as well, but doing so sacrifices the ease of use and the connections to topological data analysis described in Section 3. Finally, we remark that capacity constraints can be added to Definition 2.33 without issue.

_Remark 2.36_ (Capacity Constrained Time-Variate Routing).: Using a similar approach to what was described above, we can construct an endomorphism-based semi-ring that captures capacity-constrained delivery on time-varying networks. Specifically, given a capacity \(C>0\), for each \(w\in\mathbb{W}\) there exists a \(W\in\mathbb{W}\) such that:

\[W(t)=\min\big{\{}s\colon\int\limits_{t}^{s}w(x)dx=C\big{\}}.\]

One can then compose these elements just as in Definition 2.33.

### The Universal Contact Semi-Ring (UCS) for Time-Variate Routing

We finish our discussion of semi-rings by introducing a novel semi-ring called the **Universal Contact Semi-Ring (UCS)** that allows us to model a large class of networking problems. Most significant for our application of semi-rings to space networking is the observation that the UCS allows us to model the "store and forward" protocol [25], which is the main paradigm for delay tolerant networking (DTN) and cannot be captured via the other semi-rings described above. After providing the necessary definitions, we work through a synthetic example of how this semi-ring describes the store and forward networking protocol.

**Definition 2.37**.: _The **Universal Contact Semi-Ring (UCS)** \(\mathscr{C}\) consists of the collection of maps from \(\mathbb{R}\) to the powerset of \(\mathbb{R}\),_

\[\mathscr{C}=\{f:\mathbb{R}\to 2^{\mathbb{R}}\},\]

_together with addition and multiplication operations defined as_

\[(f\oplus g)(t)=f(t)\cup g(t)\quad\text{and}\quad(f\odot g)(t)=\bigcup_{x\in f(t)}g(x+t)+x,\]

_respectively. Here the formal sum \(A+x\) denotes the Minkowski sum, i.e._

\[A+x=\{a+x\mid a\in A\}.\]

_The additive identity of \(\mathscr{C}\) is the constant function_

\[0_{\mathscr{C}}:\mathbb{R}\to 2^{\mathbb{R}},\qquad t\mapsto\varnothing,\]

_while the multiplicative identity is the constant function_

\[1_{\mathscr{C}}:\mathbb{R}\to 2^{\mathbb{R}},\qquad t\mapsto\{0\}.\]

Although the Universal Contact semi-ring is idempotent, this semi-ring is poorly-behaved due to its lack of commutativity and integrality, as well as of a total order. However, we believe the UCS is suitable for modeling real-world applications: in the context of networking, a map \(f\in\mathscr{C}\) represents the possible delivery delays at a given time: if \(x\in f(t)\), then a message sent at time \(t\) can be transported along the edge to arrive at time \(t+x\). This allows the UCS to encapsulate contact windows, time forwarding, changing communication times, as well as storage--limited or unlimited--at a node. Moreover, UCS contains most of the semi-rings that naturally arise in routing problems, described above, as subquotients, i.e. quotients of sub-semi-rings of UCS. The proof of the next result is presented under Proposition A.6 in the appendix.

**Proposition 2.38**.: _Let \(\mathscr{C}\) denote the Universal Contact Semi-Ring (Definition 2.37). \(\mathscr{C}\) contains_

1. _an injective image of the Boolean semi-ring \(\mathsf{Bool}\);_
2. _a sub-semi-ring which is isomorphic to the TVG semi-ring;_
3. _a sub-semi-ring which surjects onto the tropical semi-ring \(\mathbb{T}\);_
4. _a sub-semi-ring which surjects onto the propagation delay semi-ring; and_
5. _a sub-semi-ring which surjects onto the function endomorphism semi-ring \(\mathbb{W}\)._

_Example 2.39_.: Let \(\mathcal{N}\) be a time-varying network with \(4\) nodes \(A,B,C\), and \(D\) as described in Figure 12. One can endow this network with the universal contact semi-ring \(\mathscr{C}\) by treating the edge \(([a,b],\omega)\) as the map:

\[t\mapsto\begin{cases}\{\omega\}&\text{for }t\in[a,b]\\ \emptyset&\text{otherwise.}\end{cases}\]

In order to model the potential storage of information at a node, we add a self-loop on each node, represented via the map that sends \(t\mapsto[0,\infty)\). Using the propagation delay semi-ring, or our other semi-rings, the only possible route from \(A\) to \(D\) would be the route \(A\to B\to D\) with delay \(15\). However, in \(\mathscr{C}\), the route \(A\to C\to C\to D\) is permitted and has delay \(5\). The semi-ring weight of the route \(A\to B\to D\), obtained by multiplying the edge weights in the path, is given by the map:

\[t\mapsto\begin{cases}\{15\}&t\in[0,1]\\ \emptyset&t\not\in[0,1]\end{cases}\]

while the semi-ring weight of the route \(A\to C\to C\to D\) will be given by the map that sends

\[t\mapsto\begin{cases}\{5-t,6-t\}&t\in[0,1]\\ \emptyset&t\not\in[0,1]\end{cases}\]

So, in addition to allowing another route when nodes are given a storage buffer, we also see that, in terms of total route time, the route \(A\to C\to C\to D\) lets us send our message at any time within \([0,1]\) and still arrive by time \(5\). We note that a limited storage buffer at each node can be encapsulated by modifying the self-loop maps to \(t\mapsto[0,a]\), where \([0,a]\) is a finite interval.

Figure 12: Time-varying network of \(4\) nodes: each edge represents a connection, decorated with available time and delay time.
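Example 2.39 can be mimicked computationally if we discretize time. The sketch below (ours; the contact data is hypothetical and chosen only to exercise the store-and-forward behavior, not read off of Figure 12) represents an element of \(\mathscr{C}\) as a Python function from integer times to sets of integer delays, with a bounded storage loop standing in for \(t\mapsto[0,\infty)\).

```python
def add(f, g):
    """(f + g)(t) = f(t) | g(t), per Definition 2.37."""
    return lambda t: f(t) | g(t)

def mul(f, g):
    """(f . g)(t) = union over x in f(t) of (g(t + x) + x), per Definition 2.37."""
    return lambda t: {x + y for x in f(t) for y in g(t + x)}

def contact(window, delay):
    """A contact edge: delivery with the given delay while t is in the window."""
    lo, hi = window
    return lambda t: {delay} if lo <= t <= hi else set()

HORIZON = 50
store = lambda t: set(range(HORIZON))   # bounded stand-in for t -> [0, infinity)

# Hypothetical network: A -> C is open during [0, 1] with delay 1, and C -> D is
# open during [4, 5] with delay 1. Without storage at C the contacts never align.
AC, CD = contact((0, 1), 1), contact((4, 5), 1)
direct = mul(AC, CD)
buffered = mul(AC, mul(store, CD))      # the route A -> C -> C -> D
print(direct(0))                        # set(): no route without a buffer
print(min(buffered(0)))                 # 5: send at t=0, store at C, arrive at 0+5
```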
## 3 Geometric and Topological Models for Time-Varying Graphs

In this section we take up the question of when two time-varying graphs (TVGs) are "close" in a precise sense. This is an important question for a variety of reasons, but one immediate application of such a notion would be a calculation of whether an \(n\)-node STARLINK sub-system is actually close to a strongly connected system, as the lifetime curves in Figure 5 suggest. Lifetime curves are merely a summary statistic and lack the discriminatory power of a true metric, but Figure 15 supports the intuition that (probabilistically) more than 40 nodes are sufficient for constructing a sub-TVG that is strongly connected, with \(n=100\) providing a higher probability for a guarantee on constant connectivity.

More broadly, the TVG model fits nicely within a growing topological data analysis (TDA) paradigm, where topological changes can be quantified precisely. This is a fundamental improvement over traditional parametric graph models such as those used in [1], where the removal of an edge causes a discontinuity in the coordinatization process; see Figure 13.

Figure 13: Models that treat edge lengths or weights as continuously varying parameters are fundamentally ill-equipped to handle removal of edges or other changes in graph topology. Indeed, if \(\ell_{3}\) were removed in the parametric model, the value of the third coordinate becomes ambiguous. By contrast, the TVG model coordinatizes disruptions in service appropriately and the Hausdorff distance can be used to measure the duration of a disconnect.

### Distances on Time-Varying Graphs

In this section we begin the study of distances on TVGs (Definition 2.2). We use distances rather than metrics because certain TVGs may be infinitely far away, and two TVGs may have distance zero even if they are not exactly the same. We briefly recall this more flexible notion of a metric.

**Definition 3.1** (Distance).: _A **distance** on a set \(X\) is a map \(d:X\times X\to[0,\infty]\) where for all \(x,y\in X\):_

1. \(d(x,y)=d(y,x)\)
2. \(d(x,y)\geq 0\) _and_ \(d(x,x)=0\)

Fundamental to our distances on TVGs is the Hausdorff distance.

**Definition 3.2**.: _Given \(A,B\subseteq\mathbb{R}\), the **Hausdorff distance** \(d_{H}\) between \(A\) and \(B\) is defined as_

\[d_{H}(A,B)=\inf\{\varepsilon\geq 0\ |\ A\subseteq B^{\varepsilon}\text{ and }B\subseteq A^{\varepsilon}\},\]

_where \(A^{\varepsilon}=\bigcup_{a\in A}\text{Ball}_{\varepsilon}(a)\) and \(B^{\varepsilon}=\bigcup_{b\in B}\text{Ball}_{\varepsilon}(b)\). Here \(\text{Ball}_{\varepsilon}(a)=\{x\in\mathbb{R}\ |\ |x-a|\leq\varepsilon\}\)._

#### 3.1.1 Distances with Fixed Node Correspondence

**Definition 3.3**.: _Given two matrix TVGs \(M,N\in\text{Mat}_{n}(\mathcal{P}(\mathbb{R}))\) their **Hausdorff distance** is_

\[d_{H}(M,N):=\max_{1\leq i,j\leq n}\{d_{H}\big{(}M_{ij},N_{ij}\big{)}\}.\]

_Remark 3.4_ (Computability of \(d_{H}(M,N)\)).: Assuming \(M\) and \(N\) are lifetime matrices (Definition 2.4), each entry is a finite collection of disjoint closed intervals. In this case, the computability of \(d_{H}(M_{ij},N_{ij})\) reduces to the computability of the Hausdorff distance between the collections of these intervals. [16, Thm. E.1] proves that this quantity can be obtained by computing the bottleneck distance between their respective complement intervals, i.e.

\[d_{H}(M_{ij},N_{ij})=d_{B}(C(M_{ij}),C(N_{ij})),\]

where \(C(M_{ij})\) and \(C(N_{ij})\) denote the complements in \(\mathbb{R}\)--usually intersected with some compact interval indicating the simulation time--respectively.

In view of Remark 3.4, we introduce a new suite of distances on TVGs.

**Definition 3.5** (\((p,q)\)-Disconnect Distances).: _Given two lifetime matrices \(M,N\in\text{Mat}_{n}(\mathcal{L}(\mathbb{R}))\) their \((p,q)\)-**Disconnect Distance** is_

\[d_{DC,p,q}(M,N):=\left\|\big{\langle}d_{W}^{p}\left(C(M_{ij}),C(N_{ij})\right)\big{\rangle}_{1\leq i,j\leq n}\right\|_{q},\]

_where \(C(M_{ij})\) and \(C(N_{ij})\) are the complements of the intervals in \(M_{ij},N_{ij}\in\mathcal{L}(\mathbb{R})\), respectively, and \(d_{W}^{p}\) is the Wasserstein distance on barcodes, reviewed in the Appendix. These distances populate the entries of a length \(n^{2}\) vector, whose \(\ell^{q}\) norm is then computed._

_Remark 3.6_.: Following Remark 3.4, when Definition 3.3 is restricted to lifetime matrices the Hausdorff distance can be viewed as a special case of Definition 3.5 when \(p=q=\infty\). In Figure 15 we compute the disconnect distances for \(p=q=\infty\) (Hausdorff distance) and \(p=q=2\) (the \(2\)-Wasserstein distance).
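Since each entry of a lifetime matrix is a finite union of closed intervals, the Hausdorff distance of Definition 3.2 can be computed exactly: on any interval of \(A\), the distance-to-\(B\) function is piecewise linear, so its maximum is attained at an endpoint of that interval or at the midpoint of a gap of \(B\). The following sketch (ours) implements this observation directly.

```python
def dist_to(x, B):
    """Distance from the point x to the union of closed intervals B."""
    return min(0.0 if b0 <= x <= b1 else min(abs(x - b0), abs(x - b1))
               for b0, b1 in B)

def directed(A, B):
    """sup over a in A of dist(a, B); A, B are sorted disjoint closed intervals."""
    gaps = [(y0, x1) for (_, y0), (x1, _) in zip(B, B[1:])]
    best = 0.0
    for a0, a1 in A:
        candidates = [a0, a1] + [(g0 + g1) / 2 for g0, g1 in gaps
                                 if a0 <= (g0 + g1) / 2 <= a1]
        best = max(best, max(dist_to(c, B) for c in candidates))
    return best

def hausdorff(A, B):
    """Definition 3.2 for two nonempty elements of L(R)."""
    return max(directed(A, B), directed(B, A))

# Two contact plans for the same link; the second loses the middle window.
A = [(0.0, 10.0), (20.0, 30.0), (40.0, 50.0)]
B = [(0.0, 10.0), (40.0, 50.0)]
print(hausdorff(A, B))   # 15.0: the point 25 in the lost window is 15 from B
```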
#### 3.1.2 Distances with Unknown Node Correspondence

Both Definitions 3.3 and 3.5 are only useful when the number of nodes in two TVGs is the same and there is a fixed node correspondence. This is because the vertices need to be ordered to determine a matrix representation, and distances are then computed entry-by-entry. In order to consider the distance between two general TVGs--where node correspondence is unclear--we need to introduce a relaxed Hausdorff distance, which has provable connections with an interleaving-type distance that is popular in TDA.

**Definition 3.7**.: _The **symmetrized Hausdorff distance** between two matrix TVGs \(M,N\in\operatorname{Mat}_{n}(\mathcal{P}(\mathbb{R}))\) is_

\[d_{\Sigma H}(M,N)=\inf_{\sigma\in\Pi(n)}d_{H}(M,\sigma(N)),\]

_where \(\Pi(n)\) is the set of all permutations of \(\{1,\ldots,n\}\), and \(\sigma(N)\) is the matrix TVG whose \((i,j)\)-th entry is \(N_{\sigma(i),\sigma(j)}\)._

**Corollary 3.8**.: _For any pair of matrix TVGs \(M,N\in\operatorname{Mat}_{n}(\mathcal{P}(\mathbb{R}))\), one has_

\[d_{\Sigma H}(M,N)\leq d_{H}(M,N).\]

_Remark 3.9_.: Since \(d_{\Sigma H}\) requires considering \(d_{H}\) over all \(n!\) possible permutations, \(d_{\Sigma H}\) tends to be prohibitively expensive to compute, which is why we will work with topological summaries of TVGs such as zigzag barcodes instead.

Figure 14: From an algorithmic perspective it is better to view the Hausdorff distance on windows of connection time as the Bottleneck distance on the disconnect times. This leverages an isometry proved in [12, Thm. E.1]. A TVG with no disconnections may be viewed as having an arbitrary number of points on the diagonal. The Bottleneck distance optimizes over ways of aligning the disconnects between two TVGs.

#### 3.1.3 Connections with the Interleaving Distance via Summary Cosheaves

Similar to the snapshot construction of Definition 2.15, one can summarize a TVG over any interval \(I\subseteq\mathbb{R}\). Moreover, to better understand how a TVG behaves over a given interval \(I\) and the relationship of this summary to another summary over \(J\), where \(I\subseteq J\), we introduce a functorial summary of a TVG that has the added benefit of being a cosheaf. The finer details of cosheaf theory are not needed for this paper and a basic understanding of category theory, as reviewed in Section B, is sufficient. However, further reading on category theory is always encouraged and [11] in particular is an excellent resource.

**Definition 3.10** (Summary Graphs).: _Let \(\mathcal{G}=(G,\ell_{M})\) be a time-varying graph in the sense of Definition 2.2. To each interval \(I\subseteq\mathbb{R}\) we have a **summary subgraph over \(I\)**:_

\[\mathcal{E}_{M}(I):=\{e\in G\mid\ell_{M}(e)\cap I\neq\varnothing\}.\]

_This is the subgraph of \(G\) where an edge is included iff it is alive at some point in the interval \(I\). The **underlying graph** of a time-varying graph \(\mathcal{G}\) is the summary graph over \(I=\mathbb{R}\)._

We now promote the summary graph construction to a functor with two possible codomains.

**Definition 3.11** (The Category of Subgraphs and Graph Monomorphisms).: _Fix \(G=(V,E)\) a simple directed graph._
* _The collection of **subgraphs of \(G\)**, written \(\mathbf{Sub}(G)\), has for objects pairs \(G^{\prime}=(V^{\prime},E^{\prime})\) where \(V^{\prime}\subseteq V\) and \(E^{\prime}\subseteq E\). There is a unique morphism \(G^{\prime}\to G^{\prime\prime}\) iff \(V^{\prime}\subseteq V^{\prime\prime}\) and \(E^{\prime}\subseteq E^{\prime\prime}\), thus making \(\mathbf{Sub}(G)\) into a poset._
* _The collection of **graph monomorphisms to \(G\)**, written \(\operatorname{\mathbf{Mon}}(G)\), consists of simple directed graphs \(G^{\prime}=(V^{\prime},E^{\prime})\) along with an injective graph morphism \(\varphi^{\prime}:G^{\prime}\to G\). There is a morphism \(\psi:(G^{\prime},\varphi^{\prime})\to(G^{\prime\prime},\varphi^{\prime\prime})\) if \(\psi\) is a graph morphism satisfying \(\varphi^{\prime\prime}\circ\psi=\varphi^{\prime}\)._
* _There is a functor \(\iota:\operatorname{\mathbf{Sub}}(G)\to\operatorname{\mathbf{Mon}}(G)\) that takes each subgraph to its corresponding inclusion map._

Figure 15: Two different metrics help verify the summary statistics depicted in Figure 5—for each sample of \(n=20,30,40,50,70,100\) nodes from STARLINK, the distance from the \(k\)-cumulant to the constant matrix with value \(86400\) is computed for increasing \(k\). The Bottleneck Distance (left) measures the largest deviation from constant connectivity, and separates out \(n=100\) as the one truly connected subnetwork, whereas the \(2\)-Wasserstein distance sums differences across entries in the TVGs and more tightly clusters the \(n=20\) and \(30\) node simulations apart from the \(40\) and higher number node systems.

_Remark 3.12_.: The difference between \(\operatorname{\mathbf{Sub}}(G)\) and \(\operatorname{\mathbf{Mon}}(G)\) is subtle, but crucial. To see the difference, suppose \(G\) is a directed \(3\)-cycle with vertex set \(V=\{x,y,z\}\) and edge set \(E=\{[xy],[yz],[zx]\}\). In \(\operatorname{\mathbf{Sub}}(G)\) there is only one object with \(3\) vertices and \(3\) edges, namely, \(G\) itself. However, in \(\operatorname{\mathbf{Mon}}(G)\) there are three objects with \(3\) vertices and \(3\) edges, where the injective graph morphisms range over all three cyclic permutations of the \(3\)-cycle \(G\).

**Definition 3.13** (Summary Cosheaf).: _The **summary cosheaf** of a TVG \(\mathcal{G}=(G,\ell_{M})\) is the functor_

\[\mathcal{E}_{M}:\operatorname{\mathbf{Int}}\to\operatorname{\mathbf{Sub}}(G)\qquad\text{where}\qquad I\mapsto\mathcal{E}_{M}(I)\subseteq G,\]

_which assigns to every closed interval \(I\) the summary graph over \(I\)._

_Remark 3.14_.: Following [10, 1], one can check that \(\mathcal{E}_{M}\) is actually a cosheaf, although that observation is not necessary here. Additionally, the functor \(\mathcal{E}_{M}\) is one way of seeing that our TVGs are equivalent to the dynamic graphs construction of [11].

**Definition 3.15**.: _If \(\mathcal{G}=(G,\ell_{M})\) and \(\mathcal{G}^{\prime}=(G,\ell_{N})\) are two TVGs with the same underlying graph, then they are \(\varepsilon\)**-interleaved** if there exist morphisms \(\varphi_{I}:\mathcal{E}_{M}(I)\to\mathcal{E}_{N}(I^{\varepsilon})\) and \(\psi_{I}:\mathcal{E}_{N}(I)\to\mathcal{E}_{M}(I^{\varepsilon})\), natural in \(I\in\operatorname{\mathbf{Int}}\), such that the composites \(\psi_{I^{\varepsilon}}\circ\varphi_{I}\) and \(\varphi_{I^{\varepsilon}}\circ\psi_{I}\) are the maps induced by the inclusions \(I\subseteq I^{2\varepsilon}\). Here \(I^{\varepsilon}\) denotes the \(\varepsilon\)-thickening of \(I\), cf. Definition 3.2._
_The **interleaving distance** between \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) is_

\[d_{I}(\mathcal{G},\mathcal{G}^{\prime})=\inf\big{\{}\varepsilon\geq 0:\mathcal{E}_{M}\text{ and }\mathcal{E}_{N}\text{ are }\varepsilon\text{-interleaved}\big{\}}.\]

**Theorem 3.16** (Isometry Theorems).: _Suppose \(\mathcal{G}=(G,\ell_{M})\) and \(\mathcal{H}=(H,\ell_{N})\) are two TVGs with isomorphic underlying graphs. Fix an isomorphism \(\Phi:H\to G\), which in turn induces isomorphisms \(\operatorname{\mathbf{Sub}}(G)\cong\operatorname{\mathbf{Sub}}(H)\) and \(\operatorname{\mathbf{Mon}}(G)\cong\operatorname{\mathbf{Mon}}(H)\), so we can view \(\mathcal{E}_{N}\) and \(\iota\circ\mathcal{E}_{N}\) as functors to \(\operatorname{\mathbf{Sub}}(G)\) and \(\operatorname{\mathbf{Mon}}(G)\), respectively. Any ordering of the nodes in \(G\) then determines two isometries_

\[d_{H}(M,N)=d_{I}(\mathcal{E}_{M},\mathcal{E}_{N})\quad\text{and}\quad d_{\Sigma H}(M,N)=d_{I}(\iota\circ\mathcal{E}_{M},\iota\circ\mathcal{E}_{N}),\]

_where \(M\) and \(N\) are the lifetime matrices for \(\mathcal{G}=(G,\ell_{M})\) and \(\mathcal{H}=(H,\ell_{N})\), respectively._

### Topological Summaries of TVGs via Barcodes for Machine Learning

In this section we show how techniques from topological data analysis (TDA) can be used to simplify the study of TVGs, by converting them into two zigzag persistence barcodes: one for degree-0, which summarizes how components in the snapshots of \(\mathcal{G}\) evolve over time, and one for degree-1, which summarizes how cycles in the snapshots evolve. Two important conclusions of this section are

1. The degree-0 and degree-1 zigzag barcodes are _stable features_ of TVGs, and
2. These features can be used to distinguish Earth-Mars vs. Earth-Moon space networking scenarios in a K-Nearest Neighbors (KNN) classifier, _using the network topology alone_.

These zigzag barcodes are based on the homology of a graph, along with maps that are induced on homology from graph morphisms.

**Definition 3.17** (Homology).: _Fix a field \(\Bbbk\) for the remainder of the paper. To every graph \(G\) one associates two vector spaces: \(H_{0}(G)\)--the vector space generated by the connected components of \(G\)--and \(H_{1}(G)\)--the vector space generated by cycles in \(G\). Additionally, associated to any graph morphism \(G^{\prime}\to G\) there are induced linear maps \(H_{k}(G^{\prime})\to H_{k}(G)\) on these vector spaces. In other words, we have the **homology functors** for \(k=0\) and \(k=1\):_

\[H_{k}:\mathbf{Mon}(G)\rightarrow\mathbf{Vec}.\]
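For graphs, the dimensions of these homology vector spaces are elementary to compute: \(\dim H_{0}\) is the number of connected components, and, since a graph is one-dimensional, \(\dim H_{1}=|E|-|V|+\dim H_{0}\). A minimal sketch (ours) using union-find:

```python
def betti(n_vertices, edges):
    """Return (dim H_0, dim H_1) of a graph on vertices 0..n-1, computed over
    the underlying undirected graph: dim H_0 counts connected components and
    dim H_1 = |E| - |V| + dim H_0 counts independent cycles."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)
    b0 = sum(1 for x in range(n_vertices) if find(x) == x)
    b1 = len(edges) - n_vertices + b0
    return b0, b1

# A triangle plus an isolated vertex: two components, one cycle.
print(betti(4, [(0, 1), (1, 2), (2, 0)]))   # (2, 1)
```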
**Definition 3.18** (Homology Modules for TVGs).: _Recall the subgraph categories of Definition 3.11 and the summary cosheaf construction of Definition 3.13. For any TVG \(\mathcal{G}\) we have, by post-composing \(\iota\circ\mathcal{E}_{M}\) with the homology functors, the associated **homology modules** \(H_{k}\mathcal{E}_{M}(\mathcal{G}):\mathbf{Int}\rightarrow\mathbf{Vec}\), which capture the homology of each summary graph of the TVG \(\mathcal{G}\) in degrees \(k=0\) and \(1\)._

**Definition 3.19** (TVG Barcodes).: _For \(k=0,1\), the homology modules \(H_{k}\mathcal{E}_{M}(\mathcal{G}):\mathbf{Int}\rightarrow\mathbf{Vec}\) have a canonically associated **zigzag barcode**, which is a multiset of intervals in \(\mathbb{R}\):_

\[B_{k}(\mathcal{G})=\{(I_{j},m_{j})\}\quad\text{where}\quad m_{j}:I_{j}\rightarrow\mathbb{N}.\]

_Remark 3.20_ (Zigzag Persistence Review).: Zigzag persistence is well studied in the TDA community, with computational foundations established in [1, 1, 2, 3, 4] among others. We review two theoretical perspectives on this construction. The first proceeds directly by recognizing that associated to each TVG \(\mathcal{G}=(G,\ell_{M})\) is a finite set of closed intervals for each vertex and edge \(v,e\in G\). Taking the union of the endpoints of these intervals across all \(v,e\) specifies a finite set of "critical values" \(\{\tau_{i}\}_{i=0}^{n}\) where an edge or vertex can appear. From this we can express any TVG as a zigzag diagram of sub-graphs of the complete graph

\[G_{-1}\hookrightarrow G_{0}\hookleftarrow G_{1}\hookrightarrow G_{2}\hookleftarrow\cdots\hookrightarrow G_{2n}\hookleftarrow G_{2n+1},\]

where even-indexed subgraphs \(G_{2i}\) indicate the snapshot of \(\mathcal{G}\) at each critical value \(\tau_{i}\) and odd-indexed subgraphs \(G_{2i\pm 1}\) correspond to snapshots of \(\mathcal{G}\) at times \(\tau_{i}\pm\epsilon\) for sufficiently small \(\epsilon\). Taking homology of each of these subgraphs and graph inclusions produces a representation of a type \(A_{2n+3}\) alternating quiver, which by Gabriel's theorem [1] has a canonical multiset of intervals associated to it.

Alternatively, one can follow the work of [16], which shows that our TVGs are equivalently viewed as dynamic graphs. Following [16, Def. 2.17] one sees that restricting \(H_{k}\mathcal{E}_{M}(\mathcal{G})\) to one-point intervals \([t,t]\) produces a map \(h_{k}:\mathbb{R}\to\mathbf{Vec}\) that is "cosheaf-inducing." As shown in [1, 2], constructible cosheaves are block-decomposable, and restricting these blocks to the diagonal \(y=x\) produces the zigzag barcode described above.

As previously mentioned, the zigzag barcode is a stable feature of a TVG. The next proposition explains what is meant by "stable."

**Proposition 3.21**.: _Suppose \(\mathcal{G}=(G,\ell_{M})\) and \(\mathcal{H}=(H,\ell_{N})\) are two TVGs with a fixed isomorphism between the underlying graphs \(\Phi:G\cong H\), as in Theorem 3.16. The bottleneck distance on the barcodes of \(H_{k}\mathcal{E}_{M}(\mathcal{G})\) and \(H_{k}\mathcal{E}_{N}(\mathcal{H})\) is bounded above as follows:_

\[d_{B}(\mathcal{G},\mathcal{H})\leq d_{I}\left(\iota\circ\mathcal{E}_{M}(\mathcal{G}),\iota\circ\mathcal{E}_{N}(\mathcal{H})\right)\leq d_{I}\left(\mathcal{E}_{M}(\mathcal{G}),\mathcal{E}_{N}(\mathcal{H})\right).\]

Proof.: The notation \(H_{k}\mathcal{E}_{M}\) actually refers to the composition of functors \(H_{k}\circ\iota\circ\mathcal{E}_{M}\). [1, Proposition 3.6] proves very generally that compositions of functors define Lipschitz-1 maps between categories that are equipped with interleaving distances.
Consequently

\[d_{I}^{\mathbf{Vec}}\big{(}H_{k}\circ\iota\circ\mathcal{E}_{M}(\mathcal{G}),H_{k}\circ\iota\circ\mathcal{E}_{N}(\mathcal{H})\big{)}\leq d_{I}^{\mathbf{Mon}(G)}\big{(}\iota\circ\mathcal{E}_{M}(\mathcal{G}),\iota\circ\mathcal{E}_{N}(\mathcal{H})\big{)}\leq d_{I}^{\mathbf{Sub}(G)}\big{(}\mathcal{E}_{M}(\mathcal{G}),\mathcal{E}_{N}(\mathcal{H})\big{)}.\]

By the results from [2, 1], one has

\[d_{B}(\mathcal{G},\mathcal{H})=d_{I}\left(H_{k}\mathcal{E}_{M}(\mathcal{G}),H_{k}\mathcal{E}_{N}(\mathcal{H})\right),\]

which proves the stated result. 

We now demonstrate the discriminative power of the zigzag barcode on Earth-Moon and Earth-Mars simulations using a K-Nearest Neighbors (KNN) classifier.

#### 3.2.1 KNN on Earth-Moon vs. Earth-Mars Satellite Systems

For a final set of experiments, we evaluate the efficacy of the TVG model as the first step in a supervised machine learning pipeline, which uses the degree-\(1\) zigzag barcode as its feature. For these experiments, we are interested in a binary classification problem with samples from two classes, described below.

* **Class Earth-Moon**: Consists of random samples of \(s=15\) STARLINK satellites around the Earth and \(5\) satellites around the Moon, simulated over one day (\(86400\) seconds) and converted into a \(20\times 20\) TVG matrix.
* **Class Earth-Mars**: Consists of random samples of \(s=15\) STARLINK satellites around the Earth and \(5\) satellites around Mars, simulated over one day and converted into a \(20\times 20\) TVG matrix.

In Figure 16 we use the same day in March 2023 to generate all the simulations, but we also vary the number \(s\) of STARLINK satellites in the system from \(5\) to \(50\). We do this for two reasons: (1) because the number of STARLINK satellites is the primary driver of "noise" in our TVGs, as they are being chosen from a database with over \(2500\) STARLINK satellites, and (2) because we want to understand the computational complexity of these simulations. In particular, varying \(s\) leads to TVG matrices of size \(10\times 10\) (for \(s=5\)), \(15\times 15\) (for \(s=10\)), \(20\times 20\) (for \(s=15\)), and \(55\times 55\) (for \(s=50\)).

Using a current state-of-the-art laptop (a 2023 MacBook Pro, 12-core CPU and 38-core GPU M2 Max Chip), generating the .orb files took only a few seconds for each experiment (100 random simulations for each value of \(s\)) and generating the list of contact times using SOAP only took two minutes. The main computational bottleneck was in computing zigzag persistence for each simulation and constructing the \(100\times 100\) matrix of \(W_{2}\)-distances between these barcodes. For \(s=15\) the distance matrix only took ten minutes to calculate, but for \(s=50\) nodes the analogous computation took 193 minutes. Nevertheless, the classification results are all very good, with \(k=3\) providing \(\sim 98\%\) classification accuracy for \(s=5\) and \(s=10\), and \(\sim 93\%\) accuracy for \(s=15\) and \(s=50\).

Figure 16: Average performance curves for a KNN classifier on \(H_{1}\) zigzag persistence barcodes computed from time-varying graphs (TVGs) representing Earth-Moon or Earth-Mars satellite systems. Each simulation has an even class balance of Earth-Moon vs. Earth-Mars systems, with 50 samples from each class, and with simulation time the same fixed day in March 2023. For all simulations the same Moon/Mars satellites are used, but the STARLINK satellites are randomly sampled with samples ranging from \(s=5,10,15,50\). Average performance is computed over 100 (80%-20%) train-test splits.
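For readers who want to replicate this sort of pipeline, the classification step is standard once a matrix of pairwise barcode distances is in hand, since scikit-learn's KNN accepts precomputed distances directly. A minimal sketch (ours; `D` is a random placeholder standing in for the \(W_{2}\)-distance matrix and `y` for the class labels, both assumed to be computed upstream):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 100                                    # 50 Earth-Moon + 50 Earth-Mars samples
y = np.array([0] * 50 + [1] * 50)
D = rng.random((n, n)); D = (D + D.T) / 2  # placeholder for the W2 distance matrix
np.fill_diagonal(D, 0.0)

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.2,
                                       stratify=y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3, metric="precomputed")
# With metric="precomputed", fit/predict take distance sub-matrices whose
# rows index the query points and whose columns index the training points.
knn.fit(D[np.ix_(idx_train, idx_train)], y[idx_train])
score = knn.score(D[np.ix_(idx_test, idx_train)], y[idx_test])
print(f"test accuracy: {score:.2f}")
```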
Fixing \(s=15\), we can also vary the days that the simulation takes place on. This has a much stronger effect on the classification performance, since the relative position of Earth and Mars, as measured by the synodic period, has a \(\sim\!2\) year (780 day) period. For example, in the top row of Figure 17 a random day is chosen from each month, beginning in January 2018 and continuing for 23 months, and a pair of Earth-Moon and Earth-Mars simulations is generated. This yields 46 samples, evenly distributed across the two classes. For a particular \(80\%-20\%\) split, the classification performance of KNN is shown in the top-left of Figure 17, with a peak classification rate occurring at \(k=5\) neighbors. The top-right of Figure 17 is calculated by averaging these performance curves across 200 train-test splits on this same \(n=46\) sample experiment. In the bottom row of Figure 17 a similar experiment is repeated, except that for each randomly sampled day 4 simulations are generated from each class, thus leading to an \(n=184\) experiment. Notice that the classification power increases as the variability across different days is balanced by multiple random draws from the same day. Finally, in Figure 18 this same basic experiment is repeated, but with different start years: 2016, 2018, 2020, 2022, and 2024. Average KNN classification performance is best for simulations starting in 2018 and 2020, with over 80% accuracy (versus a naive 50% classification rate). Simulations starting in 2016, 2022, and 2024 fare worse, with peak classification performance of around 65-72%.

Figure 17: KNN performance on \(H_{1}\) zigzag persistence to differentiate satellite systems over a two year period. On the left, 23 days are selected, one from each month in a 23 month period beginning in 2018, and two simulations are generated—one from the Earth-Moon Class and one from the Earth-Mars Class—for a total of 46 simulations. On the right, the same experiment is performed, but with 4 samples drawn from each class, leading to 184 simulations. Having more simulations from the same day improves classification accuracy considerably, by approximately 10%.

## 4 Future Directions

This paper is one installment in an ongoing effort to integrate ideas from mathematics into the design of better engineered systems capable of supporting fast, reliable, and autonomous routing in a solar system-wide internet. We conclude with a list of ongoing and future work that we hope will draw greater attention to the rich source of research problems that space networking presents.

1. **Faster Algorithms:** Current protocols for routing over a temporal and disruption tolerant network involve the construction of a contact graph and solving Dijkstra's algorithm over this graph; this is called Contact Graph Routing (CGR). Recent efforts [13] show that this can be sped up substantially by using a contact multigraph, as it reduces the number of vertices that Dijkstra's algorithm needs to search over. This multigraph can be viewed as a display space associated to the cosheaf that defines a TVG. Do the perspectives introduced in this paper provide further computational benefits? For example, is there a routing algorithm that operates directly on the matrix TVG, viewed as a data structure? The current goal for space simulation work is the ability to simulate routing over a network with \(10^{5}\) nodes, with our current capabilities hovering around \(10^{3}\).
2. **Exploiting Periodicity:** From an algebraic perspective, the difficulty of routing in a system with propagation delay (such as in Earth-to-Mars communication) is the lack of convergence of the Kleene star. However, this comes from using \(\operatorname{End}(\mathcal{P}(\mathbb{R}))\) as a semi-ring. In truly periodic systems, such as those governed by celestial mechanics, it may make more sense to view lifetimes as subsets of the circle \(\mathbb{S}^{1}\). If we consider the communication matrix \(A\) with entries in \(\operatorname{End}(\mathcal{P}(\mathbb{S}^{1}))\), there may be conditions under which non-trivial propagation delay still yields a convergent Kleene star, such as when the shift is a rational multiple of the circumference.
3. **Sub-Netting in TVGs:** Our KNN classification experiments indicate that TVGs, even when reduced to their zigzag barcodes, can represent distinct types of space networking scenarios. This leads us to believe that the metrics introduced in this paper can be used to identify motifs or other sub-structures in TVGs that can be used to automatically identify domains for routing. This is important, as the terrestrial internet relies heavily on fixed routing tables, which is not possible in a time-varying setting. Is it possible to identify algorithms or methods for automatically sub-netting a TVG? If so, this would help make routing protocols adapted to the type of (sub) TVG at hand, which could then be composed to efficiently route across disparate parts of a future space internet.
4. **More Realistic Models:** The approach to satellite network modelling taken in this paper is based on considering all possible contacts based on line-of-sight. In practice, many (or most) nodes might only be able to hold one link at a time; choosing one contact means not choosing others. Moreover, setting up a link, tearing down a link, and pointing antennas all take time. This means that a realistic network is really based on a subset of all possible contacts. Methods to choose such a subset, particularly in a resilient and robust way, remain unknown. The machinery introduced in this paper could be used to create "best practices" and frameworks for network management.

Figure 18: KNN average performance using 184 total simulations as described in Figure 17, where the two-year period starts in the indicated year.
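For concreteness, the KNN evaluation of Section 3.2.1 can be reproduced from a precomputed matrix of barcode distances. The following is a minimal sketch, not the authors' code: it assumes scikit-learn, a square matrix `D` of pairwise \(W_{2}\)-distances between degree-\(1\) zigzag barcodes, and binary class labels `y` (0 = Earth-Moon, 1 = Earth-Mars).

```python
# Minimal sketch (ours): KNN classification on a precomputed distance matrix,
# averaged over random 80%-20% train-test splits, as in Section 3.2.1.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import ShuffleSplit

def knn_accuracy(D, y, k=3, n_splits=100, test_size=0.2, seed=0):
    """Average accuracy of a k-NN classifier, where D[i, j] is the
    W2-distance between the barcodes of simulations i and j."""
    y = np.asarray(y)
    splitter = ShuffleSplit(n_splits=n_splits, test_size=test_size,
                            random_state=seed)
    accs = []
    for train, test in splitter.split(y):
        knn = KNeighborsClassifier(n_neighbors=k, metric="precomputed")
        # Restrict rows/columns of D to the relevant train/test indices.
        knn.fit(D[np.ix_(train, train)], y[train])
        accs.append(knn.score(D[np.ix_(test, train)], y[test]))
    return float(np.mean(accs))
```

Sweeping `k` over a range of neighbor counts and plotting the resulting averages yields performance curves of the kind shown in Figures 16-18.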
2303.05391
Disambiguation of Company names via Deep Recurrent Networks
Named Entity Disambiguation is the Natural Language Processing task of identifying textual records corresponding to the same Named Entity, i.e. real-world entities represented as a list of attributes (names, places, organisations, etc.). In this work, we face the task of disambiguating companies on the basis of their written names. We propose a Siamese LSTM Network approach to extract -- via supervised learning -- an embedding of company name strings in a (relatively) low dimensional vector space and use this representation to identify pairs of company names that actually represent the same company (i.e. the same Entity). Given that the manual labelling of string pairs is a rather onerous task, we analyse how an Active Learning approach to prioritise the samples to be labelled leads to a more efficient overall learning pipeline. With empirical investigations, we show that our proposed Siamese Network outperforms several benchmark approaches based on standard string matching algorithms when enough labelled data are available. Moreover, we show that Active Learning prioritisation is indeed helpful when labelling resources are limited, and lets the learning models reach out-of-sample performance saturation with less labelled data than standard (random) data labelling approaches.
Alessandro Basile, Riccardo Crupi, Michele Grasso, Alessandro Mercanti, Daniele Regoli, Simone Scarsi, Shuyi Yang, Andrea Cosentini
2023-03-07T15:07:57Z
http://arxiv.org/abs/2303.05391v2
# Disambiguation of Company names via Deep Recurrent Networks

###### Abstract

Named Entity Disambiguation is the Natural Language Processing task of identifying textual records corresponding to the same Named Entity, i.e. real-world entities represented as a list of attributes (names, places, organisations, etc.). In this work, we face the task of disambiguating companies on the basis of their written names. We propose a Siamese LSTM Network approach to extract -- via supervised learning -- an embedding of company name strings in a (relatively) low dimensional vector space and use this representation to identify pairs of company names that actually represent the same company (i.e. the same Entity). Given that the manual labelling of string pairs is a rather onerous task, we analyse how an Active Learning approach to prioritise the samples to be labelled leads to a more efficient overall learning pipeline. With empirical investigations, we show that our proposed Siamese Network outperforms several benchmark approaches based on standard string matching algorithms when enough labelled data are available. Moreover, we show that Active Learning prioritisation is indeed helpful when labelling resources are limited, and lets the learning models reach out-of-sample performance saturation with less labelled data than standard (random) data labelling approaches.

keywords: Machine learning; Natural Language Processing; Named Entity Disambiguation; Siamese Network; Active Learning

## 1 Introduction

A common information retrieval task with several applications is the association of company names from any internal or external source to a specific company registered in an internal database. A straightforward method based on the equality of strings is insufficient to tackle the issue, since company names are not uniquely spelt and abbreviations or synonyms vary across databases. Often, data sources contain company names entered manually, which may include typos, abbreviations or indeed mistakes of any kind. Some exemplar applications include:

1. integrating data from other sources into the internal database to supplement companies' information;
2. linking the name of a company mentioned in the news or the social media to an actual company in an internal registry, so that it may be associated with, e.g., sentiment or ESG scores;
3. identifying the sender and receiver of a bank transfer, which are in general entered manually and do not need to match a specific registry (contrary to the transfer code); correctly identifying the sender and receiver companies is nevertheless useful, e.g. to detect fraudulent behaviours.

Company name disambiguation refers to the task of determining whether two given strings encoding company names actually represent the same company. For example, the two strings "Intesa Sanpaolo S.p.A." and "Intesa San Paolo bank" in fact represent the same corporation. This task is an instance of the so-called Named Entity Disambiguation (NED) (Aghaebrahimian and Cieliebak, 2020), whose aim is to identify data records corresponding to the same Named Entity, i.e. real-world entities represented as a list of attributes (names, places, organisations, etc.). It must be said that the NED task has, ironically, a somewhat fuzzy meaning in the literature, and it is used to refer to more or less different problems. Moreover, many other labels are often employed in the literature to refer to slightly similar -- when not identical -- tasks.
We refer to Barlaug and Gulla (2021), who try to introduce a somewhat general framework whose specific instances can be matched to various NED-related tasks and sub-tasks. In particular, Entity Disambiguation, or Entity Matching, should not be confused with Entity Recognition, which is the task of mapping a string (usually a word or a group of words inside a longer text sequence) to a finite set of classes (the Entities), without -- in general -- the explicit need of _comparing_ strings (Kolitsas et al., 2018).

The contribution of this work is twofold: on the one hand, we propose a Siamese Recurrent Neural Network approach to the task of disambiguating pairs of company names. We show, via experiments, that the proposed approach outperforms other baseline models and that it generalises efficiently to other domains. On the other hand, we use our proposed model in an Active Learning setting to demonstrate how to make human labelling more efficient by prioritising the samples to be labelled.

The rest of the paper is organised as follows: Section 2 is devoted to a discussion of relevant literature, in particular regarding NED and Active Learning. In Section 3 we detail the methodologies we use for the NED task: we describe both our proposed model and the baseline approaches we use as benchmarks. Section 4 is devoted to describing how we implement the Active Learning setting. In Section 5 we thoroughly describe how we build the datasets that we use in the experiments. The latter are described in Section 6. Discussion of the insights derived from the experimental results is presented in Section 7, while Section 8 contains concluding remarks. The Python code implementation of the Siamese Neural Network model, of the Active Learning setting, and of all the experiments is available in open source at github.com/rcrupi/SiameseDisambiguation.

## 2 Related Works

### Entity Disambiguation

Most classical approaches to string pair matching leverage string similarity measures, quantifying how much two given strings are similar with a more or less sophisticated deterministic rule (Cohen et al., 2003; Christen, 2006; Sun et al., 2015). These methods are built on several background encodings of the given strings, such as phonetic-based, character-based, or based on term frequency/inverse document frequency (_tf-idf_) and hybrid versions of these (Christen, 2006). Other approaches also try to employ semantic knowledge to compute similarity (Prasetya et al., 2018).

More recently, methodologies based on _learning the appropriate similarity function_ from a sample of data of the desired domain have become quite common. For instance, Piskorski and Jacquet (2020) employ _tf-idf_ vectors of \(n\)-grams as predictors for a Machine Learning (ML) classifier. The internal (learned) representation can then be exploited as an abstract vector summarising the information relevant to the task. Finally, a standard vector similarity function -- such as cosine similarity -- of the representations of two strings can be used to infer their task-specific distance.

In the domain of toponym matching -- i.e. pairing of different strings representing the same real-world location -- Santos et al. (2018) have faced a problem very similar to the one we are discussing. They propose an approach based on a _Siamese_ Deep Neural Network architecture (Chicco, 2021) and benchmark it against several distance-based methodologies and several classifiers taking as input various pairwise string distances.
They conclude that their approach is superior in terms of matching performance, even if less efficient in terms of computational time with respect to pure distance methods. Neculoiu et al. (2016) propose a Siamese Deep Learning model as well, with the slightly different task of mapping strings to a predefined set of job names. This classification task is nevertheless approached by translating it into a NED framework and by learning a vector representation of strings, such that close vectors correspond to the same job class. They use two different loss terms for positive and negative matches: in particular, the loss term for negative samples is zero below a certain threshold, so as not to pay a cost for non-matching pairs that are increasingly dissimilar. Furthermore, they introduce some interesting data augmentation techniques, such as observations derived by adding random typos and character-level deletions to positive pair samples. Aghaebrahimian and Cieliebak (2020) also propose a Deep Learning approach to NED, while taking as input not the raw strings but rather character-level _tf-idf_ vectors of \(n\)-grams, similarly to Piskorski and Jacquet (2020). Moreover, they train the network in a contrastive fashion, i.e. by feeding input triplets with the observed string and both an actual string match (i.e. a string representing the same entity -- a positive example) and a non-matching string (a negative example). We refer to Barlaug and Gulla (2021) for a thorough review of Neural Network-based approaches to the NED task.

### Active Learning

Entity Matching datasets are typically constructed through laborious human labelling. To increase the efficiency of such a procedure, techniques have been proposed to carefully prioritise the samples to be labelled. Against this background, _Active Learning_ -- i.e. the sub-field of ML with the characteristic that the learning algorithm is allowed to choose the data from which it learns (Settles, 2009; Arora and Agarwal, 2007) -- has been proven to be beneficial in the Entity Matching domain (Meduri et al., 2020). In particular, the selection criteria for candidate observations to be labelled are usually expressed in terms of _informativeness_ and _representativeness_ (Zhou, 2018). While representativeness-based approaches try to find a pattern in the unlabelled data, using graphs or clustering methods, informativeness-based approaches -- such as uncertainty sampling and query-by-committee -- choose the instances to be labelled based on how uncertain their classification is. In particular, query-by-committee approaches propose to train several classifiers and define the uncertainty of a given observation based on the rate of disagreement in their predictions; they are sometimes referred to as learner-agnostic approaches. Uncertainty sampling, on the other hand, given a specific classifier, employs the distance from the decision boundary as a proxy for uncertainty; it is therefore referred to as a learner-aware approach.

While in Meduri et al. (2020) the task is to disambiguate two instances on the basis of several different information sets (e.g. address, name, etc.), in this work we focus on company disambiguation based on string names only. A disambiguation task involving couples of words (e.g. 'principle', 'principal', and 'end', 'and') was faced in Banko and Brill (2001). In particular, they extract features from the words and apply an ML classifier to estimate their similarity.
Active Learning (specifically query-by-committee) is then exploited to iteratively select batches from a pool of unlabelled samples. Half of the samples in each batch are selected randomly, while the other half are selected on the basis of their uncertainty. Since Deep Learning requires huge amounts of data, Active Learning is particularly well suited to limiting the data labelling while keeping performance high (Zhan et al., 2022). In particular, in this work we adopt a modification of the "Least confidence" approach as our query strategy (Huang, 2021). Other works, such as Sorscher et al. (2022), suggest a self-supervised data pruning method. In contrast to Active Learning, the data reduction is there done in a single step. The self-supervised metric is based on the application of \(k\)-means over the embedding space of a pre-trained network: an instance far away from its cluster centroid is considered uncertain. Experiments in Sorscher et al. (2022) suggest that the instances to be removed actually depend on the size of the starting dataset: if it is large enough, it is beneficial to include the most uncertain samples, whereas if it is small, it is preferable to include the simplest (least uncertain) samples.

## 3 Methods

In ML settings, a multiclass classification function \(\mathcal{C}:X\mapsto Y\) takes as input a feature vector \(\mathbf{x}\in X\) from the input feature space \(X\) and outputs a class label \(y\in Y\) from a finite set of \(\bar{\alpha}\) possible classes \(Y=\{0,1,2,\ldots,\bar{\alpha}-1\}\) (Hastie et al., 2009). Most families of ML classifiers actually learn to estimate the probabilities

\[\mathbb{P}(y=\alpha\mid\mathbf{x}),\quad\alpha\in\{0,1,\ldots,\bar{\alpha}-1\}\,. \tag{1}\]

In the following, we label with \(\hat{y}_{i,\alpha}\) the estimated probability that the \(i\)-th sample belongs to the class \(\alpha\). Being probabilities, the \(\hat{y}_{i,\alpha}\) are such that \(\sum_{\alpha=0}^{\bar{\alpha}-1}\hat{y}_{i,\alpha}=1\) and \(\hat{y}_{i,\alpha}\in[0,1]\) \(\forall\alpha\in Y\). The predicted class \(\bar{y}_{i}\) is the one associated with the highest probability, namely

\[\bar{y}_{i}=\operatorname*{argmax}_{\alpha\in Y}\hat{y}_{i,\alpha}.\]

The case with only two class labels (\(\bar{\alpha}=2\)) -- i.e. binary classification tasks -- is usually formalised as:

\[\mathcal{C}:X\mapsto[0,1], \tag{2}\]

where the output \(\hat{y}_{i}\) is an estimate of \(\mathbb{P}(y_{i}=1\mid\mathbf{x}_{i})\) and corresponds to the output \((\hat{y}_{i,0},\hat{y}_{i,1})=(1-\hat{y}_{i},\hat{y}_{i})\) in the generic multiclass setting. In this work, we frame the string match problem as a binary classification task, where the components of the input feature vector \(\mathbf{x}_{i}\) are couples of strings \(\mathbf{x}_{i}=\{a_{i},b_{i}\}\), and the classifier estimates the probability \(\hat{y}_{i}\) that the two strings correspond to the same company (we label the matching class as 1). Calling \(\mathbf{S}\) the set of possible character symbols, we may formalise the string matching classifier as a function

\[\mathcal{C}:\mathbf{S}^{n}\times\mathbf{S}^{n}\mapsto[0,1], \tag{3}\]

where the integer \(n\) denotes the fixed (maximum) length of the strings to be analysed.
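As a toy numerical illustration of this notation (ours, not from the paper): the scalar binary output corresponds to the probability pair \((1-\hat{y},\hat{y})\) of the multiclass formulation, and the predicted class is its argmax.

```python
# Toy illustration of Equations (1)-(3): a binary score y_hat induces the
# multiclass probability vector (1 - y_hat, y_hat), and the predicted class
# is the argmax over that vector.
import numpy as np

y_hat = 0.8                              # estimate of P(y = 1 | x) for one pair
probs = np.array([1.0 - y_hat, y_hat])   # (y_hat_{i,0}, y_hat_{i,1})
assert np.isclose(probs.sum(), 1.0)      # probabilities sum to one
predicted = int(np.argmax(probs))        # predicted class: 1, i.e. "match"
```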
### Baseline methods

To determine how dissimilar two strings \(a\) and \(b\in\mathbf{S}^{n}\) are, in the following we shall make use either of a distance function -- \(\mathcal{D}(a,b)\in\mathbb{R}^{+}\), where 0 stands for identical strings and the higher the distance, the more dissimilar \(a\) and \(b\) -- or of a similarity function -- \(\mathcal{S}(a,b)\in[0,1]\), with 1 denoting identical strings.

#### Levenshtein

Widely used deterministic methods are the _edit distance_ metrics. Generally speaking, they are based on counting the number of operations needed to transform one string into another. The choice of the type of operations allowed determines the specific form of the distance. The most widely known edit-distance metric is the _Levenshtein distance_ (sometimes referred to as _the_ edit distance), which calculates the distance as the number of insertions, deletions, and substitutions required to transform one string into another. The Levenshtein distance \(\mathcal{D}_{\text{Lev}}(a,b)\) between two strings \(a=a_{0}a_{1}a_{2}\ldots a_{m-1}\) of length \(m\) and \(b=b_{0}b_{1}b_{2}\ldots b_{n-1}\) of length \(n\) is given by the following recursion:

\[\mathcal{D}_{\text{Lev}}(a,b)=\begin{cases}m&\text{if }n=0,\\ n&\text{if }m=0,\\ \mathcal{D}_{\text{Lev}}(\widetilde{a},\widetilde{b})&\text{if }a_{0}=b_{0},\\ 1+\min\left\{\mathcal{D}_{\text{Lev}}(\widetilde{a},b),\,\mathcal{D}_{\text{Lev}}(a,\widetilde{b}),\,\mathcal{D}_{\text{Lev}}(\widetilde{a},\widetilde{b})\right\}&\text{otherwise;}\end{cases} \tag{4}\]

where \(\widetilde{x}\) denotes the string \(x\) without its first character \(x_{0}\), i.e. \(\widetilde{x}=x_{1}x_{2}\ldots\)

A closely related edit-based distance is the _InDel distance_, which allows only insertions and deletions. It is easy to see that the InDel distance is equivalent to the Levenshtein distance where the substitution operation is assigned a cost of 2 (deletion + insertion). We call _InDel Ratio_ the following normalised version of the InDel distance:

\[\mathcal{R}_{\text{ID}}(a,b)=\left(1-\frac{\mathcal{D}_{\text{ID}}(a,b)}{|a|+|b|}\right), \tag{5}\]

where \(\mathcal{D}_{\text{ID}}(a,b)\) denotes the InDel distance between \(a\) and \(b\). We make use of the Python library TheFuzz to compute \(\mathcal{R}_{\text{ID}}(a,b)\), with the sole difference that TheFuzz expresses the ratio in percentage points.

#### Jaro-Winkler similarity

The _Jaro-Winkler (JW) similarity_ (\(\mathcal{S}_{\rm JW}\)) is another edit-based metric, emphasising both the number of matching characters and their placement inside the two strings. Notice that this is a notion of similarity, i.e. \(\mathcal{S}_{\rm JW}\in[0,1]\), with \(\mathcal{S}_{\rm JW}(a,b)=0\) corresponding to no match at all between two strings \(a\) and \(b\), and \(\mathcal{S}_{\rm JW}(a,b)=1\) to an exact match. The Jaro-Winkler similarity is a variant of the Jaro similarity, which, for two strings \(a\) and \(b\), is defined as

\[\mathcal{S}_{\rm J}(a,b)=\begin{cases}0&\text{if }c=0,\\ \dfrac{1}{3}\left(\dfrac{c}{|a|}+\dfrac{c}{|b|}+\dfrac{c-t}{c}\right)&\text{otherwise},\end{cases} \tag{6}\]

with:

* \(c\) the number of matching characters, where two characters are considered a match when they are the same and they are no more than \(\frac{\max(|a|,|b|)}{2}-1\) characters apart from one another;
* \(t\) the number of transpositions, counted as the number of matching characters found in the wrong order, divided by two.
The JW similarity extends the definition of the Jaro similarity by favouring strings with a matching prefix:

\[\mathcal{S}_{\rm JW}(a,b)=\mathcal{S}_{\rm J}(a,b)+\ell\,p\left(1-\mathcal{S}_{\rm J}(a,b)\right), \tag{7}\]

where \(\ell\) is the length of the common prefix up to a maximum value, and \(p\) is a constant scaling factor determining the strength of the premium. The maximum value attributed to \(\ell\) and the value of \(p\) should be chosen such that \(\ell p\leq 1\).

#### Jaccard similarity

While the edit-based metrics look at what is necessary to transform one string into another, the _token-based_ metrics consider the strings as sets of tokens (i.e. the words or characters composing the strings) and search for the common tokens between two sets. A widely used token-based similarity metric is the _Jaccard similarity_, which is defined as the ratio of the intersection over the union of the token sets \(A=\{a_{0},a_{1},a_{2},\ldots,a_{m}\}\) and \(B=\{b_{0},b_{1},b_{2},\ldots,b_{n}\}\) of the two strings \(a\) and \(b\), respectively (sometimes referred to as IoU -- Intersection over Union), i.e.

\[\mathcal{S}_{\rm Jac}=\frac{|A\cap B|}{|A\cup B|}. \tag{8}\]

Notice that the Jaccard metric does not take into account the order of tokens, unlike the previously discussed edit-based metrics.

#### Token Set Ratio

Another approach to string matching is represented by the _approximate string matching_ algorithms (Navarro, 2001), which leverage the basic notions of distance introduced so far, but take into account matching also at the substring level. In particular, we make use of the so-called _Token Set Ratio_ metric computed via the TheFuzz Python library. It works as follows:

1. takes the _unique words_ (i.e. substrings separated by whitespaces) of each string \(a\) and \(b\); let us call them \(W_{a}\) and \(W_{b}\), respectively;
2. builds the following word sets: \[I_{ab}=W_{a}\cap W_{b},\] \[W_{a\setminus b}=W_{a}\setminus W_{b},\] \[W_{b\setminus a}=W_{b}\setminus W_{a};\]
3. sorts the sets alphabetically and builds new strings \(s_{ab}\), \(s_{a\setminus b}\), \(s_{b\setminus a}\) by joining with whitespaces the words in the corresponding (sorted) sets;
4. builds the new strings \(c_{a}\), joining \(s_{ab}\) and \(s_{a\setminus b}\) with a whitespace, and analogously \(c_{b}\) with \(s_{ab}\) and \(s_{b\setminus a}\);
5. computes the similarity as \[\mathcal{R}_{\text{TS}}(a,b)=\max\begin{cases}\mathcal{R}_{\text{ID}}(s_{ab},c_{a})\\ \mathcal{R}_{\text{ID}}(s_{ab},c_{b})\\ \mathcal{R}_{\text{ID}}(c_{a},c_{b})\end{cases}.\] (9)

#### Baseline classifier

To build a classifier based on the matching algorithms just described, the strings are pre-processed by removing punctuation and capitalising the text. The five string matching scores listed in Table 1 constitute our baseline methods; for each of them, we do the following (see the sketch after this list):

* pre-process the strings with the cleaning method,
* apply the selected string matching algorithm to each pair of strings in the training dataset,
* train a Decision Stump -- i.e. a Decision Tree with a single node -- given as input the score just computed, and as label the match/non-match nature of each pair of strings.
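The following is a minimal sketch of this baseline pipeline. The paper only states that TheFuzz is used for the InDel and Token Set ratios; the `jellyfish` calls for the Levenshtein and Jaro-Winkler metrics, the exact cleaning regex, and the helper names are our assumptions.

```python
# Minimal sketch (ours) of the baseline pipeline: five string-matching scores
# plus one Decision Stump (depth-1 tree, i.e. a learned threshold) per score.
import re
import jellyfish
from thefuzz import fuzz
from sklearn.tree import DecisionTreeClassifier

def clean(s):
    """Remove punctuation and capitalise, as in the baseline pre-processing."""
    return re.sub(r"[^\w\s]", " ", s).upper().strip()

def jaccard(a, b):
    """Word-level Jaccard similarity (Eq. 8)."""
    A, B = set(a.split()), set(b.split())
    return len(A & B) / len(A | B) if A | B else 0.0

def scores(a, b):
    """Compute the five matching scores of Table 1 for a cleaned string pair."""
    a, b = clean(a), clean(b)
    return {
        "levenshtein": jellyfish.levenshtein_distance(a, b),
        "indel_ratio": fuzz.ratio(a, b) / 100.0,          # TheFuzz uses percents
        "jaro_winkler": jellyfish.jaro_winkler_similarity(a, b),
        "token_set_ratio": fuzz.token_set_ratio(a, b) / 100.0,
        "jaccard": jaccard(a, b),
    }

def fit_stump(pairs, labels, feature):
    """Fit one Decision Stump on a single score: depth 1 learns a threshold."""
    X = [[scores(a, b)[feature]] for a, b in pairs]
    return DecisionTreeClassifier(max_depth=1).fit(X, labels)
```

Each stump thus reduces to an optimal threshold on one similarity score, which is exactly why these baselines need very little training data, as discussed in Section 7.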
\begin{table} \begin{tabular}{l c c} \hline \hline **score** & **type** & **range** \\ \hline Levenshtein & distance & \(\mathcal{D}_{\text{Lev}}(a,b)\in[0,\max{(|a|,|b|)}]\) \\ InDel & similarity & \(\mathcal{R}_{\text{ID}}(a,b)\in[0,1]\) \\ Jaro-Winkler & similarity & \(\mathcal{S}_{\text{JW}}(a,b)\in[0,1]\) \\ Token Set Ratio & similarity & \(\mathcal{R}_{\text{TS}}(a,b)\in[0,1]\) \\ Jaccard & similarity & \(\mathcal{S}_{\text{Jac}}(a,b)\in[0,1]\) \\ \hline \hline \end{tabular} \end{table} Table 1: List of metrics used as features in the Baseline and Random Forest classifiers.

### Random Forest classifier

The validity of the string similarity algorithms presented so far depends on the use case. Therefore, we use a _Random Forest_ classifier which uses the five string matching metrics listed in Table 1 as input features at the same time. The Random Forest pipeline goes as follows:

* pre-process the strings with the same cleaning method as for the Baseline Trees,
* for each pair of strings in the training dataset, compute the 5 scores listed in Table 1,
* extract 2 additional features from each string: the _number of characters_ and the _number of words_ (i.e. substrings split with respect to whitespaces),
* train a Random Forest classifier with 9 features in input (5 matching scores + 2 \(\times\) number of words + 2 \(\times\) number of characters), and as label the match/non-match nature of each pair of strings.

The popular Scikit-Learn Python library (Pedregosa et al., 2011) is used both for the Decision Stumps of the Baseline classifiers and for the Random Forest implementation. The hyperparameters of the Random Forest are set to: max_depth=3, n_estimators=100, class_weight='balanced'.

### Proposed Approach

We propose an approach based on Recurrent Neural Networks (RNNs) employing a Siamese strategy (Bromley et al., 1993), framing the learning problem as a binary classification of string pairs. To keep the format of the input consistent, each string is pre-processed as follows: the strings are padded to a length of 300 chars2, using the heavy division sign as the placeholder for padding. The string cleaning is in this case limited to uppercasing. The rationale is to leave as much information as possible to the Neural Network model to learn useful patterns. Each string is then tokenised character-wise and one-hot encoded with an alphabet of 63 symbols3 (i.e. the placeholder plus 62 symbols for capital letters, numbers, punctuation, and whitespace), resulting in a \(300\times 63\) input matrix.

Footnote 2: The longest string in our whole dataset has 124 chars. If longer strings are to be fed to the model, then a truncation to 300 is implemented.

Footnote 3: Another possible approach is using the entire word as a single token. Experiments using this approach resulted in poor performances, because the model cannot properly handle cases of spelling errors or abbreviations.

Each input matrix, representing a string, is then processed by an Embedding Model (Figure 1), composed of a \(300\times 63\) embedding matrix (i.e. a matrix whose entries are learned via loss optimisation) followed by an LSTM layer with 16 nodes. Weight sharing is employed during learning to force the encoding models of the two strings to be identical (which is indeed the origin of the name "Siamese"), in such a way as to preserve the symmetry of the problem and to effectively learn a unique representation space for individual strings.
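Anticipating the distance features and classification head detailed in the rest of this subsection (and in Figure 2 below), a minimal TensorFlow/Keras sketch of the full model might look as follows. This is our own sketch, not the authors' code: the dropout rate is not stated in the paper and is assumed here, and inputs are integer-encoded characters rather than explicit one-hot matrices.

```python
# Minimal sketch (ours) of the Siamese architecture of Figures 1 and 2.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, ALPHABET = 300, 63  # padded length; 62 symbols plus the pad placeholder

def build_encoder():
    inp = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(ALPHABET, ALPHABET)(inp)  # learned embedding matrix
    x = layers.LSTM(16)(x)                         # keep only the last hidden state
    return Model(inp, x, name="encoder")

encoder = build_encoder()  # weight sharing: the same encoder processes both strings
in_a = layers.Input(shape=(MAX_LEN,), dtype="int32")
in_b = layers.Input(shape=(MAX_LEN,), dtype="int32")
ea, eb = encoder(in_a), encoder(in_b)

# Symmetric distance features between the two 16-dim encodings (20 dims total).
absdiff = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([ea, eb])
l1 = layers.Lambda(lambda t: tf.norm(t[0] - t[1], ord=1, axis=1, keepdims=True))([ea, eb])
l2 = layers.Lambda(lambda t: tf.norm(t[0] - t[1], ord=2, axis=1, keepdims=True))([ea, eb])
linf = layers.Lambda(lambda t: tf.norm(t[0] - t[1], ord=np.inf, axis=1, keepdims=True))([ea, eb])
cos = layers.Lambda(lambda t: 1.0 - tf.reduce_sum(
    tf.nn.l2_normalize(t[0], 1) * tf.nn.l2_normalize(t[1], 1),
    axis=1, keepdims=True))([ea, eb])

x = layers.Concatenate()([absdiff, l1, l2, linf, cos])  # 16 + 4 = 20-dim vector
for units in (32, 16):  # two blocks: dense ReLU + batch norm + dropout
    x = layers.Dense(units, activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.2)(x)  # rate assumed, not stated in the paper
out = layers.Dense(1, activation="sigmoid")(x)  # classification score

siamese = Model([in_a, in_b], out)
siamese.compile(loss="binary_crossentropy",
                optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4,
                                                    beta_1=0.8, beta_2=0.9))
```

Because every distance feature is symmetric in its two arguments, this construction is commutative in the two input strings by design, as discussed below.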
Notice that we do not make use of the full sequence of hidden LSTM states (which would be a \(300\times 16\) matrix) as the output of the LSTM layer -- i.e. as the embedding representation of individual strings -- we instead employ the hidden state corresponding to the last token in the string, which nevertheless implicitly contains information from all the hidden states along the sequence -- this is indeed the main feature of RNN models4.

Footnote 4: We experimented with the same architecture using a Bidirectional LSTM layer in place of a plain LSTM layer, but without any sign of improvement in performance, while on the other hand the computational cost increased significantly.

The two encodings thus generated are then employed to compute several vector distances -- namely, \(L_{1}\), \(L_{2}\), \(L_{\infty}\), a cosine-based distance, and the element-wise absolute difference. These results are then fed as inputs to a Feed-Forward Neural Network classifier, composed of two consecutive blocks, each consisting of a dense layer with ReLU activations, batch normalisation, and dropout. The final output is a single neuron with a sigmoid activation function, to get the classification score (Figure 2). Since every operation performed on the inputs is commutative, the whole model \(\mathcal{C}^{s}\), composed of the ensemble of the encoding model and the prediction model, is commutative as well: given any two input strings \(a\) and \(b\), we have \(\mathcal{C}^{s}(a,b)=\mathcal{C}^{s}(b,a)\) by design.

Binary Crossentropy is used as the loss function, as usual for binary classification tasks. We employ Nadam as the optimisation algorithm, with parameter choice \(\beta_{1}=0.8\), \(\beta_{2}=0.9\) and a fixed learning rate \(\epsilon=10^{-4}\). Both the Embedding model and the downstream Feed-Forward classifier are implemented in Python via TensorFlow (Abadi et al., 2015). The rationale for employing a Siamese approach is that of learning a high-level embedding of strings, where similarity in the embedding space reflects the probability of being instances of the same entity.

Figure 1: **Embedding model**. Schematic TensorFlow representation of the Embedding model described in Section 3.3. Each block denotes a TensorFlow layer, with input and output tensor dimensions. As usual in TensorFlow representations, the generic batch size is denoted with the symbol ‘?’.

## 4 Active Learning

In an ideal scenario, a predictive model can be built from labelled data in a fully supervised way: generally speaking, increasing the amount of labelled data improves the generalisation capacity of the learned model. However, in a real-world scenario, the number of labelled instances is often limited: the labelling process is often costly and time-consuming, and oftentimes requires high-level domain knowledge. On the other hand, unlabelled data are in general much easier to collect and may be used to improve the predictive performance of the model. Formally, in a classification setting, there are:

* a set of labelled instances \((X_{l},y_{l})\) (where \(X_{l}\) represents features and \(y_{l}\) denotes the corresponding labels);
* a set of not-labelled-yet instances \(X_{u}\).

If the labelling process is cost-effective, one could get the labels \(y_{u}\) of the not-labelled-yet instances \(X_{u}\), and then use a fully-supervised learning algorithm (Sen et al., 2020).
In a completely opposite situation, where no additional labels can be collected, or only at a prohibitive cost, semi-supervised methods have been proven to be effective (van Engelen and Hoos, 2020). Oftentimes, the situation is in the middle: given the available resources, only a limited number of instances can be labelled. How to effectively exploit these labelling resources is the focus of Active Learning (Settles, 2009; Aggarwal et al., 2014; Ren et al., 2022): given the limited labelling capability, how should the subset \(X_{c}\subset X_{u}\) to label be chosen in order to obtain the maximum performance gain? In practice, \(X_{c}\) is chosen and constructed according to some query strategy in a multi-step procedure.

Figure 2: **Siamese architecture**. Two input strings are processed by the same LSTM-based encoding model (see Figure 1) to get a 16-dim vector representation each. These are used to compute several distances: \(L_{1}\), \(L_{2}\), \(L_{\infty}\), cosine and the element-wise absolute difference. This information is then concatenated — obtaining a 20-dim vector — and fed to two consecutive blocks, each composed of a dense layer with ReLU activations, batch normalisation, and dropout. The first block has a dense layer with 32 nodes, while the second block has a dense layer with 16 nodes. The final layer is a single neuron with a sigmoid activation function.

In our work, we adopt an uncertainty sampling strategy (Settles, 2009) where, at each step, the instances of \(X_{u}\) on which the prediction of the most updated model is least certain are selected. These data points are removed from \(X_{u}\), labelled by domain experts and then added to \((X_{l},y_{l})\) in order to train an improved version of the classifier, with the rationale that, since the additional data points are picked near the decision boundary instead of being randomly selected, they contain more valuable information for the model learning.

Let \(x_{i}\in X_{u}\) be an unlabelled instance; we denote with \((\hat{y}_{i,0},\ldots,\hat{y}_{i,\bar{\alpha}-1})\) the probabilities estimated by the classifier over the \(\bar{\alpha}\) classes, such that \(\sum_{\alpha}\hat{y}_{i,\alpha}=1\). Then, we can define the uncertainty as

\[unc(\hat{y}_{i})=1-\max_{\alpha}\hat{y}_{i,\alpha}. \tag{10}\]

In the case of a binary classification setting (\(\bar{\alpha}=2\)), this is equivalent to measuring the distance of the positive class predicted probability from \(\frac{1}{2}\):

\[unc(\hat{y}_{i})=\frac{1}{2}-\left|\frac{1}{2}-\hat{y}_{i}\right|. \tag{11}\]

At each step of the training process, the instances of \(X_{u}\) are sorted according to the uncertainty defined above, and the top \(B\) most uncertain samples are added to \(X_{l}\) and removed from \(X_{u}\). A classifier is then trained on the updated version of \(X_{l}\). Empirical evidence during experiments suggests that directly using the uncertainty defined in Equation (10) is sub-optimal in balancing the exploration-exploitation trade-off (Banko and Brill, 2001): feeding the learner with only the most uncertain samples, especially at the beginning, could result in a batch of data too biased towards difficult instances, leading to poor generalisation capability. To prevent this, and as an alternative to the strategy introduced by Banko and Brill (2001) -- i.e. using batches composed of half of the samples selected randomly and the other half selected on the basis of uncertainty -- we propose to use the following noisy version of the uncertainty:

\[unc_{\sigma}(\hat{y}_{i})=unc(\hat{y}_{i})+\epsilon_{i},\quad\epsilon_{i}\sim\mathcal{N}(0,\sigma), \tag{12}\]

where the \(\epsilon_{i}\) are independent and identically distributed normal random variables and \(\sigma\) denotes the level of the noise we would like to introduce. The random noise introduced in Equation (12) helps the learner to generalise better during the first stages of the Active Learning procedure. In Algorithm 1 we formalise the entire procedure described above, while Figure 3 shows an illustrative representation.

Figure 3: **Active Learning**. Illustrative representation of the Active Learning procedure outlined in Algorithm 1.

```
input:  \(X^{0}_{l}\) (initial labelled instances)
        \(y^{0}_{l}\) (labels relative to \(X^{0}_{l}\))
        \(X^{0}_{u}\) (initial unlabelled instances)
        \(M\) (number of iterations)
        \(B^{1},B^{2},\ldots,B^{M-1}\) (batch sizes)
        \(\sigma\) (noise for uncertainty)
output: a trained classifier \(\mathcal{C}^{M}\)

Train an initial classifier \(\mathcal{C}^{1}\) on \((X^{0}_{l},y^{0}_{l})\)
for \(j=1,\ldots,M-1\) do
    // Predict the probabilities over the instances, according to equation (3)
    \(\{\hat{y}=\mathcal{C}^{j}(x)\mid x\in X^{j-1}_{u}\}\)
    // Compute \(unc_{\sigma}\) for each point in \(X^{j-1}_{u}\) according to equations (11) and (12)
    \(\left\{unc_{\sigma}(\hat{y})=\frac{1}{2}-\left|\frac{1}{2}-\hat{y}\right|+\epsilon\right\}\)
    // Select from \(X^{j-1}_{u}\) the \(B^{j}\) instances with highest \(unc_{\sigma}\)
    \(X^{j}_{c}\leftarrow\) top \(B^{j}\) instances with respect to \(unc_{\sigma}\)
    // Label the selected instances
    \(y^{j}_{c}\leftarrow\) domain expert labels relative to \(X^{j}_{c}\)
    // Update the set of labelled instances
    \(X^{j}_{l}\leftarrow X^{j-1}_{l}\cup X^{j}_{c}\)
    \(y^{j}_{l}\leftarrow y^{j-1}_{l}\cup y^{j}_{c}\)
    // Update the set of unlabelled instances
    \(X^{j}_{u}\leftarrow X^{j-1}_{u}\setminus X^{j}_{c}\)
    Train a classifier \(\mathcal{C}^{j+1}\) on \((X^{j}_{l},y^{j}_{l})\)
end
Result: \(\mathcal{C}^{M}\)
```
**Algorithm 1** Active Learning Procedure

## 5 Data

The data employed in our analysis are extracted from two different domains, i.e. company registry and bank transfers. More specifically, the data consist of couples of strings being:

1. the company names (concatenated with the address) of the same entity as recorded in two different datasets obtained from external data providers;
2. the beneficiary names of the same entity as recorded in SWIFT5 bank transfers.

The two sources are used independently, with the first used for training and testing and the second only for testing.

### Data labelling

Each instance of the datasets used in the experiments consists of a pair of names and a binary target variable: we use the label 1 when the pair of names corresponds to the same company, and the label 0 otherwise. As stated in previous sections, the labelling process is time-consuming, since it involves the identification of raw data, usually with human annotation. To tackle this task, we adopt a 2-step strategy: we pre-label some couples of names with a rule-based criterion suggesting the target variable (match/non-match), and then we check the suggested labels manually. The rule to pre-label company string pairs is based on the domain of the data we are considering.

**Pre-labelling for company registry data.** Many companies are identified through the Legal Entity Identifier (LEI): aliases with the same LEI refer to the same company entity. We leverage this background to identify candidate couples with positive labels (same LEI) or with negative labels (different LEI).
At the end of this process, these suggested labels are manually validated. We label 9,000 couples of names from the company registry data (with a 1:4 positive/negative label ratio). Figure 4(a) displays the distribution of the JW similarity over the 9,000 couples, conditioned on the matching label. It is worth noting that the pre-labelling strategy based on the LEI code goes beyond simple string similarity between company names. Indeed, the LEI code can be used to identify named entities even when they undergo various types of legal transactions, such as mergers, acquisitions, consolidations, purchases and management acquisitions. In this case, the company name can vary after the legal transaction, while still referring to the same entity. Of course, these _counter-intuitive positive matches_ are beyond the range of validity of the methods we are discussing in this work -- based solely on the similarity of string names -- but we decided to include some of them to test their limits. We discuss some of these examples in Section 7.

**Pre-labelling strategies for bank transfers.** In a bank transfer, funds are transferred from the bank account of one entity (the sender or payer) to another bank account (of the beneficiary or recipient). Beneficiaries with the same bank account (IBAN) refer to the same company, and this can be used to identify candidate couples with a positive label. More challenging is the construction of couples with negative labels, identified by recipients with different IBANs. The reason is that the same company may own more than one IBAN, and this requires a more detailed validation. With this strategy, we label 200 couples from the bank transfers (with a 1:1 positive/negative label ratio). We keep these data separate from the previously discussed 9,000 pairs, and we use them only for testing purposes, with the rationale of verifying the robustness of our approaches under domain shift scenarios (see Section 5.2).

### Training and test sets

We adopt a stratified \(k\)-fold approach to split the data: we split the 9,000 labelled instances into 3 subsets \(S_{1}\), \(S_{2}\), and \(S_{3}\) of equal size, taking care to maintain the same positive/negative ratio. At each iteration, we choose one of them as the test set \(S_{\mathrm{test}}\) and keep the union of the rest as a training set \(S_{\mathrm{train}}^{\mathrm{L}}\). From \(S_{\text{train}}^{\text{L}}\) we sample \(\frac{1}{3}\) of its elements to form a _medium size training set_ \(S_{\text{train}}^{\text{M}}\), and then we again sample \(\frac{1}{20}\) of the instances contained in \(S_{\text{train}}^{\text{M}}\) to obtain a _small size training set_ \(S_{\text{train}}^{\text{S}}\). Therefore, we end up with 3 training sets for each fold: \(S_{\text{train}}^{\text{S}}\subset S_{\text{train}}^{\text{M}}\subset S_{\text{train}}^{\text{L}}\). These 3 training sets of increasing size are useful to analyse how the different approaches perform in relation to the amount of data they are given to learn from (see the experiments described in Section 6 and the corresponding discussion in Section 7).

Analogously, we define different test sets: the _randomly ordered test set_ \(S_{\text{test}}^{\text{RO}}=S_{\text{test}}\) (i.e. the entire hold-out set available), and the _JW ordered test set_ \(S_{\text{test}}^{\text{JO}}\) obtained by ranking the \(S_{\text{test}}^{\text{RO}}\) instances according to their _JW similarity_6 and taking the top 100 negative cases (i.e.
non-matching pairs that are nevertheless mostly JW-similar) and the bottom 100 positive cases (i.e. matching pairs that are nevertheless mostly JW-dissimilar). The distribution of the JW similarity of the samples in \(S_{\text{test}}^{\text{JO}}\), conditioned on the matching labels, is shown in Figure 4(b).

Footnote 6: As reported in the experiments (see Table 2), the JW method performs better compared to the other baseline methodologies. Therefore, selecting test instances based on this metric is a way to check the robustness of the ML approaches.

To test the robustness of the methods presented in Section 3, we introduce a third test set \(S_{\text{test}}^{\text{DS}}\), in addition to \(S_{\text{test}}^{\text{RO}}\) and \(S_{\text{test}}^{\text{JO}}\), extracted from a different data source. Namely, as anticipated at the beginning of Section 5, we extract pairs of company names from the SWIFT bank transfer registry, where transaction payers write the beneficiary without any form of oversight. The dataset is balanced, with 100 positive examples obtained by matching recipients with the same IBAN and 100 negative cases obtained by random matching. We name this dataset the _domain shifted test set_.

We expect the \(S_{\text{test}}^{\text{JO}}\) and \(S_{\text{test}}^{\text{DS}}\) test sets to be particularly challenging for the string matching algorithms given in Section 3: indeed, \(S_{\text{test}}^{\text{JO}}\) is, by design, a stress test for the single-feature-based methods, and it can be used to estimate how effectively the Random Forest and the Siamese Network are able to generalise with respect to the baselines. The \(S_{\text{test}}^{\text{DS}}\) dataset instead is likely to be challenging for several reasons: it is drawn from a different dataset with respect to the training set; not only may the beneficiary names be affected by typos and all sorts of noise due to free-text entry, but they are also not in a standardised form, i.e. they may contain additional information such as the company address7.

Footnote 7: This motivates the use of names and addresses in the data extracted from the company registry data.

Figure 4: **Similarity distribution over string pairs**. Distribution of the JW similarity conditioned on the matching label over (a) the 9,000 labelled instances and (b) the JW-ordered test set. The continuous score is binned with size 0.05. In (a) the histogram shows that the negative (i.e. non-matching) samples are roughly normally distributed around 0.55, while the majority of positive (i.e. matching) samples have JW similarity between 0.8 and 1. In (b), on the contrary, the distributions of positive and negative pairs largely overlap, making it difficult to determine a threshold between the two classes, in particular with respect to the whole dataset.

## 6 Experiments

Different modelling strategies (string distance metrics, Deep Neural Networks, Active Learning) have different (dis)advantages. In order to compare them fairly and point out the best scenario in which to apply each of them, we prepare two different experimental settings:

1. a standard supervised classification setting,
2. an Active Learning setting.

In each of these two settings, we run the experiments employing a 3-fold cross-validation strategy as described in Section 5.2.
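Before turning to the two settings, we give a minimal sketch of the selection step at the heart of the Active Learning runs (Algorithm 1 with the noisy uncertainty of Equations (11)-(12)). This is our own sketch, not the released implementation: it assumes a fitted binary classifier exposing a scikit-learn-style `predict_proba`.

```python
# Minimal sketch (ours) of the noisy least-confidence selection of Algorithm 1.
import numpy as np

def select_batch(clf, X_u, batch_size, sigma=1/6, rng=None):
    """Return indices into X_u of the batch_size most uncertain samples,
    where uncertainty = 1/2 - |1/2 - p_hat| plus Gaussian noise (Eq. 12)."""
    rng = np.random.default_rng(rng)
    p_hat = clf.predict_proba(X_u)[:, 1]                       # P(match) per pair
    unc = 0.5 - np.abs(0.5 - p_hat)                            # Eq. (11)
    unc_noisy = unc + rng.normal(0.0, sigma, size=unc.shape)   # Eq. (12)
    return np.argsort(unc_noisy)[-batch_size:]                 # top-B most uncertain
```

At each iteration, the selected instances are labelled, moved from \(X_{u}\) to \(X_{l}\), and the classifier is retrained, exactly as in the loop of Algorithm 1.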
### Supervised classification setting

We evaluate how the performances of the models presented in Section 3 change as they are trained on training sets of different sizes (\(S_{\text{train}}^{\text{L}}\), \(S_{\text{train}}^{\text{M}}\) and \(S_{\text{train}}^{\text{S}}\)). Each of the training sets is used to train each of the following 7 models:

* 5 different Baseline Trees (see Section 3), based on the Levenshtein distance (\(\mathcal{D}_{\text{Lev}}\)), InDel ratio (\(\mathcal{R}_{\text{ID}}\)), Token Set Ratio (\(\mathcal{R}_{\text{TS}}\)), Jaccard similarity (\(\mathcal{S}_{\text{Jac}}\)), and JW similarity (\(\mathcal{S}_{\text{JW}}\));
* a Random Forest classifier, introduced in Section 3.2;
* our proposed Siamese Network, introduced in Section 3.3.

Out-of-sample performance is evaluated on the test sets \(S_{\text{test}}^{\text{RO}}\), \(S_{\text{test}}^{\text{JO}}\), and \(S_{\text{test}}^{\text{DS}}\) by computing the Balanced Accuracy (BA), thus giving equal weights to the positive and negative classes, irrespective of the actual class imbalances. More precisely, as argued in Chicco et al. (2021), BA is a good measure -- preferable over, e.g., the Matthews Correlation Coefficient (MCC) and the \(F_{1}\) score -- when the aim is to compare classifiers across datasets with different class imbalances, and/or when the focus is to correctly classify the ground truth instances, which is exactly what we are doing. Table 2 summarises the experimental results discussed in Section 7. We leave additional metrics computed for the experiments (\(F_{1}\) score and MCC) to Table B.4 in the appendix.

### Active Learning setting

We then employ an Active Learning strategy -- outlined in Section 4 and Algorithm 1 -- to train both the Random Forest and the Siamese Neural Network. In our experiments, the initial labelled instances \((X_{l}^{0},y_{l}^{0})\) consist of 100 samples and correspond to \(S_{\text{train}}^{\text{S}}\), while \(X_{u}^{0}\) consists of the residual 5,900 couples, namely \(S_{\text{train}}^{\text{L}}\setminus S_{\text{train}}^{\text{S}}\). We fix the \(\sigma\) parameter at 1/6 for all the experiments. The batch sizes are set in such a way that all instances are spanned with \(M=9\) iterations. More formally, for \(j=1,2,\ldots,M-1\), the batch size at the \(j\)-th iteration is

\[B^{j}=\begin{cases}100\times 2^{j-1}&j\in[1,4],\\ 800&j=5,6,\\ 1400&j>6.\end{cases}\]

This choice is motivated by the idea of better tracking the impact of the Active Learning approach: indeed, we expect the greater benefits to come in the very first phases, while the marginal benefit after having seen enough data will be negligible. As mentioned in Section 4, the choice of the subset of unlabelled instances to be labelled (\(X_{c}^{j}\)) lies at the heart of the Active Learning strategy. To benchmark this procedure, besides the _Least Confident learner_ (LC), selecting \(X_{c}^{j}\) according to the uncertainty (Equation (12)), we run the same experiment with a _Random learner_ (R), picking the unlabelled samples in a purely random fashion. Therefore, we end up with a total of four learners. At the end of each iteration \(j\), we evaluate their performance as follows:

1. _pre-train batch test_: we test the model \(\mathcal{C}^{j}\) on the next-to-be-labelled instances, i.e. \(X_{c}^{j}\): we here expect poor results for the LC learners, since we are testing on the most uncertain samples for the model \(\mathcal{C}^{j}\) (see Figure 5(a)).
2. We train with respect to the new training set \((X_{l}^{j},y_{l}^{j})\), thus obtaining \(\mathcal{C}^{j+1}\).
3. We test \(\mathcal{C}^{j+1}\) on:
* the batch samples \(X_{c}^{j}\), and we refer to it as the _post-train batch test_ (see Figure 5(b)): we here expect good results, since it is an in-sample evaluation;
* the updated unlabelled set \(X_{u}^{j}\), i.e. all the remaining unlabelled instances, and we refer to it as the _not-labelled-yet test_ (see Figure 5(c));
* the actual test set, namely \(S_{\text{test}}^{\text{RO}}\) (see Figure 5(d)).

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{1}{c}{**training**} & \multicolumn{1}{c}{**test set**} & \multicolumn{1}{c}{**Levenshtein**} & \multicolumn{1}{c}{**InDel**} & \multicolumn{1}{c}{**Token**} & \multirow{2}{*}{**Jaccard**} & \multirow{2}{*}{**JW**} & \multicolumn{1}{c}{**Random**} & **Siamese** \\ \multicolumn{1}{c}{**set size**} & & & **Ratio** & **Set Ratio** & & & **Forest** & **Network** \\ \hline \hline \multirow{3}{*}{small} & _RO_ & \(0.665\pm 0.045\) & \(0.855\pm 0.025\) & \(0.935\pm 0.005\) & \(0.577\pm 0.009\) & \(\mathbf{0.957\pm 0.003}\) & \(0.951\pm 0.017\) & \(0.892\pm 0.042\) \\ & _JO_ & \(0.428\pm 0.061\) & \(0.505\pm 0.005\) & \(0.662\pm 0.006\) & \(0.438\pm 0.06\) & \(0.678\pm 0.016\) & \(0.678\pm 0.018\) & \(\mathbf{0.725\pm 0.044}\) \\ & _DS_ & \(0.582\pm 0.05\) & \(0.717\pm 0.01\) & \(0.723\pm 0.015\) & \(0.533\pm 0.058\) & \(0.717\pm 0.003\) & \(0.718\pm 0.02\) & \(\mathbf{0.735\pm 0.013}\) \\ \multirow{3}{*}{medium} & _RO_ & \(0.643\pm 0.014\) & \(0.878\pm 0.013\) & \(0.944\pm 0.007\) & \(0.523\pm 0.041\) & \(0.957\pm 0.003\) & \(0.965\pm 0.006\) & \(\mathbf{0.975\pm 0.002}\) \\ & _JO_ & \(0.473\pm 0.038\) & \(0.488\pm 0.029\) & \(0.652\pm 0.003\) & \(0.463\pm 0.064\) & \(0.675\pm 0.013\) & \(0.697\pm 0.013\) & \(\mathbf{0.867\pm 0.019}\) \\ & _DS_ & \(0.535\pm 0.009\) & \(0.735\pm 0.009\) & \(0.743\pm 0.003\) & \(0.5\pm 0.0\) & \(0.715\pm 0.0\) & \(0.743\pm 0.008\) & \(\mathbf{0.773\pm 0.016}\) \\ \multirow{3}{*}{large} & _RO_ & \(0.641\pm 0.012\) & \(0.871\pm 0.014\) & \(0.944\pm 0.007\) & \(0.523\pm 0.04\) & \(0.956\pm 0.001\) & \(0.967\pm 0.004\) & \(\mathbf{0.976\pm 0.002}\) \\ & _JO_ & \(0.495\pm 0.0\) & \(0.5\pm 0.0\) & \(0.652\pm 0.003\) & \(0.465\pm 0.061\) & \(0.687\pm 0.006\) & \(0.72\pm 0.023\) & \(\mathbf{0.903\pm 0.051}\) \\ & _DS_ & \(0.53\pm 0.0\) & \(0.733\pm 0.012\) & \(0.743\pm 0.003\) & \(0.5\pm 0.0\) & \(0.715\pm 0.0\) & \(0.743\pm 0.003\) & \(\mathbf{0.777\pm 0.01}\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Experimental results**. Balanced Accuracy of the Random Forest, Siamese Neural Network, and the 5 single-distance Decision Trees, all trained with datasets of different sizes (small, medium, large) and tested on the three test sets discussed in Section 5.2, namely randomly ordered (\(S_{\text{test}}^{\text{RO}}\)), JW-ordered (\(S_{\text{test}}^{\text{JO}}\)) and domain shifted (\(S_{\text{test}}^{\text{DS}}\)). Mean and standard deviation computed via stratified \(k\)-fold cross-validation are displayed. Contrary to the 5 baseline models, the performances of the Random Forest and the Siamese Neural Network improve with the size of the training set. Moreover, the Siamese Neural Network trained on medium and large datasets outperforms the other approaches. Bold figures denote row-wise maximum values.

## 7 Discussion

Table 2 summarises the results of the experiments in terms of BA.
The following is a list of insights we can derive from its inspection.

The Random Forest model trained on the small dataset \(S_{\text{train}}^{\text{S}}\) has a good performance on the randomly ordered test set \(S_{\text{test}}^{\text{RO}}\). This is likely due to the fact that the input features -- described in Section 3.2 -- are essentially string similarity measures, thus the pattern to be learned does not need a lot of observations. The downside is limited improvement when more data are provided: indeed, by increasing the training set size from \(S_{\text{train}}^{\text{S}}\) to \(S_{\text{train}}^{\text{M}}\), less than \(2\%\) of BA is gained on the randomly ordered test set, and no gain at all from \(S_{\text{train}}^{\text{M}}\) to \(S_{\text{train}}^{\text{L}}\). Moreover, its performance drops by \(\sim 27\%\) and \(\sim 24\%\) on the JW ordered test and the domain shifted test, respectively.

Figure 5: **Active Learning performances.** BA during the Active Learning procedure computed over: (a) next-to-be-labelled instances (\(X_{c}^{j-1}\)), (b) just-labelled instances (\(X_{c}^{j-1}\)), (c) all the remaining unlabelled instances (\(X_{u}^{j-1}\)), (d) the test set \(S_{\text{test}}^{\text{RO}}\), as described in Section 6.2. On the \(x\)-axis we plot the number of labelled samples (log scale), starting from \(|X_{l}^{0}|=100\) up to \(|X_{l}^{M-2}|=4600\) — i.e. before adding the remaining \(B^{M-1}\) samples in the last iteration. In the post-train case (b), the evaluation starts from \(|X_{l}^{1}|=200\) (i.e. after adding the first \(B^{1}\) samples) up to the whole training dataset \(|X_{l}^{M-1}|=6000\). The evaluation over \(S_{\text{test}}^{\text{RO}}\) (d) is done in all available steps, i.e. from 100 up to 6000 samples. BA of Random Forest and Siamese Network models both as Random learners and as Least Confident learners is reported. The mean and 95% normal confidence intervals are obtained by aggregating the BA over the 3 cross-validation folds.

The same thing is true -- even more so -- for the JW-based classifier. In this case, the decision tree only needs to make an optimal choice of the threshold to put on the JW similarity score. We can expect that increasing the size of the training set only slightly changes this threshold, with a negligible impact on the out-of-sample performances. We can then deduce that the JW metric does not tend to overfit its domain and can generalise with acceptable performance. The same reasoning can be applied to the other string matching models. In particular, the InDel ratio and the Token Set Ratio perform remarkably well with a small amount of data, again due to the simple rule to be learned by the classifier. Concerning the JW model, we can observe a poorer generalisation to new domains. It is worth noticing that the drop in performance for the Baseline classifiers when switching from \(S_{\text{test}}^{\text{RO}}\) to \(S_{\text{test}}^{\text{JO}}\) and \(S_{\text{test}}^{\text{DS}}\) is slightly larger than for the Random Forest. This is reasonable, given that the Random Forest may use the information coming from all of the similarity metric scores at the same time.

The Siamese model systematically outperforms the other approaches on both medium and large training datasets. This demonstrates the ability of the Neural Network to avoid overfitting to the specific domain, to generalise across different distributions, and to learn an alias associated with a company that may differ significantly from a simple string match similarity (e.g.
the pair "REF SRL" and "RENOVARE ENERGY FARM SRL"). Table 3 shows several examples of matching and non-matching couples of company string names as they are classified by the Siamese model trained on \(S_{\text{train}}^{\text{L}}\), with the corresponding estimated probability \(\hat{y}\). In particular, the examples in Table 3 are extracted by drawing from non-matching couples with _high_ JW similarity, and from matching couples with _low_ JW similarity. In this way, we expect to highlight some interesting and challenging sample couples. Indeed, one can see that the Siamese model is able to correctly classify company names expressed as acronyms (e.g. in "S.P.I.G.A." and "REF"). On the other hand, there are cases more difficult to explain, such as the correctly classified match for the couple ("RONDA", "TORO ASSICURAZIONI SPA") where the entity is indeed the same -- TORO insurance actually merged into RONDA in 20048 -- but the company names are completely different. Further work is needed to explain the reasons behind such counter-intuitive matches of the Neural Network, and to find the patterns behind such classifications. Footnote 8: gazzzettaufficiale (GU Parte Seconda n.83 del 8-4-2004). We reported false positive examples where the names are very similar but they actually belong to different companies, e.g. "E.U.R.O. S.R.L." and "EURO STEEL SRL/MILANO". The overconfidence of the prediction could be solved, e.g. by incorporating additional information, such as address, holding and legal form of the two companies. As expected, most of the false negative samples -- i.e. pairs representing the same entity but predicted to be non-matching -- in Table 3 can be related to situations in which the company names are (almost) completely different, but the entity is indeed the same, likely due to some legal transaction (merger, acquisition, consolidation, etc.) as discussed in Section 5.1. Finally, the true negative examples show the remarkable capabilities of the Siamese model to correctly classify as different entities even pairs with very similar company names, such as "RECOS S.R.L." and "PECOS SRL", with very high confidence. Figure 4(d) shows the out-of-sample BA of the Siamese Network and the Random Forest when computed on \(S_{\text{test}}^{\text{RO}}\) at each Active Learning step. We can easily see that the Least Confident approach systematically outperforms the random choice for both models, confirming the value of Active Learning. However, while the Random Forest performance plateaus already before 400 samples, the Siamese needs up to 2,000: the Siamese Network has to leverage a larger amount of data to effectively learn patterns. Figures 4(a)-4(b) can help us understand the mechanisms at play. Figure 4(a) displays out-of-sample BA values on next-to-be-labelled batches: we expect poor performance for the Least Confident choice with respect to Random choice since in the former case we are selecting uncertain instances (i.e. difficult for the model) on purpose. Figure 4(b), on the other hand, displays BA values on batches just fed to the models: we here expect -- in general -- higher performances, being an in-sample evaluation. Interestingly, we see that the Least Confident choice has poor performance with respect to Random choice. This may be due to the fact that (at least a fraction of the) most uncertain observations remain indeed intrinsically difficult to classify, despite the training. 
This uncertainty effect is more pronounced for the Random Forest, likely because it is largely based on similarity metrics. Notice that, as the number of samples seen by the model increases, this effect becomes less and less pronounced and finally reverts, indicating that the residual samples are becoming easier to classify in the Least Confident approach. Incidentally, we notice that comparing the performances of the model on a batch before and after the model has been trained on it allows us to define an _early stopping_ rule. Namely, calling \(Acc_{pre}\) and \(Acc_{post}\) the BA of a model over a batch before and after it has been trained on it, respectively, we can define a threshold \(\theta\) and interrupt the process when: \(|Acc_{post}-Acc_{pre}|<\theta\). The rationale is that, if a model has about the same performance on a batch before and after having been trained on it, it has already learned to generalise over unseen data. This rule can be applied, with an appropriate \(\theta\), to both the Least Confident and the Random algorithm. Finally, Figure 5(c) shows the out-of-sample BA when computed on \(X_{u}\) (i.e. all the unlabelled samples of the training set at that step) at each step of Algorithm 1. This plot confirms that the Least Confident approach chooses the instances in such a way that the remaining ones are easier to classify. This is especially true for the Random Forest.

## 8 Conclusions

In our analysis, we have compared the performances of several supervised classifiers in the field of company name disambiguation. Providing pairs of company names as input, we consider two types of Machine Learning classifiers: Decision Stumps and Random Forest classifiers based on classical string similarity measures between the two names, and a Neural Network classifier on top of a learned LSTM embedding space of strings. The data are extracted from external company registry data and bank transfers. More specifically, we collect 9,000 couples of company names from external company registry data, and 200 couples of beneficiary names in bank transfers (the latter used only for testing purposes). All approaches are evaluated over three different test sets: a "randomly ordered" test set (RO), i.e. 3,000 samples randomly chosen from the company registry dataset; a "Jaro-Winkler ordered" test set (JO), i.e. 200 instances drawn from RO in such a way that we select JW-dissimilar actual matches and JW-similar actual non-matches; and a "domain shift" test set (DS), i.e. 200 couples of beneficiary names taken from a (different) dataset of bank transfers. The purpose of this work is twofold: on the one hand, we show that if enough data is available, the Siamese approach outperforms the other models and can be applied to other domains. Indeed, according to Table 2, the performance of the Baseline methods and the Random Forest barely improves when more data are provided for training. This is likely due to the fact that all the information extracted from string pairs is already encoded in classical string similarity metrics for the Baseline Trees and the Random Forest. On the contrary, increasing the size of the training set improves the performance of the Siamese model, as it can learn a more effective embedding space the more data it learns from. This demonstrates the Neural Network's capacity to generalise while avoiding overfitting to a specific domain.
These features enable our Siamese Neural Network to learn its own concept of string similarity and appropriately detect aliases connected with a company that differ significantly from a basic string match similarity. The other goal of our research is to demonstrate how to make human labelling more efficient by using an Active Learning strategy. Indeed, starting with a minimal training set of 100 labelled samples, we show that training the model with subsequent batches of the most uncertain samples (Least Confident learner) is more efficient than training with randomly chosen instances. One limitation of this work is the use of company names only for the goal of disambiguation. As previously said, it is possible that company names by themselves _do not_ contain enough information to resolve all the matches, as in cases where completely different company names still refer to the same Entity. We leave to future work the extension to include additional information, such as addresses, legal forms, shareholding, etc. We also plan to perform a more thorough analysis of the architecture of the Siamese Recurrent Network, possibly using a Transformer approach for the input sequences (Vaswani et al., 2017). Given the effectiveness of the Active Learning procedure, we plan to use an unlabelled dataset of hundreds of thousands of Entity pairs to extract from them the most informative few thousand pairs to label, to analyse how the procedure outlined here scales with data availability.

## Author contributions

**Riccardo Crupi**: Conceptualisation, Methodology, Software, Validation, Investigation, Writing - Original Draft, Project Administration. **Michele Grasso**: Software, Validation, Investigation, Writing - Original Draft. **Daniele Regoli**: Methodology, Investigation, Writing - Original Draft. **Shuyi Yang**: Methodology, Software, Investigation, Writing - Review & Editing. **Simone Scarsi**: Methodology, Software, Investigation, Writing - Review & Editing. **Alessandro Mercanti**: Data curation, Writing - Review & Editing. **Alessandro Basile**: Writing - Review & Editing. **Andrea Cosentini**: Writing - Review & Editing, Supervision.

## Acknowledgements

We would like to thank Ilaria Penco for her assistance with the legal aspects of the manuscript. We thank Andrea Barral for useful discussions on methodology and data curation. We also thank Giacomo Di Prinzio, Giulia Genta and Nives Visentin from Intesa Sanpaolo, and Sandro Bellu, Indrit Gjonaj, Andrea Giordano and Gabriele Pellegrinetti from Tecnet Dati s.r.l., namely the team that developed the disambiguation project for Intesa Sanpaolo, which inspired the research presented here.
2306.00437
Responsibility Perspective Transfer for Italian Femicide News
Different ways of linguistically expressing the same real-world event can lead to different perceptions of what happened. Previous work has shown that different descriptions of gender-based violence (GBV) influence the reader's perception of who is to blame for the violence, possibly reinforcing stereotypes which see the victim as partly responsible, too. As a contribution to raise awareness on perspective-based writing, and to facilitate access to alternative perspectives, we introduce the novel task of automatically rewriting GBV descriptions as a means to alter the perceived level of responsibility on the perpetrator. We present a quasi-parallel dataset of sentences with low and high perceived responsibility levels for the perpetrator, and experiment with unsupervised (mBART-based), zero-shot and few-shot (GPT3-based) methods for rewriting sentences. We evaluate our models using a questionnaire study and a suite of automatic metrics.
Gosse Minnema, Huiyuan Lai, Benedetta Muscato, Malvina Nissim
2023-06-01T08:27:00Z
http://arxiv.org/abs/2306.00437v1
# Responsibility Perspective Transfer for Italian Femicide News

###### Abstract

Different ways of linguistically expressing the same real-world event can lead to different perceptions of what happened. Previous work has shown that different descriptions of gender-based violence (GBV) influence the reader's perception of who is to blame for the violence, possibly reinforcing stereotypes which see the victim as partly responsible, too. As a contribution to raise awareness on perspective-based writing, and to facilitate access to alternative perspectives, we introduce the novel task of automatically rewriting GBV descriptions as a means to alter the perceived level of responsibility on the perpetrator. We present a quasi-parallel dataset of sentences with low and high perceived responsibility levels for the perpetrator, and experiment with unsupervised (mBART-based), zero-shot and few-shot (GPT3-based) methods for rewriting sentences. We evaluate our models using a questionnaire study and a suite of automatic metrics.

## 1 Introduction

"A terrible incident involving husband and wife", "Husband kills wife", "Her love for him became fatal": these different phrasings can all be used to describe the same violent event, in this case a _femicide_, but they won't trigger the same perceptions in the reader. Perceptions vary from person to person, of course, but also depend substantially and systematically on the different ways the same event is framed (Iyengar, 1994). Especially in the context of gender-based violence (GBV), this has important consequences on how readers will attribute responsibility: victims of femicides are often depicted, and thus perceived, as (co-)responsible for the violence they suffer.

Footnote 1: A report on femicides from November 2018 by two Italian research institutes points out that the stereotype of a shared responsibility between the victim and the perpetrator is still widespread among young generations: "56.8% of boys and 38.8% of girls believe that the female is at least partly responsible for the violence she has suffered" (Laboratorio Adolescenza and Istituto IARD, 2018).

There is indeed evidence from the linguistic literature (Pinelli and Zanchi, 2021; Meluzzi et al., 2021) that people perceive responsibility differently according to how femicides are reported (more blame on the perpetrator in "Husband kills wife", more focus on the victim in "Her love for him became fatal"). In general, linguistic strategies that background perpetrators have been shown to favour victim blaming (Huttenlocher et al., 1968; Henley et al., 1995; Bohner, 2002; Gray and Wegner, 2009; Hart and Fuoli, 2020; Zhou et al., 2021). This way of reporting contributes to reinforcing such social stereotypes. If we want social stereotypes to be challenged, the language we use to describe GBV is thus an excellent place to start, also from a Natural Language Processing (NLP) perspective. Recent work has shown that perspectives on femicides and the perceptions they trigger can be modelled automatically (Minnema et al., 2022, 2020). In this paper, as shown in Box 1, we explore the challenge of _rewriting_ descriptions of GBV with the aim to increase the perceived level of blame on the perpetrator, casting it as a style transfer task (Xu et al., 2012; Jin et al., 2022). In this novel _responsibility perspective transfer_ task, a given sentence from femicide news reports gets rewritten in a way that puts more responsibility on the perpetrator, while preserving the original content.
**Contributions** We create an evaluation set containing semi-aligned pairs with "low" and "high" sentences expressing similar information relative to an event, from an existing dataset of Italian news (§2.1). In the absence of parallel training data, we follow previous work (Lample et al., 2019; Luo et al., 2019; Lai et al., 2021) to train an unsupervised style transfer model using mBART (Liu et al., 2020) on non-parallel data (with style labels), and take a zero-shot and a few-shot approach using GPT-3 (Brown et al., 2020) to perform rewriting (§2.2). We run both human-based and automatic evaluations to assess the impact of rewriting on the perceived blame, comparing original and rephrased texts, and find that models can achieve detectable perspective shifts (§3). By introducing the novel task of responsibility perspective transfer, providing an evaluation dataset, a battery of trained models, and evidence of a successful methodology, we hope to foster further research and application developments on this and other perspective rewriting tasks that are relevant to society.

Footnote 2: Data and code are available at [https://github.com/gossminn/responsibility-perspective-transfer](https://github.com/gossminn/responsibility-perspective-transfer).

## 2 Experimental Settings

### Datasets

Our work makes use of the _RAI femicide corpus_ (Belluati, 2021), a dataset of 2,734 news articles covering 582 confirmed femicide cases and 198 other GBV-related cases in Italy between 2012-2017. Of these, 182 cases (comprising 178 femicides and 4 other cases) are linked to a set of news articles from the period 2015-2017 that report on these cases. This dataset is augmented with perspective annotations from Minnema et al. (2022). Gold annotations (averaged z-scored perception values from 240 participants) are available for 400 sentences, and silver annotations (annotated with the best-scoring model from Minnema et al. 2022) are available for 7,754 further sentences. Using event metadata, we automatically extracted pairs of sentences \(\langle L,H\rangle\), where \(L\) and \(H\) both reference the same GBV case, but respectively have a below-average (\(L\)) or above-average (\(H\)) level of perceived perpetrator blame. Next, for a subset of 1,120 sentences from the combined gold-silver perspective dataset, we performed manual filtering to ensure that for each pair, \(L\) and \(H\) reference not only the same _case_, but also show substantial overlap in terms of the specific _events_ within this case that they describe (e.g. the violence itself, the police investigation, conviction of a suspect, etc.). This yielded a set of 2,571 pairs (or 304 pairs if each sentence is used only once).

Footnote 3: Including cases of non-lethal violence, suspected femicide, and suicide.

### Models

Due to the limited availability of parallel data, we experiment with several existing text generation methods known to work in low-data settings.

**Unsupervised mBART** We train an unsupervised model with iterative back-translation (Hoang et al., 2018): two mBART-based models, one for each transfer direction, where the outputs of one direction, paired with their source sentences, are used to supervise the model in the opposite direction. All experiments are implemented atop Transformers (Wolf et al., 2020) using mBART-50 (Tang et al., 2021). We use the Adam optimizer with a polynomial learning rate decay, and a linear warmup of 100 steps for a maximum learning rate of 1e-4. We limit the maximum token length to 150.
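A minimal sketch of this fine-tuning setup, assuming the Hugging Face `transformers` API (the total number of training steps and the Italian language codes are illustrative assumptions; the decoder-only update anticipates the restriction described in the next paragraph):

```python
# Sketch of the mBART-50 setup: Adam, polynomial LR decay with 100 warmup
# steps, peak LR 1e-4, max token length 150. Details not stated in the
# paper (e.g. total steps, example sentences) are placeholder assumptions.
import torch
from transformers import (
    MBartForConditionalGeneration,
    MBart50TokenizerFast,
    get_polynomial_decay_schedule_with_warmup,
)

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="it_IT", tgt_lang="it_IT"
)

# Update only the decoder parameters, freezing the rest (see below).
for name, param in model.named_parameters():
    param.requires_grad = "decoder" in name

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=10_000  # assumed total
)

batch = tokenizer(
    ["Donna uccisa in casa."],                             # illustrative "low" source
    text_target=["Il marito uccide la moglie in casa."],   # illustrative "high" target
    max_length=150, truncation=True, padding=True, return_tensors="pt",
)
loss = model(**batch).loss  # standard seq2seq cross-entropy
loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```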
To alleviate computational costs and catastrophic forgetting, we only update the parameters of the decoder, freezing the other parameters.

**mBART + meta-information** A unique feature of our dataset is the availability of detailed meta-information about the events. We made a selection of the properties likely to be most relevant for characterizing the event and assigning responsibility (names of the victim and perpetrator, type of victim-perpetrator relationship, murder weapon and location) and concatenated this meta-information to the corresponding source sentence as input during training. We tried two order settings: _source-meta_ and _meta-source_. Preliminary experiments showed that concatenating only the event properties themselves, without including property names, produced the most promising results. For example: _"Trapani, Donna di 60 anni uccisa dall'ex marito -- Anna Manuguerra, Antonino Madone, ex coniuge, arma da taglio, Nubio, casa"_ ("Trapani: 60-year old woman killed by ex-husband -- [victim name], [perpetrator name], ex-spouse, cutting weapon, [town name], at home"). We use the same training setup as for the previous model.

**GPT-3: Naive implementation** We also experimented with using the _text-davinci-002_ version of GPT-3 (Brown et al., 2020) in a range of zero-shot and few-shot setups. Our _naive-zero_ setup uses a simple prompt telling the model to rewrite the sentence with more focus on the perpetrator. Next, _naive-few_ uses a similarly simple prompt along with a set of ten low-high sentence pairs randomly sampled from the gold annotations.

Footnote 4: _"Riscrivi la frase concentrandoti sul colpevole"_ ("Rewrite the sentence and concentrate on the culprit")

Footnote 5: _"Riscrivi le seguenti frasi da low ad high. Per high si intende che la colpa è attribuita interamente al killer. Ecco alcuni esempi: [...] Riscrivi la seguente frase:"_ ("Rewrite the following sentences from low to high. 'High' means that the blame is entirely put on the killer. Here are some examples: [...] Rewrite the following sentence:")

**GPT-3: Iterative few-shot** A challenging factor for our naive few-shot approach is that the 'natural' source-target pairs from our annotated data are not perfect minimal pairs, as they differ in perspective but also have some content differences. In an effort to use maximally informative pairs as few-shot examples, we designed an iterative process for compiling small curated sets of examples. First, we designed an improved zero-shot prompt by giving a set of source-target pairs sampled from the gold annotations to the model and prompting it to explain the differences between the pairs. We discovered by accident that this yields a very plausible and concise task definition, and we reasoned that a definition generated by the model on the basis of real examples might be more informative as a prompt than a manually designed one. We then provided two annotators with the resulting definition, as well as with five more source sentences sampled from the corpus. Each of the annotators then adapted the definition into a zero-shot prompt, used that prompt to generate target sentences for each of the source sentences, and selected the best candidate from these to create a set of pairs with maximal perspective contrast and content overlap, to be used in a few-shot prompt.

Footnote 6: The annotators were authors G.M. and M.B.
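To make the few-shot setup concrete, the sketch below shows how such a prompt can be assembled and sent to the legacy OpenAI completions endpoint used for _text-davinci-002_; the instruction wording and the example pair are illustrative placeholders, not the curated prompts actually used.

```python
# Sketch of few-shot prompt assembly for responsibility perspective transfer.
# Uses the legacy (pre-1.0) openai client; prompt text and examples are
# illustrative placeholders, not the paper's curated prompts.
import openai

openai.api_key = "sk-..."  # in practice, read from the environment

instruction = (
    "Riscrivi le seguenti frasi da low ad high. Per high si intende "
    "che la colpa è attribuita interamente al killer.\n\n"
)
examples = [
    ("Donna trovata morta in casa.",             # low: perpetrator backgrounded
     "Il marito ha ucciso la moglie in casa."),  # high: perpetrator blamed
]
source = "La lite tra i coniugi è finita in tragedia."

prompt = instruction
for low, high in examples:
    prompt += f"Low: {low}\nHigh: {high}\n\n"
prompt += f"Riscrivi la seguente frase:\nLow: {source}\nHigh:"

response = openai.Completion.create(
    model="text-davinci-002", prompt=prompt, max_tokens=100, temperature=0.7
)
print(response["choices"][0]["text"].strip())
```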
We kept both versions of the few-shot prompt, _iter-1_ and _iter-2_, in order to measure the combined effects of small differences in the prompt, randomness in the generated candidates, and judgement differences in the selection of the best candidate.

Footnote 7: The definition (slightly edited for grammar) is: _"Le frasi precedute dall'etichetta "Low:" tendono ad essere più brevi e non danno la colpa esplicita all'assassino, mentre le frasi precedute dall'etichetta "High:" tendono ad essere più dirette e a dare la colpa all'assassino."_ ("The sentences preceded by "Low:" tend to be shorter and don't explicitly blame the murderer, while the sentences preceded by "High:" tend to be more direct and blame the murderer.")

### Evaluation Methods

The main goal of responsibility perspective transfer is to generate a sentence with the desired perspective ("style strength" in classic style transfer tasks) that still has the same semantic content as the source sentence. We assess the performance of different models using standard metrics commonly employed in text style transfer (Mir et al., 2019; Briakou et al., 2021; Lai et al., 2022; Jin et al., 2022), and custom automatic metrics; we also run a questionnaire study with human participants.

**Automatic Evaluation** For estimating perspective quality, we used the best-performing perspective regressor from Minnema et al. (2022), which is based on an Italian monolingual DistilBERT model (_BERTino_; Muffo and Bertino, 2020). For content preservation, we use three popular text generation metrics: \(n\)-gram-based _BLEU_ (Papineni et al., 2002) and _ROUGE_ (Lin, 2004), as well as the neural-based model _COMET_ (Rei et al., 2020).

**Human Evaluation** Participants were given an online survey with 50 blocks, each corresponding to one source sentence sampled from the dataset. In each block, participants rated: 1) the level of perceived agent responsibility in each of the seven _target_ candidates; 2) the level of _content preservation_ of each target relative to the source. We also designed a separate, smaller questionnaire that asked the same questions about the few-shot examples used in _iter-1_ and _iter-2_. The pool of invited participants was a group of people with mixed genders and backgrounds from the personal network of the authors. No remuneration was offered. Four invitees responded to the main questionnaire, and three invitees responded to the few-shot example questionnaire (all female, mean age: 46).

| Dimension | \(R^{2}\) | Source | Target (avg) | mBART _base_ | mBART _src-meta_ | mBART _meta-src_ | GPT-3 _na-zero_ | GPT-3 _na-few_ | GPT-3 _iter-1_ | GPT-3 _iter-2_ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| "blames the murderer" | 0.61 | -0.511 | 0.445 | -0.250 | -0.188 | 0.284 | -0.157 | -0.375 | 0.109 | -0.116 |
| "caused by a human" | 0.60 | -0.228 | 0.362 | -0.037 | 0.005 | 0.371 | 0.042 | -0.095 | 0.278 | 0.076 |
| "focuses on the murderer" | 0.65 | -0.518 | 0.597 | -0.184 | -0.108 | **0.567** | 0.033 | -0.349 | 0.179 | -0.104 |

Table 1: Automatic evaluation of perspective using the BERTino-based model from Minnema et al. (2022). Scores are z-normalized (i.e., a score of -1 or 1 means "one standard deviation below/above average"). Target scores are averaged across different target sentences.
The participants have different levels of education (from middle school to university) and live in different regions of Italy. Our evaluation study should be seen as a pilot, and larger-scale, more representative studies are planned for the future. The main aim of the pilot was to have a small-scale validation of our automatic metrics (taken from previous work and developed on the basis of a large-scale human study) and to test our evaluation setup (which questions to ask, etc.). The questionnaire was designed and distributed using Qualtrics.

Footnote 8: [https://www.qualtrics.com/](https://www.qualtrics.com/)

## 3 Results

### Automatic Results

**Perspective Evaluation** Following Minnema et al. (2022a), we distinguish between several perceptual dimensions using a perception regression model, as shown in Table 1. Our main dimension of interest is _blame on murderer_, but we also look at the two closely related dimensions of _cause_ and _focus on murderer_. As shown by the \(R^{2}\) scores, regression quality is decent for all of these dimensions. We observe that the source and target sentences have lower and higher blame scores respectively, which are also consistent on the two related dimensions, affirming that our testing data is of good quality in terms of the perspective aspect. For all models, the perception scores of the predicted sentences are higher than those of the source sentences, with mBART/_meta-src_ achieving the highest scores. This suggests that all models alter perceptions of responsibility to some extent. However, in virtually all cases, perception scores stay well below the target, and in many cases below the average level (zero). For the mBART-based results, models with meta-information perform better than the baseline, with _meta-src_ reaching particularly high scores. Within the GPT-3 settings, zero-shot (_na-zero_), surprisingly, performs better than few-shot (_na-few_), and _iter-1_ yields the highest scores.

**Content Preservation** When taking source sentences as the reference, the three metrics show that the outputs have higher similarities to them than the target sentences do. mBART/_base_ has the highest scores, which (combined with the low perception scores of this model) suggests that the model tends to copy from the source sentence. Within the GPT-3 settings, _iter-1_ has the highest scores. Using instead the target sentences as reference, we see that all scores are very close, with mBART/_meta-src_ reaching the best performance, followed by GPT-3/_na-few_ and GPT-3/_iter-1_.

### Human-based Results

Table 3 reports the results of our human evaluation study.
We find that mBART/_meta-src_ is the best overall model on perspective, but has poor similarity. Meanwhile, GPT3/_na-few_ achieves the highest score on similarity but the lowest score in terms of perspective, and its overall performance is lower than that of GPT3/_na-zero_. GPT3/_iter-1_ has the best overall performance, with an HM of 4.98. We found reasonably high levels of inter-annotator agreement (Spearman's rank correlation between pairs of annotators). Correlations ranged between 0.3-0.6 (blame) and 0.4-0.6 (similarity) with high levels of significance (\(p<0.0001\)). The examples for few-shot are of higher quality overall, as they were picked by the authors.

| Metric | \(\leftrightarrow\) | Source | Target (avg) | mBART _base_ | mBART _src-meta_ | mBART _meta-src_ | GPT3 _na-zero_ | GPT3 _na-few_ | GPT3 _iter-1_ | GPT3 _iter-2_ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BLEU | src | - | 0.015 | 0.725 | 0.612 | 0.236 | 0.303 | 0.435 | 0.489 | 0.285 |
| ROUGE | src | - | 0.100 | 0.808 | 0.701 | 0.351 | 0.551 | 0.638 | 0.659 | 0.450 |
| COMET | src | - | -1.216 | 0.540 | 0.257 | -0.591 | 0.103 | 0.538 | 0.379 | -0.058 |
| BLEU | tgt | 0.015 | - | 0.014 | 0.016 | 0.024 | 0.010 | 0.013 | 0.014 | 0.009 |
| ROUGE | tgt | 0.100 | - | 0.110 | 0.104 | 0.132 | 0.088 | 0.094 | 0.098 | 0.090 |
| COMET | tgt | -1.175 | - | -1.194 | -1.178 | -1.002 | -1.090 | -1.045 | -1.057 | -1.059 |

Table 2: Automatic content preservation metrics (BLEU, ROUGE, COMET), comparing generated sentences against source and gold target sentences.

| | Perspective | Similarity | HM |
| --- | --- | --- | --- |
| mBART _base_ | 2.14 | 7.72 | 3.34 |
| mBART _src-meta_ | 2.50 | 6.78 | 3.65 |
| mBART _meta-src_ | 4.50 | 3.62 | 4.01 |
| GPT-3 _na-zero_ | 2.77 | 6.52 | 3.89 |
| GPT-3 _na-few_ | 2.08 | 8.17 | 3.31 |
| GPT-3 _iter-1_ | 3.57 | 7.97 | 4.98 |
| GPT-3 _iter-2_ | 3.84 | 6.60 | 4.85 |
| Examples _for iter-1_ | 5.20 | 6.93 | 5.94 |
| Examples _for iter-2_ | 3.87 | 5.27 | 4.46 |

Table 3: Human evaluation results on model outputs and on the examples used for few-shot prompting. HM is the harmonic mean of the perspective and similarity scores.

### Case Study

Box 2 shows two sets of example outputs generated by mBART and GPT-3. While hand-picked, these examples show that both models are capable of generating sentences that increase responsibility while trying to preserve content. However, they also highlight a key challenge: what if the source sentence lacks details about the event? The mBART model has access to event metadata and uses this effectively in Example 1 to produce a sentence that stays close to the source but with details from the metadata filled in (though with rather clunky sentence structure). In Example 2, instead, it produces a sentence that is factually correct but also loses most of the information from the source sentence. On the other hand, GPT-3, without access to metadata, often 'invents' missing information. This is evident in the second example, in which it faithfully preserves the source sentence and increases the level of blame by adding plausible but (partially) incorrect information about the crime.

Footnote 9: Due to lack of space, we include only generations from the overall best-performing model from each category.
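Returning briefly to the evaluation scores: the two aggregates used above (the HM of Table 3 and the Spearman agreement) are straightforward to reproduce. A small sketch with made-up ratings, assuming numpy and scipy:

```python
# Sketch of the aggregates used in the human evaluation: the harmonic mean
# (HM) of perspective and similarity scores (Table 3), and Spearman's rank
# correlation between annotator pairs. All ratings below are made up.
import numpy as np
from scipy.stats import spearmanr

def harmonic_mean(perspective, similarity):
    return 2 * perspective * similarity / (perspective + similarity)

print(round(harmonic_mean(3.57, 7.97), 2))  # ~4.93, cf. the iter-1 row of Table 3

# Hypothetical blame ratings (1-10) from two annotators on the same outputs
annotator_a = np.array([2, 5, 7, 4, 8, 3])
annotator_b = np.array([3, 4, 8, 4, 7, 2])
rho, p_value = spearmanr(annotator_a, annotator_b)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```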
## 4 Discussion & Conclusion We proposed responsibility perspective transfer as a new task and introduced a dataset and models for applying this task to Italian news reporting about femicides. Our dataset contains a limited amount of quasi-aligned pairs that proved useful for evaluation and few-shot learning. We experimented with two modeling approaches: unsupervised mBART (with or without enriching the input with metadata) and zero-shot/few-shot learning with GPT-3. Our human and automatic evaluations suggest _GPT-3/iter-1_ as the best overall model, with a relatively high level of responsibility placed on the perpetrator and a good degree of content preservation. For the latter, most models score at least 6/10 on average on the human survey. The perspective change itself has also been achieved by our models, with substantially increased levels of perceived perpetrator blame compared to the source, but there is still much room for improvement: none of the models comes close to having the same level of blame as the target sentences do, and in the human evaluation survey no model achieves a 'blame score' of more than 4.5/10. The main obstacle for future improvements seems to lie with the lack of truly parallel data; however, our GPT-3-based iterative approach of creating minimal pairs seems to have worked quite well, and might be further exploited on a larger scale. ## 5 Limitations This paper introduced the new task of responsibility perspective transfer and provided initial data collection and modeling for a specific domain (news about gender-based violence) and language (Italian). The main limitation of our work is that the (mBART) models that we trained and the prompts (for GPT-3) that we designed are specific to this domain and language and cannot be applied 'out-of-the-box' in other contexts. However, all of our modeling setups require no or limited training data and make use of readily available existing models, so we believe the general approach to be easily transferrable to other domains. Another limitation comes from the fact that we used GPT-3: the model is closed-source and can only be accessed with a paid subscription to the OpenAI API ([https://beta.openai.com/](https://beta.openai.com/)). This has consequences for reproducibility for several reasons. First of all, we do not have access to the exact technical specifications of the model or to the training data that was used. The GPT-3 models are regularly updated (at the time of our experiments, _text-davinci-002_ was the most recent available version), but limited information is available about what distinguishes each version from the previous ones or from the original model introduced in Brown et al. (2020). Moreover, access to the API is controlled by OpenAI and could be closed at any time at the company's discretion; the API is currently quite accessible with no waiting list and a reasonably generous free trial, but the rates (paid in USD) might not be affordable for researchers outside of institutions in high-income countries, and not all researchers might be comfortable agreeing to the company's terms and conditions. Finally, the generative process involves a degree of randomness, and through the API it is not possible to fixate the model's random seed, meaning that the model produces different predictions every time it is called, even when using exactly the same prompt. ## 6 Ethics Statement We see three important ethical considerations around our paper. 
The first consideration is related to the use of large proprietary language models (GPT-3). Apart from the reproducibility limitations resulting from the use of GPT-3 discussed above, there are more general ethical questions surrounding the use of GPT-3 and similar models, for example the high energy usage and resulting carbon emissions, and societal questions around the oligopoly on state-of-the-art language models that is currently in the hands of a handful of large US-based companies. The second consideration relates to the task that we introduce: while we see perspective transfer models as a valuable tool for studying how language 'frames' (social) reality that could also have practical applications, for example in journalism, we strongly believe that any such applications must be approached with extreme care. The models that we introduce are scientific analysis tools that could be used to suggest alternative viewpoints on an event, but we believe that generations should _not_ be seen as necessarily reflecting a 'true' or 'better' perspective, and should not be used in a prescriptive way (i.e. used to tell someone how to write). We believe that the authors (journalists or others) of any text ultimately bear exclusive responsibility for the views, perspectives and (implicit) values expressed in it, and should be careful in making use of texts (re-)written by computers, such as the ones produced by our proposed models. Finally, we are aware that our task domain (femicide/gender-based violence) is a societally and emotionally loaded topic, and that the texts contained in our dataset and produced by our models might be disturbing. In particular, in some cases, models may produce graphic descriptions of violence and/or questionable moral judgements (e.g., we have occasionally seen statements such as "the perpetrator of this horrible crime does not have the right to live" spontaneously produced by some of the models), and potential users of applications of the model should be aware of this. For the purposes of this paper, the only people external to the research team who have been extensively exposed to model outputs were the annotators in our human evaluation study. On the introduction page of our online questionnaire, annotators were warned about the sensitive nature of the topic and advised that they could stop their participation at any time if they felt uncomfortable, and that they could contact the authors with any questions. Prior to running the online questionnaire, we requested and obtained ethical approval from the Ethical Review Committee of our research institution.

### Author contributions

Authors G.M. and H.L. share first co-authorship (marked with **). G.M. had primary responsibility for data collection and preparation, setting up the GPT-3 experiments and running the human evaluation survey. H.L. had primary responsibility for the mBART experiments and the automatic evaluation. B.M. annotated data (pair alignment) and contributed to prompt engineering and the design of the evaluation questionnaire. M.N. coordinated and supervised the overall project.

## Acknowledgements

Authors G.M. and M.N. were supported by the Dutch National Science organisation (NWO) through the project _Framing situations in the Dutch language_, VC.GW17.083/6215. Author H.L. was supported by the China Scholarship Council (CSC). We would like to thank the annotators for helping us evaluate the models' outputs. We also thank the ACL anonymous reviewers for their useful comments.
Finally, we thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.
2307.08342
Stability results for a hierarchical size-structured population model with distributed delay
In this paper we investigate a structured population model with distributed delay. Our model incorporates two different types of nonlinearities. Specifically we assume that individual growth and mortality are affected by scramble competition, while fertility is affected by contest competition. In particular, we assume that there is a hierarchical structure in the population, which affects mating success. The dynamical behavior of the model is analysed via linearisation by means of semigroup and spectral methods. In particular, we introduce a reproduction function and use it to derive linear stability criteria for our model. Further we present numerical simulations to underpin the stability results we obtained.
Dandan Hu, József Z. Farkas, Gang Huang
2023-07-17T09:27:41Z
http://arxiv.org/abs/2307.08342v1
# Stability results for a hierarchical size-structured population model with distributed delay

###### Abstract

In this paper we investigate a structured population model with distributed delay. Our model incorporates two different types of nonlinearities. Specifically we assume that individual growth and mortality are affected by scramble competition, while fertility is affected by contest competition. In particular, we assume that there is a hierarchical structure in the population, which affects mating success. The dynamical behavior of the model is analysed via linearisation by means of semigroup and spectral methods. In particular, we introduce a reproduction function and use it to derive linear stability criteria for our model. Further we present numerical simulations to underpin the stability results we obtained.

## 1 Introduction

Population dynamics has been at the center of biomathematics since Malthus' exponential model of population growth. The renowned logistic equation is the classic example of a mathematical model for a self-regulating population. Simple models, like the logistic model, are based on the premise that, for example, the per capita growth rate is solely determined by the total population size. Although these types of models tackle the issues of population self-regulation and stability, they fail to consider individual variations. As a consequence, only population level processes can be accounted for, and their predictive power may be limited. It is clear that an individual's activity in a specific population may be influenced not just by one-on-one interactions with other members of the population of the same physiological state, but also by interactions with individuals who are of a different state (e.g. older or younger, larger or smaller, etc.) than themselves. It has been shown that when population density rises, competition among individuals for a restricted resource increases, and individuals may compete for a variety of resources including food, space, shelter, and mates. It has also been shown that individuals of various species, such as fish, lizards, water buffalo, snails, and others, exhibit a positive relationship between the quantity of available food and their own body size [1]. A similar phenomenon occurs for terrestrial plants, which rely on solar energy for photosynthesis. The survival of each plant is heavily influenced by the vertical component of the plant size distribution, that is, the size-related hierarchy within a particular stand. Clearly, a taller plant is exposed to more light, and the energy is then channeled into its individual growth. Consequently, a hierarchical size-structured population model may prove to be useful to model such species, in particular when modelling intra-specific competition. Indeed, hierarchical size-structured population models have been studied extensively in the literature. Without aiming for completeness, we mention a few relevant recent papers, where the interested reader will also find further references [2, 4, 3, 5, 6]. In the present paper we introduce and study a size-structured population model in which the birth rate is a function of an infinite dimensional interaction variable related to a hierarchy in the population (modelling contest competition), and the growth and death rates are functions of the total population size (modelling scramble competition). Hence our model incorporates two different types of nonlinearities.
Specifically, we consider the following system that describes the dynamics of a hierarchical size-structured population model with delayed birth process.

\[\left\{\begin{array}{l}\frac{\partial p(s,t)}{\partial t}+\frac{\partial}{\partial s}(\gamma(s,P)p(s,t))+\mu(s,P)p(s,t)=0,\quad 0\leqslant s\leqslant m,\;t>0,\\ \\ p(0,t)=\int_{0}^{m}\int_{-\theta}^{0}\beta(s,\tau,Q(s,t+\tau))p(s,t+\tau)\mathrm{d}\tau\mathrm{d}s,\quad t>0,\\ \\ p(s,\delta)=p^{0}(s,\delta),\quad 0<s\leqslant m,\;\delta\in[-\theta,0].\end{array}\right. \tag{1.1}\]

Here \(p(s,t)\) stands for the density of individuals with respect to size \(s\in[0,m]\), where \(m\) is the maximum size of an individual in the population. The functions \(\gamma\) and \(\mu\) denote the individual growth and mortality rates, respectively, which depend on the individual's own size \(s\) as well as on the total population size

\[P(t)=\int_{0}^{m}p(s,t)\mathrm{d}s. \tag{1.2}\]

The function \(\beta\) in Eqs. (1.1) stands for the fertility rate of an individual, which depends on the size \(s\) and a function of the population density (environment) specified as:

\[Q(s,t+\tau)=\alpha\int_{0}^{s}w(r)p(r,t+\tau)\mathrm{d}r+\int_{s}^{m}w(r)p(r,t+\tau)\mathrm{d}r,\quad s\in[0,m],\;t>0. \tag{1.3}\]

The interaction variable \(Q\) accounts for a hierarchy in the population impacting fertility/reproduction, where the parameter \(\alpha\,(0\leq\alpha\leq 1)\) determines the strength of the hierarchy between individuals of different sizes. In particular, \(\alpha=0\) corresponds to an absolute hierarchical structure, in which large individuals in the population have an absolute advantage. The other limiting case, \(\alpha=1\), describes a scenario with no hierarchical structure in the population, that is, each individual is in fair (scramble) competition when accessing resources. The parameter \(\tau\) ranges over \([-\theta,0]\), where \(\theta>0\) denotes the maximum delay. The distributed delay through \(Q\) is introduced here to account for the effect of delay through contest competition. Note the slightly unusual boundary condition we employ in our model (1.1). From the physical point of view, the flux of individuals at the minimal size is naturally \(\gamma(0,P)p(0,t)\). Hence we tacitly assume that \(\gamma(0,P)\equiv 1\), i.e. the growth rate is normalised such that newborns have the same growth speed independently of the standing population. This assumption yields a great deal of simplification in the linearisation and makes the computations much more tractable. In the rest of the paper we assume that the vital rates satisfy the following regularity assumptions:

\[\gamma =\gamma(s,P)\in C^{1}\left([0,m];C^{1}[0,\infty)\right),\,\gamma>0,\]
\[\mu =\mu(s,P)\in C\left([0,m];C^{1}[0,\infty)\right),\,\mu\geq 0,\]
\[\beta =\beta(s,\tau,Q)\in C\left([0,m]\times[-\theta,0);C^{1}[0,\infty)\right),\,\beta\geq 0,\]
\[w =w(s)\in C^{1}([0,m]),\,w>0.\]

Mathematical models of physiologically structured populations have been developed and investigated by numerous researchers over the past decades. Without aiming for completeness, we mention here a few recent (and not so recent) works [7, 8, 10, 9, 18, 21, 19, 20, 15, 12, 13, 11, 14, 16, 17], where the interested reader will find further useful references. There are two main modelling approaches to build and study structured population models.
The classic PDE modelling approach, which we employ here, utilises the natural density distribution of the population, and therefore the resulting models are typically formulated as first order hyperbolic equations with non-local boundary conditions, such as the one we study here. For relatively simple PDE models one can often directly derive a renewal (integral) equation for the population birth rate, which is a delay equation. For more complicated models, in particular with infinite dimensional nonlinearities, such a direct approach is not necessarily convenient. In this case it is possible to build from basic biological principles a structured population model which takes the form of a delay equation, or an even more abstract dynamical system. Then the question of equivalence between the two different formulations naturally arises, which has been the subject of the recent papers [24, 22, 23]. For a linear model with distributed states at birth, which we studied in [24], the equivalence results we were able to establish are quite satisfying. However, for certain classes of nonlinear models the question of equivalence is much more complicated, and the delay equation formulation has an advantage, in particular when studying qualitative properties via linearisation. Using the framework of nonlinear semigroup theory it is possible to establish existence of solutions of the nonlinear PDE model on a suitable Banach space using the Crandall-Liggett theorem [25]; however, the arising (solution) nonlinear semigroup cannot be shown to be continuously differentiable in general. This does not necessarily mean, though, that stability results cannot be deduced in the PDE framework using the formal linearisation we employ here. In fact we expect that this is possible, hence we tacitly assumed that this is the case in this work. Indeed, the specific examples presented in Section 7 also support this. To invoke the linearised stability principle from the delay formulation of a model, one can study the equivalence of the two formulations, for example by means of a continuous map which maps orbits of the PDE formulation to orbits of the delay formulation. For a different, size-structured predator-prey (usually referred to as a _Daphnia_) model, such equivalence between orbits was studied in the recent paper [22]. Moreover, in the recent manuscript [23], the delay formulation of a single species hierarchical size-structured population model is studied. That model is equipped with the classical boundary condition, representing recruitment of newborn individuals, and it is assumed that mortality is constant; however, the growth rate depends on an infinite dimensional environmental variable (due to the hierarchical structure), but not on size explicitly. For this model we have verified directly that the characteristic equation deduced (in a similar fashion as here) from the linearisation of the PDE model is equivalent to the characteristic equation deduced from the linearisation of the delay formulation, which is a major indicator that linear stability results deduced in the PDE framework do indeed hold for the original nonlinear model. We also briefly discussed the equivalence between orbits in the two formulations; however, due to the special delay formulation we employed (eliminating the infinite dimensional environmental variable), difficulties arise; please see Section 6 in the above mentioned manuscript for more details. Researchers have been focusing on how to incorporate delays (e.g.
maturation) in the recruitment process, in particular in the context of age-structured models; see e.g. [26, 27, 28, 30, 29], where stability results were obtained using methods similar to the ones we deploy here. We specifically mention the paper [6], in which a similar size-structured model was studied. However, in that model the growth and mortality rates only depend on size, and not on the total population size. It is clear that in most populations, individual growth and survival are greatly correlated with the size of the standing population, and that if the total population size falls below a certain level the population will almost certainly die out, which is also known as the Allee effect [31]. To account for this, our model incorporates growth and mortality rates that do depend on the total population size, which makes it more realistic. The main aim of our work is to introduce and study a hierarchical size-structured population model which incorporates two significantly different types of nonlinearities and a delay in the recruitment process. We aim to demonstrate how to apply the theory of strongly continuous semigroups to this model. The rest of the paper is organised as follows. In Sect. 2, we first give conditions for the existence of a positive stationary solution of model (1.1) and formally linearise it around a steady state. We then recall some theoretical results, which we will utilise later when studying the linearisation of the model. In Sect. 3, we rewrite the linearised system as an abstract Cauchy problem, and then prove that it is governed by a strongly continuous semigroup of operators. In Sect. 4, we study important regularity properties of the governing linear semigroup. In Sect. 5, we derive an explicit characteristic equation characterising the point spectrum of the generator of the governing linear semigroup. In Sect. 6, we establish criteria for the linear stability and instability of steady states of our model. Finally, in Sect. 7, some examples will be presented and, using numerical simulations, we verify that the linear stability results obtained in the previous section are indeed valid for the original nonlinear model.

## 2 Preliminaries

It is clear that our model (1.1) admits the trivial stationary solution. We now establish necessary conditions for the existence of a positive stationary solution. Clearly, any non-trivial stationary solution \(p_{*}(s)\) of (1.1) satisfies the following equations

\[\frac{\mathrm{d}}{\mathrm{d}s}\left[\gamma(s,P_{*})p_{*}(s)\right]=-\mu(s,P_{*})p_{*}(s), \tag{2.1}\]

\[p_{*}(0)=\int_{0}^{m}\int_{-\theta}^{0}\beta\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\,\mathrm{d}\tau\,\mathrm{d}s. \tag{2.2}\]

The general solution of Eq. (2.1) is found as

\[p_{*}(s)=p_{*}(0)e^{-\int_{0}^{s}\frac{\mu(y,P_{*})+\gamma_{s}(y,P_{*})}{\gamma(y,P_{*})}\mathrm{d}y}. \tag{2.3}\]

Substituting Eq. (2.3) into Eq. (2.2), we observe that

\[1=\int_{0}^{m}\int_{-\theta}^{0}\beta\left(s,\tau,Q_{*}(s)\right)e^{-\int_{0}^{s}\frac{\mu(y,P_{*})+\gamma_{s}(y,P_{*})}{\gamma(y,P_{*})}\mathrm{d}y}\,\mathrm{d}\tau\,\mathrm{d}s, \tag{2.4}\]

if \(p_{*}(0)\neq 0\). Hence for \(P\geq 0\) and \(Q\geq 0\), we define the basic reproduction function as

\[\mathscr{R}(P,Q)=\int_{0}^{m}\int_{-\theta}^{0}\Pi(s,P)\beta(s,\tau,Q(s))\,\mathrm{d}\tau\,\mathrm{d}s, \tag{2.5}\]

where the function \(\Pi\) is given for \(0\leq s\leq m\) by

\[\Pi(s,P)=e^{-\int_{0}^{s}\frac{\mu(y,P)+\gamma_{s}(y,P)}{\gamma(y,P)}\mathrm{d}y}. \tag{2.6}\]

By integration of Eq.
(2.3), we obtain

\[p_{*}(0)=\frac{P_{*}}{\int_{0}^{m}e^{-\int_{0}^{s}\frac{\mu(y,P_{*})+\gamma_{s}(y,P_{*})}{\gamma(y,P_{*})}\mathrm{d}y}\,\mathrm{d}s}, \tag{2.7}\]

where \(P_{*}=\int_{0}^{m}p_{*}(s)\,\mathrm{d}s\) represents the positive population size at the steady state. Finally, using Eq. (2.7) in Eq. (2.3), we get

\[p_{*}(s)=\frac{P_{*}\Pi(s,P_{*})}{\int_{0}^{m}\Pi(s,P_{*})\,\mathrm{d}s}. \tag{2.8}\]

Then the function \(Q_{*}\), defined by

\[Q_{*}(s)=\alpha\int_{0}^{s}w(r)p_{*}(r)\,\mathrm{d}r+\int_{s}^{m}w(r)p_{*}(r)\,\mathrm{d}r, \tag{2.9}\]

satisfies the equation

\[\mathscr{R}\left(P_{*},Q_{*}\right)=1. \tag{2.10}\]

We give the following existence result for the stationary solution of model (1.1).

**Proposition 2.1**.: _If \(p_{*}(s)\) is a positive stationary solution of model (1.1), then \(p_{*}\) is defined by (2.8) and the function \(Q_{*}\) satisfies (2.9) and \(\mathscr{R}\left(P_{*},Q_{*}\right)=1\)._

Given a stationary solution \(p_{*}\), we linearise our model (1.1) by introducing the infinitesimal perturbation \(u=u(s,t)\) and making the ansatz \(p=u+p_{*}\). After inserting this expression into (1.1) and omitting all nonlinear terms, we obtain the linearised problem

\[\begin{split} 0=&\frac{\partial}{\partial t}u(s,t)+\gamma_{*}(s)\frac{\partial}{\partial s}u(s,t)+\nu_{*}(s)u(s,t)+\varepsilon_{*}(s)U(t),\\ u(0,t)=&\int_{0}^{m}\int_{-\theta}^{0}\beta_{Q}(s,\tau,Q_{*})p_{*}(s)H(s,t+\tau)\mathrm{d}\tau\mathrm{d}s\\ &+\int_{0}^{m}\int_{-\theta}^{0}\beta(s,\tau,Q_{*})u(s,t+\tau)\mathrm{d}\tau\mathrm{d}s,\\ H(s,t)=&\alpha\int_{0}^{s}w(r)u(r,t)\mathrm{d}r+\int_{s}^{m}w(r)u(r,t)\mathrm{d}r,\\ u(s,\delta)=& u^{0}(s,\delta),\quad H(s,\delta)=H^{0}(s,\delta),\quad\delta\in[-\theta,0],\end{split} \tag{2.11}\]

where we have set \(H(s,t)=Q(s,t)-Q_{*}(s)\), \(U(t)=P(t)-P_{*}=\int_{0}^{m}u(s,t)\,\mathrm{d}s\), and

\[\gamma_{*}(s) =\gamma(s,P_{*}),\]
\[\nu_{*}(s) =\gamma_{s}(s,P_{*})+\mu(s,P_{*}),\]
\[\varepsilon_{*}(s) =p_{*}(s)\left(\mu_{P}(s,P_{*})+\gamma_{sP}(s,P_{*})\right)+p_{*}^{\prime}(s)\gamma_{P}(s,P_{*}).\]

We now recall some important definitions and results from the theory of linear operators, which we are going to utilise later on. First let us recall the following characterisation theorem due to Hille and Yosida, see e.g. [32].

**Lemma 2.2**.: _A linear operator \(A\) is the infinitesimal generator of a \(C_{0}\)-semigroup of contractions \(T(t)\), \(t\geq 0\) if and only if (i) \(A\) is closed and \(\overline{D(A)}=X\); (ii) The resolvent set \(\rho(A)\) of \(A\) contains \(\mathbb{R}^{+}\) and for every \(\lambda>0\)_

\[\left\|R(\lambda:A)\right\|\leq\frac{1}{\lambda}. \tag{2.12}\]

In fact, Lemma 2.2 implies that any Hille-Yosida operator gives rise to a \(C_{0}\)-semigroup on the closure of its domain.

**Definition 2.3**.: _Let \((A,D(A))\) be a linear operator on the Banach space \(X\) and set_

\[X_{0}=(\overline{D(A)},\left\|\cdot\right\|);\]
\[A_{0}x=Ax,\text{ for }x\in D\left(A_{0}\right)=\left\{x\in D(A):Ax\in X_{0}\right\}.\]

_Then the operator \((A_{0},D\left(A_{0}\right))\) is called the part of \(A\) in \(X_{0}\)._

Particularly, if \(\left(A,D(A)\right)\) is a Hille-Yosida operator, its part \(\left(A_{0},D\left(A_{0}\right)\right)\) generates a strongly continuous semigroup \(\left(T_{0}(t)\right)_{t\geq 0}\) on \(X_{0}\) (see e.g. [32]).

**Lemma 2.4**.: _(see e.g. [33]) Let the operator \(A\) be a Hille-Yosida operator on a Banach space \(X\)._
_If the operator \(B\) is a bounded linear operator on \(X\), then the operator \(A+B\) is also a Hille-Yosida operator on the Banach space \(X\)._

**Definition 2.5**.: _Let \(\left(A,D(A)\right)\) be a closed linear operator on a Banach space \(X\). Then the point spectrum of \(A\), denoted by \(\sigma_{p}(A)\), is defined as_

\[\sigma_{p}(A):=\{\lambda\in\mathbb{C}:\lambda I-A:D(A)\to X\text{ is not injective}\},\]

_and the crucial quantity \(s(A)\), called the spectral bound of \(A\), is defined as_

\[s(A):=\sup\{\mathrm{Re}\,\lambda:\lambda\in\sigma(A)\}.\]

**Definition 2.6**.: _If \(\left(A,D(A)\right)\) is a generator of a \(C_{0}\)-semigroup \(\left(T(t)\right)_{t\geq 0}\), we denote by \(\omega_{0}(A)\) the growth bound of the semigroup \(\left(T(t)\right)_{t\geq 0}\), and define it as_

\[\omega_{0}(A):=\lim_{t\rightarrow+\infty}\,t^{-1}\log\|T(t)\|.\]

The following lemmas (see e.g. [33] and [34]) will be used to establish the positivity of the governing \(C_{0}\)-semigroup \(\left(T(t)\right)_{t\geq 0}\).

**Lemma 2.7**.: _(Riesz-Schauder theory) Let \(\left(A,D(A)\right)\) be a compact operator on the Banach space \(X\); then (i) \(0\in\sigma(A)\) when \(\dim X=\infty\); (ii) \(\sigma(A)\backslash\{0\}=\sigma_{p}(A)\backslash\{0\}\); (iii) \(\sigma(A)\) is a discrete set having no limit points except 0._

**Lemma 2.8**.: _A strongly continuous semigroup \(\left(T(t)\right)_{t\geq 0}\) on a Banach lattice \(X\) is positive if and only if the resolvent \(R(\lambda,A)\) of its generator \(A\) is positive for all sufficiently large \(\lambda\)._

## 3 Existence of a \(C_{0}\)-semigroup governing the linearised system

In this section, to prove the well-posedness of the linearised problem (2.11), we set up a \(C_{0}\)-semigroup framework on a suitable Banach lattice. For any positive stationary solution \(p_{*}(s)\), we consider the Banach space

\[\mathfrak{X}=L^{1}([0,m])\]

with the usual norm \(\|\cdot\|\), and on this space we introduce the following operators

\[(\mathfrak{A}_{m}f)(s)=-\gamma_{*}(s)f^{\prime}(s)-\nu_{*}(s)f(s)\text{ for }s\in[0,m]\]

with domain \(D\left(\mathfrak{A}_{m}\right)=W^{1,1}(0,m)\), and

\[(\mathfrak{B}_{m}f)(s)=-\varepsilon_{*}(s)\int_{0}^{m}f(y)\,\mathrm{d}y\text{ for }s\in[0,m]\]

with domain \(D\left(\mathfrak{B}_{m}\right)=L^{1}(0,m)\). The \(m\) subscript indicates that the operators are specified on their maximal domain. Moreover, we define the boundary operator

\[\mathcal{P}:D\left(\mathfrak{A}_{m}\right)\rightarrow\mathbb{C},\;\mathcal{P}(f):=f(0),\]

which is used to express the boundary condition ([26]). Next, we introduce the delay operator

\[\Phi(y)=\int_{0}^{m}\int_{-\theta}^{0}\beta(s,\tau,Q_{*})y(s,\tau)\,\mathrm{d}\tau\,\mathrm{d}s+\int_{0}^{m}\int_{-\theta}^{0}p_{*}(s)\beta_{Q}(s,\tau,Q_{*})\left(\alpha\int_{0}^{s}w(r)y(r,\tau)\,\mathrm{d}r+\int_{s}^{m}w(r)y(r,\tau)\,\mathrm{d}r\right)\mathrm{d}\tau\,\mathrm{d}s,\]

where \(y\in E=L^{1}([-\theta,0],\mathfrak{X})\cong L^{1}((0,m)\times[-\theta,0])\). Then with these notations Eqs. (2.11) can be cast in the form of an abstract boundary delay system:

\[\left\{\begin{array}{rcl}\frac{\mathrm{d}}{\mathrm{d}t}u(t)&=&(\mathfrak{A}_{m}+\mathfrak{B}_{m})\,u(t),\quad t\geqslant 0,\\ \mathcal{P}u(t)&=&\Phi\left(u_{t}\right),\\ u_{0}(t)&=&u^{0}(t),\quad t\in[-\theta,0],\end{array}\right.
\tag{3.1}\] where \(u^{0}(t):=u^{0}(\cdot,t)\), \(u:[0,+\infty)\to L^{1}(0,m)\) is defined as \(u(t):=u(\cdot,t)\), and \(u_{t}:[-\theta,0]\to L^{1}(0,m)\) is the history segment defined in the usual way as \[u_{t}(\tau):=u(t+\tau),\quad\tau\in[-\theta,0].\] In order to transform (3.1) into an abstract Cauchy problem, on the Banach space \(E\), we introduce the differential operator \[\left(Y_{m}y\right)(\tau):=\frac{\mathrm{d}}{\mathrm{d}\tau}y(\tau)\] with domain \(D\left(Y_{m}\right)=W^{1,1}([-\theta,0],\mathfrak{X}).\) Moreover, we define another boundary operator \(G:D\left(Y_{m}\right)\rightarrow\mathfrak{X}\) as \[Gy:=y(0).\] Next, we consider the product space \(\mathscr{X}:=E\times\mathfrak{X}\), on which we define the matrix operator \[\mathscr{A}=\mathscr{A}_{1}+\mathscr{A}_{2},\] where \[\mathscr{A}_{1}:=\left(\begin{array}{cc}Y_{m}&0\\ 0&\mathfrak{A}_{m}\end{array}\right),\ \ \mathscr{A}_{2}:=\left(\begin{array}{cc}0&0\\ 0&\mathfrak{B}_{m}\end{array}\right) \tag{3.2}\] with domain \[D(\mathscr{A}) =D\left(\mathscr{A}_{1}\right)\] \[=\left\{\begin{array}{l}\left(\begin{array}{c}y\\ f\end{array}\right)\in D\left(Y_{m}\right)\times D\left(\mathfrak{A}_{m}\right) :\begin{array}{c}Gy=f\\ \mathcal{P}f=\Phi y\end{array}\right\}.\] We get the following abstract Cauchy problem \[\left\{\begin{array}{l}\mathscr{U}^{\prime}(t)=\mathscr{A}\mathscr{U}\left( t\right),\quad t\geqslant 0,\\ \mathscr{U}(0)=\mathscr{U}_{0},\end{array}\right. \tag{3.3}\] which corresponds to the operator\((\mathscr{A},D(\mathscr{A}))\) on the space \(\mathscr{X}\). Here \(\mathscr{U}(t)=\left(\begin{array}{c}u_{t}\\ u(t)\end{array}\right)\) denotes the function \(\mathscr{U}:[0,+\infty)\rightarrow\mathscr{X}\). To establish the well-posedness of the abstract Cauchy problem (3.3), we will show that \((\mathscr{A},D(\mathscr{A}))\) generates a \(C_{0}\)-semigroup on \(\mathscr{X}\). 
First of all, we consider the Banach space \(\mathcal{X}:=E\times\mathfrak{X}\times\mathfrak{X}\times\mathbb{C}\) and the matrix operator \[\mathcal{A}:=\left(\begin{array}{ccc|cc}Y_{m}&0&0&0\\ -G&0&Id&0\\ \hline 0&0&\mathfrak{A}_{m}&0\\ \Phi&0&-\mathcal{P}&0\end{array}\right)\] with domain \(D(\mathcal{A})=D\left(Y_{m}\right)\times\{0\}\times D\left(\mathfrak{A}_{m} \right)\times\{0\}.\) **Proposition 3.1**.: _The operator \(\left(\mathcal{A},D(\mathcal{A})\right)\) is a Hille-Yosida operator on the Banach space \(\mathscr{X}\)._ Proof.: The operator \(\mathcal{A}\) can be written as the sum of two operators on \(\mathcal{X}\) as \(\mathcal{A}=\mathcal{A}_{1}+\mathcal{A}_{2},\) where \[\mathcal{A}_{1}=\left(\begin{array}{ccc|cc}Y_{m}&0&0&0\\ -G&0&0&0\\ \hline 0&0&\mathfrak{A}_{m}&0\\ 0&0&-\mathcal{P}&0\end{array}\right),\quad\mathcal{A}_{2}=\left(\begin{array} []{ccc|cc}0&0&0&0\\ 0&0&Id&0\\ \hline 0&0&0&0\\ \Phi&0&0\end{array}\right)\] with \(D\left(\mathcal{A}_{1}\right)=D(\mathcal{A})\) and \(D\left(\mathcal{A}_{2}\right)=\mathcal{X}.\) It is easy to see that the restriction \((Y_{0},D\left(Y_{0}\right))\) of \(Y_{m}\) to the kernel of \(G\) generates the nilpotent left shift semigroup \(\left(\mathfrak{T}_{0}(t)\right)_{t\geqslant 0}\) on \(E\) which is given by \[(\mathfrak{T}_{0}(t)y)(s,\tau)=\left\{\begin{array}{ll}y(s,t+\tau),&\text{ if }t+\tau\leqslant 0,\\ 0,&\text{ if }t+\tau>0.\end{array}\right.\] Similarly, one can verify by direct computations that the restriction \(\left(\mathfrak{A}_{0},D\left(\mathfrak{A}_{0}\right)\right)\) of \(\mathfrak{A}_{m}\) to the kernel of \(\mathcal{P}\) generates the positive semigroup \(\left(\Lambda_{0}(t)\right)_{t\geqslant 0}\) on \(\mathfrak{X}\) defined as \[(\Lambda_{0}(t)f)(s)=\left\{\begin{array}{ll}e^{-\int_{\Gamma^{-1}(\Gamma(s )-t)}^{s}\frac{\nu_{\ast}(y)}{\gamma_{\ast}(y)}\partial y}f\left(\Gamma^{-1}( \Gamma(s)-t)\right),\text{ if }t\leq\Gamma(s),\\ 0,\text{ if }t>\Gamma(s),\end{array}\right.\] where \[\Gamma(s)=\int_{0}^{s}\frac{1}{\gamma_{\ast}(y)}\mathrm{d}y. \tag{3.4}\] Next we demonstrate that \(\mathcal{A}_{1}\) is a Hille-Yosida operator. To this end note that for any \(\lambda\in\mathbb{C}\) and \(\tilde{f}\neq 0,\) the resolvent equation \[\left(\lambda I-\mathfrak{A}_{0}\right)f=\tilde{f}\] has the implicit solution \[f(s)=e^{-\int_{0}^{s}\frac{\lambda+\nu_{\ast}(y)}{\gamma_{\ast}(y)}\mathrm{d}y }\int_{0}^{s}\frac{\tilde{f}(\alpha)}{\gamma_{\ast}(\alpha)}e^{\int_{s}^{ \alpha}\frac{\lambda+\nu_{\ast}(\alpha)}{\gamma_{\ast}(\alpha)}\mathrm{d}a} \mathrm{d}\alpha. \tag{3.5}\] It shows that \(\sigma\left(\mathfrak{A}_{0}\right)=\emptyset\) as \(\gamma_{\ast}(s)>0.\) In the same way, \(\sigma\left(Y_{0}\right)=\emptyset\). Then for \(\lambda\in\mathbb{C}\), we have the resolvent \[R\left(\lambda,\mathcal{A}_{1}\right)=\left(\begin{array}{cccc}R\left( \lambda,Y_{0}\right)&\epsilon_{\lambda}&0&0\\ 0&0&0&0\\ 0&0&R\left(\lambda,\mathfrak{A}_{0}\right)&\varphi_{\lambda}\\ 0&0&0&0\end{array}\right),\] where \[\epsilon_{\lambda}(\tau)=e^{\lambda\tau},\tau\in[-\theta,0]\text{ and }\varphi_{ \lambda}(s)=e^{-\int_{0}^{s}\frac{\lambda+\nu_{\ast}(y)}{\gamma_{\ast}(y)} \mathrm{d}y},s\in[0,m]. 
\tag{3.6}\] In addition, \[\ker\left(\lambda-Y_{m}\right) =\left\{f\cdot\epsilon_{\lambda}:f\in\mathfrak{X}\right\},\] \[\ker\left(\lambda-\mathfrak{A}_{m}\right) =<\varphi_{\lambda}>.\] For \((z_{1}\ z_{2}\ z_{3}\ z_{4})^{T}\in\mathcal{X}\) and \(\lambda>0\), we have \[\left\|R\left(\lambda,\mathcal{A}_{1}\right)(z_{1}\ z_{2}\ z_{3}\ z_{4})^{T}\right\| =\left\|R\left(\lambda,Y_{0}\right)z_{1}+\epsilon_{\lambda}z_{2}\right\|_{E}+\left\|R\left(\lambda,\mathfrak{A}_{0}\right)z_{3}+z_{4}\varphi_{\lambda}\right\|_{\mathfrak{X}}\] \[\leq\left\|R\left(\lambda,Y_{0}\right)z_{1}\right\|_{E}+\left\|\epsilon_{\lambda}z_{2}\right\|_{E}+\left\|R\left(\lambda,\mathfrak{A}_{0}\right)z_{3}\right\|_{\mathfrak{X}}+\left\|z_{4}\varphi_{\lambda}\right\|_{\mathfrak{X}}\] \[\leq\int_{-\theta}^{0}\frac{1}{\lambda}\|z_{1}(\tau)\|_{\mathfrak{X}}\mathrm{d}\tau+\frac{1}{\lambda}\left\|z_{2}\right\|_{\mathfrak{X}}+\frac{1}{\lambda}\left\|z_{3}\right\|_{\mathfrak{X}}+\frac{1}{\lambda}|z_{4}|\] \[= \frac{1}{\lambda}\left(\|z_{1}\|_{E}+\|z_{2}\|_{\mathfrak{X}}+\|z_{3}\|_{\mathfrak{X}}+|z_{4}|\right).\] Therefore, we obtain \[\|\lambda R\left(\lambda,\mathcal{A}_{1}\right)\|\leq 1,\] and \(\mathcal{A}_{1}\) is a Hille-Yosida operator. Since the perturbing operator \(\mathcal{A}_{2}\) is bounded, it follows from Lemma 2.4 that \(\mathcal{A}\) is also a Hille-Yosida operator. In particular, the Hille-Yosida operator \(\mathcal{A}\) generates a strongly continuous semigroup on the closure of its domain, by Lemma 2.2. Hence, according to Proposition 3.1, the part \(\left(\mathcal{A}_{0},D\left(\mathcal{A}_{0}\right)\right)\) generates a strongly continuous semigroup on the space \(E\times\left\{0\right\}\times\mathfrak{X}\times\left\{0\right\}\). The operator \(\left(\mathscr{A}_{1},D\left(\mathscr{A}_{1}\right)\right)\) generates a \(C_{0}\)-semigroup on \(\mathscr{X}\), as shown by the following theorem. **Theorem 3.2**.: _The operator \(\left(\mathscr{A}_{1},D\left(\mathscr{A}_{1}\right)\right)\) is isomorphic to the part \(\left(\mathcal{A}_{0},D\left(\mathcal{A}_{0}\right)\right)\) of the operator \(\left(\mathcal{A},D(\mathcal{A})\right)\) on the closure of its domain \(\overline{D(\mathcal{A})}\)._ Proof.: From Definition 2.3, we observe that the part \(\left(\mathcal{A}_{0},D\left(\mathcal{A}_{0}\right)\right)\) of \(\left(\mathcal{A},D(\mathcal{A})\right)\) on the closure of its domain \[\mathcal{X}_{0}:=\overline{D(\mathcal{A})}=E\times\left\{0\right\}\times\mathfrak{X}\times\left\{0\right\}\] generates a strongly continuous semigroup. More precisely, \[D\left(\mathcal{A}_{0}\right) =\left\{x\in D(\mathcal{A}):\mathcal{A}x\in\overline{D(\mathcal{A})}\right\}\] \[=\left\{\left(\begin{array}{c}y\\ 0\\ s\\ 0\end{array}\right):y\in D\left(Y_{m}\right),s\in D\left(\mathfrak{A}_{m}\right),\ \mathcal{A}\left(\begin{array}{c}y\\ 0\\ s\\ 0\end{array}\right)\in\mathcal{X}_{0}\right\}\] \[=\left\{\left(\begin{array}{c}y\\ 0\\ s\\ 0\end{array}\right):y\in D\left(Y_{m}\right),s\in D\left(\mathfrak{A}_{m}\right),\ \begin{array}{c}Gy=s\\ \mathcal{P}s=\Phi y\end{array}\right\}.\] Hence, the operator \(\left(\mathscr{A}_{1},D(\mathscr{A}_{1})\right)\) is isomorphic to \(\left(\mathcal{A}_{0},D\left(\mathcal{A}_{0}\right)\right)\) and generates a \(C_{0}\)-semigroup on the state space \(\mathscr{X}\). Next we formulate the most important result of this section as follows.
**Theorem 3.3**.: _The operator \((\mathscr{A},D(\mathscr{A}))\) of the abstract boundary delay problem (3.3) generates a strongly continuous semigroup \((\mathscr{T}(t))_{t\geqslant 0}\) of bounded linear operators on \(\mathscr{X}\)._ Proof.: Since the isomorphism of Theorem 3.2 preserves these properties, the matrix operator \(\mathscr{A}_{1}\) is also a Hille-Yosida operator. In addition, \((\mathscr{A}_{2},D(\mathscr{A}_{2}))\) is a bounded operator on \(\mathscr{X}\), since \(\mathfrak{B}_{m}\) is bounded on \(\mathfrak{X}\). Thus, writing \(\mathscr{A}=\mathscr{A}_{1}+\mathscr{A}_{2}\) and using the Desch-Schappacher perturbation theorem (Corollary 3.4 in [33]), we conclude that \(\mathscr{A}\) generates a strongly continuous semigroup. The following well-posedness result for (3.3) is implied by Theorem 3.3 (see Theorem 2.1 in Ref. [36]). **Proposition 3.4**.: _Assume that the initial value of the linear boundary delay problem (3.1) is \(u^{0}\in E\). Then it has a unique solution \(u(s,t)\) in the space \(C\left([-\theta,+\infty),\mathfrak{X}\right)\), given by \(u(s,t)=u^{0}(s,t)\) for \(t\in[-\theta,0]\) and_ \[u(s,t)=\Pi_{2}\left(\mathscr{T}(t)\left(\begin{array}{c}u^{0}\\ u^{0}(\cdot,0)\end{array}\right)\right),\text{ for }t>0,\] _where \(\Pi_{2}\) denotes the projection of \(\mathscr{X}=E\times\mathfrak{X}\) onto its second component \(\mathfrak{X}\)._ ## 4 Regularity properties of the \(C_{0}\)-semigroup In this section, we study regularity properties of the governing linear semigroup and use results from the spectral theory of \(C_{0}\)-semigroups to prove that \(s\left(\mathscr{A}\right)\in\sigma\left(\mathscr{A}\right)=\sigma_{p}\left(\mathscr{A}\right)\). The stability of the positive stationary solution of model (1.1) is then determined by the position of the leading eigenvalue, and we will demonstrate that this position can be located via an explicit characteristic equation corresponding to the linearised system. We first establish the main result of this section. **Theorem 4.1**.: _The spectrum of \(\mathscr{A}\) can contain only isolated eigenvalues of finite multiplicity._ Proof.: Since the operator \(\mathscr{A}_{2}\) is clearly compact (its only nonzero entry \(\mathfrak{B}_{m}\) has one-dimensional range), it suffices to verify the claim for the operator \(\mathscr{A}_{1}\). To this end, given \(z\in\mathfrak{X}\), we find a unique solution \(u\in D(\mathscr{A}_{1})\) of the equation \[\lambda u-\mathscr{A}_{1}u=z\] in the form \[u(s)=e^{-\int_{0}^{s}\frac{\lambda+\nu_{*}(y)}{\gamma_{*}(y)}\mathrm{d}y}\int_{0}^{s}e^{\int_{0}^{\alpha}\frac{\lambda+\nu_{*}(a)}{\gamma_{*}(a)}\mathrm{d}a}\frac{z(\alpha)}{\gamma_{*}(\alpha)}\mathrm{d}\alpha. \tag{4.1}\] Consequently, for \(\lambda>0\) large enough, the resolvent operator \((\lambda I-\mathscr{A}_{1})^{-1}\) exists and is bounded, mapping \(\mathfrak{X}=L^{1}(0,m)\) into \(W^{1,1}(0,m)\). By the Sobolev embedding theorems, \(W^{1,1}(0,m)\) is compactly embedded in \(\mathfrak{X}\), that is, any bounded set \(M\) in \(W^{1,1}(0,m)\) is relatively compact in \(\mathfrak{X}\). Since \((\lambda I-\mathscr{A}_{1})^{-1}\) maps bounded sets of \(\mathfrak{X}\) into bounded sets of \(W^{1,1}(0,m)\), it follows from the definition of a compact operator that \((\lambda I-\mathscr{A}_{1})^{-1}\) is a compact operator on \(\mathfrak{X}\). The conclusion of the theorem is then obtained by using Riesz-Schauder theory, e.g. Lemma 2.7.
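The resolvent formula (4.1) is also convenient numerically. The following minimal sketch (our own illustration in Python; the rate functions \(\gamma_{*}\), \(\nu_{*}\) and the data \(z\) are arbitrary assumptions, not model ingredients from the paper) evaluates \(u=(\lambda I-\mathscr{A}_{1})^{-1}z\) by cumulative quadrature and checks the defining equation \(\gamma_{*}u^{\prime}+(\lambda+\nu_{*})u=z\).

```python
import numpy as np

# Illustrative ingredients (assumptions, not taken from the paper):
m, n = 8.0, 2000
s = np.linspace(0.0, m, n)
gamma = 1.0 + 0.1 * s          # gamma_*(s) > 0
nu    = 0.5 + 0.05 * s         # nu_*(s) = gamma_s + mu
z     = np.exp(-s)             # right-hand side in (lambda I - A_1) u = z
lam   = 1.0                    # lambda > 0

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral, equal to 0 at x[0]."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

# Lambda(s) = int_0^s (lam + nu)/gamma dy, so that by (4.1)
# u(s) = e^{-Lambda(s)} int_0^s e^{Lambda(a)} z(a)/gamma(a) da
Lam = cumtrapz0((lam + nu) / gamma, s)
u = np.exp(-Lam) * cumtrapz0(np.exp(Lam) * z / gamma, s)

# Sanity check: u should satisfy gamma*u' + (lam + nu)*u = z with u(0) = 0
resid = gamma * np.gradient(u, s) + (lam + nu) * u - z
print("max residual:", np.abs(resid[1:-1]).max())  # small, up to discretisation error
```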
**Proposition 4.2**.: _The linear stability of the stationary solution of model (1.1) is determined by the spectrum of the generator, i.e.,_ \[\sigma(\mathscr{T}(t))=\{0\}\cup e^{t\sigma(\mathscr{A})},\quad t>0.\] _Furthermore, the spectral bound_ \[s(\mathscr{A})=\sup\{\mathrm{Re}\,\lambda\mid\lambda\in\sigma(\mathscr{A})\}\] _coincides with the growth bound (see e.g. [33, 32])_ \[\omega_{0}=\lim_{t\to\infty}t^{-1}\log\|\mathscr{T}(t)\|.\] If the eigenvalue with the largest real part were real, our analysis would be greatly simplified. In some cases, the following results allow us to reach this conclusion. To this end, we recall several existing lemmas and theorems in order to establish the second major result of this section. **Lemma 4.3**.: _For \(\lambda\in\rho\left(Y_{0}\right)\cap\rho\left(\mathfrak{A}_{0}\right)\), we define the abstract Dirichlet operators (see e.g. [37])_ \[\begin{array}{l}K_{\lambda}:\mathfrak{X}\to E\text{ by }K_{\lambda}:=1\circ\epsilon_{\lambda},\text{ i.e. }(K_{\lambda}f)(\tau)=e^{\lambda\tau}f,\\ L_{\lambda}:E\rightarrow\mathfrak{X}\text{ by }L_{\lambda}:=\left(1\circ\varphi_{\lambda}\right)\Phi,\text{ i.e. }L_{\lambda}y=\Phi(y)\varphi_{\lambda},\end{array} \tag{4.2}\] _where \(\epsilon_{\lambda}\) and \(\varphi_{\lambda}\) are given in (3.6). Then \(K_{\lambda}\in\mathscr{L}(\mathfrak{X},E)\) and \(L_{\lambda}\in\mathscr{L}(E,\mathfrak{X}).\) Moreover,_ \[\begin{array}{l}G\left(K_{\lambda}(f)\right)=f,\text{ for all }f\in D\left(\mathfrak{A}_{m}\right),\\ \mathcal{P}\left(L_{\lambda}\left(y\right)\right)=\Phi(y),\text{ for all }y\in D\left(Y_{m}\right).\end{array} \tag{4.3}\] Next we relate the position of the eigenvalues to the compactness of these operators; in particular we have: **Lemma 4.4**.: _Let \(\lambda\in\rho\left(Y_{0}\right)\cap\rho\left(\mathfrak{A}_{0}\right)\), and consider the following properties: (i) \(\lambda\in\rho\left(\mathscr{A}_{1}\right)\); (ii) \(1\in\rho\left(K_{\lambda}L_{\lambda}\right)\) for the operator \(K_{\lambda}L_{\lambda}\in\mathscr{L}(E)\); (iii) \(1\in\rho\left(L_{\lambda}K_{\lambda}\right)\) for the operator \(L_{\lambda}K_{\lambda}\in\mathscr{L}(\mathfrak{X})\). Then one has the implications \((i)\Leftarrow(ii)\Leftrightarrow(iii)\). In particular, if \(K_{\lambda}\) and \(L_{\lambda}\) are compact operators, the assertions \((i),(ii)\) and \((iii)\) are equivalent._ This lemma is taken from [37] (see in particular Theorem 2.7 there). Here the operator \(L_{\lambda}\), which has one-dimensional range, is compact; therefore \(K_{\lambda}L_{\lambda}\) and \(L_{\lambda}K_{\lambda}\) are compact too. From Lemma 4.4 we have the following result. **Theorem 4.5**.: _For the operator \(\left(\mathscr{A}_{1},D(\mathscr{A}_{1})\right)\), the following hold: (i) \(\lambda\in\sigma\left(\mathscr{A}_{1}\right)\Leftrightarrow 1\in\sigma\left(L_{\lambda}K_{\lambda}\right)\Leftrightarrow 1\in\sigma_{p}\left(L_{\lambda}K_{\lambda}\right)\Leftrightarrow\lambda\in\sigma_{p}\left(\mathscr{A}_{1}\right)\); (ii) moreover, if \(\lambda\in\rho\left(\mathscr{A}_{1}\right)\), or equivalently \(1\in\rho\left(L_{\lambda}K_{\lambda}\right)\), then the resolvent of \(\mathscr{A}_{1}\) is given by_ \[R\left(\lambda,\mathscr{A}_{1}\right)=\left(\begin{array}{cc}\left(1-K_{\lambda}L_{\lambda}\right)^{-1}R\left(\lambda,Y_{0}\right)&\left(1-K_{\lambda}L_{\lambda}\right)^{-1}K_{\lambda}R\left(\lambda,\mathfrak{A}_{0}\right)\\ \left(1-L_{\lambda}K_{\lambda}\right)^{-1}L_{\lambda}R\left(\lambda,Y_{0}\right)&\left(1-L_{\lambda}K_{\lambda}\right)^{-1}R\left(\lambda,\mathfrak{A}_{0}\right)\end{array}\right). \tag{4.4}\]
Proof.: We just need to verify (4.4). For \(\lambda\in\rho\left(Y_{0}\right)\cap\rho\left(\mathfrak{A}_{0}\right)\), we have \[\left(\lambda-\mathscr{A}_{1}\right)=\left(\begin{array}{cc}\lambda-Y_{0}&0\\ 0&\lambda-\mathfrak{A}_{0}\end{array}\right)\mathcal{B}_{\lambda}, \tag{4.5}\] where \(\mathcal{B}_{\lambda}:=\left(\begin{array}{cc}\mathrm{Id}&-K_{\lambda}\\ -L_{\lambda}&\mathrm{Id}\end{array}\right)\) is a bounded linear matrix operator on \(D\left(Y_{m}\right)\times D\left(\mathfrak{A}_{m}\right)\) and the matrix \(\left(\begin{array}{cc}\lambda-Y_{0}&0\\ 0&\lambda-\mathfrak{A}_{0}\end{array}\right)\) has domain \(D\left(Y_{0}\right)\times D\left(\mathfrak{A}_{0}\right)\). The inverse of \((\lambda-\mathscr{A}_{1})\) is \[R\left(\lambda,\mathscr{A}_{1}\right)=\mathcal{B}_{\lambda}^{-1}\left(\begin{array}{cc}R\left(\lambda,Y_{0}\right)&0\\ 0&R\left(\lambda,\mathfrak{A}_{0}\right)\end{array}\right).\] By the definition of \(\mathcal{B}_{\lambda}\), we get \[\mathcal{B}_{\lambda}^{-1}=\left(\begin{array}{cc}\left(1-K_{\lambda}L_{\lambda}\right)^{-1}&\left(1-K_{\lambda}L_{\lambda}\right)^{-1}K_{\lambda}\\ \left(1-L_{\lambda}K_{\lambda}\right)^{-1}L_{\lambda}&\left(1-L_{\lambda}K_{\lambda}\right)^{-1}\end{array}\right).\] Therefore expression (4.4) follows. We conclude this section by establishing a criterion which guarantees the positivity of the governing linear semigroup. **Theorem 4.6**.: _Suppose that_ \[\begin{split}&\int_{-\theta}^{0}\beta\left(\cdot,\tau,Q_{*}(\cdot)\right)\mathrm{d}\tau+w(\cdot)\left(\int_{0}^{\cdot}\int_{-\theta}^{0}\beta_{Q}\left(y,\tau,Q_{*}(y)\right)p_{*}(y)\mathrm{d}\tau\mathrm{d}y\right.\\ &\left.+\alpha\int_{\cdot}^{m}\int_{-\theta}^{0}\beta_{Q}\left(y,\tau,Q_{*}(y)\right)p_{*}(y)\mathrm{d}\tau\mathrm{d}y\right)\geq 0, \end{split} \tag{4.6}\] _then the semigroup \((\mathscr{T}(t))_{t\geqslant 0}\) generated by the operator \((\mathscr{A},D(\mathscr{A}))\) is positive._ Proof.: Condition (4.6) is a direct generalisation of the positivity condition corresponding to the age-structured model established in [38]. If \(\beta_{Q}\equiv 0\), condition (4.6) is trivially satisfied. Condition (4.6) guarantees that the delay operator \(\Phi\) is positive, and it then only remains to show that the semigroup \((\mathscr{T}_{1}(t))_{t\geq 0}\) generated by the operator \(\mathscr{A}_{1}\) is positive. Firstly, we consider the operator \(K_{\lambda}L_{\lambda}\). By the definitions of \(K_{\lambda}\) and \(L_{\lambda}\) in Lemma 4.3, it is clear that \[\lim_{\mathrm{Re}\lambda\rightarrow+\infty}\left\|K_{\lambda}L_{\lambda}\right\|=0.\] Hence, for \(\mathrm{Re}\,\lambda\) sufficiently large, we have \(\left\|K_{\lambda}L_{\lambda}\right\|<1\). The operator \(\left(1-K_{\lambda}L_{\lambda}\right)\) is then invertible, and the Neumann series determines its inverse \(\left(1-K_{\lambda}L_{\lambda}\right)^{-1}\). Clearly, condition (4.6) implies that \(K_{\lambda}L_{\lambda}\) is a positive operator, and \(\left(1-K_{\lambda}L_{\lambda}\right)^{-1}\) is positive as well if \(\mathrm{Re}\,\lambda\) is large enough. Hence, from the representation (4.4), we see that \(R\left(\lambda,\mathscr{A}_{1}\right)\) is non-negative for such \(\lambda\). Therefore, in combination with Lemma 2.8 above, the operator \((\mathscr{A}_{1},D\left(\mathscr{A}_{1}\right))\) generates a positive semigroup on the Banach lattice \(E\times\mathfrak{X}\), which concludes the proof. The following result can be established using results from the theory of positive semigroups (see e.g. [33, 32] and also [9, 2] for similar results).
**Proposition 4.7**.: _Suppose that condition (4.6) is satisfied. Then \(s(\mathscr{A})\in\sigma(\mathscr{A})\). Specifically, \(s(\mathscr{A})\) is a dominant eigenvalue, namely_ \[s(\mathscr{A})=\sup\{\mathrm{Re}\,\lambda\mid\lambda\in\sigma_{p}(\mathscr{A})\}.\] ## 5 The characteristic equation The linear stability of stationary solutions of model (1.1) is determined by the eigenvalues of the semigroup generator \(\mathscr{A}\), according to the results we derived in the previous section. In this section, we derive an explicit characteristic equation to study the position of the eigenvalues of the generator \(\mathscr{A}\). The eigenvalue equation \[(\lambda I-\mathscr{A})u=0 \tag{5.1}\] for \(\lambda\in\mathbb{C}\) and non-trivial \(u\) is equivalent to the system \[\begin{split} 0=&\gamma_{*}(s)u^{\prime}(s)+(\lambda+\nu_{*}(s))u(s)+\varepsilon_{*}(s)\bar{U},\\ u(0)=&\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\left(\beta(s,\tau,Q_{*}(s))u(s)+\beta_{Q}(s,\tau,Q_{*}(s))p_{*}(s)H(s)\right)\mathrm{d}\tau\mathrm{d}s,\end{split} \tag{5.2}\] where \(\bar{U}=\int_{0}^{m}u(s)\mathrm{d}s\) and \[\begin{split} H(s)&=\alpha\int_{0}^{s}w(r)u(r)\mathrm{d}r+\int_{s}^{m}w(r)u(r)\mathrm{d}r\\ &=(\alpha-1)\int_{0}^{s}w(r)u(r)\mathrm{d}r+\int_{0}^{m}w(r)u(r)\mathrm{d}r.\end{split} \tag{5.3}\] We assume that \(\alpha\in[0,1)\) holds for the rest of this section. From (5.3), we have \[H^{\prime}(s)=(\alpha-1)w(s)u(s)\text{ and }H^{\prime\prime}(s)=(\alpha-1)\left(w^{\prime}(s)u(s)+w(s)u^{\prime}(s)\right). \tag{5.4}\] Using relations (5.4), we can write system (5.2) in terms of \(H\) and its derivatives as \[H^{\prime\prime}(s)+\left(\frac{\lambda+\nu_{*}(s)}{\gamma_{*}(s)}-\frac{w^{\prime}(s)}{w(s)}\right)H^{\prime}(s)+(\alpha-1)\bar{U}\frac{w(s)\varepsilon_{*}(s)}{\gamma_{*}(s)}=0. \tag{5.5}\] Eq. (5.5) is accompanied by boundary conditions of the form \[\alpha H(0)=H(m), \tag{5.6}\] \[\begin{split} H^{\prime}(0)=&w(0)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\frac{\beta\left(s,\tau,Q_{*}(s)\right)}{w(s)}H^{\prime}(s)\mathrm{d}\tau\mathrm{d}s\\ &+(\alpha-1)w(0)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)H(s)\mathrm{d}\tau\mathrm{d}s.\end{split} \tag{5.7}\]
Thus the general solution of (5.5) can be expressed as \[H(s)=H(0)+H^{\prime}(0)\int_{0}^{s}\frac{w(y)}{w(0)}\pi_{*}(\lambda,y)\mathrm{d}y+(1-\alpha)\int_{0}^{s}w(y)\pi_{*}(\lambda,y)\int_{0}^{y}\frac{\varepsilon_{*}(r)\bar{U}}{\pi_{*}(\lambda,r)\gamma_{*}(r)}\mathrm{d}r\mathrm{d}y, \tag{5.8}\] where \[\pi_{*}(\lambda,y)=e^{-\int_{0}^{y}\frac{\lambda+\gamma_{s}(a,P_{*})+\mu(a,P_{*})}{\gamma(a,P_{*})}\mathrm{d}a}.\] Meanwhile, substituting the solution (5.8) into (5.7), we get \[0 =H(0)(\alpha-1)w(0)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\mathrm{d}\tau\mathrm{d}s \tag{5.9}\] \[+H^{\prime}(0)\left(1-\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}(s))\pi_{*}(\lambda,s)\mathrm{d}\tau\mathrm{d}s\right)\] \[+H^{\prime}(0)(1-\alpha)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{s}w(y)\pi_{*}(\lambda,y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[-\bar{U}\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}(s))\pi_{*}(\lambda,s)\int_{0}^{s}\frac{\varepsilon_{*}(y)w(0)(1-\alpha)}{\pi_{*}(\lambda,y)\gamma_{*}(y)}\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[+\bar{U}\int_{0}^{m}\int_{-\theta}^{0}\left(e^{\lambda\tau}\beta_{Q}(s,\tau,Q_{*}(s))p_{*}(s)\int_{0}^{s}w(y)\pi_{*}(\lambda,y)\int_{0}^{y}\frac{\varepsilon_{*}(r)w(0)(1-\alpha)^{2}}{\pi_{*}(\lambda,r)\gamma_{*}(r)}\mathrm{d}r\mathrm{d}y\right)\mathrm{d}\tau\mathrm{d}s.\] Using the boundary condition (5.6) and the solution (5.8), we obtain \[(1-\alpha)H(0)+H^{\prime}(0)\int_{0}^{m}\frac{w(s)}{w(0)}\pi_{*}(\lambda,s)\mathrm{d}s+\bar{U}\int_{0}^{m}w(s)\pi_{*}(\lambda,s)\int_{0}^{s}\frac{\varepsilon_{*}(y)(1-\alpha)}{\pi_{*}(\lambda,y)\gamma_{*}(y)}\mathrm{d}y\mathrm{d}s=0. \tag{5.10}\] The general solution of the first equation of (5.2) takes the form \[u(s)=u(0)\pi_{*}(\lambda,s)-\pi_{*}(\lambda,s)\int_{0}^{s}\frac{\varepsilon_{*}(y)\bar{U}}{\pi_{*}(\lambda,y)\gamma_{*}(y)}\mathrm{d}y. \tag{5.11}\] Integrating (5.11) from \(0\) to \(m\), we obtain \[\bar{U}=-\bar{U}\int_{0}^{m}\pi_{*}(\lambda,s)\int_{0}^{s}\frac{\varepsilon_{*}(y)}{\pi_{*}(\lambda,y)\gamma_{*}(y)}\mathrm{d}y\mathrm{d}s+u(0)\int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s.
\tag{5.12}\] Eqs.(5.9), (5.12) and the boundary condition (5.4) imply that \[0 =H(0)\int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{- \theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\mathrm{ d}\tau\mathrm{d}s \tag{5.13}\] \[+H^{\prime}(0)\int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{ m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}(s))\frac{\pi_{*}(\lambda,s)}{w(0)( \alpha-1)}\mathrm{d}\tau\mathrm{d}s\] \[+H^{\prime}(0)\int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{ m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s) \int_{0}^{s}\frac{w(y)}{w(0)}\pi_{*}(\lambda,y)\mathrm{d}y\mathrm{d}\tau \mathrm{d}s\] \[+\bar{U}\int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_ {-\theta}^{0}e^{\lambda\tau}\beta_{Q}(s,\tau,Q_{*}(s))p_{*}(s)\int_{0}^{s} \frac{\varepsilon_{*}(r)(1-\alpha)}{\pi_{*}(\lambda,r)\gamma_{*}(r)}w(y)\pi_{* }(\lambda,y)\mathrm{d}r\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[-\bar{U}\int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{m} \int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}(s))\pi_{*}(\lambda,s)\int_{ 0}^{s}\frac{\varepsilon_{*}(y)}{\pi_{*}(\lambda,y)\gamma_{*}(y)}\mathrm{d}y \mathrm{d}\tau\mathrm{d}s\] \[-\bar{U}\left(1+\int_{0}^{m}\pi_{*}(\lambda,s)\int_{0}^{s}\frac{ \varepsilon_{*}(y)}{\pi_{*}(\lambda,y)\gamma_{*}(y)}\mathrm{d}y\mathrm{d}s \right).\] Hence, the linear system composed of (5.9), (5.10) and (5.13) has a non-zero solution \((H(0),H^{\prime}(0),\bar{U})\) if and only if \(\lambda\) satisfies the equation \[\left(\begin{array}{ccc}A_{11}(\lambda)&A_{12}(\lambda)&A_{13}(\lambda)\\ A_{21}(\lambda)&A_{22}(\lambda)&A_{23}(\lambda)\\ A_{31}(\lambda)&A_{32}(\lambda)&A_{33}(\lambda)\end{array}\right)\left( \begin{array}{c}H(0)\\ H^{\prime}(0)\\ \bar{U}(0)\end{array}\right)=0, \tag{5.14}\] where we define \[A_{11}(\lambda)= (1-\alpha)w(0)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q }\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\mathrm{d}\tau\mathrm{d}s,\] \[A_{12}(\lambda)= 1-\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{* }(s))\pi_{*}(\lambda,s)\mathrm{d}\tau\mathrm{d}s\] \[+(1-\alpha)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q }\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{s}w(y)\pi_{*}(\lambda,y)\mathrm{ d}y\mathrm{d}\tau\mathrm{d}s,\] \[A_{13}(\lambda)= \int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}( s))\pi_{*}(\lambda,s)\int_{0}^{s}\frac{\varepsilon_{*}(y)w(0)(\alpha-1)}{\pi_{*}( \lambda,y)\gamma_{*}(y)}\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[+\int_{0}^{m}\int_{-\theta}^{0}\left(e^{\lambda\tau}\beta_{Q}(s, \tau,Q_{*}(s))p_{*}(s)\int_{0}^{s}w(y)\pi_{*}(\lambda,y)\int_{0}^{y}\frac{ \varepsilon_{*}(r)w(0)(1-\alpha)^{2}}{\pi_{*}(\lambda,r)\gamma_{*}(r)} \partial r\partial y\right)\mathrm{d}\tau\mathrm{d}s,\] \[A_{21}(\lambda)= 1-\alpha,\] \[A_{22}(\lambda)= \int_{0}^{m}\frac{w(s)}{w(0)}\pi_{*}(\lambda,s)\mathrm{d}s,\] \[A_{23}(\lambda)= \int_{0}^{m}w(s)\pi_{*}(\lambda,s)\int_{0}^{s}\frac{\varepsilon_{ *}(y)(1-\alpha)}{\pi_{*}(\lambda,y)\gamma_{*}(y)}\mathrm{d}y\mathrm{d}s,\] \[A_{31}(\lambda)= \int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{- \theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\mathrm{ d}\tau\mathrm{d}s,\] \[A_{32}(\lambda)= \int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{- \theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}(s))\frac{\pi_{*}(\lambda,s)}{w(0 )(\alpha-1)}\mathrm{d}\tau\mathrm{d}s\] \[+\int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{- 
\theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{s}\frac{w(y)}{w(0)}\pi_{*}(\lambda,y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s,\] \[A_{33}(\lambda)= \int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q}(s,\tau,Q_{*}(s))p_{*}(s)\int_{0}^{s}\int_{0}^{y}\frac{\varepsilon_{*}(r)(1-\alpha)}{\pi_{*}(\lambda,r)\gamma_{*}(r)}w(y)\pi_{*}(\lambda,y)\mathrm{d}r\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[-\int_{0}^{m}\pi_{*}(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}(s))\pi_{*}(\lambda,s)\int_{0}^{s}\frac{\varepsilon_{*}(y)}{\pi_{*}(\lambda,y)\gamma_{*}(y)}\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[-\left(1+\int_{0}^{m}\pi_{*}(\lambda,s)\int_{0}^{s}\frac{\varepsilon_{*}(y)}{\pi_{*}(\lambda,y)\gamma_{*}(y)}\mathrm{d}y\mathrm{d}s\right).\] **Proposition 5.1**.: \(\lambda\in\mathbb{C}\) _is an eigenvalue of the operator \(\mathscr{A}\) if and only if \(\lambda\) is a solution of the following characteristic equation_ \[K(\lambda)=\left|\begin{array}{ccc}A_{11}(\lambda)&A_{12}(\lambda)&A_{13}(\lambda)\\ A_{21}(\lambda)&A_{22}(\lambda)&A_{23}(\lambda)\\ A_{31}(\lambda)&A_{32}(\lambda)&A_{33}(\lambda)\end{array}\right|=0. \tag{5.15}\] In summary, \(K(\lambda)\) determines the characteristic equation corresponding to the linearised system (2.11), and its zeros are the eigenvalues of the operator \(\mathscr{A}\); these completely determine the spectrum of \(\mathscr{A}\), and therefore the linear stability of the steady state. ## 6 Linear stability analysis In the previous section we deduced an explicit characteristic equation corresponding to the linearisation of the hierarchical size-structured model (1.1). We now use this characteristic equation to derive stability criteria. By virtue of Proposition 4.7 we can investigate the asymptotic stability and instability of stationary solutions of model (1.1) using the characteristic equation. Further, we will show how the basic reproduction function \(\mathscr{R}(P,Q)\) introduced in Eq. (2.5) can be used to establish stability and instability conditions. The first result addresses the stability of the trivial stationary solution \(p_{0}\equiv 0\). **Theorem 6.1**.: _The trivial stationary solution \(p_{0}\equiv 0\) is linearly asymptotically stable if \(\mathscr{R}(0,0)<1\), and unstable if \(\mathscr{R}(0,0)>1\) holds._ Proof.: For \(p_{0}\equiv 0\) we have \[\hat{A}_{12}(\lambda)= 1-\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,0)\pi_{0}(\lambda,s)\mathrm{d}\tau\mathrm{d}s,\] \[\hat{A}_{21}(\lambda)= 1-\alpha,\] \[\hat{A}_{22}(\lambda)= \int_{0}^{m}\frac{w(s)}{w(0)}\pi_{0}(\lambda,s)\mathrm{d}s,\] \[\hat{A}_{32}(\lambda)= \int_{0}^{m}\pi_{0}(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,0)\frac{\pi_{0}(\lambda,s)}{w(0)(\alpha-1)}\mathrm{d}\tau\mathrm{d}s,\] \[\hat{A}_{31}(\lambda)= \hat{A}_{23}(\lambda)=\hat{A}_{13}(\lambda)=\hat{A}_{11}(\lambda)=0,\] \[\hat{A}_{33}(\lambda)= -1,\] where \[\pi_{0}(\lambda,s)=e^{-\int_{0}^{s}\frac{\lambda+\gamma_{s}(a,0)+\mu(a,0)}{\gamma(a,0)}\mathrm{d}a}.\] Hence the characteristic equation (5.15) reduces to \[\hat{K}(\lambda)=\left|\begin{array}{ccc}0&\hat{A}_{12}(\lambda)&0\\ 1-\alpha&\hat{A}_{22}(\lambda)&0\\ 0&\hat{A}_{32}(\lambda)&-1\end{array}\right|=(1-\alpha)\hat{A}_{12}(\lambda),\text{ for }0\leq\alpha<1. \tag{6.1}\] It is readily observed that \[\hat{K}(0)=(1-\alpha)\hat{A}_{12}(0)=(1-\alpha)\left(1-\mathscr{R}(0,0)\right).
\tag{6.2}\] Clearly, condition (4.6) is satisfied for \(p_{0}\equiv 0\), and therefore we may restrict the characteristic equation \(\hat{K}(\lambda)\) to \(\lambda\in\mathbb{R}\). Furthermore, from (6.1), we have \[\lim_{\lambda\to+\infty}\hat{K}(\lambda)=1-\alpha,\quad\hat{K}^{\prime}(\lambda)=(1-\alpha)\hat{A}^{\prime}_{12}(\lambda)>0. \tag{6.3}\] Therefore, if \(\mathscr{R}(0,0)<1\) holds, then \(\hat{K}(0)>0\) and \(\hat{K}(\lambda)\) is monotonically increasing, which implies that the characteristic equation (5.15) cannot have non-negative real roots. However, if \(\mathscr{R}(0,0)>1\) holds, there is a positive root since \(\hat{K}(0)<0\). The claim of the theorem follows. Next we address the instability of positive stationary solutions. **Theorem 6.2**.: _Let \(p_{*}(s)\) be any positive stationary solution of (1.1) and suppose that all the conditions of Theorem 4.6 are fulfilled. Then the positive stationary solution \(p_{*}(s)\) is linearly unstable if \(K(0)<0\)._ Proof.: It suffices to show that there exists a positive solution \(\lambda\) of the characteristic equation (5.15). We can readily deduce that \[\lim_{\lambda\rightarrow+\infty}K(\lambda)=\begin{vmatrix}0&1&0\\ 1-\alpha&0&0\\ 0&0&-1\end{vmatrix}=1-\alpha,\text{ for }0\leq\alpha<1. \tag{6.4}\] Here the limit is taken along the real axis. The instability criterion then follows immediately from the Intermediate Value Theorem, since \(K(0)<0\) while \(K(\lambda)\to 1-\alpha>0\). Since a strict linear stability proof requires showing that all zeros of the characteristic equation are located in the left half-plane of \(\mathbb{C}\), stability results for positive stationary solutions of the model are much more difficult to obtain than instability results, especially since our growth and mortality rates both depend on the total population size, while the birth rate involves a fertility delay and an infinite dimensional interaction variable (the environment). We will now demonstrate, for some special cases of the model ingredients, that we can overcome these difficulties. Consider the situation when the mortality and growth rates are independent of the population size \(P\), i.e. \(\gamma_{P}\equiv 0\equiv\mu_{P}\). Hence \(\varepsilon_{*}=p_{*}(s)\left(\mu_{P}(s,P_{*})+\gamma_{sP}(s,P_{*})\right)+p_{*}^{\prime}(s)\gamma_{P}(s,P_{*})=0\). In this case, we can derive explicit conditions for the linear stability and instability of the positive stationary solution in a relatively straightforward fashion. We have the following result.
**Theorem 6.3**.: _Suppose that \(\varepsilon_{*}\equiv 0\) and the positivity condition (4.6) holds true._ _(i) If \(\beta_{Q}\left(s,\tau,Q_{*}\right)<0\), then the positive stationary solution \(p_{*}\) is linearly asymptotically stable._ _(ii) If \(\beta_{Q}\left(s,\tau,Q_{*}\right)\geq 0\), then \(p_{*}\) is linearly unstable._ Proof.: For the special case of model ingredients we are dealing now, we have for the terms in the characteristic equation (5.15) \[\tilde{A}_{11}(\lambda)= (1-\alpha)w(0)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta _{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\mathrm{d}\tau\mathrm{d}s,\] \[\tilde{A}_{12}(\lambda)= 1-\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{ *}(s))\pi(\lambda,s)\mathrm{d}\tau\mathrm{d}s\] \[+(1-\alpha)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q }\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{s}w(y)\pi(\lambda,y)\mathrm{d }y\mathrm{d}\tau\mathrm{d}s,\] \[\tilde{A}_{13}(\lambda)= 0,\] \[\tilde{A}_{21}(\lambda)= 1-\alpha,\] \[\tilde{A}_{22}(\lambda)= \int_{0}^{m}\frac{w(s)}{w(0)}\pi(\lambda,s)\mathrm{d}s,\] \[\tilde{A}_{23}(\lambda)= 0,\] \[\tilde{A}_{31}(\lambda)= \int_{0}^{m}\pi(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{-\theta}^{ 0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\mathrm{d}\tau \mathrm{d}s,\] \[\tilde{A}_{32}(\lambda)= \int_{0}^{m}\pi(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{-\theta}^{ 0}e^{\lambda\tau}\beta(s,\tau,Q_{*}(s))\frac{\pi(\lambda,s)}{w(0)(\alpha-1)} \mathrm{d}\tau\mathrm{d}s\] \[+\int_{0}^{m}\pi(\lambda,s)\mathrm{d}s\int_{0}^{m}\int_{-\theta}^{ 0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{s} \frac{w(y)}{w(0)}\pi(\lambda,y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s,\] \[\tilde{A}_{33}(\lambda)= -1,\] where we have set \[\pi(\lambda,s)=e^{-\int_{0}^{s}\frac{\lambda+\gamma_{s}(\alpha)+\mu(\mathrm{a})}{ \gamma(\mathrm{a})}\mathrm{d}a}.\] It follows that \[\tilde{K}(\lambda) =\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}( s))\pi(\lambda,s)\mathrm{d}\tau\mathrm{d}s\] \[+(\alpha-1)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q }\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{s}w(y)\pi(\lambda,y)\mathrm{d}y \mathrm{d}\tau\mathrm{d}s\] \[+\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s, \tau,Q_{*}(s)\right)p_{*}(s)\mathrm{d}\tau\mathrm{d}s\int_{0}^{m}w(y)\pi( \lambda,y)\mathrm{d}y-1.\] Clearly condition (4.6) of Theorem 4.6 is satisfied, thus we can restrict the characteristic equation \(\tilde{K}(\lambda)\) to \(\lambda\in\mathbb{R}\). Making use of \(\beta_{Q}\left(s,\tau,Q_{*}\right)<0\) and Eq. 
(2.4), we obtain \[\tilde{K}(0)= (\alpha-1)\int_{0}^{m}\int_{-\theta}^{0}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{s}w(y)\pi(0,y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[+\int_{0}^{m}\int_{-\theta}^{0}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\mathrm{d}\tau\mathrm{d}s\int_{0}^{m}w(y)\pi(0,y)\mathrm{d}y+\mathscr{R}(0,Q_{*})-1\] \[= \int_{0}^{m}\int_{-\theta}^{0}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\left(\alpha\int_{0}^{s}w(y)\pi(0,y)\mathrm{d}y+\int_{s}^{m}w(y)\pi(0,y)\mathrm{d}y\right)\mathrm{d}\tau\mathrm{d}s\] \[< 0.\] Moreover, we deduce that \[\tilde{K}^{\prime}(\lambda) =\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}(s))\left(\tau-\int_{0}^{s}\frac{1}{\gamma(a)}\mathrm{d}a\right)\pi(\lambda,s)\mathrm{d}\tau\mathrm{d}s\] \[+(\alpha-1)\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{s}w(y)\left(\tau-\int_{0}^{y}\frac{1}{\gamma(a)}\mathrm{d}a\right)\pi(\lambda,y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[+\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{m}w(y)\left(\tau-\int_{0}^{y}\frac{1}{\gamma(a)}\mathrm{d}a\right)\pi(\lambda,y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s.\] The tricky step needed here is to note that \[\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta_{Q}\left(s,\tau,Q_{*}(s)\right)p_{*}(s)\int_{0}^{s}w(y)\left(\tau-\int_{0}^{y}\frac{1}{\gamma(a)}\mathrm{d}a\right)\pi(\lambda,y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[= \int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}w(s)\pi(\lambda,s)\left(\tau-\int_{0}^{s}\frac{1}{\gamma(a)}\mathrm{d}a\right)\int_{s}^{m}\beta_{Q}\left(y,\tau,Q_{*}(y)\right)p_{*}(y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s.\] By means of the positivity condition (4.6), we observe that \[\tilde{K}^{\prime}(\lambda)= \int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\beta(s,\tau,Q_{*}(s))\left(\tau-\int_{0}^{s}\frac{1}{\gamma(a)}\mathrm{d}a\right)\pi(\lambda,s)\mathrm{d}\tau\mathrm{d}s\] \[+\alpha\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}w(s)\pi(\lambda,s)\left(\tau-\int_{0}^{s}\frac{1}{\gamma(a)}\mathrm{d}a\right)\int_{s}^{m}\beta_{Q}\left(y,\tau,Q_{*}(y)\right)p_{*}(y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[+\int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}w(s)\pi(\lambda,s)\left(\tau-\int_{0}^{s}\frac{1}{\gamma(a)}\mathrm{d}a\right)\int_{0}^{s}\beta_{Q}\left(y,\tau,Q_{*}(y)\right)p_{*}(y)\mathrm{d}y\mathrm{d}\tau\mathrm{d}s\] \[= \int_{0}^{m}\int_{-\theta}^{0}e^{\lambda\tau}\pi(\lambda,s)\left(\tau-\int_{0}^{s}\frac{1}{\gamma(a)}\mathrm{d}a\right)\Big{[}\beta\left(s,\tau,Q_{*}(s)\right)\] \[+w(s)\left(\int_{0}^{s}\beta_{Q}\left(y,\tau,Q_{*}(y)\right)p_{*}(y)\mathrm{d}y+\alpha\int_{s}^{m}\beta_{Q}\left(y,\tau,Q_{*}(y)\right)p_{*}(y)\mathrm{d}y\right)\Big{]}\mathrm{d}\tau\mathrm{d}s\] \[\leq 0.\] As a result, for \(\lambda\geq 0\), \(\tilde{K}(\lambda)\) is monotone decreasing, and the stability result follows. Since \(\tilde{K}(0)\geq 0\) by \(\beta_{Q}\left(s,\tau,Q_{*}\right)\geq 0\) and \(\lim_{\lambda\rightarrow+\infty}\tilde{K}(\lambda)=-1\), the instability result follows from the Intermediate Value Theorem. ## 7 Examples and simulations In this section we will present two examples to illustrate and underpin the linear stability results presented in Theorems 6.1 and 6.3. Before stating them, we sketch below how such simulations can be set up.
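The following Python sketch is a minimal illustration, not the code used to produce the figures. It assumes the recruitment boundary condition in the form suggested by the linearisation (2.11), i.e. \(p(0,t)=\int_{0}^{m}\int_{-\theta}^{0}\beta(s,\tau,Q(s,t+\tau))p(s,t+\tau)\,\mathrm{d}\tau\,\mathrm{d}s\), a first-order upwind discretisation of the transport term, and an initial density of our own choosing.

```python
import numpy as np

# Example 7.1 ingredients: gamma = 1, mu = 0.5, w = 1, alpha = 0.5, theta = 1.5, m = 8
gamma, mu, alpha, theta, m = 1.0, 0.5, 0.5, 1.5, 8.0
ns = 400
s = np.linspace(0.0, m, ns + 1)
ds = s[1] - s[0]
dt = 0.9 * ds / gamma                 # CFL condition for the upwind step
nlag = int(round(theta / dt))         # number of stored history slices

def beta(svals, tau, Q):
    # fertility of Example 7.1; negative values are clipped at zero (an assumption,
    # in the spirit of the restriction 0 <= Q <= 1 imposed in Example 7.2)
    return 0.5 * np.exp(tau) * (0.7 + np.sin(2.0 * svals) ** 2) * np.maximum(1.0 - Q, 0.0)

def Qfun(p):
    # Q(s) = alpha * int_0^s w p + int_s^m w p, with w = 1
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * ds)))
    return alpha * cum + (cum[-1] - cum)

p = 0.01 * np.sin(s) ** 2 * (m - s) ** 2            # initial density (our assumption)
hist = [p.copy() for _ in range(nlag + 1)]          # hist[k] ~ p(., t - k*dt)

for step in range(int(30.0 / dt)):
    # recruitment: trapezoidal quadrature in s and tau over the stored history
    vals = [np.trapz(beta(s, -k * dt, Qfun(hist[k])) * hist[k], s)
            for k in range(nlag + 1)]
    recruit = np.trapz(vals[::-1], dx=dt)           # int_{-theta}^0 ... dtau
    # first-order upwind transport plus mortality
    pnew = np.empty_like(p)
    pnew[1:] = p[1:] - dt * gamma * (p[1:] - p[:-1]) / ds - dt * mu * p[1:]
    pnew[0] = recruit                               # boundary condition as in (2.11)
    hist = [pnew] + hist[:-1]
    p = pnew

print("P(T) =", np.trapz(p, s))                     # decays here, since R(0,0) < 1
```

The history buffer implements the method of steps for the delay term; since the scheme is only first-order accurate, reproducing the figures quantitatively requires finer grids.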
**Example 7.1**.: _(Stability of \(p_{0}\)) We set the model ingredients as follows:_ \[\gamma\equiv 1,\,\mu\equiv 0.5,\,w\equiv 1,\,\alpha=0.5,\,\theta=1.5,\,m=8;\] \[\beta(s,\tau,Q(s,t+\tau))=\begin{cases}0.5e^{\tau}(0.7+sin^{2}(2s))(1-Q),&0 \leq s\leq 8,\\ 0,&\text{otherwise}.\end{cases}\] Figure 1: \(\mathscr{R}(0,0)=0.9088\), \(p_{0}\) represents the trivial stationary solution and \(P(t)\) the total population size at time \(t\); the initial conditions corresponding to \(p_{1}\) to \(p_{2}\) are \(u_{1}=12sin^{2}(s)(8-s)^{2}\) and \(u_{2}=3cos^{2}(s+\frac{\pi}{2})(10-s)^{2}\). _We compute \(\mathscr{R}(0,0)=0.9088<1\) using the inherent net reproduction function (2.5). We can observe that as time increases, solutions approach the horizontal plane (trivial stationary solution), demonstrating the linear stability result in Theorem 6.1, as shown in Fig.1._ _When the fertility rate is changed to_ \[\beta(s,\tau,Q(s,t+\tau))=\begin{cases}0.55e^{\tau}(1+cos^{2}(0.1s))(1-Q),&0 \leq s\leq 8,\\ 0,&\text{otherwise,}\end{cases}\] _we compute \(\mathscr{R}(0,0)=1.6297>1\). As shown in Fig.2, the numerical results indicate that the solutions corresponding to the initial conditions \(p_{1}\) and \(p_{2}\) gradually move away from the horizontal plane, demonstrating the instability result presented in Theorem 6.1._ **Example 7.2**.: _(Stability of \(p_{*}\)) Let us now consider the following set of model ingredients_ \[\gamma\equiv 1,\,\mu\equiv 0.58,\,w\equiv 1,\,\alpha=0.6,\,\theta=0.5;\] \[\beta(s,\tau,Q(s,t+\tau))=\begin{cases}e^{\tau}(1+1.8s)(1-Q),&0\leq Q\leq 1, \\ 0,&\text{otherwise.}\end{cases}\] _It is not difficult to verify that both conditions (4.6) and \(\beta_{Q}\left(s,\tau,Q_{*}\right)<0\) hold true for the current set of model ingredients. Here we take the initial conditions_ \[u_{1}(s)=\frac{0.1}{0.1+10s^{3}}+0.028,\quad u_{2}(s)=\frac{0.1}{4+2s^{3}}+0.1 ;\qquad s\in[0,8].\] _The numerical results show that total population sizes corresponding to the solutions \(p_{1},p_{2}\) eventually converge to the total population size corresponding to the positive stationary solution \(P_{*}\), which demonstrates the stability result in Theorem 6.3, as shown by Fig.3._ Figure 2: \(R(0,0)=1.6297\); the total population size \(P(t)\) is plotted on the left; the initial conditions corresponding to \(p_{1}\) to \(p_{2}\) are \(u_{1}=0.3sin^{2}(s+\frac{\pi}{3})(10-s)^{2};u_{2}=0.5sin^{2}(s+\frac{\pi}{2}) (12-s)^{2}\). _Next we replace the fertility function with the following one_ \[\beta(s,\tau,Q(s,t+\tau))=\begin{cases}0.5e^{\tau}(1+0.1s)Q,&Q\geq 0,\\ 0,&\text{otherwise.}\end{cases}\] _It is obvious that conditions (4.6) and \(\beta_{Q}\left(s,\tau,Q_{*}\right)\geq 0\) of Theorem 6.3 are satisfied. The trajectories \(p_{1}\) and \(p_{2}\) are shown in Fig.4 with two different initial conditions. This example demonstrates the instability result we obtained in Theorem 6.3._ Figure 4: \(P(t)\) denotes the total population size at time \(t\); \(p_{*}\)represents the stationary solution; the parameters \(\gamma\equiv 1,\mu\equiv 0.58,w\equiv 1,\alpha\equiv 0.6,\theta=0.5,m=8\); the initial conditions corresponding to curves \(p_{1}\) to \(p_{2}\) are \(u_{1}=\frac{0.1}{0.1+10s^{3}}+0.028\) and \(u_{2}=\frac{0.1}{4+2s^{3}}+0.1\). On the left we can see the total population sizes plotted, while on the right the corresponding density distributions. 
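The threshold values quoted in Example 7.1 are easy to check independently: with \(\gamma\equiv 1\) and \(\mu\equiv 0.5\) we have \(\pi_{0}(0,s)=e^{-0.5s}\), and (assuming, as in the proof of Theorem 6.1, that \(\mathscr{R}(0,0)=\int_{0}^{m}\int_{-\theta}^{0}\beta(s,\tau,0)\pi_{0}(0,s)\,\mathrm{d}\tau\,\mathrm{d}s\)) a short numerical integration reproduces both numbers.

```python
import numpy as np
from scipy.integrate import dblquad

# R(0,0) = int_0^m int_{-theta}^0 beta(s, tau, 0) * exp(-0.5*s) dtau ds
# (gamma = 1, mu = 0.5, so pi_0(0,s) = exp(-0.5*s); m = 8, theta = 1.5)
m, theta = 8.0, 1.5

def R00(beta0):
    val, _ = dblquad(lambda tau, s: beta0(s, tau) * np.exp(-0.5 * s),
                     0.0, m, -theta, 0.0)
    return val

stable   = lambda s, tau: 0.5  * np.exp(tau) * (0.7 + np.sin(2.0 * s) ** 2)
unstable = lambda s, tau: 0.55 * np.exp(tau) * (1.0 + np.cos(0.1 * s) ** 2)

print(round(R00(stable), 4))    # 0.9088 < 1: trivial solution stable (Theorem 6.1)
print(round(R00(unstable), 4))  # 1.6297 > 1: trivial solution unstable
```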
## 8 Conclusion In this work we have introduced and analysed a partial differential equation model intended to describe the dynamics of a hierarchical size-structured population. Our model incorporates two different types of nonlinearities: we assumed that individual growth and mortality are affected by scramble competition (which allows one to model, for example, Allee effects), while the recruitment of offspring is affected by contest competition via an infinite dimensional interaction variable related to a hierarchy in the population. Moreover, we incorporated a delay in the recruitment (e.g. to account for maturation delay). We formally linearised our model around a steady state and showed how to apply the theory of strongly continuous semigroups. In particular, we studied the asymptotic behaviour of the governing semigroup by using spectral methods. In contrast to [2], we were able to derive an explicit characteristic equation, which characterises the point spectrum of the semigroup generator. This then allowed us to derive stability and instability results, in particular using an appropriately defined net reproduction function. The stability results we deduced were obtained by using a formal linearisation of the PDE model. A rigorous result, often referred to as the Principle of Linearised Stability, has not been established for the PDE model we studied here; therefore we presented examples and numerical simulations to underpin the formal stability results we established. Structured population models incorporating an infinite dimensional nonlinearity, e.g. due to a hierarchical structure in the population, have long been studied by many researchers. One of the earliest models describing a hierarchically age-structured population can be found in [39]. There is, however, a major difference between age-structured, i.e. semilinear, and size-structured, i.e. quasilinear, models such as the one we studied here. While natural age-structured PDE models tend to be well-posed on the biologically relevant state space of \(L^{1}\), size-structured (quasilinear) models are not necessarily well-posed on \(L^{1}\), in particular when the growth rate depends on the infinite dimensional nonlinearity (interaction variable) in a non-monotone fashion, see e.g. [3, 4]. In this case, in order to study the existence of solutions, it is necessary to enlarge the state space and allow for measure valued solutions. The choice of the particular state space then becomes very important, as demonstrated recently in [40], in particular when trying to extend the theory of positive semigroups to such a setting. ## Acknowledgments The authors are grateful to the editors and the anonymous referees for their valuable comments and suggestions which led to an improvement of our original manuscript. The project was supported by the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (No. G1323523061).
2305.06516
Solar Cycle Precursors and the Outlook for Cycle 25
Sunspot Cycle 25 is now over 3 years past the cycle minimum of December 2019. At this point, curve-fitting becomes reliable and consistently indicates a maximum sunspot number of 135+/-10 - slightly larger than Cycle 24's maximum of 116.4, but well below the Cycles 1-24 average of 179 (ranging from 81 for Cycle 6 to 285 for Cycle 19). A geomagnetic precursor, the minimum in the aa-index, and the Sun's magnetic precursors, the polar field strength and axial dipole moment at the time of minimum, are often used to predict the amplitude of the cycle at (or before) the onset of the cycle. We examine Cycle 25 predictions produced by these precursors. The geomagnetic precursor indicated a Cycle 25 slightly stronger than Cycle 24, with a maximum of 132+/-8. The Sun's magnetic precursors indicated that Cycle 25 would be similar to Cycle 24, with a maximum sunspot number of 120+/-10 or 114+/-15. Combining the curve-fitting results with the precursor predictions, we conclude that Cycle 25 will have a maximum smoothed sunspot number of 134+/-8 with maximum occurring late in the fall of 2024. Models for predicting the Sun's magnetic field ahead of minimum were generally successful at predicting the polar precursors years in advance. The fact that the Sun's magnetic precursors at cycle minimum were successfully predicted years before minimum and that the precursors are consistent with the size of Cycle 25 suggests that we can now more reliably predict the solar cycle.
Lisa A. Upton, David H. Hathaway
2023-05-11T01:38:07Z
http://arxiv.org/abs/2305.06516v2
# Solar Cycle Precursors and the Outlook for Cycle 25 ###### Key Points: * Solar Cycle 25 has revealed itself as a small cycle with an expected sunspot number maximum of about 134. * Three magnetic precursors from the cycle minimum in 2019 predicted a similar Cycle 25 sunspot number maximum. * Two of these magnetic precursors were accurately predicted years before cycle minimum using surface flux transport. ###### Abstract Sunspot Cycle 25 is now over 3 years past the cycle minimum of December 2019. At this point in the cycle, curve-fitting to the activity becomes reliable and now consistently indicates a maximum sunspot number of \(135\pm 10\) - slightly larger than Cycle 24's maximum of 116.4, but well below the average of 179. A geomagnetic precursor, the minimum in the \(aa\)-index, and the Sun's magnetic precursors, the Sun's polar field strength and its axial dipole moment at the time of minimum, are often used to predict the amplitude of the cycle at (or before) the onset of the cycle. We examine Cycle 25 predictions produced by these precursors. The geomagnetic precursor indicated a Cycle 25 slightly stronger than Cycle 24, with a maximum of \(132\pm 8\). The Sun's magnetic precursors indicated that Cycle 25 would be more similar to Cycle 24, with a maximum sunspot number of \(120\pm 10\) or \(114\pm 15\). Combining the curve-fitting results with the precursor predictions, we conclude that Cycle 25 will have a maximum smoothed sunspot number of \(134\pm 8\) with maximum occurring late in the fall of 2024. Models for predicting the Sun's magnetic field ahead of minimum were generally successful at predicting the polar precursors years in advance. The fact that the Sun's magnetic precursors at cycle minimum were successfully predicted years before minimum and that the precursors are consistent with the size of Cycle 25 suggests that we can now reliably predict the solar cycle. ## Plain Language Summary Now that over 3 years have passed since the start of Cycle 25, we can determine the size of the cycle and look back at previous predictions. For the last eleven months we have consistently found that Cycle 25 is following the behavior of a smaller than average sunspot cycle, just slightly larger than the last cycle. The strength of the Sun's magnetic field at the start of a sunspot cycle has become recognized as the best predictor for the ultimate strength of that cycle. This follows from solar magnetic dynamo models in which the magnetic field at minimum gets stretched and strengthened to produce the magnetic sunspots and explosive magnetic activity of cycle maximum. Three different measurements of the strength of the Sun's magnetic field in late 2019 and early 2020 (at the start of the current sunspot cycle) indicated that this cycle would be slightly stronger than the previous cycle, but still weaker than average. Models can be used to estimate two of these measurements well before cycle minimum, thus providing a reliable prediction years before the start of a sunspot cycle. ## 1 Introduction Activity on the Sun varies with a periodicity of about eleven years. This variability is characterized by fluctuations in the appearance of sunspots but also includes the evolution of coronal holes, changes in the solar wind speed, and changes in the frequency of eruptive events such as solar flares and coronal mass ejections (Hathaway, 2015). Together, these and other solar phenomena drive changes in the interplanetary environment, i.e. space weather.
As space weather events interact with the Earth, they cause geomagnetic storms that impact the geospace environment in a variety of ways. Extreme ultraviolet and x-ray emissions give rise to ionization in the ionosphere and heating and increased density in the thermosphere. This increases the drag on satellites and debris in low Earth orbits and increases the risk of satellites colliding with debris or even completely deorbiting - as occurred in early 2022 with the SpaceX Starlink satellites (Hapgood et al., 2022). Increases in energetic charged particles can further disrupt, damage, or cause failure of satellites or their electrical components. While satellites essential for communications and national defense are most at risk, geomagnetic storms are not a threat to just them. Radiation from these storms also poses a threat to astronauts in space and crew on airline flights over the poles. Closer to home, ground-induced currents produced by geomagnetic storms can overwhelm power grids, resulting in power outages. Predicting the size of a solar cycle is an important step in improving our ability to prepare for space weather years in advance. Two (related) physics-based precursors for predicting the amplitude of a solar cycle have risen to the top as being the most reliable: geomagnetic activity levels and the Sun's magnetic configuration (polar fields and axial dipole moment) near the time of sunspot cycle minimum. Cycle predictions can be made well before cycle minimum by using models to simulate the evolution of the Sun's surface magnetic field. Once the cycle is well under way, curve fitting can be used to determine the amplitude of the cycle. In this paper, we present a comprehensive picture of the outlook for Solar Cycle 25 based on each of these methods, individually and as a combined prediction. ## 2 Curve Fitting and the Amplitude (Maximum) of Cycle 25 Cycle 24/25 minimum occurred in December of 2019 and we are now over three years into Cycle 25. At this stage of the solar cycle, curve fitting becomes quite reliable. Hathaway et al. (1994) proposed a parametric curve for fitting to the monthly sunspot numbers, \(R\), in each cycle with: \[R(t)=A(t-t_{0})^{3}[\exp[(t-t_{0})^{2}/B^{2}]-C]^{-1} \tag{1}\] where \(R\) is the relative sunspot number and \(t-t_{0}\) is the time in months since the effective start of the cycle. Starting with four parameters (amplitude \(A\), rise time \(B\), asymmetry \(C\), and starting time \(t_{0}\)), they showed that the asymmetry parameter \(C\) could be fixed for all cycles and the rise time parameter \(B\) could be expressed in terms of amplitude (this is the Waldmeier effect - big cycles rise more rapidly to maximum). This allows the cycle to be fit with just two parameters - the starting time for the cycle, \(t_{0}\), and the amplitude, \(A\). In 2015, the sunspot number was revised to version 2.0 (Clette et al., 2016; _Smoothed Sunspot Number_, 2022, hereafter, V2.0). We find that the values for \(B\) and \(C\) have changed for V2.0, with the best fit for \(B\) and \(C\) now given by: \[B = 36.3+0.72/\sqrt{A}\mbox{ months} \tag{2}\] \[C = 0.70 \tag{3}\] We apply a Levenberg-Marquardt method (Press et al., 1992) to fit this nonlinear function of two parameters to the monthly sunspot numbers. Using the uncertainties in the monthly sunspot numbers, we generate 100 Monte Carlo realizations of the monthly sunspot number record, which are then fit to determine the uncertainties in the fit parameters.
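As a concrete illustration of this procedure, the sketch below (our own minimal Python version; it fits synthetic data rather than the actual monthly record, and the "true" parameter values are arbitrary) carries out the two-parameter fit of Eqs. (1)-(3) with a Levenberg-Marquardt routine and estimates the parameter uncertainties from Monte Carlo realizations.

```python
import numpy as np
from scipy.optimize import curve_fit  # uses Levenberg-Marquardt when unbounded

def cycle_shape(t, A, t0):
    """Eqs. (1)-(3): R(t) = A x^3 / (exp(x^2/B^2) - C), with x = t - t0 in months."""
    A = abs(A)                            # guard against sign flips during iterations
    B = 36.3 + 0.72 / np.sqrt(A)
    C = 0.70
    x = np.clip(t - t0, 0.0, None)        # the cycle contributes only after its start
    return A * x**3 / (np.exp((x / B)**2) - C)

# Synthetic "monthly sunspot numbers" standing in for the real record (an assumption)
rng = np.random.default_rng(1)
t = np.arange(0.0, 70.0)                  # months since a reference epoch
true = cycle_shape(t, 2.2e-3, 3.0)
sigma = 5.0 + 0.1 * true                  # illustrative monthly uncertainties
R = true + rng.normal(0.0, sigma)

popt, _ = curve_fit(cycle_shape, t, R, p0=[1e-3, 0.0], sigma=sigma)

# Monte Carlo realizations of the record to estimate the fit-parameter uncertainties
fits = []
for _ in range(100):
    Rk = R + rng.normal(0.0, sigma)
    pk, _ = curve_fit(cycle_shape, t, Rk, p0=popt, sigma=sigma)
    fits.append(pk)
A_sd, t0_sd = np.std(fits, axis=0)
print(f"A = {popt[0]:.2e} +/- {A_sd:.1e}, t0 = {popt[1]:.2f} +/- {t0_sd:.2f} months")
```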
The goodness of the fit between this two-parameter function and the smoothed sunspot number for previous cycles is shown in the top panel of Figure 1. We investigate the accuracy of the fitting as the cycles progress by performing the curve fitting at six-month intervals from each cycle's minimum. We then compare the amplitude of the fit at each interval to the observed final amplitude fit. We find that, using the two parameters, the interval fit typically converges to within 10% of the final fit values after about 3 years into each cycle (near the inflection point on the rising curve), as illustrated in the bottom panel of Figure 1. Note that there is a tendency to overestimate the amplitude early in each cycle. The curve fitting results for Cycle 25 were variable and quite uncertain until a year ago (May 2022 - 30 months after minimum). Since then the results have been virtually unchanged - the curve fitting has given \(R_{max}(25)=135\pm 10\) with \(t_{0}=2019.8\pm 0.3\). This determination of the amplitude of Cycle 25 is unlikely to change by more than 10% as the cycle continues to develop. ## 3 Magnetic Precursors for Predicting Cycle Amplitudes Geomagnetic precursors to the solar cycle were first suggested by the Russian researcher Ohl in 1966 (Ohl & Ohl, 1979). He found that the minimum in geomagnetic activity, as indicated by the _aa_ index (_Geomagnetic aa Index_, 2022, plotted in the top panel of Figure 2), is strongly correlated with the amplitude of the oncoming cycle. This can be seen by comparing the heights of the minima, shown in blue, to the following peaks in sunspot number, shown in red. (Note that we include a correction offset of +3 nT prior to 1957, when the English observing station was moved from Abinger to Hartland (Svalgaard & Cliver, 2005).) A slight drawback to this method comes from the fact that these geomagnetic minima usually occur just after the sunspot cycle minimum. Other geomagnetic precursor methods have been devised to provide earlier predictions but are either based on data that don't cover as many sunspot cycles or require data processing with free parameters (Feynman, 1982; Thompson, 1993). The Sun's polar magnetic field configuration near the time of cycle minimum is gaining popularity as a precursor for the amplitude of the following cycle. At minimum, the Sun's magnetic field is well characterized by a simple axial dipole. This simple configuration is often the starting point for models of the Sun's magnetic dynamo (Babcock, 1961). The geomagnetic precursors near minimum are thought to perform well because they are driven by high-speed solar wind streams and are thus reflections of the strength of the Sun's polar magnetic fields (Schatten & Sofia, 1987; Wang & Sheeley, 2009). Figure 1: The two-parameter functional fit to each of the last 14 cycles (top). The fit amplitudes relative to their final values as functions of time elapsed since minimum for the last 14 cycles (bottom). Figure 2: Solar cycle prediction precursors from magnetic conditions (in blue) near cycle minimum. Top panel: The geomagnetic \(aa\)-index in black with minimum values in blue. Middle panel: The Sun's axial dipole strength from WSO in black with values at cycle minima in blue. Bottom panel: The Sun's polar fields from WSO in black with values at cycle minima in blue. The smoothed sunspot number (V2.0) is shown in red in each panel.
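For reference, the "smoothed" sunspot numbers referred to throughout (the red curves in Figure 2) are conventionally 13-month running means with half-weight endpoints; a minimal implementation of that smoothing (our own sketch) is:

```python
import numpy as np

def smooth13(monthly):
    """13-month running mean with half-weight endpoints (the conventional
    smoothing behind 'smoothed sunspot number'); NaN where undefined."""
    monthly = np.asarray(monthly, dtype=float)
    w = np.ones(13)
    w[0] = w[-1] = 0.5
    w /= w.sum()                      # total weight 12 -> normalised
    out = np.full_like(monthly, np.nan)
    out[6:-6] = np.convolve(monthly, w, mode="valid")
    return out
```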
Measurements of the Sun's polar fields over the last four solar cycles have proven to be successful predictors of the following cycle amplitude (Schatten et al., 1978; Svalgaard et al., 2005; Petrovay, 2010; Munoz-Jaramillo et al., 2013). Direct, systematic (daily) measurements of the Sun's polar fields have been made at the Wilcox Solar Observatory (WSO) since 1976 (_Solar Polar Field Strength_, 2022; Hoeksema, 1995; Svalgaard et al., 1978). The polar magnetic field configuration can be characterized in two different ways: by calculating the axial dipole component of the Sun's magnetic field (Figure 2, middle panel) or by averaging the flux density over each polar region (i.e., the polar field strength, shown in Figure 2, bottom panel). The latitudes and field component (e.g., radial or line-of-sight) used to calculate the flux density at each pole are somewhat arbitrary. For WSO, the polar field measurement was set by the spatial resolution of the instrument and defined using the highest latitude pixel, which measures the line-of-sight fields nominally between \(55^{\circ}\) and the poles. Magnetic data from other instruments have often employed the radial component and used different latitude ranges. While the flux density over each polar region offers insight into hemispheric asymmetries, the innate ambiguity associated with this measurement may make the axial component of the Sun's magnetic dipole a better metric for solar cycle prediction (Upton & Hathaway, 2014). ## 4 Magnetic Precursor Measurements at Cycle 24/25 Minimum We now have observations of the Sun's polar fields as well as measurements of geomagnetic activity during (and for three years after) the Cycle 24/25 minimum. For the geomagnetic precursor, we focus on the minimum in geomagnetic activity as measured by the _aa_ index (Ohl's method) because it does not require any decomposition and the measurements date back to 1868, allowing the relationship to be determined for 13 cycles. For the polar field precursor, we look at both the polar field strength and the axial dipole strength during solar cycle minimum, as measured by WSO. These measurements are all shown in Figure 2, along with the smoothed sunspot number V2.0. While the polar field measurements do indeed appear to be indicative of the strength of the next cycle, they only provide three solar cycles (Cycles 22-24) for determining the relationship between the polar fields at minimum and the amplitude of the next cycle. We begin by relating each precursor measurement to the strength of the following cycle as indicated by \(R_{max}\), i.e., the maximum smoothed sunspot number V2.0 for that cycle. This is shown in Figure 3. We calculate the ratio of \(R_{max}\) to the minimum of the geomagnetic _aa_ index for each cycle and find: \[R_{max}=(12.3\pm 0.8)aa_{min} \tag{4}\] where \(aa_{min}\) is the minimum in the _aa_ index near the time of solar minimum. Likewise, we calculate the ratio of \(R_{max}\) to the polar fields and axial dipole moment and find: \[R_{max} = (95.6\pm 8.3)B_{polar} \tag{5}\] \[R_{max} = (59.5\pm 8.2)B_{l=1} \tag{6}\] where \(B_{polar}\) is the absolute value of the difference between the north and south polar field strengths and \(B_{l=1}\) is the axial dipole field strength. While all three methods do suggest a weak cycle, they don't completely agree. The minimum in the _aa_ index of 10.76 gives \(R_{max}(25)=132\pm 8\), the WSO polar fields of 1.26 give \(R_{max}(25)=120\pm 10\), and the WSO axial dipole of 1.91 gives \(R_{max}(25)=114\pm 15\).

Figure 3: Sunspot number maxima as functions of the magnetic precursors. Top panel: Cycle maxima plotted as functions of the minima in the geomagnetic \(aa\) index. Bottom panel: Cycle maxima as functions of the WSO axial dipole at the photosphere in red and the WSO polar fields in blue. In both panels the average ratio of sunspot number maximum to precursor value is given by the black line. The values of the precursors at Cycle 24/25 minimum are shown by the blue and red vertical lines while the values they indicate for Cycle 25 maximum are shown by the horizontal lines.
The variance weighted mean of these precursors gives \(R_{max}(25)=125\pm 6\), where the reported errors are \(1\sigma\) errors. This is slightly lower than, but within the range of, the current curve fitting values. ## 5 Combined Prediction Both the magnetic precursors and the curve fitting indicate that the Cycle 25 sunspot number maximum will be slightly bigger than the 116 of Cycle 24 but significantly smaller than the average of 179 for all 24 previous cycles. Hathaway et al. (1999) proposed a combined prediction based on a weighted mean of the precursor predictions and the curve fitting prediction, with more weight given to the curve fitting as the cycle progresses (50/50 at 36 months past minimum). This combined prediction gives a maximum smoothed sunspot number of \(R_{max}(25)=134\pm 8\) with maximum occurring late in the fall of 2024. Figure 4 shows a plot of the predicted curve, along with the observed monthly sunspot numbers V2.0. ## 6 Magnetic Precursor Estimates before Minimum Modelers have sought to extend the predictive range of the polar field precursors by using physics-based dynamo models (Charbonneau, 2020; Nandy, 2021) or Surface Flux Transport (SFT) models (Sheeley, 2005; Jiang et al., 2014) to obtain the polar fields years ahead of minimum. Before cycle minimum in 2019, the Solar Cycle Prediction Panel, which represents NOAA, NASA and the International Space Environmental Services (ISES), released its official sunspot number forecast (_Solar Cycle 25 Forecast Update_, 2019), predicting a sunspot number maximum of \(115\pm 10\). Their consensus forecast was based largely on the model projections for the polar fields at minimum. In 2016 and 2018, we used our Advective Flux Transport (AFT) model to create forecasts of the Sun's polar fields in order to predict the strength of Solar Cycle 25 (Hathaway & Upton, 2016; Upton & Hathaway, 2018). Our results indicated that the polar fields at Cycle 24/25 minimum would be similar to those at the previous minimum, suggesting Solar Cycle 25 would be a weak cycle, very similar in amplitude to Solar Cycle 24. Another SFT model (Cameron et al., 2016; Jiang et al., 2018) reported similar results, predicting that Solar Cycle 25 would be slightly larger than Cycle 24, but less than the more average-sized Cycle 23. The AFT model assimilates the observed magnetic field from the SOHO/MDI (Scherrer et al., 1995) and SDO/HMI (Scherrer et al., 2012) magnetograms in order to provide the closest contact with the observations. This mode, known as the AFT Baseline, provides the most accurate representation of the magnetic field over the entire surface of the Sun.

Figure 4: Combined sunspot number prediction based on the magnetic precursors and curve fitting at 40 months into Cycle 25. Monthly sunspot numbers V2.0 are noisy and shown in red. The prediction curves are smooth and shown in black.
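The variance-weighted mean quoted in Section 4 (and, with time-dependent weights, the combined prediction of Section 5) is straightforward to reproduce. A minimal sketch using the three precursor values given above:

```python
import numpy as np

# Precursor-based Cycle 25 maxima from Section 4:
# aa-index minimum, WSO polar fields, WSO axial dipole.
r_max = np.array([132.0, 120.0, 114.0])
sigma = np.array([8.0, 10.0, 15.0])

w = 1.0 / sigma**2                      # inverse-variance weights
mean = np.sum(w * r_max) / np.sum(w)    # variance-weighted mean, ~125
err = 1.0 / np.sqrt(np.sum(w))          # 1-sigma uncertainty, ~6
print(f"R_max(25) = {mean:.0f} +/- {err:.0f}")
```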
In its predictive mode, AFT starts with a map from the AFT Baseline and advances it forward to make predictions about how the Sun's magnetic field will evolve. In 2016 and 2018, we used AFT to create predictions of the Sun's polar field evolution in order to predict the strength of Solar Cycle 25. Now that we are well past minimum, and Cycle 25 is well under way, we revisit those predictions to assess their performance. In Hathaway & Upton (2016), we predicted an axial dipole for the solar minimum of \(1.36\pm 0.20\) G and in Upton & Hathaway (2018) we predicted an axial dipole of \(1.56\pm 0.05\) G. (A comparison of those results and a detailed discussion can be found in Upton & Hathaway (2018).) We show our 2018 polar field predictions in Figure 5 (blue lines), along with the corresponding measurements from WSO (in black), and the AFT Baseline constructed with the MDI and HMI observations (in red). AFT was able to successfully predict the evolution of the AFT/HMI axial dipole. However, the axial dipole in the WSO observations was about 20% higher than HMI and AFT at the time of cycle minimum. The AFT polar field strengths (bottom panel of Figure 5) are calculated from the radial magnetic field above \(55^{\circ}\) (red line) and \(60^{\circ}\) (pink line). The LOS WSO measurements (nominally \(55^{\circ}\) and above) are shown in black and the HMI derived Mean Radial Fields above \(60^{\circ}\) (_HMI Polar Field_, 2022; Sun et al., 2015) are shown in light gray (smoothed values in dark gray). AFT was able to successfully predict the evolution of the AFT/HMI polar fields in the North, but the polar fields in the South diverged somewhat starting in late 2019. The difference between the north and south polar field strengths in the 2018 predictions ranged from 5.49 to 5.92, with an average of 5.68. The observed difference was 5.25, or about 10% smaller than the AFT 2018 ensemble of predictions. Note that the radial AFT/HMI polar fields above \(55^{\circ}\) are smaller than the WSO single pixel LOS polar fields corresponding to the same latitude range. The AFT Baseline polar fields above \(60^{\circ}\) at the start of 2020 agree with the WSO fields above \(55^{\circ}\) and with the HMI derived fields above \(60^{\circ}\). ## 7 Discussion Geomagnetic and solar magnetic precursors have been shown to be the most reliable predictors of an impending solar cycle. However, there are still some uncertainties associated with these predictors. Currently, the geomagnetic precursors are more robust due to the length of the dataset, but the physical mechanism behind their success is less direct. The polar precursors have a stronger foundation in physics, but with a much shorter timeline, the functional relationship is poorly defined. This is confounded by the fact that the polar measurements, in and of themselves, are not well constrained.

Figure 5: Polar precursor measurements. This figure shows the axial dipole measurements in the top panel and the hemispheric polar field strengths in the bottom panel. WSO is shown in black, MDI/HMI/AFT measurements are shown in red/pink (\(55^{\circ}/60^{\circ}\)), and the Upton & Hathaway (2018) predictions are shown in blue. The northern/southern polar fields are shown with a solid/dashed line. For additional reference, the HMI radial polar field measurements are shown in light gray (smoothed in dark gray). NOTE: WSO measurements have been re-scaled to be more consistent with the HMI strengths.
This is primarily due to innate observational limitations. This is highlighted in Figure 5 by a) the mismatch between the WSO and the MDI/HMI/AFT axial dipole measurements (top panel), b) the annual oscillation in the WSO LOS polar field measurements (bottom panel), and c) the spread in the HMI radial polar field measurements (bottom panel). The WSO axial dipole and the MDI/HMI/AFT axial dipole appear to be offset in both time and amplitude. The offset in time is most apparent during the dipole reversals, as the WSO reversals precede the MDI/HMI/AFT reversals by about a year or two. The offset in amplitude is most apparent during the last solar minimum, when the axial dipole is relatively flat. These offsets are a consequence of the limited resolution of the WSO observations convolved with the changing latitude range of the WSO pixel (due to an orbit around the inclined Sun). As previously mentioned, the highest latitude WSO pixel measures the line-of-sight fields nominally between \(55^{\circ}\) and the poles. However, the Sun's rotation axis is inclined \(\sim 7.15^{\circ}\) with respect to the ecliptic plane. So, while on average the highest latitude pixel measures from \(55^{\circ}\) and above, the latitudes actually being measured vary from \(48^{\circ}\) and above to \(62^{\circ}\) and above. This produces a seasonal oscillation in the measurements of the Sun's polar fields, which is clearly visible as an annual signal in the plot of the hemispheric polar field strengths (bottom panel of Figure 5). Furthermore, the northern/southern (solid/dashed) hemisphere measurements are 6 months out of phase with one another, such that the northern/southern WSO measurements come into better agreement with the MDI/HMI/AFT measurements in the Spring/Fall, when the Southern/Northern pole is inclined toward the Earth. It should be noted that these annual signals are present in the MDI and HMI data as well, though to a lesser extent. This is because the higher resolution of MDI and HMI provides more detailed latitudinal coverage, but the inclined orbit causes the line-of-sight angle of the magnetic field (with respect to the radial component) to also change by \(\pm 7^{\circ}\) over the course of the yearly orbit. The deviations in the polar field measurements are most notable when the polar fields are rapidly evolving, during the polar field reversals. At this time, new polarity flux is being transported to the poles to cancel with the old polarity polar flux, causing a large latitudinal gradient in the high latitude flux. As the Earth orbits the Sun and the latitudes measured by the most poleward WSO pixel change, the pixel samples lower/higher latitudes and more of the new/old polarity flux is present in that pixel. Consequently, the average polar field strength measured by that pixel changes substantially over the course of the orbit, resulting in the observed annual oscillation. These deviations in the hemispheric polar field strengths feed into the measurements of the axial dipole moment (top panel of Figure 5). However, rather than appearing as an annual oscillation, they instead present as the offsets in amplitude and time noted above. One might expect that the deviations would cancel out, since the northern and southern hemispheres are out of phase with one another and one of the poles is always favorable. However, the unfavorable pole always appears "ahead" in the polar reversal process (because the lower latitudes that are being included have more new polarity flux).
This means that at nearly all times, one of the two poles will seem to be "ahead" and thus the evolution of the axial dipole as measured by WSO will precede the true polar reversal in time, producing the apparent temporal offset seen in the top panel of Figure 5. This is evidenced by the fact that the WSO polar field measurements preferentially precede the MDI/HMI polar field measurements. If not properly accounted for, the temporal deviations can also produce an offset in amplitude. The WSO axial dipole moment during minima is the "yardstick" used to determine the strength of the next cycle for prediction purposes. Since the axial dipole moment is most crucial during solar minima, one might naturally scale the WSO and MDI/HMI measurements such that they are in agreement during the Solar Cycle 22/23 and Solar Cycle 23/24 minima (1999 and 2009), as done in Figure 5. However, doing so results in the WSO dipole being about 20 percent higher than the HMI dipole for the Solar Cycle 24/25 minimum in December 2019. This faulty scaling resulted in the AFT solar cycle prediction underestimating the strength of Solar Cycle 25, not because the flux transport did not accurately predict the evolution of the polar fields, but rather because the "yardstick" used to make the prediction was not calibrated properly. While outside the scope of this paper, a detailed accounting of these deviations and a proper calibration of the WSO observations with respect to the modern space observations is needed in order to reduce the uncertainty in the polar field-SSN relationship. The uncertainty in the polar measurements and the polar field-SSN relationship would also greatly benefit from a polar mission to measure the magnetic fields from directly overhead, thus removing the uncertainty associated with LOS observations of the magnetic field. Uncertainty in the polar measurements notwithstanding, the predictions presented in this paper firmly point to Solar Cycle 25 being slightly larger than Solar Cycle 24, but nowhere close to the amplitude of Solar Cycle 23 or the average solar cycle strength (\(\sim\) 180). This will be the first increase in cycle amplitude that we've seen since Solar Cycle 21 (i.e., about 50 years). This may mean that we have reached the inflection point in the current Gleissberg cycle and might start to see bigger cycles again. ## 8 Open Research The results presented in this paper rely on geomagnetic indices (_Geomagnetic aa Index_, 2022) calculated and made available by ISGI Collaborating Institutes from data collected at magnetic observatories. We thank the involved national institutes, the INTERMAGNET network and ISGI (isgi.unistra.fr). Wilcox Solar Observatory data (_Solar Polar Field Strength_, 2022) used in this study was obtained via the web site [http://wso.stanford.edu/Polar.html](http://wso.stanford.edu/Polar.html) courtesy of J.T. Hoeksema. The Wilcox Solar Observatory is currently supported by NASA. HMI Polar Field data (_HMI Polar Field_, 2022) used in this study was obtained via the web site [http://jsoc.stanford.edu/ajax/lookdata.html?ds=hmi.meanpf_720s](http://jsoc.stanford.edu/ajax/lookdata.html?ds=hmi.meanpf_720s). HMI is currently supported by NASA. SILSO sunspot number data (_Smoothed Sunspot Number_, 2022) was obtained via the web site [https://www.sidc.be/silso/](https://www.sidc.be/silso/) courtesy of the Royal Observatory of Belgium, Brussels.
## Acknowledgments LAU was supported by NASA Heliophysics Living With a Star grants NNH16ZDA010N-LWS and NNH18ZDA001N-LWS and by NASA grant NNH18ZDA001N-DRIVE to the COFFIES DRIVE Center managed by Stanford University. DHH was supported by NASA contract NAS5-02139 (HMI) to Stanford University. HMI data used in this study are courtesy of NASA-SDO and the HMI science team. MDI data used in this study are courtesy of NASA/ESA-SOHO and the MDI science team. We thank Todd Hoeksema, Marc DeRosa, Xudong Sun, and Andres Munoz-Jaramillo for useful discussions regarding the uncertainty in the polar field measurements.
2301.00490
$H_0$ Tension on the Light of Supermassive Black Hole Shadows Data
Cosmological tensions in current times have opened a wide door to study new probes to constrain cosmological parameters, specifically, to determine the value of the Hubble constant $H_0$ through independent techniques. The two standard methods to measure/infer $H_0$ rely on: (i) anchored observables for the distance ladder, and (ii) establishing the relationship of the $H_0$ to the angular size of the sound horizon in the recombination era assuming a standard Cosmological Constant Cold Dark Matter ($\Lambda$CDM) cosmology. However, the former requires a calibration with observables at nearby distances, while the latter is not a direct measurement and is model-dependent. The physics behind these aspects restrains our possibilities in selecting a calibration method that can help minimise the systematic effects or in considering a fixed cosmological model background. Anticipating the possibility of deeply exploring the physics of new nearby observables such as the recently detected black hole shadows, in this paper we propose standard rules to extend the studies related to these observables. Supermassive black hole shadows can be characterised by two parameters: the angular size of the shadow and the black hole mass. We found that it is possible to break the degeneracy between these parameters by forecasting and fixing certain conditions at high(er) redshifts, i.e., instead of considering the $\approx$10\% precision from the EHT array, our results reach a $\approx 4\%$, a precision that could be achievable in experiments in the near future.
Celia Escamilla-Rivera, Rubén Torres Castillejos
2023-01-02T00:13:01Z
http://arxiv.org/abs/2301.00490v1
# \(H_{0}\) Tension on the Light of Supermassive Black Hole Shadows Data ###### Abstract Cosmological tensions in current times have opened a wide door to study new probes to constrain cosmological parameters, specifically, to determine the value of the Hubble constant \(H_{0}\) through independent techniques. The two standard methods to measure/infer \(H_{0}\) rely on: (i) anchored observables for the distance ladder, and (ii) establishing the relationship of the \(H_{0}\) to the angular size of the sound horizon in the recombination era assuming a standard Cosmological Constant Cold Dark Matter (\(\Lambda\)CDM) cosmology. However, the former requires a calibration with observables at nearby distances, while the latter is not a direct measurement and is model-dependent. The physics behind these aspects restrains our possibilities in selecting a calibration method that can help minimise the systematic effects or in considering a fixed cosmological model background. Anticipating the possibility of deeply exploring the physics of new nearby observables such as the recently detected black hole shadows, in this paper we propose standard rules to extend the studies related to these observables. Supermassive black hole shadows can be characterised by two parameters: the angular size of the shadow and the black hole mass. We found that it is possible to break the degeneracy between these parameters by forecasting and fixing certain conditions at high(er) redshifts, i.e., instead of considering the \(\approx\)10% precision from the EHT array, our results reach a \(\approx\) 4%, a precision that could be achievable in experiments in the near future. Furthermore, we found that our estimations provide a value of \(H_{0}=72.89\pm 0.12\) km/s/Mpc and, for the matter density, \(\Omega_{m}=0.275\pm 0.002\), showing an improvement in the values reported so far in the literature. We anticipate that our results can be a starting point for more serious treatments of the physics behind the SMBH shadow data as cosmological probes to relax tension issues. ## I Introduction One of the most challenging problems of modern cosmology is the statistical tension on the estimation of the expansion rate of the universe today: the Hubble constant \(H_{0}\). Different experiments, observables and predicted theoretical models give \(H_{0}\) values that disagree strongly. Therefore, in order to determine \(H_{0}\), the distance to observables on astrophysical and cosmological scales is one of the most relevant factors: measurements of this constant in the local universe based on distance ladder methods do not match the value estimated in the early universe. The first methodology relies on careful statistical precision analysis, while the latter assumes a standard cosmological model supported by observational evidence. The most pressing issue, in particular for this tension, is the 5.0\(\sigma\) disagreement between the local value given by the SH0ES collaboration [1] (\(H_{0}\) = 73.04 \(\pm\) 1.04 km/s/Mpc) and the early estimated value given by the Planck collaboration [2] (\(H_{0}\) = 67.27 \(\pm\) 0.60 km/s/Mpc). While measurements involved in each sector of the early and late universe agree, e.g., CMB and BAO observations,1 the \(H_{0}\) tension persists at the ends of the empirical distance ladder. Several proposals have been raised to find a viable solution to this issue.
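For reference, the significance of the disagreement quoted above can be checked with the standard Gaussian tension estimator \(|H_{0}^{\rm local}-H_{0}^{\rm early}|/\sqrt{\sigma_{\rm local}^{2}+\sigma_{\rm early}^{2}}\). A naive evaluation (a simple sketch of ours; the quoted \(5.0\sigma\) comes from the SH0ES analysis itself) gives roughly \(4.8\sigma\):

```python
import numpy as np

h0_local, sig_local = 73.04, 1.04   # SH0ES [1], km/s/Mpc
h0_early, sig_early = 67.27, 0.60   # Planck [2], km/s/Mpc

# Difference over combined (quadrature) uncertainty
tension = abs(h0_local - h0_early) / np.hypot(sig_local, sig_early)
print(f"{tension:.1f} sigma")       # ~4.8 sigma with this simple estimator
```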
On one hand, it is possible to consider late \(H_{0}\) measurements which do not require a benchmark model, such as the standard Cosmological Constant Cold Dark Matter (\(\Lambda\)CDM) model. On the other hand, early \(H_{0}\) measurements can be performed if we assume a collection of physical properties based on a pre-established model that can describe the evolution of the universe. A broad compendium of these two paths for estimating \(H_{0}\) can be found in [3]. Footnote 1: It is important to mention that the combination of CMB lensing and BAO data has raised serious outcomes related to the spatial curvature of the universe, i.e., without considering these observables, Planck data tends to agree with a closed universe scenario [28; 29]. While the core of these paths is based on refining our calibration methods or considering physics beyond the standard \(\Lambda\)CDM model, we should contemplate the possibility of exploring the nature of new observables that could shed some light on the \(H_{0}\) tension. In this direction, astronomical objects with greater diversity, both in distance and physical characteristics, are being used as standard rulers in our local universe. Among the proposals, it has been suggested that we can use Black Hole (BH) shadows as standard rulers [4; 5]. The detection of the first BH shadow, that of M87*, by the Event Horizon Telescope Collaboration (EHT) [6] opened a new window to using this kind of ruler to independently determine the BH mass and its distance from the observer. Through this mechanism, we can compute the physical size of the BH shadow and compare it with the observed size. Along with the detection of M87* in a nearby galaxy, a second shadow, that of the central BH of our own galaxy, Sagittarius A* (Sgr A*) [7], has also been detected. In particular, supermassive black hole (SMBH) shadows are interesting candidates to study our local universe since their physics is quite simple, and we can see them as standard rulers if the relation between the size of the shadow and the mass of the SMBH that produces it, the so-called angular size/redshift relation, is established. To use these kinds of observables, we need to set two limits: 1. Measurements at low redshifts. SMBH shadows can be used to estimate \(H_{0}\) in a cosmologically independent way and also without evoking the distance ladder method. By adopting a peculiar velocity of the host galaxy \(v_{p}\) in km/s, the mass of the SMBH \(M\) in solar masses \(M_{\odot}\) and the angular size \(\alpha\), we can directly compute the distance to the SMBH. Finally, using the Hubble law, we can estimate \(H_{0}\). Clearly, performing this kind of estimate using two SMBH shadow data points (M87* and Sgr A*) is insufficient, not only because of the low data point density, but also because of their high uncertainties. However, we expect a future improvement in this direction since an abundance of SMBHs can be hosted in spiral and elliptical galaxies [8]. At this scale, [9] proposed using a mock SMBH shadow catalog as an anchor in the distance ladder method, which can offer an estimation of \(H_{0}\). 2. Measurements at high redshifts. At this scale, determining the mass of the BH can be difficult because high instrumental resolution is needed, and the estimated uncertainties are quite large. However, reverberation mapping [10] techniques combined with spectroastrometry analyses [11] have been employed to determine BH mass and distance simultaneously.
In [12], the authors proposed a set of simulated SMBH shadows at this scale, which were generated by assuming a fiducial benchmark cosmology, making it possible to determine a set of cosmological parameters such as \(\Omega_{m}\) and \(H_{0}\). These approaches have set a convenient path to estimate \(H_{0}\). It is important to mention that both of them require either a large SMBH shadow baseline to perform the statistics or higher instrumental resolution to perform the observations. SMBH shadows as a cosmological probe are still new and lack statistical power in comparison to other baselines. However, the future of these measurements looks bright and hopeful. While the technological capacity is developing, we can start by considering more serious analyses of the physics behind them. Our goal is to forecast a larger set of SMBH shadows by relaxing the assumptions made in the latter approaches. By doing this, we will significantly decrease the errors, and we will obtain a higher \(H_{0}\) in comparison to the reported ones. Recent works have studied the possibility of using BH shadow measurements from the Event Horizon Telescope (EHT), e.g., to constrain astrophysical free parameters on a Kerr-Newman-Kiselev-Letelier BH configuration [13], analyse cosmological constant corrections on the BH shadow radii [14], determine optical features from a Schwarzschild MOG BH with several thin accretion models [15], examine BH charges [16] and evaluate the effects in the Kerr-Newman BH in quintessential dark energy scenarios [17]. This paper is organised as follows. In Section 2, we review the characteristics needed to employ SMBH shadows as standard rulers, and we describe the equations behind them. In Section 3, we present the algorithm to simulate SMBH shadows and compare them with EHT observations. In addition, we include the process to perform this forecasting at low and high(er) redshifts. In Section 4, we describe the current data sets employed in our analyses, including the compilation of the BH events and their observations (see Table 1), and the new forecasted baseline. In Section 5, we discuss the results obtained for our \(H_{0}\) estimations performed with our mock data and present a comparison with previous works. Finally, we give a summary of discussions in Section 6. ## 2 Black hole shadow description as standard rulers A general way to describe the BH shadow in a realistic expanding universe is to consider the angular size/redshift relation, which relates the apparent angular size of the BH to its redshift and describes how this quantity changes at cosmological distances. Once this angular size is determined, we can compute the angular diameter distance to the BH. The advantage of describing these physical distance relations comes from the fact that SMBH shadows can be used as standard rulers [4], which makes them useful for determining \(H_{0}\) in two regimes: * Nearby galaxies (low redshift, \(z\leq 0.01\)), where SMBH shadows are cosmologically model-independent and do not require methods that consider anchors in the distance ladder. By a simple calculation using the Hubble flow velocity of a galaxy and its peculiar velocity, we can derive the Hubble velocity and constrain \(H_{0}\). In addition, at \(z\to 0\), the angular size of the BH shadow grows as the BH mass gets bigger, which makes it a suitable candidate to study our local universe in comparison to the BAO peaks, whose amplitude decreases due to the cosmological expansion.
However, since the angular size and the mass of the BH are linearly correlated, we require precise techniques to break this degeneracy such as, for example, stellar dynamics [18], gas dynamics [19] or maser observations [20; 21]. * High(er) distance observations (high(er) redshift \(0.01<z<7\) (\(z>7\))), where not only do we require a fiducial cosmology to determine the BH mass, but we also require an angular resolution around 0.1 \(\mu\)as. These kinds of observations have been shown to be extremely difficult [4]; however, by combining techniques such as reverberation mapping [10] and very long baseline interferometry technologies, it will be possible to achieve such resolution. Going further in distance, the search for quasars beyond \(z>7\)[22] allows us to estimate BH masses at higher redshift. While the uncertainties in these measurements are high, it is important to note that this indicates SMBH shadows can be part of future catalogs, making them useful for probing the cosmic expansion at this scale. As far as we know, there have been two methodologies to constrain \(H_{0}\) using SMBH shadows in the described redshift ranges: * Combination of the M87\({}^{*}\) observation and an SNIa catalog [12]. This proposal allows us to study a particular set of cosmological parameters by considering a collection of 10 simulated BH shadow data points under a fiducial benchmark cosmology (\(\Omega_{m}=0.3\) and \(H_{0}=70\) km/s/Mpc) and the Pantheon SNIa catalog (see Section 4 for its description). The results are reported in Table 2. However, these simulations are restricted solely to BH masses of \(M=3\times 10^{9}M_{\odot}\) within an interval \(7<z<9\), which, combined with SNIa observations, are not able to constrain the \(\Lambda\)CDM model. * Combination of mock catalogues for SMBH (\(\approx 10^{6}\) BH simulated data points) plus mock SNIa data for the Vera C. Rubin Observatory LSST2. This proposal starts from the same point of view as the latter; however, the mock SMBH data are used as anchors to calibrate the distance ladder. While the number of simulated BH data points is high, the forecasting is based on a benchmark cosmology and a single shadow data point from M87\({}^{*}\). Furthermore, a cosmographic approach was employed at low redshift, making it impossible to constrain at the third order of the series, i.e., the current value of the jerk parameter \(j_{0}\)[9]. Footnote 2: www.lsst.org Our goal is to forecast a larger set of SMBH shadows by relaxing the assumptions made in the latter methodologies. Additionally, we are going to consider a second shadow data point: the Sgr A* observation [7]. To set the theoretical quantities, we need to write distance quantities in terms of the characteristics of the object under study. In this work, we employ the standard definition of the luminosity distance for a flat \(\Lambda\)CDM model as [23]: \[d_{L}(z)=(1+z)\frac{c}{H_{0}}I(z), \tag{1}\] where \(c\) is the speed of light, \(H_{0}\) is the present-day Hubble parameter and \(I(z)\) is given by the integral \[I(z)=\int_{0}^{z}\left(\Omega_{m0}(1+\tilde{z})^{3}+\Omega_{\Lambda_{0}} \right)^{-1/2}\ d\tilde{z}, \tag{2}\] where \(\Omega_{m0}\) and \(\Omega_{\Lambda_{0}}\) are the present values of the critical density parameters for matter and a dark energy component, respectively. The luminosity distance can be related to the angular diameter distance \(d_{A}(z)\) by the reciprocity theorem, which states that [24]: \[d_{L}(z)=(1+z)^{2}d_{A}(z). \tag{3}\]
By definition, the angular diameter distance of an object is \(d_{A}=L/\Delta\theta\), with \(L\) the proper diameter of the object and \(\Delta\theta\) its observed angular diameter. If we are able to measure one of these distances at a certain redshift \(z\), then we can obtain information about the cosmological parameters denoted in Equation (2). Certainly, a BH itself does not emit photons. However, we can observe the light rays that curve around its event horizon and create a ring with a black spot in the center. We call this the shadow of the black hole. It is known that for a Schwarzschild (SH) BH, the angular radius of its shadow is [25]: \[\alpha_{sh}(z)=\frac{3\sqrt{3}m}{d_{A}(z)}, \tag{4}\] where \(m=GM/c^{2}\) is called the mass parameter of the black hole, \(G\) being the constant of gravitation, \(c\) the speed of light and \(M\) the mass of the BH in solar mass units. This equation is an approximate expression for the visible angular radius of the shadow when using the angular diameter distance \(d_{A}\), obtained by assuming a radial coordinate large enough in comparison with the SMBH horizon in order to obtain an effective linear radius of \(3\sqrt{3}m\). Using Equations (1)-(3), we can rewrite the above equation as: \[\alpha_{sh}=\frac{3\sqrt{3}\,m\,(1+z)H_{0}}{c\,I(z)}. \tag{5}\] Notice that the BH shadow depends on its mass and distance, the Hubble constant \(H_{0}\) and the free cosmological parameters \((\Omega_{m},\Omega_{\Lambda})\). Using Equations (2) and (5), notice that at lower redshift we obtain \[\alpha_{\rm LR}=\frac{3\sqrt{3}mH_{0}}{cz}, \tag{6}\] where we can easily notice that given a value for the radius of the shadow \(\alpha_{LR}\) and the redshift \(z\) of a BH with a known mass, it is possible to directly determine a value of \(H_{0}\). With Equation (3), we can establish the relation between the luminosity distance and the distance modulus of the source through: \[\mu=m(z)-M=5\ \log_{10}d_{L}(z)+25, \tag{7}\] where \(m(z)\) and \(M\) are the apparent and absolute magnitude of the supernova, respectively, and \(d_{L}\) is expressed in Mpc. Notice that in the latter equations, we have considered that the SMBHs are described by an SH metric with spin zero. However, astrophysical SMBHs have rotation, and the spin effects can change the size described by Equation (4). This leads us to consider a Kerr metric, where both the SMBH spin and the observer inclination angle will change the BH shadow along the horizontal axis, while along the vertical axis, in the line of sight, the SH shadow is preserved. This asymmetric shadow size has been observed by the EHT collaboration; nevertheless, a set of numerical phenomenological expressions is required in order to compute the right size of this shadow. A further study on these key characteristics has been conducted in [9]. In the discussion section of this work, we mention how this treatment does not affect the determination of the distance from the shadow. ## III Black hole shadows forecasting Detecting BH shadows requires a great deal of time and technological resources. In recent years, the EHT collaboration3 has been able to observe two SMBH shadows, M87* and Sgr A*, and continues to work in search of more of them and in different systems, e.g., binary systems. However, two data points are not statistically sufficient to generate comprehensive astrophysical analyses, let alone cosmological ones. In addition, reaching a sufficient number of observations that can significantly constrain cosmological parameters can take a long period of time.
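As a concrete illustration of the direct \(H_{0}\) determination from Equation (6), a minimal sketch of ours (not the authors' code), using the M87* values later compiled in Table 1 (\(\alpha=21\,\mu\)as, \(z=0.00428\), \(M=6.6\times 10^{9}M_{\odot}\)), recovers a value close to the \(79.7\pm 5.7\) km/s/Mpc listed in Table 2:

```python
import numpy as np

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg
MPC = 3.086e22           # megaparsec, m
MUAS = np.pi / (180.0 * 3600.0) * 1e-6   # one microarcsecond in radians

def h0_from_shadow(alpha_muas, z, mass_msun):
    """Invert Equation (6): H0 = alpha * c * z / (3*sqrt(3)*m), valid for z << 1."""
    m = G * mass_msun * M_SUN / c**2                              # mass parameter, meters
    h0_si = alpha_muas * MUAS * c * z / (3.0 * np.sqrt(3.0) * m)  # in 1/s
    return h0_si * MPC / 1.0e3                                    # convert to km/s/Mpc

# M87* (Table 1): alpha = 21 muas, z = 0.00428, M = 6.6e9 M_sun
print(h0_from_shadow(21.0, 0.00428, 6.6e9))   # ~80 km/s/Mpc
```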
Until we can reach an optimal number of observations of such a kind, we can perform forecasting through standard ruler methods. Footnote 3: eventhorizontelescope.org In order to produce mock data for the SMBH shadows M87* and Sgr A*, we use Equation (5) and its low redshift approximation \(\alpha_{LR}\) (6). Following this line of thought, we consider these steps: 1. Assume a conservative fiducial cosmological model. In our case, instead of using a conservative Planck data cosmology [26] as in other studies, we will consider the local values: \(H_{0}=73.8\) km/s/Mpc, \(\Omega_{m0}=0.262\), where \(\Omega_{\Lambda_{0}}=1-\Omega_{m0}\). 2. Under the above condition, we compute Equation (2) only as a function of the redshift \(z\). Once the integral is solved, we can use the reciprocity relation Equation (3) in Equation (5). This will be our main function, and it takes as input values a set of two variables: the redshift \(z\) and the SMBH mass. We can consider this result for the low redshift case as a cutoff when \(z\leq 0.1\). Notice that it is necessary to write these equations in BH shadow units, e.g., \(M_{\odot}\), and perform the appropriate conversion to express the results in \(\mu\)as units. Additionally, we need to assume an error in the simulations. In [12], the authors considered the single M87* data point, which at low redshift constrains \(H_{0}\) with a 13% error. In order to reduce this number, a symmetric uncertainty was assumed for this single data point such that the precision scales as \(P\sqrt{N}=\sigma\); therefore, if we want to reach a precision of \(P\approx 4\%\), we require \(N=10\) SMBH shadow simulations. While this assumption is reasonable and the estimated error decreases by almost 8%, the mean value for \(H_{0}\) does not change. Since we are going to consider two SMBH shadows from M87* and Sgr A*, this symmetric uncertainty assumption will be relaxed in order to reach a precision of 4% using a conservative quantity comparable with other observables at low \(z\), e.g., SNIa, and a number of simulations derived from data with asymmetric errors at high \(z\). 3. In comparison to the previous methodologies described in Section 2, in order to test the effectiveness of our algorithm, we can compare the simulated SMBH shadow outputs with the current M87* and Sgr A* observations given by the EHT. As we show in Figure 1, the simulations are very close to the observed values, e.g., for Sgr A* we have \(\alpha_{\rm observed}=26.4\pm 1.15\)\(\mu\)as, while \(\alpha_{\rm simulated}=26.6\pm 1.06\)\(\mu\)as. For the M87* case, we have a value \(\alpha_{\rm observed}=21\pm 1.5\)\(\mu\)as, and our simulation gives \(\alpha_{\rm simulated}=19.5\pm 0.78\)\(\mu\)as. The latter result gives a 4.7% difference from the simulated data point. This quantity is due to our pre-established precision, since the error percentage in the M87* observation is close to 7%, while for Sgr A* it is reduced to almost 4%. Once we have tested our algorithm against the available observables, we can build a larger catalog of data points that can be used in our statistical analyses. To do so, we need to build a baseline with input data containing pairs of redshift and SMBH mass \((z,M)\). In our case, we are going to generate two different mock catalogs: (i) for nearby galaxy estimations (low redshift) and (ii) for high(er) redshift SMBH shadows. The architecture of our algorithm is shown in Figure 2.
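A minimal sketch of Steps 1-2, i.e., the mapping from a pair \((z,M)\) to a shadow radius in \(\mu\)as via Equations (2), (3) and (5); the constants and helper names here are our own choices, not the authors':

```python
import numpy as np
from scipy.integrate import quad

H0 = 73.8                        # km/s/Mpc (Step 1 fiducial values)
OM = 0.262
C_KM = 2.998e5                   # speed of light, km/s
M_SUN_MPC = 1.4766 / 3.086e19    # G*M_sun/c^2 expressed in Mpc (~4.79e-20)
RAD_TO_MUAS = 1e6 * 180.0 * 3600.0 / np.pi

def I_of_z(z):
    """Equation (2) for a flat LCDM background with Omega_L = 1 - Omega_m."""
    integrand = lambda zt: (OM * (1.0 + zt)**3 + (1.0 - OM))**-0.5
    return quad(integrand, 0.0, z)[0]

def alpha_sh_muas(z, mass_msun):
    """Equation (5): shadow angular radius in microarcseconds for a pair (z, M)."""
    m = mass_msun * M_SUN_MPC    # mass parameter in Mpc
    alpha = 3.0 * np.sqrt(3.0) * m * (1.0 + z) * H0 / (C_KM * I_of_z(z))
    return alpha * RAD_TO_MUAS

# Sgr A*: z ~ 1.895e-6, M ~ 4e6 M_sun -> ~26.6 muas, matching Step 3
print(alpha_sh_muas(1.895e-6, 4.0e6))
```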
### High(er) Redshift Observations for Hubble Constant Constraints For high(er) redshift, we will simulate SMBH shadows within \(7\leq z\leq 9\). As we mentioned, since some quasar observations can be reached in such a range, it is possible to observe SMBHs in this region if they have a minimum mass of \(10^{9}M_{\odot}\). These types of BHs are viable as long as they have existed long enough to acquire larger masses, as, e.g., in the case of primordial BHs. Although the population of BHs in this range is expected to be large, if we take into account the above conditions, then the number of BHs that meet these characteristics is drastically reduced; therefore, we first generate 10 random redshift values in the range of \(7\leq z\leq 9\) according to Step 2 described above. For the SMBH masses, we will use a uniform random distribution that takes values in the range of \(10^{9}-10^{10}\)\(M_{\odot}\). Once we have the synthetic catalog with 10 random pairs \((z,M)\), we can feed them into the simulation algorithm described in Figure 2 and obtain the radii of the SMBH shadows associated with each pair. Our forecasted data is shown at the left of Figure 3.

Figure 1: Comparison of the forecasted SMBH shadow radius (red color) with the EHT observations of M87* and Sgr A* (blue color). We consider a conservative fixed cosmology with \(H_{0}=73.8\) km/s/Mpc, \(\Omega_{m0}=0.262\) to compare the observable radius \(\alpha\) with its forecasted value.

### Nearby Galaxies Estimation Observations for Hubble Constant Constraints Analogous to our previous forecasting, we will now simulate the SMBH shadows at low redshift. For this case, as far as we know, most galaxies have a BH at their center; therefore, the possibility of observing them is greater in comparison to high(er) redshift ranges. We will simulate a conservative quantity of 500 SMBH shadows and generate random redshift values within \(0\leq z\leq 2.5\). We use a fixed BH mass of \(5\times 10^{6}M_{\odot}\). Our synthetic catalog consists of 500 random pairs \((z,M)\) from which we can obtain the radius of the BH shadow associated with each pair. In this case, we add noise to our data that does not exceed our precision of 4%. Our forecasted data is shown at the right of Figure 3. We will use both of these synthetic catalogs to implement Bayesian statistics in order to constrain the Hubble constant \(H_{0}\).

Figure 2: Architecture of the method to forecast SMBH shadows. The color boxes denote the physical quantities and priors (green color), the setting of variables and units (blue color), the redshift cutoff (orange color) and the computation employed (pink color).

## IV Current and simulated data sets With the BH shadow forecasting now described, we can perform statistical analyses combining the simulated catalogs at low and high(er) redshifts with local observables such as SNIa using a \(\chi^{2}\)-statistics method. The set of best fits (\(h,\Omega_{m}\)) can be obtained through a process with a modified version of emcee-PHOEB4 for our cosmology and the new baseline (SNIa + BH shadows) and the extraction of constraints using GetDist5. Footnote 4: phoebe-project.org Footnote 5: getdist.readthedocs.io In this paper we consider four different data sets: * Pantheon SNIa catalog [27]. This catalog contains data for 1048 SNIa observed in the low redshift range \(0.01<z<2.3\). For each supernova, the redshift \(z\) and the apparent magnitude \(m(z)\) are given, which allows us to build the distance modulus \(\mu\) by fixing the absolute magnitude \(M\).
In this analysis, we use the value \(M=-19.3\) for one case. * EHT direct observations. This set contains data from the two observations of the SMBHs M87* and Sgr A*. For each observable, the mass is given in \(M_{\odot}\) units, along with the redshift \(z\) and the radius of the shadow in \(\mu\)as units. A compilation of these data is given in Table 1. * High redshift SMBH shadows. This set contains 10 simulated shadows for SMBHs between \(7\leq z\leq 9\) (see Figure 3 at the left). For each forecasted BH, its redshift \(z\), the size of its radius in \(\mu\)as and the error in this radius are given. Details are described in Section 3.1. * Low redshift SMBH shadows. This set contains 500 simulated shadows for SMBHs between \(0.1\leq z\leq 2\) (see Figure 3 at the right). For each forecasted BH, its redshift \(z\), the size of its radius in \(\mu\)as and the error in this radius are given. Details are described in Section 3.2.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Black Hole Event** & **Redshift \(z\)** & **Radius (\(\mu\)as)** & **Mass (\(M_{\odot}\))** \\ \hline M87* & 0.00428 & 21 \(\pm\) 1.5 & \(6.6\pm 0.4\times 10^{9}\) \\ Sgr A* & 0.000001895 & 26.4 \(\pm\) 1.15 & \(4\pm 0.32\times 10^{6}\) \\ \hline \end{tabular} \end{table} Table 1: Compilation of the BH events and their observations (data from [6; 7]). The first column denotes the BH event and its reference; the second column, the \(z\) at which they were observed; the third and fourth columns, the radius in microarcseconds (\(\mu\)as) and the mass in solar mass units (\(M_{\odot}\)) of the BH event, respectively.

Figure 3: **Left**: Simulations for 10 SMBH shadows at high(er) redshift in \(7\leq z\leq 9\), with \(10^{9}-10^{10}M_{\odot}\). We consider a fiducial conservative cosmology with \(H_{0}=73.8\) km/s/Mpc, \(\Omega_{m0}=0.262\). Dashed lines indicate the values \(\alpha_{sh}=0.1\)\(\mu\)as and \(\alpha_{sh}=1\)\(\mu\)as, from bottom to top, respectively. **Right**: Simulations for 500 SMBH shadows at low redshift in \(0.0001\leq z\leq 2.5\), with \(5\times 10^{9}M_{\odot}\). We consider the same cosmology setup. Dashed lines indicate the values \(\alpha_{sh}=0.1\)\(\mu\)as and \(\alpha_{sh}=1\)\(\mu\)as, from bottom to top, respectively.

Along with the analysis, we use the reduced Hubble constant \(h\), defined as \(h=H_{0}/100\) [km/s/Mpc]. In the case of observables reported in the Pantheon catalog, we employ: \[\chi^{2}_{\rm SNIa}=\frac{1}{2}\sum_{i=0}^{N_{\rm SNIa}}\frac{[\mu_{\rm SNIa}- \mu_{th}(z;h,\Omega_{m})]^{2}}{\sigma^{2}_{\rm SNIa}}, \tag{8}\] where \(\mu_{\rm SNIa}\) and \(\sigma_{\rm SNIa}\) denote the distance modulus and its error for the SNIa, and \(\mu_{th}\) is the theoretical distance modulus, given by Equation (7). The total sample consists of \(N_{\rm SNIa}=1048\) data points. The \(\chi^{2}\)-statistics for the BH simulations can be expressed as: \[\chi^{2}_{\rm BH}=\frac{1}{2}\sum_{i=0}^{N_{\rm BH}}\frac{[\alpha_{obs}-\alpha _{th}(z;h,\Omega_{m})]^{2}}{\sigma^{2}_{BH}}, \tag{9}\] where \(\alpha_{obs}\) is the observed radius of the shadow given by the EHT observations plus the SMBH shadows simulated at low and high \(z\), \(\alpha_{th}\) is the theoretical radius of the BH shadow given by Equation (5) for high redshift simulations and by Equation (6) for low redshift simulations and EHT observations, and \(\sigma_{BH}\) are the errors in the observed/simulated radius of the BH shadow for each case.
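The statistic of Equations (8)-(9) reduces to a one-line function. A minimal sketch of ours, demonstrated with the EHT and simulated values quoted in Section 3 (not the full pipeline, which also requires the theoretical models over a parameter grid):

```python
import numpy as np

def chi2(data, err, model):
    """Chi-square statistic of Equations (8)-(9), including the factor 1/2."""
    data, err, model = map(np.asarray, (data, err, model))
    return 0.5 * np.sum(((data - model) / err) ** 2)

# Toy demonstration using the quoted numbers (Sgr A*, M87*):
alpha_obs = [26.4, 21.0]      # muas, EHT values from Table 1
alpha_err = [1.15, 1.5]
alpha_model = [26.6, 19.5]    # the simulated values quoted in Section 3
chi2_bh = chi2(alpha_obs, alpha_err, alpha_model)

# The total statistic combines both probes, chi2_T = chi2_SNIa + chi2_BH,
# which the emcee-based sampler then explores over (h, Omega_m).
print(chi2_bh)
```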
For EHT observations, \(N_{\rm BH}=2\); for high redshift simulations, \(N_{\rm BH}=10\); and for low redshift simulations, \(N_{\rm BH}=500\). Our final statistical analysis consists of the total baseline \(\chi^{2}_{\rm T}=\chi^{2}_{\rm SNIa}+\chi^{2}_{\rm BH}\). ## V Results In Figure 4, we show the reduced \(h\) for: (i) the SMBH shadows observed by the EHT array, and (ii) the combination of both SMBH shadows observed using our algorithm. Notice that the M87* shadow exceeds the SNIa Pantheon statistical range, which is expected since we are computing a posterior with a single point distant in \(z\) in comparison to the Sgr A* shadow (which is nearest to our Milky Way galaxy), whose \(H_{0}\) values lie at \(1\sigma\) within the SNIa data set. Furthermore, the combination of M87* + Sgr A* gives a higher value of \(H_{0}\) at the \(1\sigma\) border. In Figure 5, we show the confidence contours for each of our analyses: (i) When we consider the non-calibrated SNIa full catalog, a constraint on \(\Omega_{m}\) can be obtained, while \(H_{0}\) fails to be constrained. (ii) The simulated SMBH shadows at high(er) redshift have weak constraints on the \(\Omega_{m}\) parameter. This leads to our final analysis: (iii) the combination of SNIa data plus SMBH mock data at low z, which can constrain the cosmological parameters \((\Omega_{m},H_{0})\) in a local redshift range. (iv) Once we consider a calibrated SNIa full catalog (green color), we notice a tension between this catalog and the SMBH shadow simulations (red color). A full compilation of the resulting pairs \((\Omega_{m},H_{0})\) is given in Table 2 for each result reported in the literature, their baseline and the results obtained using our analyses. This table is complemented with a whisker plot given in Figure 6. Notice that the value for \(H_{0}\) using forecasted SMBH shadows at low \(z\) under the assumptions described in our architecture gives an uncertainty lower than the ones reported in other methodologies. Additionally, our combined catalog SNIa + SMBH at high z reduces the tension in comparison to the direct EHT observations.

Figure 4: Reduced Hubble constant \(h=H_{0}/100\) [km/s/Mpc] determined by our analysis using the SMBH shadows from M87* and Sgr A* combined observations at low redshift (blue color). Each individual posterior for M87* (red color) and Sgr A* (black color) is also shown. Vertical bands show the constraints at \(1\sigma\) for the Pantheon SNIa catalog (green color) and the Planck Collaboration (purple color). The vertical dashed lines indicate the mean value for \(h\).

Figure 5: **Left**: Confidence contours (C.L.) for the non-calibrated (full sample) Pantheon SNIa sample (here denoted by SN) and the high(er) redshift SMBH shadow simulations (here denoted by BH). Notice that SNIa (green color) gives a horizontal band that extends over the whole range of values of \(h\), which means that it can constrain the value for \(\Omega_{m}\) but is not able to constrain \(H_{0}\). The SMBH shadow simulations (blue color) form a region that extends widely along the values of \(\Omega_{m}\). Our total statistics are given by the red C.L. region, which allows us to constrain the cosmological parameters \((\Omega_{m},H_{0})\); see Table 2. **Right**: C.L. for the calibrated (fixed \(M=-19.3\)) Pantheon SNIa sample and the low redshift SMBH shadow simulations (here denoted by BH).
Notice that SNIa (green color) gives a wider contour over the range of values of \(h\) and \(\Omega_{m}\) in comparison to the SMBH shadow simulations (red color), which form a smaller region and give higher values for \(h\).

## VI Discussion In this paper, we proposed extending the studies of supermassive black hole shadows as standard rulers in order to study the \(H_{0}\) tension. A BH casts a shadow in the neighborhood of its emission region, with a shape and size that can be computed using the location of the several photon orbits at different directions with respect to the spin axis. Furthermore, the angular size of the shadow of a high redshift BH can increase due to cosmic expansion, hence the possibility of finding constraints on the expansion history at high redshift. The advantage of these SMBH shadows is that they can be characterised by two parameters: the angular size of the shadow \(\alpha\) at low and high redshifts and the BH mass. Moreover, a degeneracy between these parameters can arise at high redshift since assumptions on the precision of the experiment need to be taken into account. In order to break the degeneracy, in this paper we propose a viable forecasting method by fixing certain conditions at high(er) redshifts, i.e., our results reach \(\approx 4\%\), a precision that could be achievable in future experiments and with optimistic conditions. Furthermore, we found that our estimations provide values of \(H_{0}=72.89\pm 0.12\) km/s/Mpc and \(\Omega_{m}=0.275\pm 0.002\), showing an improvement in the systematics reported so far in the literature for the SMBH standard rulers, including the SN catalog in the total dataset analysis. It is important to mention that we recover the initial fixed \(H_{0}\) prior (see Step 1 in Section 3) when only the simulated SMBH sample is considered (see the corresponding result in red in the right panel of Figure 5). In addition, our value at high(er) redshifts, \(H_{0}=72.0\pm 3.4\) km/s/Mpc and \(\Omega_{m}=0.285\pm 0.029\), improves upon the systematics reported at \(8\%\)[9]; in fact, our systematic errors are reduced to \(4.7\%\). We stress that more general BH scenarios can be considered, for example, BHs with spin and inclination variation, such as Kerr BHs. It was determined that these characteristics do not modify the determination of distances from the SMBH shadows [9]. In fact, in general terms, the BH mass and the angular shadow size are the only two parameters that contribute significantly even in this spin-inclination BH system. Furthermore, we should mention that the treatment of a database that includes SMBH forecastings and observational data such as SNIa could raise relative-weight issues, combined with the fact that an initial prior on \(H_{0}\) must be assumed. However, we expect that the precision obtained in this system can be improved using the methodology discussed here. We will report this elsewhere.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Base Line** & **Observable/Simulations** & \(\mathbf{H_{0}}\) **[km/s/Mpc]** & \(\mathbf{\Omega_{m}}\) \\ \hline Planck collaboration & CMB & 67.27 \(\pm\) 0.604 & 0.315 \(\pm\) 0.007 \\ \hline SH0ES collaboration & Cepheid-SNIa sample with fixed \(M=-19.253\) & 73.04 \(\pm\) 1.04 & \(0.297^{+0.023}_{-0.21}\) \\ \hline EHT first observations & Size of M87* shadow. & 79.7 \(\pm\) 5.7 & - \\ \hline EHT second observations & Size of Sgr A* shadow. & 73.2 \(\pm\) 3.2 & - \\ \hline Both EHT observations [This work] & Sizes of M87* and Sgr A* shadows, mass using stellar-dynamics method, plus SNIa from Pantheon catalog & & \\ \hline & Size of M87* shadow and mock catalogues for supermassive BH (\(\approx 10^{6}\) BH simulated) plus mock SNIa data for LSST. & \(70.3\pm 7.5\) & Fixed \\ \hline SNIa + SMBH at low redshift [This work] & Sizes of M87* and Sgr A* shadows, SNIa from Pantheon catalog plus forecasting for the sizes of SMBH shadows with M = \(3\times 10^{9}M_{\odot}\) at \(z\leq 0.01\) (see Section 3.2). & 72.89 \(\pm\) 0.12 & 0.275 \(\pm\) 0.002 \\ \hline SNIa + SMBH at high redshift [This work] & SNIa from Pantheon catalog plus forecasting for the size of the SMBH shadows with M = \(10^{9}-10^{10}\)\(M_{\odot}\) between \(7\leq z\leq 9\) (see Section 3.1). & 72.0 \(\pm\) 3.4 & 0.285 \(\pm\) 0.029 \\ \hline \end{tabular} \end{table}
Table 2: Compilation of results for (\(H_{0},\Omega_{m}\)) constraints. The first column denotes the baseline and its references (data from [1; 6; 7; 9; 12; 26], respectively). The second column indicates the observable/simulations treated in each baseline. The third and fourth columns indicate the (\(H_{0},\Omega_{m}\)) values obtained from each analysis, respectively. Our results are indicated in the last two rows. In addition, we denote as Fixed when the baseline considers a flat prior for the parameter at hand, Not reported when the parameter was not computed, and a dashed line (-) indicates an unrelated constraint since it was estimated at low redshift.

Currently, SMBH shadow observations are still few; therefore, the calculations derived from low data point statistics cannot accurately constrain a set of cosmological parameters. However, as we presented in this paper, studying the precision assumptions that can be used for future experiments could allow us to promote these observables as future candidates among the many baselines used in cosmological tension research. ###### Acknowledgements. The authors thank the referees and editor for some important comments which helped us to improve the paper considerably. C.E.-R. is supported by the Royal Astronomical Society as FRAS 10147 and by PAPIIT UNAM Project TA100122. This article is based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse) supported by COST (European Cooperation in Science and Technology).
2307.04578
Exceptional points and phase transitions in non-Hermitian binary systems
A recent study demonstrated that steady states of a polariton system may show a first-order dissipative phase transition with an exceptional point that appears as an endpoint of the phase boundary [R. Hanai et al., Phys. Rev. Lett. 122, 185301 (2019)]. Here, we show that this phase transition is strictly related to the stability of solutions. In general, the exceptional point does not correspond to the endpoint of a phase transition, but rather it is the point where stable and unstable solutions coalesce. Moreover, we show that the transition may occur also in the weak coupling regime, which was excluded previously. In a certain range of parameters, we demonstrate permanent Rabi-like oscillations between light and matter fields. Our results contribute to the understanding of nonequilibrium light-matter systems, but can be generalized to any two-component oscillatory system with gain and loss.
Amir Rahmani, Andrzej Opala, Michał Matuszewski
2023-07-10T14:11:20Z
http://arxiv.org/abs/2307.04578v1
# Exceptional points and phase transitions in non-Hermitian binary systems ###### Abstract A recent study demonstrated that steady states of a polariton system may show a first-order dissipative phase transition with an exceptional point that appears as an endpoint of the phase boundary [R. Hanai et al., Phys. Rev. Lett. 122, 185301 (2019)]. Here, we show that this phase transition is strictly related to the stability of solutions. In general, the exceptional point does not correspond to the endpoint of a phase transition, but rather it is the point where stable and unstable solutions coalesce. Moreover, we show that the transition may occur also in the weak coupling regime, which was excluded previously. In a certain range of parameters, we demonstrate permanent Rabi-like oscillations between light and matter fields. Our results contribute to the understanding of nonequilibrium light-matter systems, but can be generalized to any two-component oscillatory system with gain and loss. Phase transitions correspond to significant alterations of the properties of a system caused by the modification of physical parameters. Examples include the ferromagnetic-paramagnetic phase transition [1], the gas-liquid-solid transition [2], Bose-Einstein condensation in bosonic and fermionic systems [3], the metal-insulator transition in solid state physics [4], and topological phase transitions [5]. Phase transitions may also occur in non-Hermitian systems, i.e., systems that do not satisfy the condition of Hermiticity embedded in quantum mechanics [6]. Here the non-Hermitian contributions may stem from dissipation [7] or asymmetric coupling [8] and lead to a number of unique properties such as non-reciprocity [9], mutually interlinked non-Hermitian phase transitions [10], and the non-Hermitian skin effect [11]. A striking example of non-Hermitian physics that deviates significantly from the Hermitian case is the coalescence of eigenstates and energy eigenvalues at so-called exceptional points (EPs). These spectral singularities may be accompanied by a non-Hermitian phase transition [12]. The standard procedure to investigate these phase transitions is to study the spectrum of the system as some controllable parameters are changed [7]. Typically, the process involves meticulous adjustment of loss and gain in order to achieve the desired outcome. In general, in a linear system the presence of EPs is independent of the stability of the stationary state that the system evolves to [13]. However, in a nonlinear system, more than one solution may be stable, which gives rise to the phenomena of bistability and multistability [14; 15; 16; 17]. The existence of nonlinear features may affect the non-Hermitian effects realized in linear cases or give rise to entirely new phenomena [18; 19; 20; 21; 22; 23; 24; 25]. In order to examine the relationship between nonlinearity and non-Hermitian physics, it is necessary to study systems that possess variable nonlinearity and controllable gain and loss. Particularly suitable systems for this study are those where matter couples with light, as they allow one to take advantage of the difference in physical properties of these components. For example, it was demonstrated that exceptional points appear naturally in light-matter systems of exciton-polaritons and subthreshold Fabry-Perot lasers [13; 26].
Moreover, it is possible to induce exceptional points by manipulating spatial and spin degrees of freedom of exciton-polaritons in various configurations [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. In the case of bosonic condensates of exciton-polaritons, it was predicted that a dissipative first-order phase transition line exists in the phase diagram [26], similar to a critical point in a liquid-gas phase transition. According to this study, this phase transition line exists in the regime of strong light-matter coupling and has an endpoint which corresponds to an exceptional point [26]. In this letter, we investigate a non-Hermitian model describing the interaction between two oscillating modes. We use it to examine the significance of nonlinearity in a non-Hermitian phase transition. This model can describe light and matter modes in exciton-polariton condensation and lasing, as investigated in Ref. [26]. We find that the model is incomplete unless nonlinear saturation of gain is taken into account. Importantly, saturation increases the complexity of the phase diagram and leads to the appearance of bistability. It also has profound consequences for the physics of the system. We find that while the first-order phase transition line with an endpoint is present, the equivalence of the endpoint to an exceptional point as found in [26] is no longer valid in the general case. The phase diagram of Ref. [26] can be restored in the limit of strong saturation. In contrast to the results of Ref. [26], the transition between solutions can occur also in the weak coupling regime. This suggests that the second threshold from polariton to photon lasing, observed in experiments [38; 39; 40], may be related to a dissipative phase transition in the weak coupling regime. Moreover, we find a regime of permanent Rabi-like oscillations between two stable solutions. This regime corresponds to a line in the phase diagram that ends with an exceptional point. _Model and Analytical Solutions._ We consider a system of two coupled oscillators described by a non-Hermitian Hamiltonian with gain and loss. The imbalance between gain and loss in a linear system leads in general to solutions exponentially growing or decaying in time. To obtain non-trivial stationary solutions it is necessary to include nonlinearity. Here we adopt a cubic nonlinearity that appears naturally in symmetric systems with no dependence on the complex phase. Such a model can be realized, among many other physical systems, in the case of cavity photons coupled to excitons, where the nonlinearity occurs only in the matter (exciton) component [41]. The system is described by complex functions \(\psi_{C}=n_{C}e^{i\varphi_{C}}\) and \(\psi_{X}=n_{X}e^{i\varphi_{X}}\), corresponding to the amplitudes of cavity photons and excitons, respectively. The dynamics is governed by the equation \(i\hbar\,\partial_{t}|\Psi\rangle=H|\Psi\rangle\) with \(|\Psi\rangle=(\psi_{C},\psi_{X})^{\rm T}\), where the non-Hermitian Hamiltonian \(H\) is given by [26] \[H=\begin{pmatrix}E_{C}-i\hbar\gamma_{C}&\hbar\Omega_{R}\\ \hbar\Omega_{R}&E_{X}+g|\psi_{X}|^{2}+ip\end{pmatrix}\,. \tag{1}\] Here \(\hbar\Omega_{R}\) is the coupling strength, \(\gamma_{C}\) is the decay rate of the photon field, and \(p\) represents the gain of the exciton field. This gain can be realized in practice by nonresonant optical or electrical pumping.
We define the complex nonlinear coefficient as \(g=g_{1}-ig_{2}\), where \(g_{1}\) is the strength of two-body interactions (Kerr-like nonlinearity) and \(g_{2}|\psi_{X}|^{2}\) is the saturation term that allows the system to avoid instability. The spectrum of Hamiltonian (1) can be found analytically, \[E=\frac{1}{2}\big{[}E_{C}+\mathcal{E}+i(\mathcal{P}-\hbar\gamma_{C})\pm\sqrt{4\hbar^{2}\Omega_{R}^{2}+[\mathcal{E}-E_{C}+i(\mathcal{P}+\hbar\gamma_{C})]^{2}}\big{]}\,, \tag{2}\] where \(\mathcal{P}=p-g_{2}(n_{X}^{\rm SS})^{2}\) and \(\mathcal{E}=E_{X}+g_{1}(n_{X}^{\rm SS})^{2}\). For convenience, we denote the solution associated with the plus (minus) sign by \(U\) (\(L\)). The respective steady-state analytical solutions \(|\Psi\rangle=|\Psi_{0}\rangle e^{-iEt}\) can be found from the condition \(\text{Im}[E]=0\), that is, the imaginary part of the eigenvalue of (1) must be zero. In [26], it was argued that one or two real-energy solutions exist in certain regions of parameter space. However, it can be seen from (2) that, except for special values of the parameters, real-energy solutions can exist only when the saturation represented by \(g_{2}\) is taken into account. We will show below that accounting for the nonlinear \(g_{2}\) term does in fact lead to the appearance of up to three real-energy solutions, each of them of the form (2). The condition \(\text{Im}[E]=0\) allows one to find an analytical expression for \(n_{X}^{\rm SS}\), \[(n_{X}^{\rm SS})^{2}=\frac{1}{g}\big{(}\text{Re}[E]-E_{X}-ip-\frac{(\hbar\Omega_{R})^{2}}{\text{Re}[E]-E_{C}+i\hbar\gamma_{C}}\big{)}. \tag{3}\] The resulting explicit formula for \(n_{X}^{\rm SS}\) is tedious, but for a given \(n_{X}^{\rm SS}\), one can find closed forms of the steady state \(n_{C}^{\rm SS}\) and \(\varphi_{CX}=\varphi_{C}-\varphi_{X}\), \[n_{C}^{\rm SS}=n_{X}^{\rm SS}\sqrt{\frac{p}{\hbar\gamma_{C}}-\frac{\big{(}n_{X}^{\rm SS}\big{)}^{2}\,g_{2}}{\hbar\gamma_{C}}}\,, \tag{4}\] \[\varphi_{CX}^{\rm SS}=\arg\left(\frac{\delta-g_{1}(n_{X}^{\rm SS})^{2}}{\hbar\Omega_{R}\left(n_{C}^{\rm SS}/n_{X}^{\rm SS}-n_{X}^{\rm SS}/n_{C}^{\rm SS}\right)}-i\frac{\gamma_{C}\,n_{C}^{\rm SS}}{\Omega_{R}\,n_{X}^{\rm SS}}\right)\,, \tag{5}\] where we introduced the photon-exciton energy detuning \(\delta=E_{C}-E_{X}\). _Non-Hermitian Phase Transitions._ We use the analytical solutions from the previous section to determine the phase diagram of the system, looking at it from two perspectives. We analyze the steady-state solutions and their multiplicity, as in Fig. 1(a). On the other hand, we consider the lowest-energy state among the dynamically stable ones and investigate its properties and possible transitions, see Fig. 1(b). The latter approach is equivalent to analyzing a system that is weakly coupled to an energy sink, which does not perturb the spectrum, but picks the lowest-energy stable solution after a sufficiently long evolution due to its energetic stability. In the case when the conservative nonlinearity \(g_{1}\) is stronger than the dissipative nonlinearity \(g_{2}\), representative phase diagrams are shown in Fig. 1. We focus on the blue-detuned case (\(\delta>0\)), which is much richer than the red-detuned case. In Fig. 1(a) the number of steady-state solutions is shown. Up to three non-zero solutions, corresponding to both the upper and lower branches of Eq. (2), can exist, which results from the nonlinearity of the system. The region of zero solutions corresponds to the situation where pumping cannot overcome losses and no lasing nor polariton condensation occurs.
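To make the steady-state condition concrete, the following minimal sketch (not the authors' code) scans Eq. (2) for zeros of \(\text{Im}[E(n_{X})]\) on both branches, assuming \(\hbar=\Omega_{R}=1\); the parameter values are illustrative, borrowed from the caption of Fig. 3.

```python
import numpy as np

# Locate steady states from Im[E(n_X)] = 0 applied to Eq. (2).
E_C, E_X = 0.2, 0.0
gamma_C, p = 0.75, 0.82
g1 = 0.1; g2 = 0.3 * g1

def E_branches(n_X):
    P = p - g2 * n_X**2                 # saturated effective gain
    Eeff = E_X + g1 * n_X**2            # interaction-shifted exciton energy
    root = np.sqrt(4.0 + (Eeff - E_C + 1j * (P + gamma_C))**2)
    base = E_C + Eeff + 1j * (P - gamma_C)
    return 0.5 * (base + root), 0.5 * (base - root)   # upper (U), lower (L)

n = np.linspace(0.0, 4.0, 40001)
for name, E in zip("UL", E_branches(n)):
    sign = np.sign(E.imag)
    hits = np.where(sign[:-1] * sign[1:] < 0)[0]      # Im[E] crosses zero
    print(name, "branch: steady-state n_X ~", n[hits].round(3))
```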
For given \(\Omega_{R}\) and \(\gamma_{C}\), increasing the pumping \(p\) can lead to one or several thresholds, as indicated with horizontal lines. Special points in the phase diagram (marked by stars in Fig. 1) include the exceptional point (EP) and the endpoint of the first-order phase transition (ET). In contrast to [26], we find that in general they do not coincide. To determine the position of the EP, one can demand that the real and imaginary parts of the term under the square root in Eq. (2) vanish, which yields \[p^{\rm EP}=\hbar\Omega_{R}+\frac{g_{2}\delta}{g_{1}}\,,\,\,\,\gamma_{C}=\Omega_{R}\,. \tag{6}\] This can occur when \((n_{X}^{\rm SS})^{2}=\delta/g_{1}\), that is, whenever the system is blue-detuned (\(\delta>0\)). On the other hand, the ET point is clearly visualised in the phase diagram of Fig. 1(b), which takes the energetic stability into account. The first-order phase transition line begins at the ET point in the weak coupling regime (\(\gamma_{C}>\Omega_{R}\)) and follows the arc represented by the ET-EP line towards the EP point. Below the EP, the phase transition line continues into the strong coupling regime. We conclude that, contrary to the results of [26], the first-order phase transition can occur also in the weak coupling regime. This can be explained by a simple physical argument. Since the pumping influences the effective photon-exciton detuning \(\tilde{\delta}=E_{C}-(E_{X}+g_{1}(n_{X}^{\rm SS})^{2})\), an increase of the pumping can change the sign of \(\tilde{\delta}\), leading to an abrupt change of the lowest-energy state in the weak-coupling regime. Figure 1(d) shows the dependence of the real part of the energy of the solutions shown in Figs. 1(a,b) in the vicinity of the ET-EP line. As can be seen, the ET point is the point of the transition to bistability. On the other hand, the EP point corresponds to a turning point of the bistability curve. The cross-section including the EP point (\(\gamma_{C}=\Omega_{R}\)) is depicted in more detail in Fig. 1(c), which shows the occurrence of two stable branches from the upper and lower branches of Eq. (2) and one unstable branch. At the EP, the unstable upper branch coalesces with the lower stable branch, leading to the first-order phase transition. The cross-section with the ET point (\(\gamma_{C}>\Omega_{R}\)) is shown in Fig. 1(e), where the bistability curve closes and the transition from the upper to the lower branch becomes smooth. This leads to the possibility of encircling the exceptional point, as indicated with arrows in Fig. 1(d). Interestingly, additional features that influence the physics of the system can occur in the strong coupling case (\(\gamma_{C}<\Omega_{R}\)), see Fig. 1(f). These include the disappearance of one of the solutions in a certain parameter range and the dynamical instability of the lowest-energy branch (marked with an orange line). Consequently, the upper, higher-energy solution may become the only viable solution despite the existence of lower-energy solutions. In the opposite case, when the dissipative nonlinearity dominates over the conservative one, we find that the phase diagram of energetically stable solutions recovers the results of [26], see Fig. 2. As the dissipative nonlinearity is increased, the length of the ET-EP arc decreases, and finally the two points coalesce. In this specific case, the exceptional point is characterized by a jagged crest in the phase diagram, embodying a third-order exceptional point (see supplementary materials). This phenomenon arises from the coalescence of two stable solutions and a single unstable solution.

Figure 1: (a-b): Phase diagrams of the generic nonresonantly driven binary system (1). In (a) the number of stationary states is marked with colors as a function of the normalized photon decay rate (\(\gamma_{C}\)) and pumping strength (\(p\)). In (b) only the lowest-energy stable state is shown; here colors indicate the real part of the energy of the corresponding solution. In (a) and (b) the exceptional point (EP, marked with a red star) appears both as a point on the phase boundary and as an endpoint of the R-line. The R-line corresponds to non-decaying Rabi-like oscillations between the two modes. The endpoint of the phase transition (ET) is marked with a black star. Cross-sections of constant \(\gamma_{C}\) with different numbers of thresholds (th) are marked with horizontal lines. Panels (c-f) show bistability and phase transitions in more detail. In (c) we show the case \(\gamma_{C}=\Omega_{R}\), for which the energy eigenvalues coalesce at the EP, which is also a turning point of the bistability curve. Stable solutions are marked with S and black lines, while unstable solutions are marked with US and orange lines. Panel (d) shows the real part of the energies for different pumping and decay rates. The ET point corresponds to the transition to bistability at \(\gamma_{C}>\Omega_{R}\); this cross-section is depicted in panel (e), while in panel (f) we show the case \(\gamma_{C}<\Omega_{R}\), where the unstable solution is split into two branches and the lowest-energy solution becomes unstable in certain regions of pumping. Other parameters are \(\delta=0.2\,\hbar\Omega_{R}\), \(g_{1}=0.1\,\hbar\Omega_{R}\), \(E_{X}=0\), \(E_{C}=0.2\,\hbar\Omega_{R}\) and \(g_{2}=0.3\,g_{1}\).

Figure 2: Phase diagrams in the case when the dissipative nonlinearity \(g_{2}\) dominates over the conservative nonlinearity \(g_{1}\). In this case, the endpoint of the phase transition (ET) and the exceptional point (EP) correspond to the same point in parameter space, recovering the results of [26]. Parameters are the same as in Fig. 1, except for \(g_{2}=4.5\,g_{1}\).

_Permanent Rabi-Like Oscillations: R-Line._ Our analysis allows us to predict that a peculiar oscillating state may form, as indicated in Fig. 1(a) by the _R-line_. In this case, long evolution leads to permanent oscillations, resembling Rabi oscillations in a two-level system, instead of stationary solutions. To explain this phenomenon, we examine the imaginary and real parts of the eigenvalues given in Eq. (2). An example is shown in Figs. 3(a) and 3(b). In general, two kinds of stationary solutions corresponding to \(\text{Im}[E(n_{X})]=0\) may exist. As shown in Fig. 3(a), in this particular case there are two solutions from the upper branch and one solution from the lower branch (the black dashed vertical lines denote the emergent solutions). Our interest is in solutions from the upper and lower branches that occur at the same \(n_{X}\), while there is a gap in the respective real parts, see Fig. 3(b). Such solutions occur when \(p=(g_{2}/g_{1})\delta+\hbar\gamma_{C}\), which corresponds to a straight line (marked R-line) in the phase diagram of Fig. 1(a). An example of such permanent oscillations is shown in Fig. 3(c). After an initial transient, the oscillations stabilize at a certain amplitude. When different initial conditions are used, the system may end up in one of the steady-state solutions, as shown in Fig. 3(d).
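The permanent oscillations described above can be reproduced numerically. The following is a minimal sketch (assuming \(\hbar=\Omega_{R}=1\); the initial condition and time window are illustrative, and the parameters follow the Fig. 3 caption) that integrates \(i\,\partial_{t}|\Psi\rangle=H|\Psi\rangle\) with the Hamiltonian of Eq. (1):

```python
import numpy as np
from scipy.integrate import solve_ivp

E_C, E_X = 0.2, 0.0
gamma_C, p = 0.75, 0.82
g = 0.1 - 1j * 0.03        # g = g1 - i*g2 with g1 = 0.1, g2 = 0.3*g1

def rhs(t, y):
    # Split complex (psi_C, psi_X) into real 4-vector for the integrator
    psi_C, psi_X = y[0] + 1j * y[1], y[2] + 1j * y[3]
    dC = -1j * ((E_C - 1j * gamma_C) * psi_C + psi_X)
    dX = -1j * (psi_C + (E_X + g * abs(psi_X)**2 + 1j * p) * psi_X)
    return [dC.real, dC.imag, dX.real, dX.imag]

sol = solve_ivp(rhs, (0.0, 400.0), [0.1, 0.0, 0.1, 0.0], max_step=0.05)
n_X = np.hypot(sol.y[2], sol.y[3])          # exciton amplitude |psi_X|
print(n_X[-400:].min(), n_X[-400:].max())   # oscillating tail: min < max
```

Starting from a different initial state can instead land the system on one of the steady-state solutions, as in Fig. 3(d).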
The frequency of oscillations is given by the gap, \(\Omega=2\sqrt{\Omega_{R}^{2}-\gamma_{C}^{2}}\). When the parameters of the system approach the exceptional point along the R-line, the gap decreases and the period of oscillations increases. At the exceptional point (\(\Omega_{R}=\gamma_{C}\)), the solutions coalesce and the period becomes infinite. Therefore, the exceptional point is the endpoint of the R-line. _Discussion._ We showed that, contrary to previous understanding, non-Hermitian polariton systems exhibit a first-order phase transition with an endpoint that in general does not coincide with the exceptional point. Explaining this phenomenon requires taking into account the nonlinear gain saturation and considering the bistability curve. While the endpoint of the phase transition is where the bistability appears, the exceptional point is where the stable and unstable solutions coalesce. In addition, we demonstrated that the first-order phase transition may occur in the weak coupling regime, and that for certain values of parameters one can predict permanent oscillations, whose frequency vanishes at the exceptional point. The predicted results contribute to the ongoing debate surrounding polariton/photon lasing. The presence of an exceptional point has been identified as a possible underlying factor for the observed second threshold [26]. Here, we provide further insights by identifying several other thresholds in the phase diagrams and by pointing out that the multiplicity and stability of solutions are also crucial factors, so far overlooked. The presented results may be applied to a much broader class of systems. The non-Hermitian Hamiltonian represented by the \(2\times 2\) matrix in Eq. (1) describes, in general, an arbitrary two-mode oscillatory system with gain and loss in the two modes and a cubic nonlinearity in one of them. This term appears naturally, to first order, in any oscillatory system as long as the nonlinearity respects the global \(U(1)\) symmetry of the oscillations. Examples include not only quantum mechanical systems such as Bose-Einstein condensates, but also high-frequency coupled classical oscillators, where the phase of oscillations is irrelevant on the time scale of a slowly varying envelope. The results presented here should be applicable to any such system that exhibits exceptional points and nonlinearity.

Figure 3: Permanent oscillations. (a) Imaginary part of the energy eigenvalues versus exciton density \(n_{X}\). The condition \(\text{Im}[E(n_{X})]=0\) provides the possible stationary solutions. Here the solution from the lower-energy branch (denoted by L) coincides with one of the solutions from the upper branch (denoted by U). (b) Real part of the eigenvalues versus \(n_{X}\). The nonzero energy gap between the solutions results in Rabi oscillations with frequency \(\Omega\). Examples of simulations are shown in (c) and (d). In (c) the oscillations stabilize after an initial transient. In (d), a different initial condition for the same parameters leads to a steady-state solution from the upper branch. Parameters: \(p=0.82\ \hbar\Omega_{R}\), \(\gamma_{C}=0.75\ \Omega_{R}\), \(g_{1}=0.1\ \hbar\Omega_{R}\), \(g_{2}=0.3\ g_{1}\) and \(\delta=0.2\ \hbar\Omega_{R}\).

A.R. and M.M. acknowledge support from the National Science Center, Poland (PL), Grant No. 2016/22/E/ST3/00045. A.O. acknowledges support from Grant No. 2019/35/N/ST3/01379.
2305.12563
A Symbolic Framework for Evaluating Mathematical Reasoning and Generalisation with Transformers
This paper proposes a methodology for generating and perturbing detailed derivations of equations at scale, aided by a symbolic engine, to evaluate the generalisability of Transformers to out-of-distribution mathematical reasoning problems. Instantiating the framework in the context of sequence classification tasks, we compare the capabilities of GPT-4, GPT-3.5, and a canon of fine-tuned BERT models, exploring the relationship between specific operators and generalisation failure via the perturbation of reasoning aspects such as symmetry and variable surface forms. Surprisingly, our empirical evaluation reveals that the average in-distribution performance of fine-tuned models surpasses GPT-3.5, and rivals GPT-4. However, perturbations to input reasoning can reduce their performance by up to 80 F1 points. Overall, the results suggest that the in-distribution performance of smaller open-source models may potentially rival GPT by incorporating appropriately structured derivation dependencies during training, and highlight a shared weakness between BERT and GPT involving a relative inability to decode indirect references to mathematical entities. We release the full codebase, constructed datasets, and fine-tuned models to encourage future progress in the field.
Jordan Meadows, Marco Valentino, Damien Teney, Andre Freitas
2023-05-21T20:40:37Z
http://arxiv.org/abs/2305.12563v2
# A Symbolic Framework for Systematic Evaluation ###### Abstract Whether Transformers can learn to apply symbolic rules out-of-distribution is an open research question. In this paper, we devise a data generation method for producing intricate mathematical derivations, and systematically perturb them with respect to syntax, structure, and semantics. Our task-agnostic approach generates equations, annotations, and inter-equation dependencies, employing symbolic algebra for scalable data production and augmentation. We then instantiate a general experimental framework on next-equation prediction using 200K examples, and assess the mathematical reasoning and generalisation capabilities of Transformer encoders. The experiments reveal that perturbations heavily affect performance and can reduce F1 scores from \(97\%\) to below \(17\%\). This suggests that inference is dominated by surface-level patterns unrelated to a deeper understanding of mathematical operators, and underscores the importance of rigorous, large-scale evaluation frameworks for revealing fundamental limitations of existing models. ## 1 Introduction Systematicity and out-of-distribution generalisation in deep neural models, such as Transformers (Vaswani et al., 2017), are challenging yet crucial areas of research (Schlegel et al., 2023; Belinkov, 2022; Teney et al., 2020). Enhancing these capabilities could bolster model robustness (Kumar et al., 2020), facilitate transparent decision-making (Lee et al., 2022), and amplify complex reasoning abilities in science and mathematics (Frieder et al., 2023; Valentino et al., 2022b; Lewkowycz et al., 2022; Drori et al., 2022; Welleck et al., 2021). Various strategies have been proposed to evaluate model robustness, including direct input manipulation (Rozanova et al., 2023b; Stolfo et al., 2022; Nie et al., 2020; Kaushik et al., 2019) and probing of the internal representation (Rozanova et al., 2023a; Ravichander et al., 2021; Elazar et al., 2021; Veitch et al., 2020). This paper considers input interventions through syntactic, structural, and semantic perturbations to mathematical text. Current interventional approaches are challenged by the difficulty of isolating confounding factors and formalising the expected causal mechanisms that underpin models' predictions (Rozanova et al., 2023b; Stolfo et al., 2022; Ribeiro et al., 2020; Kaushik et al., 2019). These hurdles impact the scope and reliability of causality and robustness studies (Pearl, 2009; Shreya et al., 2022). To tackle existing limitations, we leverage the rich environment of symbolic algebra to design a task-agnostic and systematic evaluation framework. Strict symbolic rules offer a systematic approach to perturbing mathematical reasoning and evaluating the out-of-distribution generalisation of neural models.

Figure 1: An overview of the proposed framework. We leverage computer algebra to generate large-scale training data for mathematical reasoning tasks **(a)** and apply systematic perturbations to examples from a static test set to form a perturbed test set **(b)**. The static evaluation scores are compared with scores on the perturbed set, given some metric, to determine model robustness and generalisation **(c)**.
This allows us to perturb multiple elements of math reasoning, covering structural, semantic, and syntactic aspects across diverse mathematical subdomains, extending beyond the limited interventional scope of previous works (Stolfo et al., 2022; Patel et al., 2021; Ribeiro et al., 2020; Kaushik et al., 2019; Yao et al., 2021). Additionally, we address an impending data scarcity problem, where high-quality data is forecast to be outpaced by the training needs of models within the decade (Villalobos et al., 2022). Symbolic engines facilitate the generation of annotated mathematical reasoning, which allows the construction of high-quality datasets for various tasks. We combine 18 symbolic operators with simple rules that guide exploration of equational state spaces and generate derivations, then perturb and adapt them for specific entailment tasks. These are sequence classification tasks involving next-equation prediction that focus on operator usage in annotated derivations, or on the direct integration or differentiation of expressions. To validate our framework, we test a canon of Transformer encoders used in mathematical language processing (Li et al., 2023; McNichols et al., 2023; Zhong et al., 2022; Meadows and Freitas, 2022) to determine their capacity for learning how operators work out-of-distribution, and to abstract fundamental properties impacting their ability to generalise. To summarise, the paper offers the following contributions: 1. An approach to generating annotated equational derivations of controllable complexity levels, involving premise equation generation (Algorithm 1) and the sequential application of operators to premises (Algorithm 2). 2. A systematic and scalable methodology to perturb various aspects of mathematical language, including structure, syntax, and semantics. 3. Example instantiations of the framework involving sequence classification tasks based on next-equation prediction. The generated datasets include static and perturbed derivations totalling 200K task-specific examples. 4. An extensive experimental framework for training models on mathematical reasoning tasks and evaluating their robustness, including dataset generation, systematic perturbation, training, and evaluation (Fig. 1). 5. An empirical evaluation of Transformer encoders used in mathematical language processing. Our results suggest that models are not predominantly learning abstract rules in this context, and inference heavily depends on superficial textual patterns unrelated to deeper mathematical understanding. To the best of our knowledge, we are the first to propose a general algebraic solver-based framework for producing large-scale and systematic evaluation benchmarks for mathematical reasoning with Transformers. We release the data generation algorithms and the complete datasets adopted for the experiments1. Footnote 1: [https://github.com/anonymous/TBA](https://github.com/anonymous/TBA) ## 2 Related Work This work sits at the intersection of computer algebra, mathematical reasoning with Transformer-based models, and evaluation frameworks. We briefly describe the landscape of each domain. **Computer algebra.** SymPy (Meurer et al., 2017) is a computer algebra system used in conjunction with a number of language processing methods. For example, Chen et al. (2022) solve numerical reasoning tasks, including simple math elements such as numbers, by chain-of-thought prompting language models to generate SymPy-solvable code. Mandlecha et al.
(2022) use SymPy to generate data for answering questions ranging from arithmetic to calculus, without testing for generalisability. Hu and Yu (2022) solve a similar array of problems from a large-scale dataset (Saxton et al., 2019), and test for generalisability on an extrapolation set of problems. Drori et al. (2022) fine-tune the decoder model Codex (Chen et al., 2021) on a dataset of questions from MIT's university-level mathematics courses, generating SymPy solution code. Meadows and Freitas (2021) incorporate computer algebra with a basic heuristic search to reconstruct derivations from condensed matter research (Mann et al., 2018). **Reasoning with mathematical language.** Since early reasoning work with Transformers (Saxton et al., 2019; Clark et al., 2020; Rabe et al., 2020), they have revolutionised multiple subdomains of mathematical language processing (Meadows and Freitas, 2022) and are responsible for headlining results (Lewkowycz et al., 2022; Drori et al., 2022). Transformer encoder models obtain state-of-the-art performance in variable typing (Ferreira et al., 2022; Lai et al., 2022), formula search (Zhong et al., 2022; Peng et al., 2021), natural language premise selection (Valentino et al., 2022; Tran et al., 2022), and retrieval-based math question answering (Reusch et al., 2022; Novotny and Stefanik, 2022), among others. **Data-augmentation and evaluation frameworks.** In particular, Stolfo et al. (2022) perturb elements of math word problems (Liang et al., 2022) such as numerical operands of implicit arithmetic operations, and natural language. Inspired by related work (Patel et al., 2021; Ribeiro et al., 2020), they apply a causal analysis (Christiansen et al., 2021) to determine the effect of do-interventions (Pearl, 2022). Their data-augmentation approach is limited to one or two task-dependent interventions. Our approach is task-agnostic, systematic, and scalable, and allows for complex changes to mathematical elements such as operators, variables, expressions, and equations. ## 3 Generating and Perturbing Derivations with Computer Algebra We describe the general framework for generating derivations from a vocabulary of symbols and a set of operators. The operators include addition, subtraction, multiplication, division, exponentiation, \(\cos\), \(\sin\), \(\log\), \(\exp\), operations for setting up derivatives and integrals and evaluating them, expression substitutions, and operations for defining premises. An example of a generated derivation is given in Fig. 3. ### Premise Generation Derivations rely on premise equations. Operations are applied to premises to generate new equations, as shown in Fig. 3. The first step in Fig. 3 is the premise equation (1). We outline our approach to generating premises in this subsection, using a vocabulary and a set of operations defined within a computer algebra system. The vocabulary includes uppercase and lowercase English characters, excluding {i, e, d, O} due to their connection with math concepts. Operators are classified by their arity. For example, the symbols \(Z\) and \(o\) are sampled from the vocabulary and used as operands for the 2-arity operator "divide". Then, \(Z\) is sampled from the vocabulary as an operand for the 1-arity operator "integrate". This expression becomes \(\int\frac{Z}{o}dZ\), and consists of the free symbols \(Z\) and \(o\). This is the RHS of the premise equation. Figure 2: Example perturbations applied to a derivation using computer algebra.
Given the chosen _next-equation prediction_ task, annotation replacement (red) is _semantics-altering_, while the others are _semantics-preserving_. Variable Renaming (**VR**) involves replacing symbols with out-of-distribution Greek letters. Expression Exchange (**EE**) swaps the expressions on either side of an equality symbol. Annotation Replacement (**AR**) selects an alternative final annotation that leads to an alternative final equation. To form the LHS, a function symbol is sampled from the vocabulary, in this case \(S\), and the two free symbols are assigned as its variables. The LHS and RHS are themselves input as arguments of an equation operation, and the premise (1) is obtained. A formal description is given by Algorithm 1 in Appendix B. ### Equational Reasoning Operations accept premise equations and sampled math elements, and generate new equations. All generated equations in a derivation may be used to derive the next equation. We describe this formally in Algorithm 2, including a description of how we sample from equation distributions to emulate human-like derivations, in Appendix C. Operators are classified by their arity \(\in[0,2]\), and are naturally applied to each side of an equation. For example, starting from the premise in Fig. 3, given by \[S(Z,o)=\int\frac{Z}{o}dZ, \tag{1}\] the 2-arity class is selected, and "differentiate" is chosen from the operators matching that arity. The algorithm samples from all valid expressions and variables, and selects \(Z\). The annotation ['differentiate', 1, Z] means the operator was applied to operand equation (1) with the second operand \(Z\); it is applied to both the LHS and RHS of (1) to give \[\frac{\partial}{\partial Z}S(Z,o)=\frac{\partial}{\partial Z}\int\frac{Z}{o}dZ. \tag{2}\] The notation ['minus', 1, Derivative(S(Z,o), Z)] means that a 2-arity operation was selected, the operator "minus" was chosen, and the LHS of (2) was selected as the second operand. This operand is subtracted from both sides of the first operand, equation (1), to give \[S(Z,o)-\frac{\partial}{\partial Z}S(Z,o)=-\frac{\partial}{\partial Z}S(Z,o)+\int\frac{Z}{o}dZ. \tag{3}\] The final step, annotated by ['substitute_LHS_for_RHS', 3, 2], means the substitution function was chosen, and equations (3) and (2) are the first and second operands. In this case, the LHS of (2) was identified within (3) and substituted for the RHS of (2), to give: \[S(Z,o)-\frac{\partial}{\partial Z}S(Z,o)=-\frac{\partial}{\partial Z}\int\frac{Z}{o}dZ+\int\frac{Z}{o}dZ. \tag{4}\] This procedure, formalised in Algorithm 2, allows for the systematic and scalable generation of equational derivations with grounded symbolic properties. ### Perturbations A perturbation targets a single aspect of a derivation (such as variable names), and either preserves the semantics of the original derivation or alters it in a specific and controllable way. **Semantics-preserving.** An input derivation sequence, such as Fig. 4(a), is associated with a label determining its truth value (in the current classification context). Semantics-preserving perturbations generate examples that preserve the semantic link between sequence and label, and the given sequence is still True after the change. Fig. 2 describes the _variable renaming_ (**VR**) and _expression exchange_ (**EE**) perturbations of this type. Variable renaming involves sampling variables from a different vocabulary to that of the training data. We sample from a set of ten Greek symbols and replace English variables using SymPy substitution operations, as in the sketch below.
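The VR perturbation maps directly onto SymPy's substitution mechanism; a minimal sketch (illustrative, not the released generation code) follows:

```python
import sympy as sp

# VR perturbation: rename in-vocabulary English variables to
# out-of-distribution Greek symbols via SymPy substitution.
Z, o = sp.symbols('Z o')
S = sp.Function('S')
premise = sp.Eq(S(Z, o), sp.Integral(Z / o, Z))     # premise (1)

alpha, beta = sp.symbols('alpha beta')              # sampled Greek names
renamed = premise.subs({Z: alpha, o: beta}, simultaneous=True)
print(sp.latex(renamed))   # roughly: S(\alpha, \beta) = \int \alpha/\beta d\alpha
```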
The EE perturbation generates reordered equations, where the only change is the position of top-level expressions on either side of the equality operator. We also necessarily reverse any asymmetric annotations with respect to the LHS or RHS of equations. For our set of operations, this means replacing the substitution function, "substitute_LHS_for_RHS", with its RHS equivalent, and vice versa. Due to a premise always featuring a function on the LHS, the model never sees EE examples during training, and due to the Greek symbols, VR examples are also out-of-distribution.

Figure 3: A generated derivation using the proposed computer algebra system. Colours highlight the dependencies between different reasoning steps.

We also apply variable renaming and _equality conversion_ perturbations to examples from the _direct calculus_ task variation. As described in Fig. 4(b), a differentiation example input may be \(x^{2}\) [SEP] \(x\) [SEP] \(2x\). _Equality conversion_ (**EC**) involves converting the valid expressions into equations by sampling the LHS symbol from unused symbols. The EC perturbed example in this case may be \(y=x^{2}\) [SEP] \(x\) [SEP] \(\frac{d}{dx}y=2x\). For integration, the first expression becomes the equation containing the differential operator. This perturbation is the simplest possible introduction of an equality operator while maintaining mathematical correctness. **Semantics-altering.** A perturbation of this type alters the semantic link between the sequence and label. A perturbed sequence is now False if it was previously True. As in Fig. 4(a), an input consists of previous derivation steps, an annotation associated with an operator used to generate the final equation, and the _candidate_ final equation. This equation is either the ground truth or a negative example, and the input is paired with a label that reflects the overall sequence coherence. Both true and false versions of a sequence are seen during training, which differ only by the final equation. To alter the semantic link between sequence and label meaningfully, the _annotation replacement_ (**AR**) perturbation replaces a final annotation in such a way that the _negative example_ corresponds to the _positive label_, and vice versa. The classification labels are then swapped to reflect the change. This perturbation is possible because we generate negative examples by applying alternative final operations to equations using computer algebra, as sketched below. We do not apply the AR perturbation in direct calculus, because negatives are generated differently for that task. As mentioned in Section 4, negatives for direct calculus are selected by ranking generated premise expression lists with a string metric. Fig. 2 and Fig. 1(b) display the effect of perturbations. ## 4 Framework Instantiation ### Next-Equation Prediction We instantiate the general framework described in Section 3 for evaluation, formalising two sequence classification tasks as next-equation prediction. Next-equation prediction is a binary sequence classification task. Given all equations and annotations in the derivation so far, including the final annotation that describes how to construct the ground truth final equation, a candidate equation is paired with the prior context, and the model learns whether the reasoning entails this equation.
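To illustrate how such negatives arise, the following SymPy sketch (illustrative; not the released generation code) builds a positive/negative candidate pair for premise (1), where the final annotation is ['differentiate', 1, Z]:

```python
import sympy as sp

Z, o = sp.symbols('Z o')
S = sp.Function('S')
eq1 = sp.Eq(S(Z, o), sp.Integral(Z / o, Z))                 # premise (1)

# Ground truth: differentiate both sides of (1) w.r.t. Z -> Eq. (2)
positive = sp.Eq(sp.Derivative(eq1.lhs, Z), sp.Derivative(eq1.rhs, Z))

# Negative: apply a different operator ('minus' the LHS derivative) -> Eq. (3)
d = sp.Derivative(S(Z, o), Z)
negative = sp.Eq(eq1.lhs - d, eq1.rhs - d)

for label, eq in (("True", positive), ("False", negative)):
    print(label, ":", sp.latex(eq))
```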
We generate two variations of the task: the above describes the _structured derivations_ variation, which relies on Algorithm 2, while _direct calculus_ involves single-step differentiation and integration of premise expressions generated with Algorithm 1. **Structured derivations.** Fig. 4(a) describes the input format for generated derivations. A step consists of an equation and an annotation, as described in Fig. 3. An annotation is a list comprising an operator name and its operands. Each step [an, eq] is linearised and comma separated, up to the final step. The final step annotation is separated from the derivation, and the final equation is replaced with a negative example equation, or left unchanged. All three input components are [SEP] separated as in Fig. 4(a). Negative examples are generated by applying a different operation to a previous equation. Any previous equations may be used to generate the final equation, meaning there are long-range dependencies. Mathematically, the model should learn the necessary equation dependencies required to form the final equation, and how to apply the correct operator (guided by the final annotation). **Direct calculus.** In this task we emphasise a single-step evaluation of derivatives and integrals. Fig. 4(b) describes the input. A premise expression containing _at least two_ variables is generated, a variable is randomly selected from the premise, and the resulting expression after differentiating or integrating with respect to that variable is the ground truth. This positive example is either replaced with a negative example, or not. A classifier infers if the reasoning context generated the final expression. Negative examples are generated by selecting from a list of alternative premise expressions. This list includes the result of differentiating/integrating the expression with respect to other variables in the expression, and differentiating/integrating other randomly generated expressions comprised of the same symbols. The expressions are then ranked in terms of their Damerau-Levenshtein distance (Zhao and Sahni, 2019) from the ground truth (Meadows and Freitas, 2021). For example, the expression \(-T+\sin(U)\) is differentiated with respect to \(T\) to give \(-1\). The corresponding negative example is \(1\). The expression, variable, and candidate expression are [SEP] separated upon input to the model, as shown in Fig. 4(b).

Figure 4: Two variations of encoder input for the next-equation prediction task: (a) structured derivations, and (b) direct calculus.

### Data Generation We construct datasets that allow for derivation reconstruction within the computer algebra system, such that they may be further perturbed or extended. The derivations themselves are task-agnostic, but we include negative equations from the current task for reproducibility. A single entry consists of the reasoning sequence up to the final expression or equation (see Fig. 4). This sequence is grouped with both the correct final equation and negatives, and is stored in both LaTeX and SymPy-interpretable language (Meurer et al., 2017). Before a model encounters an example, it is processed into two sequences: one including the positive equation and one including the negative, along with their classification labels. The number of examples seen by models is displayed in Table 1. Perturbations are applied to the test set and generate an equal number of perturbed examples. The _structured derivation_ datasets include 20k training, 5k development, and 4k evaluation examples.
_Direct calculus_ includes 32k training, 5k development, and 4k evaluation examples. **Generalisation to extrapolation examples.** A model that can sufficiently generalise should be able to solve mathematically less complex versions of problems encountered during training. The structured derivations task is split into a further three variants: those considering 2, 3, and 4 step derivations. 4-step derivations are intuitively the hardest, as the static evaluation supports, and 2-step derivations are the easiest. To test for generalisability to an extrapolation set, we evaluate models trained on derivations with _higher_ step counts on derivations with _lower_ step counts. This is represented in the **s - 1** and **s - 2** columns in Table 2, where **s** is the number of steps the model was trained on. Models solving the direct calculus task are trained/evaluated on examples comprising _at least two_ variables, _e.g._, \(\cos(ax)-z\). We generate a set of easier calculus problems with 1.5k examples that consist of only one variable, _e.g._, \(\cos(x)\). ## 5 Evaluation As described in Fig. 1, we first evaluate models on a static set, apply perturbations to the static set examples, evaluate models on the perturbed sets, and compare the difference between scores (accuracy and F1). In addition, we evaluate on _extrapolation_ examples described in the previous section, including derivations consisting of fewer steps and direct calculus examples consisting of functions of single variables. Table 2 _(structured derivations)_ and Table 3 _(direct calculus)_ display results from the next-equation prediction experiments. ### Results **Models fail to generalise.** For _structured derivations_, models average \(80\%\) F1 over all static derivation lengths, and decreases due to perturbations average \(10\%\) (VR), \(11\%\) (EE), and \(16\%\) (AR). This is at most \(4\%\) above the F1 majority baseline. BERT-uncased and SciBERT-cased fine-tuned on 2-step derivations are exceptions, but the 13 other models are sensitive to at least one perturbation. All models tested do not generalise to _fewer_ derivation steps, reaching as low as \(11\%\) F1.

\begin{table}
\begin{tabular}{c|c|c|c}
\hline
**Task** & **Train** & **Dev** & **Test** \\ \hline
_Structured derivations_ & & & \\
2-steps & 20K & 5K & 4K \\
3-steps & 20K & 5K & 4K \\
4-steps & 20K & 5K & 4K \\ \hline
_direct calculus_ & & & \\
integration & 32K & 8K & 4K \\
differentiation & 32K & 8K & 4K \\ \hline
\end{tabular}
\end{table}
Table 1: The number of examples considered by models during training, development, and evaluation.
The context correspondence objective used to train MathBERT learns to pair the _description of an equation_ with the equation itself. In contrast, NSP involves recognising consecutive sentences which - particularly in scientific text - better teaches _logical entailment_. Next-equation prediction considers if _context entails the equation in an argumentative sense_, rather than a descriptive sense. We therefore attribute generalisability failures of MathBERT to insufficient entailment pre-training. It has struggled with entailment before Meadows et al. (2022). Learning formula structure instead of entailment does not necessitate structural perturbation invariance.Expression Exchange (EE) and Equation Conversion (EC) involve perturbing implicit tree-structures of equations, such as operator trees Mansouri et al. (2019). Despite MathBERT using a dedicated pre-training objective for learning equation tree structure, it is not more robust to structural perturbations than other models. ### Qualitative Analysis We consider (uncased) models trained on 3-step derivations. This number of steps closely reflects the average results over all step counts in Table 2. The **All** (perfect generalisation) and **Not P** (complete generalisation failure) columns of Table 4 reinforce the relative generalisability gap \begin{table} \begin{tabular}{l r|c c|c c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Static**} & \multicolumn{2}{c|}{**VR**} & \multicolumn{2}{c|}{**EE**} & \multicolumn{2}{c|}{**AR**} & \multicolumn{2}{c}{**s - 1**} & \multicolumn{2}{c}{**s - 2**} \\ \hline & Acc & F1 & Acc & F1 & Acc & F1 & Acc & F1 & Acc & F1 & Acc & F1 \\ \hline BERT-base-uncased (**s=2**) & 0.877 & 0.889 & 0.870 & **0.881** & 0.870 & 0.880 & 0.875 & 0.887 & - & - & - & - \\ BERT-base-uncased (**s=3**) & 0.789 & 0.787 & 0.719 & 0.710 & 0.691 & 0.660 & 0.537 & 0.506 & 0.684 & 0.690 & - & - \\ BERT-base-uncased (**s=4**) & 0.588 & 0.636 & 0.550 & 0.603 & 0.564 & 0.603 & 0.424 & 0.481 & 0.657 & 0.622 & 0.528 & 0.298 \\ \hline BERT-base-cased (**s=2**) & 0.872 & 0.885 & 0.819 & 0.832 & 0.853 & 0.861 & 0.855 & 0.872 & - & - & - & - \\ BERT-base-cased (**s=3**) & 0.782 & 0.773 & 0.688 & 0.645 & 0.650 & 0.589 & 0.545 & 0.496 & 0.546 & 0.305 & - & - \\ BERT-base-cased (**s=4**) & 0.668 & 0.717 & 0.585 & 0.615 & 0.626 & _0.622_ & 0.433 & 0.531 & 0.719 & 0.739 & 0.543 & 0.218 \\ \hline MathBERT (**s=2**) & 0.832 & 0.820 & 0.762 & 0.706 & 0.790 & 0.757 & 0.785 & 0.760 & - & - & - \\ MathBERT (**s=3**) & 0.842 & 0.839 & 0.691 & 0.645 & 0.633 & 0.522 & 0.663 & 0.640 & 0.674 & 0.587 & - & - \\ MathBERT (**s=4**) & 0.671 & 0.684 & 0.595 & 0.526 & 0.623 & 0.621 & 0.485 & 0.479 & 0.686 & 0.680 & 0.518 & 0.290 \\ \hline SciBERT-uncased (**s=2**) & 0.925 & 0.926 & 0.729 & 0.704 & 0.868 & 0.861 & 0.900 & 0.902 & - & - & - & - \\ SciBERT-uncased (**s=3**) & 0.889 & _**0.984** & 0.821 & _0.819_ & 0.703 & _0.664_ & 0.709 & _0.722_ & 0.806 & _0.818_ & - & - \\ SciBERT-uncased (**s=4**) & 0.763 & _0.765_ & _0.695_ & _0.668_ & 0.686 & 0.659 & 0.607 & _0.596_ & 0.769 & _0.772_ & 0.593 & _0.574_ \\ \hline SciBERT-cased (**s=2**) & 0.926 & **0.931** & 0.853 & 0.871 & 0.898 & **0.902** & 0.910 & **0.917** & - & - & - & - \\ SciBERT-cased (**s=3**) & 0.772 & 0.724 & 0.727 & 0.672 & 0.610 & 0.441 & 0.508 & 0.295 & 0.529 & 0.128 & - & - \\ SciBERT-cased (**s=4**) & 0.710 & 0.709 & 0.651 & 0.646 & 0.666 & 0.654 & 0.470 & 0.429 & 0.779 & 0.749 & 0.527 & 0.110 \\ \hline Average (**s=2**) & 0.886 & 0.890 & 0.807 & 0.799 & 0.856 & 0.853 & 0.865 & 0.868 & 
- & - & - & - \\ Average (**s=3**) & 0.815 & 0.803 & 0.729 & 0.698 & 0.657 & 0.575 & 0.592 & 0.532 & - & - & - & - \\ Average (**s=4**) & 0.680 & 0.702 & 0.615 & 0.612 & 0.633 & 0.642 & 0.484 & 0.503 & - & - & - \\ \hline Average over all steps & 0.794 & 0.798 & 0.717 & 0.703 & 0.715 & 0.690 & 0.647 & 0.634 & - & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 2: Model performance on the structured derivations variation of the next-equation prediction task. The **Static** column shows scores on data that is unperturbed with respect to the training data. **VR** (variable renaming) shows scores after renaming variables as Greek letters. **EE** (expression exchange) shows scores after swapping expressions either side of equality symbols in equations. **AR** (annotation replacement) shows scores after deriving an alternative final equation. **s - n** shows scores after evaluating on derivations with \(n\) less steps than training derivations (for \(n\in\{1,2\}\)). Bold numbers denote highest F1 scores for **2-step** derivations. Bold italic numbers denote highest _3-step_ scores. Bold, italic, and underlined numbers denote highest _4-step_ scores. between SciBERT and MathBERT, despite both being trained on scientific corpora, and display the top three operators by normalised frequency per generalisation category. Generalisation failure depends on the unpredictability of an operator.For examples where models perfectly generalise, the operator responsible for setting up an integral (without evaluating it) is most common. This is likely because it involves prepending a unique text span "int" to expressions either side of equations, which is easy to identify. Models generalise well to \(\cos\), \(\sin\), \(\exp\), and \(\log\) operators, likely due to their similarly predictable effect on equations associated with regular text spans. To highlight that it is likely the relative unpredictability of an operator's effect on text that leads to generalisation failure, we analyse the set of examples where _both_ SciBERT and MathBERT correctly classify unperturbed sequences, but misclassify _all_ perturbed sequences. Three examples are displayed in Fig. 5. 
\begin{table}
\begin{tabular}{c|c|c|c}
 & **Static** \(\pm\) & **All** & **Not P** \\ \hline
BERT & 62.3 & 7.4 & 5.3 \\ \hline
SciBERT & 79.6 & 21.3 & 1.6 \\ \hline
MathBERT & 70.3 & 7.8 & 9.3 \\
\end{tabular}
\end{table}
Table 4: Percentages of examples in the **Static** \(\pm\), **All** (perfect generalisation), and **Not P** (complete generalisation failure) categories for each model. The accompanying rows listing the top three operators per category survive only in garbled form in the extracted text and are omitted here.
Although models that incorporate formula structure are strong in many tasks (Zhong et al., 2022; Meadows and Freitas, 2022; Peng et al., 2021), we suggest this should not overwrite knowledge that is crucial to the application at hand, such as textual entailment in next-equation prediction. Future research may explore the flexibility of the proposed symbolic framework to instantiate novel math reasoning tasks, investigate the systematic behaviour of larger language models (Chung et al., 2022; Brown et al., 2020), and incorporate causal analysis (Stolfo et al., 2022). ## 7 Limitations Overall ethical impact.This work explores a systematic way to elicit the mathematical/symbolic inference properties of Transformer-based models in mathematical language processing tasks. As such, it contributes in the direction of a critique of the true reasoning capabilities and biases of these models. Derivation generation.There are irrelevant steps in some longer derivations, such as applying an operation to an equation, but not using the result. This should not affect results as the final equation always depends on a previous equation, except when it is a renaming premise. This error is likely due to incorrect subderivation extraction, and will be improved. Perturbations.Perturbations should only change a single aspect of the input, controlling for all other casual aspects. The variable renaming perturbation should only replace variables. However, the difference between the use of Greek and English alphabet, is the wordpiece tokenizer splits _e.g._,\(\beta\) into two tokens, meaning attention scores are calculated between them. This does not occur for English characters. Also, due to SymPy ordering limitations, a change in notation may change the ordering of commutative variables within expressions. Therefore, this implementation does not lead to an pure perturbation that only changes a single mathematical property, and further artefacts are present within the evaluation. Integration.SymPy does not generate integration constants. Although we account for this within derivation generation, we choose not to for direct calculus of integrals. Also, many integrals are evaluated to be case-based expressions, including value inequalities. We omit these examples for a closer comparison with differentiation, and for better compatibility with the perturbations. ## Acknowledgements This work was partially funded by the Swiss National Science Foundation (SNSF) project NeuMath (200021_204617).
2306.04651
Minimax programming problems subject to addition-Łukasiewicz fuzzy relational inequalities and their optimal solutions
This article focuses on minimax programming problems subject to addition-{\L}ukasiewicz fuzzy relational inequalities. We first establish two necessary and sufficient conditions for a solution of the fuzzy relational inequalities to be a minimal one and explore the existence condition of the unique minimal solution. We also supply an algorithm to search for minimal solutions of the fuzzy relational inequalities starting from a given solution. We then apply minimal solutions of the fuzzy relational inequalities to the minimax programming problems for searching optimal solutions. We provide two algorithms to solve a kind of single-variable optimization problem, and obtain the greatest optimal solution. The algorithm for finding minimal solutions from a given solution is also used for searching minimal optimal solutions.
Xue-Ping Wang, Meng Li, Qian-Yu Shu
2023-06-06T07:36:42Z
http://arxiv.org/abs/2306.04651v2
# Minimal solutions of fuzzy relational inequalities with addition-Lukasiewicz composition+

Footnote †: Supported by the National Natural Science Foundation of China (No.12071325)

Xue-ping Wang, Meng Li, Qian-yu Shu

_School of Mathematical Sciences, Sichuan Normal University, Chengdu 610066, Sichuan, People's Republic of China_

Corresponding author. E-mail address: [email protected]; fax: +86-28-84761502. E-mail addresses: [email protected], [email protected]

**Abstract** This article focuses on minimal solutions of fuzzy relational inequalities with addition-Lukasiewicz composition. We mainly establish two necessary and sufficient conditions for a solution of the fuzzy relational inequalities to be a minimal one and explore the existence condition of the unique minimal solution. We also show that for each solution of the fuzzy relational inequalities there is a minimal one that is less than or equal to the solution, and supply an algorithm to search for minimal solutions starting from a given solution to the fuzzy relational inequalities, which is explained by using numerical examples.

_Keywords_: Fuzzy relational inequality; Addition-Lukasiewicz composition; Minimal solution; Algorithm; Solution set

## 1 Introduction

A fuzzy relation plays a central role in fuzzy systems, and fuzzy relational equations are one of the main topics of fuzzy relations. In 1976, Sanchez [16] first introduced and investigated fuzzy relational equations with max-min composition. Since then, many researchers all over the world have discussed a variety of fuzzy relational equations (inequalities) with different composition operators, such as max-min, max-product, max-Lukasiewicz, addition-min, etc., see e.g. [4, 9, 13, 14]. They have tried to explore effective methods to describe the solution set of fuzzy relational equations (inequalities) by their minimal solutions, unique solution, approximate solutions and so on, see e.g. [1, 4, 9, 14, 15]. It is well known that the data transmission mechanism in BitTorrent-like peer-to-peer (P2P) file sharing systems has been reduced to a system of fuzzy relational inequalities with addition-min composition as below.

\[\left\{\begin{array}{l}a_{11}\wedge x_{1}+a_{12}\wedge x_{2}+\cdots+a_{1n}\wedge x_{n}\geq b_{1},\\ a_{21}\wedge x_{1}+a_{22}\wedge x_{2}+\cdots+a_{2n}\wedge x_{n}\geq b_{2},\\ \cdots\\ a_{m1}\wedge x_{1}+a_{m2}\wedge x_{2}+\cdots+a_{mn}\wedge x_{n}\geq b_{m},\end{array}\right. \tag{1}\]

where \(a_{ij},x_{j}\in[0,1]\), \(b_{i}>0\), \(a_{ij}\wedge x_{j}=\min\{a_{ij},x_{j}\}\) with \(i\in\{1,2,\cdots,m\}\) and \(j\in\{1,2,\cdots,n\}\), and the operation \(``+"\) is the ordinary addition [9, 21]. In such a system, there are \(n\) users \(A_{1},A_{2},\cdots,A_{n}\) who simultaneously download some file data. The \(j\)th user \(A_{j}\) delivers the file data with quality level \(x_{j}\) to \(A_{i}\), and \(a_{ij}\) is the communicative bandwidth between \(A_{i}\) and \(A_{j}\), so that \(a_{ij}\wedge x_{j}\) is the actual network traffic that \(A_{i}\) receives from \(A_{j}\). Since the quality requirement on the download traffic of \(A_{i}\) is at least \(b_{i}\) with \(b_{i}>0\), the file data that \(A_{i}\) receives from the other users satisfies

\[a_{i1}\wedge x_{1}+\cdots+a_{ii-1}\wedge x_{i-1}+a_{ii+1}\wedge x_{i+1}+\cdots+a_{in}\wedge x_{n}\geq b_{i}.\]

Adding the term \(a_{ii}\wedge x_{i}\) to the \(i\)th inequality yields the system (1).
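To make the addition-min model concrete, here is a minimal Python sketch (ours, purely illustrative; the function name and the toy data are not from the paper) that evaluates the left-hand sides of the system (1) and checks feasibility:

```python
import numpy as np

def satisfies_addition_min(A, x, b):
    """Check whether x solves the addition-min system (1), i.e. whether
    sum_j min(a_ij, x_j) >= b_i holds for every row i."""
    A, x, b = (np.asarray(v, dtype=float) for v in (A, x, b))
    lhs = np.minimum(A, x).sum(axis=1)  # row-wise sums of a_ij ∧ x_j
    return bool(np.all(lhs >= b)), lhs

# Toy instance with two users and two quality requirements.
A = [[0.5, 0.9], [0.7, 0.6]]
b = [1.0, 0.8]
ok, lhs = satisfies_addition_min(A, [1.0, 1.0], b)
print(ok, lhs)  # True [1.4 1.3]
```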
Li and Yang [9] gave a sufficient and necessary condition for the system (1) to be solvable, and supplied a method to get one minimal solution. Yang [21] showed that the solution set of the system (1) is convex and has an infinite number of minimal solutions, unlike the solution sets of most fuzzy relational inequalities, which have a finite number of minimal solutions. To avoid network congestion and ensure the data transmission, many scholars have studied optimization models of different objective functions with the system (1) as constraints, such as linear programming [5, 6, 21], multilevel programming [7, 25], and min-max programming [2, 12, 18, 19, 23, 26]. In particular, Yang [21] proved that all the optimal solutions of the related optimization models are minimal solutions of the system (1). This has inspired many authors to study the existence of minimal solutions and to search for minimal solutions of the system (1), see e.g. [9, 10, 20, 21, 22, 24]. Fortunately, Li and Wang [11] presented an algorithm to compute all minimal solutions of the system (1).

Indeed, the operator \(\wedge\) in the system (1) is a continuous triangular norm (t-norm for short), and we usually denote \(\wedge\) by \(T_{M}\). There are three basic kinds of typical continuous t-norms, i.e., minimum (\(T_{M}\)), product (\(T_{P}\)) and the Lukasiewicz t-norm (\(T_{L}\)) [8]. Therefore, from the continuous t-norm point of view, we just need to consider three kinds of mathematical models for the P2P file sharing systems, i.e., the system with addition-min composition, the system with addition-product composition and the system with addition-Lukasiewicz composition. For the first system, i.e., the system (1), Li and Wang [11] supplied an algorithm to calculate all its minimal solutions; therefore, we can describe the solution set of the system (1) by all minimal solutions and its greatest solution when it is solvable. Moreover, if we replace \(\wedge\) in the system (1) by \(T_{P}\), then the system (1) becomes a classical system of linear inequalities on [0,1], whose solution set can be characterized by standard methods [17]. Thus a natural question is how to describe the solution set of the system (1) when we replace \(\wedge\) in the system (1) by \(T_{L}\). This article studies minimal solutions of the following mathematical model for the P2P file sharing system:

\[\left\{\begin{array}{l}T_{L}(a_{11},x_{1})+T_{L}(a_{12},x_{2})+\cdots+T_{L}(a_{1n},x_{n})\geq b_{1},\\ T_{L}(a_{21},x_{1})+T_{L}(a_{22},x_{2})+\cdots+T_{L}(a_{2n},x_{n})\geq b_{2},\\ \cdots\\ T_{L}(a_{m1},x_{1})+T_{L}(a_{m2},x_{2})+\cdots+T_{L}(a_{mn},x_{n})\geq b_{m},\end{array}\right. \tag{2}\]

where \(a_{ij},x_{j}\in[0,1]\), \(b_{i}>0\), \(T_{L}(a_{ij},x_{j})=\max\{a_{ij}+x_{j}-1,0\}\) with \(i\in\{1,2,\cdots,m\}\) and \(j\in\{1,2,\cdots,n\}\), and the operation \(``+"\) is the ordinary addition.

The remainder of this article is organized as follows. Section 2 presents some definitions and properties of the system (2). Section 3 supplies a necessary and sufficient condition for a solution of the system (2) to be a minimal one, and investigates the existence condition of the unique minimal solution. In Section 4, we first show that for any solution of the system (2) there is a minimal one that is less than or equal to the solution, and then propose an algorithm to search for minimal solutions of a fixed solution to the system (2) with computational complexity \(O(m^{3})\) or \(O(n^{3})\). Section 5 presents a conclusion.
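For illustration, the three basic continuous t-norms and the left-hand sides of the system (2) can be sketched in a few lines of Python (ours, not from the paper); with \(T_{L}\) and the data of Example 3.1 below, it confirms that \((1,1,1)\) is a solution:

```python
import numpy as np

def t_min(a, x):          # T_M: minimum t-norm
    return np.minimum(a, x)

def t_prod(a, x):         # T_P: product t-norm
    return a * x

def t_lukasiewicz(a, x):  # T_L: Lukasiewicz t-norm, max(a + x - 1, 0)
    return np.maximum(a + x - 1.0, 0.0)

def lhs(A, x, tnorm=t_lukasiewicz):
    """Row-wise sums sum_j T(a_ij, x_j); with T = T_L this is the
    left-hand side of the system (2)."""
    return tnorm(np.asarray(A, float), np.asarray(x, float)).sum(axis=1)

A = [[0.5, 0.9, 0.7], [0.7, 0.5, 0.6], [0.6, 0.8, 0.9]]
b = np.array([1.7, 1.2, 1.8])
print(np.all(lhs(A, [1.0, 1.0, 1.0]) >= b))  # True: (1,1,1) solves (2)
```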
## 2 Preliminaries

In this section, we give basic definitions and properties of the system (2).

**Definition 2.1** ([8]): A binary operator \(T:[0,1]^{2}\rightarrow[0,1]\) is said to be a t-norm if it is commutative, associative, increasing with respect to each variable and has an identity \(1\), i.e., \(T(x,1)=x\) for all \(x\in[0,1]\).

Let \(I=\{1,2,\cdots,m\}\) and \(J=\{1,2,\cdots,n\}\) be two index sets. For \(x^{1}=(x_{1}^{1},x_{2}^{1},\cdots,x_{n}^{1})\), \(x^{2}=(x_{1}^{2},x_{2}^{2},\cdots,x_{n}^{2})\in[0,1]^{n}\), define \(x^{1}\leq x^{2}\) iff \({x_{j}}^{1}\leq{x_{j}}^{2}\) for arbitrary \(j\in J\); and define \(x^{1}<x^{2}\) iff \({x_{j}}^{1}\leq{x_{j}}^{2}\) and there is a \(j_{0}\in J\) such that \(x_{j_{0}}^{1}<x_{j_{0}}^{2}\). We further define \(x^{1}\lor x^{2}\) by \(x^{1}\lor x^{2}=(x_{1}^{1}\lor x_{1}^{2},x_{2}^{1}\lor x_{2}^{2},\cdots,x_{n}^{1}\lor x_{n}^{2})\). Then the system (2) can be tersely described as follows:

\[\sum_{j\in J}T_{L}(a_{ij},x_{j})\geq b_{i},\forall i\in I\]

or

\[A\odot x^{T}\geq b^{T}\]

where \(A=(a_{ij})_{m\times n}\), \(x=(x_{1},x_{2},\cdots,x_{n})\), \(b=(b_{1},b_{2},\cdots,b_{m})\) and \((a_{i1},a_{i2},\cdots,a_{in})\odot(x_{1},x_{2},\cdots,x_{n})^{T}=T_{L}(a_{i1},x_{1})+T_{L}(a_{i2},x_{2})+\cdots+T_{L}(a_{in},x_{n})\geq b_{i}\). The system (2) is called solvable if there exists an \(x\in[0,1]^{n}\) satisfying \(A\odot x^{T}\geq b^{T}\). Denote the set of all the solutions of the system (2) by

\[S(A,b)=\{x\in[0,1]^{n}|A\odot x^{T}\geq b^{T}\},\]

and \(U\setminus V=\{x\in U\mid x\notin V\}\) where \(U\) and \(V\) are two sets.

**Definition 2.2** ([4]): An \(\hat{x}\in S(A,b)\) is said to be the greatest solution if \(x\leq\hat{x}\) for all \(x\in S(A,b)\); a \(\check{x}\in S(A,b)\) is said to be a minimal solution if, for any \(x\in S(A,b)\), \(x\leq\check{x}\) implies \(x=\check{x}\).

From Definition 2.1, the following is clear.

**Theorem 2.1**: \(S(A,b)\neq\emptyset\) _iff \(\sum_{j\in J}a_{ij}\geq b_{i}\) for all \(i\in I\)._

Theorem 2.1 and Definition 2.1 imply the following proposition.

**Proposition 2.1**: _For the system (2), we have:_

_(i) If \(S(A,b)\neq\emptyset\), then \((1,1,\cdots,1)\) is its greatest solution._

_(ii) Let \(x\in S(A,b)\), \(x^{*}\in[0,1]^{n}\). Then \(x\leq x^{*}\) implies \(x^{*}\in S(A,b)\)._

According to Proposition 2.1 (ii), the following proposition holds.

**Proposition 2.2**: _If \(x^{1},x^{2}\in S(A,b)\), then \(x^{1}\lor x^{2}\in S(A,b)\)._

**Theorem 2.2**: _If \(S(A,b)\neq\emptyset\), and there is an \(i\in I\) such that \(\sum_{j\in J}a_{ij}=b_{i}\) and \(a_{ij}\neq 0\) for any \(j\in J\), then \((1,1,\cdots,1)\) is the unique solution of the system (2)._

**Proof.** Since \(S(A,b)\neq\emptyset\), we have \((1,1,\cdots,1)\in S(A,b)\). Let \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\) and \(x\neq(1,1,\cdots,1)\). Then there is a \(j\in J\) such that \(x_{j}<1\). If there is an \(i\in I\) such that \(\sum_{j\in J}a_{ij}=b_{i}\) and \(a_{ij}\neq 0\) for any \(j\in J\), then \(\max\{a_{ij}+x_{j}-1,0\}<a_{ij}\). Therefore, \(\sum_{k\in J\setminus\{j\}}T_{L}(a_{ik},x_{k})+T_{L}(a_{ij},x_{j})<\sum_{j\in J}a_{ij}=b_{i}\), contrary to \(x\in S(A,b)\). It follows that \((1,1,\cdots,1)\) is the unique solution of the system (2).

## 3 Minimal solutions of the system (2)

This section shows a sufficient and necessary condition for a solution of the system (2) to be a minimal one. It then characterizes the uniqueness of minimal solutions to the system (2).
Let \(x=(x_{1},x_{2},\cdots,x_{n})\in[0,1]^{n}\). Denote \(J^{*}(x)=\{j\in J|x_{j}>0\}\).

**Theorem 3.1**: _Let \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\). Then \(x\) is a minimal solution of the system (2) iff there exists an \(i\in I\) such that \(\sum_{j\in J}T_{L}(a_{ij},x_{j})=b_{i}\), and \(a_{ij}+x_{j}-1>0\) for any \(j\in J^{*}(x)\)._

**Proof.** Let \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\). Assume that for any \(i\in I\) either \(\sum_{j\in J}T_{L}(a_{ij},x_{j})>b_{i}\), or there is a \(j_{0}\in J^{*}(x)\) such that \(a_{ij_{0}}+x_{j_{0}}-1\leq 0\). If there is a \(j_{0}\in J^{*}(x)\) such that \(a_{ij_{0}}+x_{j_{0}}-1\leq 0\) for any \(i\in I\), then \(a_{ij_{0}}+u-1\leq 0\) for all \(i\in I\), \(u\in[0,x_{j_{0}}]\). So if we let \(u_{0}\in[0,x_{j_{0}})\) and define \(x^{\prime}=(x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{n})\) with

\[x^{\prime}_{j}=\left\{\begin{array}{l}u_{0},j=j_{0},\\ x_{j},j\neq j_{0},\end{array}\right.\]

then it is clear that \(x^{\prime}<x\) and \(T_{L}(a_{ij_{0}},x_{j_{0}})=\max\{a_{ij_{0}}+x_{j_{0}}-1,0\}=0=\max\{a_{ij_{0}}+x^{\prime}_{j_{0}}-1,0\}=T_{L}(a_{ij_{0}},x^{\prime}_{j_{0}})\) for all \(i\in I\). Hence, for any \(i\in I\),

\[\sum_{j\in J}T_{L}(a_{ij},x^{\prime}_{j}) = T_{L}(a_{ij_{0}},x^{\prime}_{j_{0}})+\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x^{\prime}_{j})\]
\[= T_{L}(a_{ij_{0}},x_{j_{0}})+\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})\]
\[= \sum_{j\in J}T_{L}(a_{ij},x_{j})\]
\[\geq b_{i}.\]

This indicates that \(x^{\prime}\in S(A,b)\), which contradicts that \(x\) is a minimal solution of the system (2).

In the case of \(\sum_{j\in J}T_{L}(a_{ij},x_{j})>b_{i}\) for all \(i\in I\), let \(I_{1}=\{i\in I|a_{ij_{0}}+x_{j_{0}}-1>0\}\) with \(j_{0}\in J^{*}(x)\). Then \(I_{1}\neq\emptyset\). From \(\sum_{j\in J}T_{L}(a_{ij},x_{j})=T_{L}(a_{ij_{0}},x_{j_{0}})+\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})=\max\{a_{ij_{0}}+x_{j_{0}}-1,0\}+\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})>b_{i}\) for any \(i\in I_{1}\), we have \(a_{ij_{0}}+x_{j_{0}}-1>b_{i}-\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})\). Hence, \(x_{j_{0}}>1-a_{ij_{0}}+b_{i}-\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})\) for all \(i\in I_{1}\). Denote \(\dot{x}_{j_{0}}=\max\limits_{i\in I_{1}}\{0,1-a_{ij_{0}}+b_{i}-\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})\}\). Define \(y=(y_{1},y_{2},\cdots,y_{n})\) with

\[y_{j}=\left\{\begin{array}{l}\dot{x}_{j_{0}},j=j_{0},\\ x_{j},j\neq j_{0}.\end{array}\right.\]

Then it is clear that \(y<x\). We distinguish two cases:

Case 1. For any \(i\in I_{1}\),

\[\sum_{j\in J}T_{L}(a_{ij},y_{j}) = T_{L}(a_{ij_{0}},y_{j_{0}})+\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},y_{j})\]
\[= \max\{a_{ij_{0}}+y_{j_{0}}-1,0\}+\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})\]
\[= \max\{a_{ij_{0}}+\max\limits_{i\in I_{1}}\{0,1-a_{ij_{0}}+b_{i}-\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})\}-1,0\}+\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})\]
\[\geq \max\{b_{i}-\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j}),0\}+\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})\]
\[\geq b_{i}-\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})+\sum_{j\in J\setminus\{j_{0}\}}T_{L}(a_{ij},x_{j})\]
\[= b_{i}.\]

Case 2. For any \(i\in I\setminus I_{1}\), since \(y_{j_{0}}<x_{j_{0}}\), we have \(0\leq T_{L}(a_{ij_{0}},y_{j_{0}})=\max\{a_{ij_{0}}+y_{j_{0}}-1,0\}\leq\max\{a_{ij_{0}}+x_{j_{0}}-1,0\}=T_{L}(a_{ij_{0}},x_{j_{0}})=0\), i.e., \(T_{L}(a_{ij_{0}},y_{j_{0}})=T_{L}(a_{ij_{0}},x_{j_{0}})\).
Therefore, \(\sum_{j\in J}T_{L}(a_{ij},y_{j})=\sum_{j\in J}T_{L}(a_{ij},x_{j})\geq b_{i}\) for all \(i\in I\setminus I_{1}\). Cases 1 and 2 mean that \(y\in S(A,b)\), contrary to the fact that \(x\) is a minimal solution of the system (2). Conversely, suppose that there exists an \(i\in I\) such that \(\sum_{j\in J}T_{L}(a_{ij},x_{j})=b_{i}\), and \(a_{ij}+x_{j}-1>0\) for any \(j\in J^{*}(x)\). Now, let \(z=(z_{1},z_{2},\cdots,z_{n})\in S(A,b)\) with \(z<x\). Then there exists a \(j\in J\) such that \(0\leq z_{j}<x_{j}\). It is easily seen that \(j\in J^{*}(x)\), and \[T_{L}(a_{ij},z_{j})=\max\{a_{ij}+z_{j}-1,0\}<\max\{a_{ij}+x_{j}-1,0\}=T_{L}(a_{ ij},x_{j}).\] Moreover, \(z_{k}\leq x_{k}\) for any \(k\in J\setminus\{j\}\), which leads to \[T_{L}(a_{ik},z_{k})=\max\{a_{ik}+z_{k}-1,0\}\leq\max\{a_{ik}+x_{k}-1,0\}=T_{L}( a_{ik},x_{k}).\] Therefore, \(\sum_{j\in J}T_{L}(a_{ij},z_{j})<\sum_{j\in J}T_{L}(a_{ij},x_{j})=b_{i}\), contrary to \(z\in S(A,b)\). Denote the set of all minimal solutions of the system (2) by \(\check{S}(A,b)\). **Proposition 3.1**: _Let \(S(A,b)\neq\emptyset\) and define_ \[y=(y_{1},y_{2},\cdots,y_{n})\mbox{ with }y_{k}=\left\{\begin{array}{l} \max\limits_{i\in I}\{0,1+b_{i}-\sum_{t\in J}a_{it}\},k=j,\\ 1,k\neq j.\end{array}\right. \tag{3}\] _Then \(y\in S(A,b)\). Furthermore, if \(a_{it}>0\) for all \(i\in I\), \(t\in J\) and \(a_{ij}+y_{j}-1>0\) for any \(i\in I\), then \(y\in\check{S}(A,b)\)._ **Proof.** We first verify that \(y\in S(A,b)\). By the construction of \(y\), \(y_{j}\geq 1+b_{i}-\sum_{t\in J}a_{it}\), i.e., \(a_{ij}+y_{j}-1\geq b_{i}-\sum_{t\in J\setminus\{j\}}a_{it}\) for any \(i\in I\). It follows that \(\max\{a_{ij}+y_{j}-1,0\}\geq b_{i}-\sum_{t\in J\setminus\{j\}}a_{it}\). Therefore, \(T_{L}(a_{ij},y_{j})+\sum_{t\in J\setminus\{j\}}a_{it}\geq b_{i}\) for any \(i\in I\), which indicates \(y\in S(A,b)\). Next, we show that \(y\in\check{S}(A,b)\) if \(a_{it}>0\) for all \(i\in I\), \(t\in J\) and \(a_{ij}+y_{j}-1>0\) for any \(i\in I\). First note that \(y_{j}>0\). Then there is an \(i_{0}\in I\) such that \(y_{j}=1+b_{i_{0}}-\sum_{t\in J}a_{i_{0}t}>0\) and \(a_{i_{0}j}+y_{j}-1>0\). It follows from \(T_{L}(a_{i_{0}t},1)=a_{i_{0}t}\) that \[\sum_{t\in J}T_{L}(a_{i_{0}t},y_{t}) = \sum_{t\in J\setminus\{j\}}T_{L}(a_{i_{0}t},y_{t})+T_{L}(a_{i_{0} j},y_{j})\] \[= \sum_{t\in J\setminus\{j\}}T_{L}(a_{i_{0}t},y_{t})+\max\{a_{i_{0} j}+y_{j}-1,0\}\] \[= \sum_{t\in J\setminus\{j\}}a_{i_{0}t}+a_{i_{0}j}+y_{j}-1\] \[= \sum_{t\in J\setminus\{j\}}a_{i_{0}t}+a_{i_{0}j}+1+b_{i_{0}}-\sum _{t\in J}a_{i_{0}t}-1\] \[= b_{i_{0}}.\] Now, suppose that there is a \(\bar{y}=(\bar{y}_{1},\bar{y}_{2},\cdots,\bar{y}_{n})\in S(A,b)\) with \(\bar{y}<y\). Then there is a \(t_{0}\in J\) such that \(\bar{y}_{t_{0}}<y_{t_{0}}\). Since \(a_{it}>0\) for all \(i\in I\), \(t\in J\) and by the construction of \(y\), we have \(a_{i_{0}t_{0}}+y_{t_{0}}-1>0\), then \(T_{L}(a_{i_{0}t_{0}},y_{t_{0}})=\max\{a_{i_{0}t_{0}}+y_{t_{0}}-1,0\}>\max\{a_{ i_{0}t_{0}}+\bar{y}_{t_{0}}-1,0\}=T_{L}(a_{i_{0}t_{0}},\bar{y}_{t_{0}}).\) Thus \[b_{i_{0}} = \sum_{t\in J}T_{L}(a_{i_{0}t},y_{t})\] \[= \sum_{t\in J\setminus\{t_{0}\}}T_{L}(a_{i_{0}t},y_{t})+T_{L}(a_{ i_{0}t_{0}},y_{t_{0}})\] \[> \sum_{t\in J\setminus\{t_{0}\}}T_{L}(a_{i_{0}t},\bar{y}_{t})+T_{ L}(a_{i_{0}t_{0}},\bar{y}_{t_{0}})\] \[= \sum_{t\in J}T_{L}(a_{i_{0}t},\bar{y}_{t}),\] contrary to \(\bar{y}\in S(A,b)\). Therefore, \(y\in\check{S}(A,b)\). The following example illustrates Proposition 3.1. 
**Example 3.1**: Consider the following fuzzy relational inequalities:

\[\left\{\begin{array}{l}T_{L}(0.5,x_{1})+T_{L}(0.9,x_{2})+T_{L}(0.7,x_{3})\geq 1.7,\\ T_{L}(0.7,x_{1})+T_{L}(0.5,x_{2})+T_{L}(0.6,x_{3})\geq 1.2,\\ T_{L}(0.6,x_{1})+T_{L}(0.8,x_{2})+T_{L}(0.9,x_{3})\geq 1.8.\end{array}\right.\]

Obviously, \((1,1,1)\in S(A,b)\). A simple calculation leads to

\[y_{1}=\max_{i\in\{1,2,3\}}\{0,1+b_{i}-\sum_{j\in\{1,2,3\}}a_{ij}\}=\max\{0.6,0.4,0.5\}=0.6,\]

and \(a_{i1}+0.6-1>0\) for all \(i\in\{1,2,3\}\). Then by Proposition 3.1, \(y=(0.6,1,1)\) is a minimal solution. Similarly, we can obtain that both \((1,0.6,1)\) and \((1,1,0.6)\) are minimal solutions.

From Example 3.1, minimal solutions of the system (2) are usually not unique. Thus a natural problem is: under what condition is a minimal solution of the system (2) unique? Next, we will explore the condition under which the system (2) has a unique minimal solution.

Let \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\), and for any \(j\in J\), define

\[F_{j}(x)=\{x_{*j}\in[0,1]|T_{L}(a_{ij},x_{*j})+\sum_{k\in J\setminus\{j\}}T_{L}(a_{ik},x_{k})\geq b_{i}\mbox{ for any }i\in I\} \tag{4}\]

and

\[\delta_{j}(x)=\min\{x_{*j}|x_{*j}\in F_{j}(x)\}. \tag{5}\]

From (4), we deduce the following proposition.

**Proposition 3.2**: _Let \(x,y\in S(A,b)\) with \(x\leq y\). Then \(F_{j}(x)\subseteq F_{j}(y)\) for any \(j\in J\)._

In what follows, let \(\hat{x}=(1,1,\cdots,1)\). Then we have the following theorem.

**Theorem 3.2**: _If \(S(A,b)\neq\emptyset\), then \(\{x_{j}|x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\}=F_{j}(\hat{x})\)._

**Proof.** If \(S(A,b)\neq\emptyset\), then \(x\leq\hat{x}\) for any \(x\in S(A,b)\). According to Proposition 3.2, it is obvious that \(\{x_{j}|x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\}\subseteq F_{j}(\hat{x})\). Now, let \(x_{j}\in F_{j}(\hat{x})\) and define \(y=(y_{1},y_{2},\cdots,y_{n})\) with

\[y_{k}=\left\{\begin{array}{l}1,k\neq j,\\ x_{j},k=j.\end{array}\right.\]

It is easy to see that \(y\in S(A,b)\); then \(x_{j}\in\{x_{j}|x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\}\), i.e., \(F_{j}(\hat{x})\subseteq\{x_{j}|x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\}\). Therefore, \(\{x_{j}|x=(x_{j})_{j\in J}\in S(A,b)\}=F_{j}(\hat{x})\).

**Corollary 3.1**: _The system (2) has a unique minimal solution iff_

\[(\delta_{1}(\hat{x}),\delta_{2}(\hat{x}),\cdots,\delta_{n}(\hat{x}))\in S(A,b).\]

_Moreover, \((\delta_{1}(\hat{x}),\delta_{2}(\hat{x}),\cdots,\delta_{n}(\hat{x}))\) is the unique minimal solution._

**Proof.** Let \(y_{j}\in F_{j}(\hat{x})\). Then \((1,\cdots,1,y_{j},1,\cdots,1)\in S(A,b)\). So if \(x=(x_{1},x_{2},\cdots,x_{n})\) is the unique minimal solution of the system (2), then \(x_{j}\leq y_{j}\); thus, by the arbitrariness of \(y_{j}\), \(x_{j}\leq\delta_{j}(\hat{x})\). On the other hand, from Theorem 3.2, \(x_{j}\in F_{j}(\hat{x})\). Therefore, \(x_{j}=\delta_{j}(\hat{x})\) for any \(j\in J\). Conversely, if \((\delta_{1}(\hat{x}),\delta_{2}(\hat{x}),\cdots,\delta_{n}(\hat{x}))\in S(A,b)\), then from Theorem 3.2, it is straightforward that \((\delta_{1}(\hat{x}),\delta_{2}(\hat{x}),\cdots,\delta_{n}(\hat{x}))\) is the unique minimal solution.

## 4 Algorithm for calculating minimal solutions of the system (2)

This section first proves that for every \(x\in S(A,b)\) there exists an \(x_{*}\in\check{S}(A,b)\) satisfying \(x_{*}\leq x\), and then presents an algorithm to search for minimal solutions of a given one to the system (2) with computational complexity \(O(m^{3})\) or \(O(n^{3})\).
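Before turning to the algorithm, note that the minimality criterion of Theorem 3.1 is easy to test numerically. The following Python sketch (ours, for illustration only) checks it on the data of Example 3.1:

```python
import numpy as np

def is_minimal(A, x, b, tol=1e-9):
    """Criterion of Theorem 3.1: x in S(A,b) is minimal iff some row i
    satisfies sum_j T_L(a_ij, x_j) = b_i and a_ij + x_j - 1 > 0 for
    every j in J*(x) = {j : x_j > 0}."""
    A, x, b = (np.asarray(v, float) for v in (A, x, b))
    lhs = np.maximum(A + x - 1.0, 0.0).sum(axis=1)
    if np.any(lhs < b - tol):
        return False  # x is not even a solution of (2)
    active = x > tol  # the index set J*(x)
    return any(
        abs(lhs[i] - b[i]) <= tol and np.all(A[i, active] + x[active] - 1.0 > tol)
        for i in range(A.shape[0])
    )

A = [[0.5, 0.9, 0.7], [0.7, 0.5, 0.6], [0.6, 0.8, 0.9]]
b = [1.7, 1.2, 1.8]
print(is_minimal(A, [0.6, 1.0, 1.0], b))  # True, as in Example 3.1
print(is_minimal(A, [1.0, 1.0, 1.0], b))  # False: (1,1,1) is not minimal
```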
From Proposition 3.1, if \(a_{it}>0\) for all \(i\in I,t\in J\), and \(a_{ij}+y_{j}-1>0\) for all \(i\in I\), then \(y=(y_{1},y_{2},\cdots,y_{n})\) defined by (3) is a minimal solution when \(S(A,b)\neq\emptyset\). However, if there is an \(i\in I\) such that \(a_{ij}+y_{j}-1\leq 0\), then formula (3) may be invalid for constructing a minimal solution, as shown by the following example.

**Example 4.1**: Consider the following fuzzy relational inequalities:

\[\left\{\begin{array}{l}T_{L}(0.5,x_{1})+T_{L}(0.7,x_{2})+T_{L}(0.4,x_{3})\geq 1,\\ T_{L}(0.3,x_{1})+T_{L}(0.5,x_{2})+T_{L}(0.9,x_{3})\geq 1.3,\\ T_{L}(0.8,x_{1})+T_{L}(0.6,x_{2})+T_{L}(0.7,x_{3})\geq 1.6.\end{array}\right.\]

Obviously, \((1,1,1)\in S(A,b)\). From Proposition 3.1,

\[x_{1}=\max_{i\in\{1,2,3\}}\{0,1+b_{i}-\sum_{j\in\{1,2,3\}}a_{ij}\}=\max\{0.4,0.6,0.5\}=0.6.\]

Since \(a_{21}+0.6-1<0\), Proposition 3.1 is not suitable for determining whether \(x=(0.6,1,1)\) is a minimal solution. In fact, it is easy to see that \(x=(0.6,1,1)\) is a solution of the fuzzy relational inequalities, and from Theorem 3.1, \(x=(0.6,1,1)\) is not a minimal solution. Therefore, we have to investigate another method for finding minimal solutions of the system (2). First, we have the following theorem.

**Theorem 4.1**: _Let \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\). Then \(x\in\check{S}(A,b)\) iff \(x_{j}=\delta_{j}(x)\) for any \(j\in J\)._

**Proof.** Let \(x\in\check{S}(A,b)\). Then \(x_{j}\in F_{j}(x)\) for any \(j\in J\). Thus, \(x_{j}\geq\delta_{j}(x)\) and for any \(x_{*j}\in F_{j}(x)\),

\[T_{L}(a_{ij},\min\{x_{j},x_{*j}\})+\sum_{k\in J\setminus\{j\}}T_{L}(a_{ik},x_{k})\]
\[= \min\{T_{L}(a_{ij},x_{j}),T_{L}(a_{ij},x_{*j})\}+\sum_{k\in J\setminus\{j\}}T_{L}(a_{ik},x_{k})\]
\[= \min\{T_{L}(a_{ij},x_{j})+\sum_{k\in J\setminus\{j\}}T_{L}(a_{ik},x_{k}),T_{L}(a_{ij},x_{*j})+\sum_{k\in J\setminus\{j\}}T_{L}(a_{ik},x_{k})\}\]
\[\geq \min\{b_{i},b_{i}\}\]
\[= b_{i}\]

for all \(i\in I\). Therefore, \(\min\{x_{j},x_{*j}\}\in F_{j}(x)\) for all \(j\in J\). Furthermore, define \(x^{\prime}=(x^{\prime}_{1},x^{\prime}_{2},\cdots,x^{\prime}_{n})\) with

\[x^{\prime}_{k}=\left\{\begin{array}{l}\min\{x_{j},x_{*j}\},k=j,\\ x_{k},k\neq j.\end{array}\right.\]

Then clearly \(x^{\prime}\leq x\) and \(x^{\prime}\in S(A,b)\). Since \(x\in\check{S}(A,b)\), \(x^{\prime}=x\), i.e., \(x_{j}\leq x_{*j}\). Hence, by the arbitrariness of \(x_{*j}\), we have \(x_{j}\leq\delta_{j}(x)\). Consequently, \(x_{j}=\delta_{j}(x)\) for every \(j\in J\).

Conversely, suppose that \(x_{j}=\delta_{j}(x)\) for every \(j\in J\). Let \(y=(y_{1},y_{2},\cdots,y_{n})\in S(A,b)\) be such that \(y\leq x\). Then \(y_{j}\leq x_{j}=\delta_{j}(x)\) for every \(j\in J\). By Proposition 3.2, \(y_{j}\in F_{j}(x)\), which means that \(y_{j}\geq\delta_{j}(x)\). Thus \(y_{j}=x_{j}\) for every \(j\in J\), i.e., \(y=x\). Therefore, \(x\in\check{S}(A,b)\).

From the proof of Theorem 4.1, the following statement is true.

**Proposition 4.1**: _Let \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\), and define \(y=(y_{1},y_{2},\cdots,y_{n})\) with_

\[y_{k}=\left\{\begin{array}{l}\delta_{j}(x),k=j,\\ x_{k},k\neq j.\end{array}\right.\]

_Then \(y\in S(A,b)\) and \(y\leq x\)._

Let \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\), and for any \(j\in J\), denote

\[x^{(12\cdots j)}=\left\{\begin{array}{l}(\delta_{1}(x),x_{2},\cdots,x_{n}),j=1,\\ \\ (\delta_{1}(x),\cdots,\delta_{j}(x^{(12\cdots(j-1))}),x_{j+1},\cdots,x_{n}),j\geq 2.\end{array}\right.\]

We then have the following theorem.
**Theorem 4.2**: _If \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\), then there is an \(x^{*}\in\check{S}(A,b)\) satisfying \(x^{*}\leq x\)._

**Proof.** Let \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\). If \(x\in\check{S}(A,b)\), then obviously, there is an \(x^{*}=x\in\check{S}(A,b)\) satisfying \(x^{*}\leq x\). If \(x\notin\check{S}(A,b)\), then according to Proposition 4.1, \(x^{(1)}=(\delta_{1}(x),x_{2},\cdots,x_{n})\in S(A,b)\). Again by Proposition 4.1, we have \(x^{(12)}=(\delta_{1}(x),\delta_{2}(x^{(1)}),x_{3},\cdots,x_{n})\in S(A,b)\), and so on, until we finally get \(x^{(12\cdots n)}=(\delta_{1}(x),\delta_{2}(x^{(1)}),\cdots,\delta_{n}(x^{(12\cdots(n-1))}))\in S(A,b)\) and \(x^{(12\cdots n)}\leq x\).

Suppose that \(y=(y_{1},y_{2},\cdots,y_{n})\in S(A,b)\) satisfies \(y\leq x^{(12\cdots n)}\). Then for any \(j\in J\), \(y_{j}\leq\delta_{j}(x^{(12\cdots(j-1))})\). Let \(x^{1}=(y_{1},x_{2},\cdots,x_{n})\). Then \(y\leq x^{1}\). Thus from Proposition 2.1, \(x^{1}\in S(A,b)\), i.e., \(y_{1}\in F_{1}(x)\), which means \(\delta_{1}(x)\leq y_{1}\). Hence, \(\delta_{1}(x)=y_{1}\) since \(y\leq x^{(12\cdots n)}\), i.e., \(y=(\delta_{1}(x),y_{2},\cdots,y_{n})\). Let \(x^{2}=(\delta_{1}(x),y_{2},x_{3},\cdots,x_{n})\). Then \(y\leq x^{2}\). Thus from Proposition 2.1, \(x^{2}\in S(A,b)\), i.e., \(y_{2}\in F_{2}(x^{(1)})\), which means \(\delta_{2}(x^{(1)})\leq y_{2}\). Hence, \(\delta_{2}(x^{(1)})=y_{2}\) since \(y\leq x^{(12\cdots n)}\), i.e., \(y=(\delta_{1}(x),\delta_{2}(x^{(1)}),\cdots,y_{n})\in S(A,b)\). Repeating the process as above, we get \(y_{j}=\delta_{j}(x^{(12\cdots(j-1))})\), \(j=3,\cdots,n\). Therefore, \(y=(\delta_{1}(x),\delta_{2}(x^{(1)}),\cdots,\delta_{n}(x^{(12\cdots(n-1))}))=x^{(12\cdots n)}\), i.e., \(x^{(12\cdots n)}\in\check{S}(A,b)\) and \(x^{(12\cdots n)}\leq x\).

Applying Proposition 2.1 and Theorem 4.2, the following is true.

**Theorem 4.3**:

\[S(A,b)=\bigcup_{\check{x}\in\check{S}(A,b)}\{x\in[0,1]^{n}\mid\check{x}\leq x\leq(1,1,\cdots,1)\}.\]

**Theorem 4.4**: _If \(S(A,b)\neq\emptyset\) and \(a_{ij}>0\) for all \(i\in I\), \(j\in J\), then the system (2) has a unique solution iff there is an \(i\in I\) such that \(\sum_{j\in J}a_{ij}=b_{i}\)._

**Proof.** If \(S(A,b)\neq\emptyset\), then \((1,1,\cdots,1)\in S(A,b)\). If the system (2) has a unique solution, then from Theorem 4.3, \((1,1,\cdots,1)\) is also a unique minimal solution of the system (2). By Theorem 3.1, there is an \(i\in I\) such that \(\sum_{j\in J}T_{L}(a_{ij},1)=b_{i}\), i.e., \(\sum_{j\in J}a_{ij}=b_{i}\). Conversely, it is a straightforward matter from Theorem 2.2.

Let \(x\in S(A,b)\). Then from the proof of Theorem 4.2, we can summarize the following algorithm for calculating an \(\check{x}\in\check{S}(A,b)\) such that \(\check{x}\leq x\).

**Algorithm 4.1**: _Input \(x=(x_{1},x_{2},\cdots,x_{n})\in S(A,b)\). Output \(\check{x}\)._

_Step 1. Set \(j:=0\) and calculate \(\tilde{x}=(\delta_{1}(\hat{x}),\delta_{2}(\hat{x}),\cdots,\delta_{n}(\hat{x}))\) defined by (5). If \(\tilde{x}\in S(A,b)\), then \(\check{x}:=\tilde{x}\); go to Step 8._

_Step 2. \(j:=j+1\)._

_Step 3. Calculate_

\[F_{j}(x^{(12\cdots(j-1))}) = \{x_{*j}\in[0,1]|T_{L}(a_{i1},\delta_{1}(x))+\cdots+T_{L}(a_{i(j-1)},\delta_{j-1}(x^{(12\cdots(j-2))}))+T_{L}(a_{ij},x_{*j})+\sum_{k=j+1}^{n}T_{L}(a_{ik},x_{k})\geq b_{i}\mbox{ for any }i\in I\}\]

_where \(x^{(0)}=x\)._

_Step 4. Calculate_

\[\delta_{j}(x^{(12\cdots(j-1))})=\min\{x_{*j}|x_{*j}\in F_{j}(x^{(12\cdots(j-1))})\}.\]

_Step 5.
\(x^{(12\cdots j)}:=(\delta_{1}(x),\delta_{2}(x^{(1)}),\cdots,\delta_{j}(x^{(12\cdots(j-1))}),x_{j+1},\cdots,x_{n})\)._

_Step 6. Go to Step 2 when \(j<n\)._

_Step 7. \(\check{x}:=x^{(12\cdots j)}\)._

_Step 8. Output \(\check{x}\)._

**Remark 4.1**: We can surely replace \((1,2,\cdots,n)\) in Algorithm 4.1 by any of the permutations of \(\{1,2,\cdots,n\}\), and just by repeating the steps as shown in Algorithm 4.1, we can obtain a minimal solution. However, using Algorithm 4.1, we can find at most \(n!\) distinct minimal solutions since there are \(n!\) different permutations and two different permutations may produce the same minimal solution.

The following two examples illustrate Algorithm 4.1 and Remark 4.1, respectively.

**Example 4.2**: Consider the fuzzy relational inequalities in Example 3.1. It is obvious that \(x=(0.8,0.9,1)\) is a solution of the fuzzy relational inequalities. With Algorithm 4.1, we can find a minimal solution as follows:

Step 1. Calculate \(\delta_{1}(\hat{x})=0.6\), \(\delta_{2}(\hat{x})=0.6\), \(\delta_{3}(\hat{x})=0.6\). Obviously, \((0.6,0.6,0.6)\) is not a solution.

Step 2. \(j:=0+1\).

Step 3. Calculate \(F_{1}(x)=\{x_{1}\mid 0.7\leq x_{1}\}\).

Step 4. Calculate \(\delta_{1}(x)=0.7\).

Step 5. \(x^{(1)}:=(0.7,0.9,1)\).

Step 6. \(j=1<3\), \(j:=1+1\).

Step 7. Calculate \(F_{2}(x^{(1)})=\{x_{2}\mid 0.9\leq x_{2}\}\).

Step 8. Calculate \(\delta_{2}(x^{(1)})=0.9\).

Step 9. \(x^{(12)}:=(0.7,0.9,1)\).

Step 10. \(j=2<3\), \(j:=2+1\).

Step 11. Calculate \(F_{3}(x^{(12)})=\{x_{3}\mid x_{3}=1\}\).

Step 12. Calculate \(\delta_{3}(x^{(12)})=1\).

Step 13. \(x^{(123)}:=(0.7,0.9,1)\).

Step 14. \(j=3\nless 3\).

Step 15. \(\check{x}:=x^{(123)}\).

Step 16. Output \(\check{x}=(0.7,0.9,1)\).

Thus, \(\check{x}\) is a minimal solution of the fuzzy relational inequalities satisfying \(\check{x}\leq x\).

**Example 4.3**: Consider the fuzzy relational inequalities in Example 4.1. Obviously \(x=(0.9,0.9,0.9)\) is a solution. With Algorithm 4.1, we can get three different minimal solutions. Compute \(\delta_{1}(\hat{x})=0.5\), \(\delta_{2}(\hat{x})=0.6\), \(\delta_{3}(\hat{x})=0.6\). Obviously, \((0.5,0.6,0.6)\) is not a solution. Compute

\[x^{(1)}=(\delta_{1}(x),0.9,0.9)=(0.8,0.9,0.9),\]
\[x^{(12)}=(0.8,\delta_{2}(x^{(1)}),0.9)=(0.8,0.9,0.9),\]

and

\[x^{(123)}=(0.8,0.9,\delta_{3}(x^{(12)}))=(0.8,0.9,0.9).\]

Similarly, we can also have the following minimal solutions:

\[x^{(213)}=x^{(231)}=(0.9,0.8,0.9),\]
\[x^{(321)}=x^{(312)}=(0.9,0.9,0.8),\]
\[x^{(132)}=x^{(123)}=(0.8,0.9,0.9).\]

Therefore, \(x^{(213)}\), \(x^{(321)}\) and \(x^{(132)}\) are three minimal solutions of the fuzzy relational inequalities fulfilling \(x^{(213)}\leq x\), \(x^{(321)}\leq x\) and \(x^{(132)}\leq x\).

**Theorem 4.5**: _Algorithm 4.1 terminates after \(O(m^{3})\) or \(O(n^{3})\) operations._

**Proof.** The computational amount of Step 1 is \((4mn+m)n+4mn\). For every \(j\in J\), the computational amount of Steps 3 and 4 is \(4mn+m\). Then from Steps 2 to 6, the computational amount is \((4mn+m)n\). Thus the computational amount of Algorithm 4.1 is \((4mn+m)n+4mn+(4mn+m)n=8mn^{2}+6mn\). Thus, the computational complexity of Algorithm 4.1 is \(O(m^{3})\) or \(O(n^{3})\).

## 5 Conclusions

This article established two necessary and sufficient conditions for a solution of the system (2) to be a minimal one.
Then it proved that for every fixed solution of the system (2) there is a minimal solution which is less than or equal to the solution, which means that the solution set of the system (2) is completely determined by its greatest solution and all minimal ones. We also supplied an algorithm for finding minimal solutions of a fixed solution to the system (2) with computational complexity \(O(m^{3})\) or \(O(n^{3})\). It should be pointed out that the idea of Algorithm 4.1 originates in [3], and sometimes Algorithm 4.1 may find more than one minimal solution for a fixed one by changing the permutations of \(\{1,2,\cdots,n\}\), as presented in Example 4.3. In the future, we shall develop an effective algorithm for finding all minimal solutions of the system (2), which will further be used for investigating the structure of the solution set of the system (2) and the optimal solutions of the optimization models concerned.
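As a companion to Algorithm 4.1, here is a minimal Python sketch (ours, illustrative only; it uses the closed form of \(\delta_{j}(x)\) implied by (4) and (5)) that lowers the coordinates of a solution one by one and reproduces Example 4.2:

```python
import numpy as np

def t_l(a, x):
    """Lukasiewicz t-norm T_L(a, x) = max(a + x - 1, 0)."""
    return np.maximum(a + x - 1.0, 0.0)

def delta_j(A, x, b, j):
    """Smallest admissible value of coordinate j, i.e. delta_j(x) from (5)."""
    rest = t_l(A, x).sum(axis=1) - t_l(A[:, j], x[j])  # sums over k != j
    need = b - rest                # what each row still requires from column j
    lower = 1.0 - A[:, j] + need   # forces a_ij + x_j - 1 >= need_i
    cand = lower[need > 0]
    return max(0.0, float(cand.max())) if cand.size else 0.0

def algorithm_4_1(A, x, b):
    """Sketch of Algorithm 4.1: successively replace x_j by delta_j of the
    current point, yielding a minimal solution below the input solution."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x, float).copy()
    for j in range(A.shape[1]):
        x[j] = delta_j(A, x, b, j)
    return x

A = [[0.5, 0.9, 0.7], [0.7, 0.5, 0.6], [0.6, 0.8, 0.9]]
b = [1.7, 1.2, 1.8]
print(algorithm_4_1(A, [0.8, 0.9, 1.0], b))  # [0.7 0.9 1. ], as in Example 4.2
```

Processing the coordinates in a different order corresponds to the permutations of Remark 4.1 and may yield different minimal solutions, as in Example 4.3.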
2302.07877
Gromov--Hausdorff Convergence of Spectral Truncations for Tori
We consider operator systems associated to spectral truncations of tori. We show that their state spaces, when equipped with the Connes distance function, converge in the Gromov--Hausdorff sense to the space of all Borel probability measures on the torus equipped with the Monge--Kantorovich distance. A crucial role will be played by the relationship between Schur and Fourier multipliers. Along the way, we introduce the spectral Fej\'er kernel and show that it is a good kernel. This allows to make the estimates sufficient to prove the desired convergence of state spaces. We conclude with some structure analysis of the pertinent operator systems, including the C*-envelope and the propagation number, and with an observation about the dual operator system.
Malte Leimbach, Walter D. van Suijlekom
2023-02-15T18:23:06Z
http://arxiv.org/abs/2302.07877v2
# Gromov-Hausdorff convergence of spectral truncations for low-dimensional tori

###### Abstract.

We consider operator systems associated to spectral truncations of tori. In dimension 1, 2 and 3, we show that their state spaces, when equipped with the Connes distance function, converge in the Gromov-Hausdorff sense to the space of all Borel probability measures on the torus equipped with the Monge-Kantorovich distance. A crucial role will be played by the relationship between Schur and Fourier multipliers, and we will also see that the lattice point counting problem is the main obstacle to extending our results to higher dimensions. We conclude with some structure analysis of the pertinent operator systems, including the C*-envelope and the propagation number, and with an observation about the dual operator system.

###### Contents

* 1 Introduction
* 2 Preliminaries
* 2.1 Actions and commutators
* 2.2 Good kernels
* 2.3 Fourier and Schur multipliers
* 2.4 The lattice point counting problem
* 3 Convergence of Spectral Truncations of the \(d\)-Torus
* 3.1 A candidate for the \(\mathrm{C}^{1}\)-approximate order isomorphism
* 3.2 Structuring the problem
* 3.3 Estimating the norm of the map \(\mathcal{F}_{\mathfrak{w}_{\Lambda}}\)
* 3.4 Convergence of spectral truncations in low dimensions
* 4 Structure Analysis of the Operator System \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\)
* 4.1 C*-envelope and propagation number
* 4.2 Dual
* A The volume of the lense
* B Some computations with special functions
* C Some convex geometry

## 1. Introduction

In a spectral approach to geometry, such as the one advocated in noncommutative geometry [7], a natural question that arises is how one may approximate a geometric space by finite-dimensional spectral data. Following [36], let \((\mathcal{E},\mathcal{H},D)\) be an operator system spectral triple, consisting of an operator system \(\mathcal{E}\) acting on a Hilbert space \(\mathcal{H}\) together with a self-adjoint operator \(D\) in \(\mathcal{H}\) having bounded commutators with a dense subspace of \(\mathcal{E}\), and, for every \(\Lambda\geq 0\), let \(\mathcal{E}_{\Lambda}\) be a truncation of \(\mathcal{E}\) with truncated operator \(D_{\Lambda}\). The respective state spaces carry the distance functions

\[d_{\mathcal{E}_{\Lambda}}(\varphi,\psi)=\sup\left\{|\varphi(T)-\psi(T)|\,:\,T\in\mathcal{E}_{\Lambda},\ \|[D_{\Lambda},T]\|\leq 1\right\} \tag{1}\]

and

\[d_{\mathcal{E}}(\varphi,\psi)=\sup\left\{|\varphi(a)-\psi(a)|\,:\,a\in\mathcal{E},\ \|[D,a]\|\leq 1\right\}. \tag{2}\]

**Definition 1.2**.: For every \(\Lambda\geq 0\), let \(\rho_{\Lambda}:\mathcal{E}\to\mathcal{E}_{\Lambda}\) and \(\sigma_{\Lambda}:\mathcal{E}_{\Lambda}\to\mathcal{E}\) be maps such that

1. \(\rho_{\Lambda}\) and \(\sigma_{\Lambda}\) are positive and unital;

2. \(\rho_{\Lambda}\) and \(\sigma_{\Lambda}\) are \(\mathrm{C}^{1}\)-contractive, i.e. \(\|[D_{\Lambda},\rho_{\Lambda}(a)]\|\leq\|[D,a]\|\) and \(\|[D,\sigma_{\Lambda}(T)]\|\leq\|[D_{\Lambda},T]\|\);

3. the compositions \(\sigma_{\Lambda}\circ\rho_{\Lambda}\) and \(\rho_{\Lambda}\circ\sigma_{\Lambda}\) approximate the respective identities on \(\mathcal{E}\) and \(\mathcal{E}_{\Lambda}\) with respect to Lipschitz norm, i.e.

\[\|a-\sigma_{\Lambda}\circ\rho_{\Lambda}(a)\| \leq\gamma_{\Lambda}\|[D,a]\|\]
\[\|T-\rho_{\Lambda}\circ\sigma_{\Lambda}(T)\| \leq\gamma^{\prime}_{\Lambda}\|[D_{\Lambda},T]\|\]

for some \(\gamma_{\Lambda},\gamma^{\prime}_{\Lambda}\to 0\) as \(\Lambda\to\infty\).

Then we call the pair of maps \((\rho,\sigma)\) (by which we mean the collection of pairs of maps \((\rho_{\Lambda},\sigma_{\Lambda})\)) a \(\mathrm{C}^{1}\)_-approximate order isomorphism_. It was then shown in [36, Theorem 5] that if the metrics \(d_{\mathcal{E}_{\Lambda}}\), for all \(\Lambda\geq 0\), and \(d_{\mathcal{E}}\) (defined as in (1) and (2)) metrize the respective weak*-topologies on the state spaces \(\mathcal{S}(\mathcal{E}_{\Lambda})\) and \(\mathcal{S}(\mathcal{E})\) and if a \(\mathrm{C}^{1}\)-approximate order isomorphism exists, then the sequence of metric spaces \((\mathcal{S}(\mathcal{E}_{\Lambda}),d_{\mathcal{E}_{\Lambda}})\) converges to \((\mathcal{S}(\mathcal{E}),d_{\mathcal{E}})\) in Gromov-Hausdorff distance. By exploiting this criterion it was shown in [36] that spectral truncations of the circle converge. Other results on Gromov-Hausdorff convergence that can be cast in this general framework include [2, 1, 29]. Note also the recent developments around the related notion of propinquity [20, 21, 22], also in the context of spectral triples, which, however, mainly focus on C*-algebras.
It should be mentioned that, even though we are phrasing the question about convergence of spectral truncations in terms of Gromov-Hausdorff distance, another point of view would be quantum Gromov-Hausdorff distance [28]. In fact, these notions are not equivalent [16]. However, as the authors point out, it follows from [17, Proposition 2.14] that Gromov-Hausdorff convergence still implies quantum Gromov-Hausdorff convergence in the case which we consider below.

In this paper, we show that the conditions in the above Definition can be met for tori in dimension \(d=1,2\) and \(3\), so that spectral truncations on these tori Gromov-Hausdorff converge to the torus as well. We indicate the technical obstacles that we encounter in dimensions \(d\geq 4\); they are related to the lattice point counting problem. Let us spend the remainder of this introduction giving an extended overview of our setup and approach.

Consider the spectral triple of the \(d\)-dimensional torus \(\mathbb{T}^{d}=\mathbb{R}^{d}/2\pi\mathbb{Z}^{d}\):

\[\big{(}\mathrm{C}^{\infty}(\mathbb{T}^{d}),\mathrm{L}^{2}(S(\mathbb{T}^{d})),D\big{)}\]

This consists of the \(*\)-algebra of smooth functions acting on the Hilbert space of \(\mathrm{L}^{2}\)-sections of the spinor bundle \(S(\mathbb{T}^{d})\) (by multiplication) and the Dirac operator \(D\) which acts on the dense subspace of smooth sections of the spinor bundle. We identify \(S(\mathbb{T}^{d})\) with the trivial bundle \(\mathbb{T}^{d}\times V\), where \(V:=\mathbb{C}^{2^{\lfloor d/2\rfloor}}\), and we write \(\mathcal{H}:=\mathrm{L}^{2}(\mathbb{T}^{d})\otimes V\cong\mathrm{L}^{2}(S(\mathbb{T}^{d}))\) for the Hilbert space. Recall that \(D=-i\sum_{\mu=1}^{d}\partial_{\mu}\otimes\gamma^{\mu}\) and that the spectrum of \(D\) (which is point-spectrum only) is given by \(\sigma(D)=\{\pm(n_{1}^{2}+\cdots+n_{d}^{2})^{\nicefrac{{1}}{{2}}}\,:\,n_{i}\in\mathbb{Z}\}\). The Dirac operator gives rise to the distance function (2) on the state space \(\mathcal{S}(\mathrm{C}(\mathbb{T}^{d}))\) which metrizes the weak*-topology on it and recovers the usual Riemannian distance on \(\mathbb{T}^{d}\) when restricted to pure states.

For any \(\Lambda\geq 0\), let \(P_{\Lambda}\) be the orthogonal projection onto the subspace of \(\mathcal{H}\) spanned by the eigenspinors \(e_{\lambda}\) of the eigenvalues \(\lambda\) with \(|\lambda|\leq\Lambda\). More concretely, we have \(P_{\Lambda}\mathcal{H}=\mathrm{span}\{e_{n}\,:\,n\in\mathbb{Z}^{d},\|n\|\leq\Lambda\}\otimes V\), with \(e_{n}(x):=e^{in\cdot x}\), for all \(x\in\mathbb{T}^{d}\). The spectral projection \(P_{\Lambda}\) gives rise to the following _operator system spectral triple_:

\[\big{(}P_{\Lambda}\mathrm{C}^{\infty}(\mathbb{T}^{d})P_{\Lambda},P_{\Lambda}\mathcal{H},P_{\Lambda}DP_{\Lambda}\big{)}\]

We use the notation \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}:=P_{\Lambda}\mathrm{C}^{\infty}(\mathbb{T}^{d})P_{\Lambda}\) and write \(D_{\Lambda}:=P_{\Lambda}DP_{\Lambda}\). We also abbreviate \(d_{\Lambda}\equiv d_{P_{\Lambda}\mathrm{C}^{\infty}(\mathbb{T}^{d})P_{\Lambda}}\) for the distance function defined in (1).
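For \(d=1\), the truncation \(P_{\Lambda}fP_{\Lambda}\) is simply the Toeplitz matrix of Fourier coefficients \(\widehat{f}(k-l)\) with \(|k|,|l|\leq\Lambda\). A small NumPy sketch (ours, purely illustrative; not part of the paper) makes this concrete:

```python
import numpy as np

def fourier_coefficient(f, n, n_grid=4096):
    """Approximate f_hat(n) = (1/2pi) * integral of f(x) e^{-inx} over
    [0, 2pi) by a Riemann sum on a uniform grid."""
    x = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    return np.mean(f(x) * np.exp(-1j * n * x))

def truncate(f, lam):
    """The compression P_Lambda f P_Lambda for d = 1: the Toeplitz matrix
    (f_hat(k - l)) with k, l in {-lam, ..., lam}."""
    fh = {n: fourier_coefficient(f, n) for n in range(-2 * lam, 2 * lam + 1)}
    idx = range(-lam, lam + 1)
    return np.array([[fh[k - l] for l in idx] for k in idx])

T = truncate(np.cos, lam=2)   # a 5 x 5 Toeplitz matrix
print(np.round(T.real, 3))    # 0.5 on the first off-diagonals, 0 elsewhere
```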
Observe that elements \(T\) of the operator system \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\) are of the form \(T=(t_{k-l})_{k,l\in\overline{\mathrm{B}}^{\mathbb{Z}}_{\Lambda}}\), where \(\overline{\mathrm{B}}^{\mathbb{Z}}_{\Lambda}:=\overline{\mathrm{B}}_{\Lambda}\cap\mathbb{Z}^{d}\) is the set of \(\mathbb{Z}^{d}\)-lattice points in the closed ball of radius \(\Lambda\), and where \(t_{k-l}=\langle e_{k},Te_{l}\rangle\). In particular, \(\mathrm{C}(\mathbb{T}^{1})^{(\Lambda)}\) is the operator system of \((2[\Lambda]+1)\times(2[\Lambda]+1)\)-Toeplitz matrices which was investigated at length in [8] and [11].

The candidate for the map \(\rho_{\Lambda}:\mathrm{C}^{\infty}(\mathbb{T}^{d})\to\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\) in Definition 1.2 is canonically inherent in the problem, namely, it is the compression map given by \(\rho_{\Lambda}(f)=P_{\Lambda}fP_{\Lambda}\). It is easy to see that this map is positive, unital and \(\mathrm{C}^{1}\)-contractive (Lemma 3.1). It is, however, less obvious what the candidate for the map \(\sigma_{\Lambda}:\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\to\mathrm{C}^{\infty}(\mathbb{T}^{d})\) should be. Inspired by the choice of the map given in [36] in the case of the circle, we propose the following map:

\[\sigma_{\Lambda}(T):=\frac{1}{\mathcal{N}_{\mathrm{B}}(\Lambda)}\mathrm{Tr}\left(\left|\psi\right\rangle\left\langle\psi\right|\alpha(T)\right)\]

Here, \(\alpha\) is the \(\mathbb{T}^{d}\)-action (5) on \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\), the vector \(\left|\psi\right\rangle\) is given by \(\left|\psi\right\rangle=\sum_{n\in\overline{\mathrm{B}}^{\mathbb{Z}}_{\Lambda}}e_{n}\), and \(\mathcal{N}_{\mathrm{B}}(\Lambda):=\#\overline{\mathrm{B}}_{\Lambda}\cap\mathbb{Z}^{d}\) is the number of \(\mathbb{Z}^{d}\)-lattice points in the closed ball of radius \(\Lambda\). One may consider [28, Section 2] as another instance of inspiration for this choice of map \(\sigma_{\Lambda}\) by realizing that the map \(\sigma_{\Lambda}\) is the formal adjoint of the map \(\rho_{\Lambda}\) when the \(*\)-algebra \(\mathrm{C}^{\infty}(\mathbb{T}^{d})\) is equipped with the \(\mathrm{L}^{2}\)-inner product and the operator system \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\) is equipped with the Hilbert-Schmidt inner product. Similarly as for \(\rho_{\Lambda}\), it is easy to see that the map \(\sigma_{\Lambda}\) is positive, unital and \(\mathrm{C}^{1}\)-contractive (Lemma 3.2).

In order to show that our choice of maps \(\rho_{\Lambda}\) and \(\sigma_{\Lambda}\) gives rise to a \(\mathrm{C}^{1}\)-approximate order isomorphism it remains to show that their compositions approximate the respective identities on \(\mathrm{C}^{\infty}(\mathbb{T}^{d})\) and \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\) in Lipschitz norm.
We show by direct computations (Lemma 3.3) that the maps \(\sigma_{\Lambda}\circ\rho_{\Lambda}\) and \(\rho_{\Lambda}\circ\sigma_{\Lambda}\) act on \(\mathrm{C}^{\infty}(\mathbb{T}^{d})\), respectively \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\), as follows:

\[\sigma_{\Lambda}\circ\rho_{\Lambda}(f) =\left(\mathfrak{m}_{\Lambda}\widehat{f}\right)^{\sim}=:\mathcal{F}_{\mathfrak{m}_{\Lambda}}(f)\]
\[\rho_{\Lambda}\circ\sigma_{\Lambda}(T) =\left(\mathfrak{m}_{\Lambda}(k-l)t_{k-l}\right)_{k,l\in\overline{\mathrm{B}}^{\mathbb{Z}}_{\Lambda}}=:\mathcal{S}_{\mathfrak{m}_{\Lambda}}(T)\]

The map \(\mathcal{F}_{\mathfrak{m}_{\Lambda}}\) is known as _Fourier multiplication_ and the map \(\mathcal{S}_{\mathfrak{m}_{\Lambda}}\) as _Schur multiplication_, respectively with _symbol_

\[\mathfrak{m}_{\Lambda}(n):=\frac{\mathcal{N}_{\mathrm{L}}(\Lambda,n)}{\mathcal{N}_{\mathrm{B}}(\Lambda)}, \tag{3}\]

where \(\mathcal{N}_{\mathrm{L}}(\Lambda,n):=\#\overline{\mathrm{B}}_{\Lambda}\cap\overline{\mathrm{B}}_{\Lambda}(n)\cap\mathbb{Z}^{d}\) is the number of \(\mathbb{Z}^{d}\)-lattice points in the intersection of the closed ball of radius \(\Lambda\) with a copy of itself translated by \(n\) (we call this intersection a _lense_). We denote the compression of the Dirac operator by \(D_{\Lambda}:=P_{\Lambda}DP_{\Lambda}\). We apply an "antiderivative trick" (Lemma 3.4) to see that for obtaining estimates of the maps \(\mathrm{id}_{\mathrm{C}^{\infty}(\mathbb{T}^{d})}-\mathcal{F}_{\mathfrak{m}_{\Lambda}}\) and \(\mathrm{id}_{\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}}-\mathcal{S}_{\mathfrak{m}_{\Lambda}}\) in Lipschitz norm, one needs to estimate the following two maps:

\[\mathcal{F}_{\mathfrak{w}_{\Lambda}}:=\frac{i}{2}\sum_{\mu=1}^{d}\mathcal{F}_{\mathfrak{w}_{\Lambda}^{\mu}}\otimes\{\gamma^{\mu},\cdot\}:[D,\mathrm{C}^{\infty}(\mathbb{T}^{d})]\to\mathrm{C}^{\infty}(\mathbb{T}^{d});\]
\[\mathcal{S}_{\mathfrak{w}_{\Lambda}}:=\frac{i}{2}\sum_{\mu=1}^{d}\mathcal{S}_{\mathfrak{w}_{\Lambda}^{\mu}}\otimes\{\gamma^{\mu},\cdot\}:[D,\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}]\to\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)},\]

where \(\mathcal{F}_{\mathfrak{w}_{\Lambda}^{\mu}}\) and \(\mathcal{S}_{\mathfrak{w}_{\Lambda}^{\mu}}\) are now respectively Fourier and Schur multiplication with the symbol

\[\mathfrak{w}_{\Lambda}^{\mu}(n)=\begin{cases}0,\text{ if }n=0\\ (1-\mathfrak{m}_{\Lambda}(n))\frac{n_{\mu}}{\|n\|^{2}},\text{ if }n\neq 0.\end{cases} \tag{4}\]

A variation of the classical Bozejko-Fendler _transference theorem_ for Fourier and Schur multipliers (Lemma 3.5) then shows that the cb-norm of \(\mathcal{S}_{\mathfrak{w}_{\Lambda}}\) is bounded by the cb-norm of \(\mathcal{F}_{\mathfrak{w}_{\Lambda}}\). Since the latter map takes values in a commutative C*-algebra its cb-norm coincides with its norm, so this is what is left to estimate. As a means to treat the seemingly intractable quotient (3), we introduce the quantity

\[\theta_{\Lambda}(n):=\frac{\mathcal{V}_{\mathrm{L}}(\Lambda,n)}{\mathcal{V}_{\mathrm{B}}(\Lambda)},\]

where \(\mathcal{V}_{\mathrm{L}}(\Lambda,n)\) is the volume of the lense \(\overline{\mathrm{B}}_{\Lambda}\cap\overline{\mathrm{B}}_{\Lambda}(n)\) and \(\mathcal{V}_{\mathrm{B}}(\Lambda)\) is the volume of the ball \(\overline{\mathrm{B}}_{\Lambda}\).
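The symbol (3) is straightforward to evaluate numerically. The following Python sketch (ours, for illustration only) counts lattice points in the ball and in the lense; for large \(\Lambda\) the ratio approaches the volume ratio \(\theta_{\Lambda}(n)\) introduced above:

```python
import numpy as np
from itertools import product

def lattice_points(lam, d):
    """All points of Z^d in the closed ball of radius lam."""
    r = int(np.floor(lam))
    pts = np.array(list(product(range(-r, r + 1), repeat=d)))
    return pts[(pts ** 2).sum(axis=1) <= lam ** 2]

def symbol_m(lam, n):
    """The symbol m_Lambda(n) = N_L(Lambda, n) / N_B(Lambda) from (3)."""
    n = np.asarray(n)
    pts = lattice_points(lam, n.size)                     # points of the ball
    in_lense = ((pts - n) ** 2).sum(axis=1) <= lam ** 2   # also in shifted ball
    return in_lense.mean()

print(symbol_m(20.0, np.array([3, 0])))  # close to theta_Lambda((3, 0))
```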
It can be easily checked that the norm of the map \(\mathcal{F}_{\mathfrak{w}_{\Lambda}}\) is bounded as follows:

\[\|\mathcal{F}_{\mathfrak{w}_{\Lambda}}\|\leq\frac{1}{2}\left(\|\sum_{\mu=1}^{d}\mathcal{F}_{(1-\theta_{\Lambda})\frac{n_{\mu}}{\|n\|^{2}}}\otimes\{\gamma^{\mu},\cdot\}\|+\|\sum_{\mu=1}^{d}\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{\|n\|^{2}}}\otimes\{\gamma^{\mu},\cdot\}\|\right)\]

We show that the first of these two norms converges to \(0\) as \(\Lambda\to\infty\) by exploiting some classical Fourier theory on \(\mathbb{T}^{d}\) as well as some special function theory. For the second norm, we apply the estimate of a Fourier multiplier in terms of the \(\ell^{2}\)-norm of its symbol (Corollary 2.6). Some considerations about the lattice point counting problem (Subsection 2.4) allow us to show that this norm converges to \(0\) as \(\Lambda\to\infty\) in low dimensions, \(d=1,2,3\). Altogether, this shows that our candidate \((\rho,\sigma)\) is a C\({}^{1}\)-approximate order isomorphism and hence, by the criterion from [36], spectral truncations of \(\mathbb{T}^{d}\) converge for \(d=1,2,3\), which is our main theorem. We also explain why our methods must fail if \(d\geq 6\).

In the last section, we give a computation of the propagation number of the operator system \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\). We conclude by pointing out some obstacles on the road to determining its operator system dual. In particular, we argue that, for \(d\geq 2\), the operator system dual cannot be the one that we would have expected.

### Acknowledgements

We thank Nigel Higson for the comments at an early stage of this project. ML thanks Gerrit Vos for an introduction to Fourier and Schur multipliers and Dimitris Gerontogiannis for fruitful discussions. This work was funded by NWO under grant OCENW.KLEIN.376.

## 2. Preliminaries

### Actions and commutators

We spell out a few simple facts used throughout this article.
Recall the usual action of \(\mathrm{C}(\mathbb{T}^{d})\) on \(\mathcal{H}\):

\[f(g\otimes v)=(fg)\otimes v=\sum_{n\in\mathbb{Z}^{d}}\sum_{m\in\mathbb{Z}^{d}}\widehat{f}(n-m)\widehat{g}(m)e_{n}\otimes v\]

This induces an action of \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\) on \(P_{\Lambda}\mathcal{H}\):

\[T\left(\sum_{k\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}}a_{k}e_{k}\otimes v\right)=\sum_{k\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}}\sum_{l\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}}t_{k-l}a_{l}e_{k}\otimes v\]

Furthermore, we have the standard \(\mathbb{T}^{d}\)-action on \(\mathrm{C}(\mathbb{T}^{d})\) (as a subalgebra of \(\mathcal{B}(\mathcal{H})\)):

\[\alpha_{\theta}(f)=\sum_{n\in\mathbb{Z}^{d}}\widehat{f}(n)e_{n}e^{in\cdot\theta}\]

This induces an action of \(\mathbb{T}^{d}\) on \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\) (as an operator subsystem of \(\mathcal{B}(P_{\Lambda}\mathcal{H})\)):

\[\alpha_{\theta}(T)=\Big{(}t_{k-l}e^{i(k-l)\cdot\theta}\Big{)}_{k,l\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}} \tag{5}\]

Recall that the following holds:

\[[D,f]=\sum_{\mu=1}^{d}\sum_{n\in\mathbb{Z}^{d}}n_{\mu}\widehat{f}(n)e_{n}\otimes\gamma^{\mu}\]

Similarly, we have:

\[[D_{\Lambda},T]=\sum_{\mu=1}^{d}\left((k_{\mu}-l_{\mu})t_{k-l}\right)_{k,l\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}}\otimes\gamma^{\mu} \tag{6}\]

### Good kernels

We follow the convention in [34] and call an approximate identity in the Banach \(*\)-algebra \(\mathrm{L}^{1}(\mathbb{T}^{d})\) (with convolution) a _good kernel_:

**Definition 2.1**.: For all \(\Lambda>0\), let \(K_{\Lambda}\in\mathrm{L}^{1}(\mathbb{T}^{d})\). The family \(\{K_{\Lambda}\}_{\Lambda>0}\) is called a _good kernel_, if the following holds:

1. \(\int_{\mathbb{T}^{d}}K_{\Lambda}(x)dx=1\) and

2. for all \(\varepsilon>0\), we have that \(\int_{\mathbb{T}^{d}\setminus\mathrm{B}_{\varepsilon}(0)}|K_{\Lambda}(x)|dx\to 0\), as \(\Lambda\to\infty\).

The following lemma gives a criterion for when certain \(\mathrm{L}^{1}\)-functions on \(\mathbb{R}^{d}\) induce good kernels. The arguments are adapted from the discussion in [12, Section 3.4.1] on the summability of Bochner-Riesz means of functions on the \(d\)-torus.

**Lemma 2.2**.: _Let \(\theta_{0}\in\mathrm{C}_{0}(\mathbb{R}_{\geqslant 0})\) be a continuous function which vanishes at \(\infty\) and set \(\theta(x):=\theta_{0}(\|x\|)\), for every \(x\in\mathbb{R}^{d}\). Assume that \(\theta_{0}(0)=1\) and that both \(\theta,\widetilde{\theta}\in\mathrm{L}^{1}(\mathbb{R}^{d})\). For all \(\Lambda>0\), define_

\[K_{\Lambda}(x):=\sum_{n\in\mathbb{Z}^{d}}\theta\left(\frac{n}{\Lambda}\right)e^{in\cdot x},\]

_for all \(x\in\mathbb{R}^{d}\). Assume that there exist real numbers \(C,c>0\) such that the following estimate holds:_

\[|\theta(x)|+|\widetilde{\theta}(x)|\leqslant C(1+\|x\|)^{-d-c},\]

_for all \(x\in\mathbb{R}^{d}\)._
_Then \(K_{\Lambda}\) is a good kernel on \(\mathbb{T}^{d}\)._

Proof.: The conditions on \(\theta\) allow us to apply the Poisson summation formula:

\[K_{\Lambda}(x) =\sum_{n\in\mathbb{Z}^{d}}\theta\left(\frac{n}{\Lambda}\right)e^{in\cdot x} =\sum_{n\in\mathbb{Z}^{d}}(D_{\Lambda}\theta)^{\sim}(x+n) =\sum_{n\in\mathbb{Z}^{d}}\Lambda^{d}D_{\frac{1}{\Lambda}}\widetilde{\theta}(x+n) =\sum_{n\in\mathbb{Z}^{d}}\Lambda^{d}\widetilde{\theta}(\Lambda(x+n)),\]

for all \(x\in\mathbb{R}^{d}\), where \(D_{\Lambda}\) is the dilation operator on \(\mathrm{L}^{1}(\mathbb{R}^{d})\) given by \(D_{\Lambda}\theta(x)=\theta\left(\frac{x}{\Lambda}\right)\) and where we view \(K_{\Lambda}\) as a periodic function on \(\mathbb{R}^{d}\). This shows that the function \(K_{\Lambda}\) on \(\mathbb{T}^{d}\) is bounded in \(\mathrm{L}^{1}\)-norm, uniformly in \(\Lambda\):

\[\int_{\mathbb{T}^{d}}|K_{\Lambda}(x)|dx=\int_{\mathbb{R}^{d}}\left|\Lambda^{d}\widetilde{\theta}(\Lambda x)\right|dx=\int_{\mathbb{R}^{d}}\left|\widetilde{\theta}(x)\right|dx\]

Furthermore, by periodicity of each summand of \(K_{\Lambda}\), it is clear that \(K_{\Lambda}\) integrates to \(1\) over \(\mathbb{T}^{d}\):

\[\int_{\mathbb{T}^{d}}K_{\Lambda}(x)dx=\widehat{K_{\Lambda}}(0)=\theta(0)=1.\]

Now fix \(\delta>0\). Then we have:

\[\int_{\begin{subarray}{c}\|x\|\geqslant\delta\\ x\in\mathbb{T}^{d}\end{subarray}}|K_{\Lambda}(x)|dx =\int_{\begin{subarray}{c}\|x\|\geqslant\delta\\ x\in\mathbb{T}^{d}\end{subarray}}\left|\sum_{n\in\mathbb{Z}^{d}}\Lambda^{d}\widetilde{\theta}(\Lambda(x+n))\right|dx\]
\[\leqslant\Lambda^{d}\int_{\begin{subarray}{c}\|x\|\geqslant\delta\\ x\in\mathbb{T}^{d}\end{subarray}}\sum_{n\in\mathbb{Z}^{d}}C\left(1+\left\|\Lambda(x+n)\right\|\right)^{-d-c}dx\]
\[\leqslant\Lambda^{d}C^{\prime}_{\delta}\int_{\begin{subarray}{c}\|x\|\geqslant\delta\\ x\in\mathbb{T}^{d}\end{subarray}}\sum_{n\in\mathbb{Z}^{d}}\left\|\Lambda(x+n)\right\|^{-d-c}dx\]
\[=\Lambda^{d}C^{\prime}_{\delta}\int_{x\in\mathbb{R}^{d}\setminus B_{\delta}(2\pi\mathbb{Z}^{d})}\left\|\Lambda x\right\|^{-d-c}dx\]
\[\leqslant\Lambda^{-c}C^{\prime}_{\delta}\int_{x\in\mathbb{R}^{d}\setminus B_{\delta}(0)}\left\|x\right\|^{-d-c}dx,\]

which clearly converges to \(0\) as \(\Lambda\to\infty\). Here \(C^{\prime}_{\delta}>0\) is a constant depending on the choice of \(\delta\) and \(B_{\delta}(2\pi\mathbb{Z}^{d})=\bigcup_{n\in\mathbb{Z}^{d}}B_{\delta}(2\pi n)\) is a \(\delta\)-neighborhood of \(2\pi\mathbb{Z}^{d}\subset\mathbb{R}^{d}\). Altogether, we see that \(K_{\Lambda}\) is a good kernel.

Good kernels provide a way to approximate the identity on \(\mathrm{C}^{\infty}(\mathbb{T}^{d})\) not only in \(\mathrm{L}^{1}\)- and sup-norm, but also in Lipschitz norm:

**Lemma 2.3**.: _If \(K_{\Lambda}\) is a good kernel, then, for all \(f\in\mathrm{C}^{\infty}(\mathbb{T}^{d})\), the following holds:_

\[\|f-K_{\Lambda}*f\|\leqslant\gamma_{\Lambda}\|[D,f]\|,\]

_where \(\gamma_{\Lambda}\to 0\) as \(\Lambda\to\infty\)._

Proof.: The proof is as in [2, Lemma 5.13]. For all \(x\in\mathbb{T}^{d}\), we have:

\[|K_{\Lambda}*f(x)-f(x)| \leqslant\int_{\mathbb{T}^{d}}|K_{\Lambda}(y)(f(x-y)-f(x))|dy \leqslant\int_{\mathbb{T}^{d}}|K_{\Lambda}(y)|\|y\|\|f\|_{\mathrm{Lip}}dy =\int_{\mathbb{T}^{d}}|K_{\Lambda}(y)|\|y\|dy\,\|[D,f]\|\]

We set \(\gamma_{\Lambda}:=\int_{\mathbb{T}^{d}}|K_{\Lambda}(y)|\|y\|dy\). Let \(\varepsilon>0\). Let \(\Lambda_{0}\) be large enough such that, for all \(\Lambda\geq\Lambda_{0}\), \(\int_{\|y\|\geq\varepsilon}|K_{\Lambda}(y)|dy<\varepsilon\).
Then, for \(\Lambda\geq\Lambda_{0}\), we obtain:

\[\gamma_{\Lambda} =\int_{\|y\|\geq\varepsilon}|K_{\Lambda}(y)|\|y\|dy+\int_{\|y\|<\varepsilon}|K_{\Lambda}(y)|\|y\|dy\]
\[\leq\pi\sqrt{d}\left(\int_{\|y\|\geq\varepsilon}|K_{\Lambda}(y)|dy+\varepsilon\int_{\|y\|<\varepsilon}|K_{\Lambda}(y)|dy\right)\]
\[\leq\pi\sqrt{d}\left(\varepsilon+\varepsilon C\right),\]

where \(C=\sup_{\Lambda>0}\int_{\mathbb{T}^{d}}|K_{\Lambda}(y)|dy<\infty\).

### Fourier and Schur multipliers

Let \(\Gamma\) be a discrete group and let \(\lambda:\Gamma\to\mathcal{B}(\ell^{2}(\Gamma))\) be its left-regular representation given by \(\lambda_{g}f(h)=f(g^{-1}h)\). We denote by \(\mathrm{C}^{*}_{\lambda}(\Gamma)\) the reduced group \(\mathrm{C}^{*}\)-algebra, i.e. the completion of the group ring \(\mathbb{C}[\Gamma]\) in \(\mathcal{B}(\ell^{2}(\Gamma))\) with respect to the norm \(\|x\|_{\mathrm{red}}:=\|\lambda(x)\|_{\mathcal{B}(\ell^{2}(\Gamma))}\). A function \(\varphi:\Gamma\to\mathbb{C}\) gives rise to a _multiplier_ on the group ring as follows:

\[\mathbb{C}[\Gamma] \to\mathbb{C}[\Gamma]\]
\[\sum_{g\in\Gamma}a_{g}g \mapsto\sum_{g\in\Gamma}\varphi(g)a_{g}g\]

If this map extends to a bounded linear map \(\mathcal{M}_{\varphi}:\mathrm{C}^{*}_{\lambda}(\Gamma)\to\mathrm{C}^{*}_{\lambda}(\Gamma)\) we call this extension the _multiplier on \(\mathrm{C}^{*}_{\lambda}(\Gamma)\) with symbol \(\varphi\)_. We record the triviality that if \(\varphi\) is finitely supported it always induces a multiplier on \(\mathrm{C}^{*}_{\lambda}(\Gamma)\). Recall that if \(\Gamma\) is abelian, then \(\mathrm{C}^{*}_{\lambda}(\Gamma)=\mathrm{C}^{*}(\Gamma)=\mathrm{C}(\widehat{\Gamma})\), where \(\widehat{\Gamma}\) is the Pontryagin dual of \(\Gamma\). In this case, we call the multiplier on \(\mathrm{C}(\widehat{\Gamma})\) the _Fourier multiplier with symbol \(\varphi\)_ and denote it by \(\mathcal{F}_{\varphi}\). The Fourier multiplier takes on the following form:

\[\mathcal{F}_{\varphi}(f)=\left(g\mapsto\varphi(g)\widehat{f}(g)\right)^{\sim}\]

Let \(k:\Gamma\times\Gamma\to\mathbb{C}\) be a function, also called a _kernel_. A kernel \(k\) gives rise to a linear map with domain \(\mathcal{B}(\ell^{2}(\Gamma))\) given by \(\mathcal{S}_{k}:(t_{g,h})_{g,h\in\Gamma}\mapsto(k(g,h)t_{g,h})_{g,h\in\Gamma}\), where \(t_{g,h}=\langle\delta_{g},T\delta_{h}\rangle\), for \(T\in\mathcal{B}(\ell^{2}(\Gamma))\). If this map is bounded with range in \(\mathcal{B}(\ell^{2}(\Gamma))\), we call it a _Schur multiplier_. See e.g. [37] for a survey and [26, Chapter 5] as a standard reference which includes a discussion of the connection with Grothendieck's theorem. We collect some well-known facts about Schur multipliers.

**Proposition 2.4**.: _Let \(k:\Gamma\times\Gamma\to\mathbb{C}\) be a kernel. Then the following are equivalent:_

1. \(\mathcal{S}_{k}\) _is a Schur multiplier of norm_ \(\|\mathcal{S}_{k}\|\leq 1\)_._

2. \(\mathcal{S}_{k}\) _is a completely bounded Schur multiplier of_ \(\mathrm{cb}\)_-norm_ \(\|\mathcal{S}_{k}\|_{\mathrm{cb}}\leq 1\)_._

3. _There exists a Hilbert space_ \(\mathcal{H}\) _and families of vectors_ \(\{\xi_{g}\}_{g\in\Gamma},\{\eta_{h}\}_{h\in\Gamma}\subset\mathcal{H}\) _with_ \(\|\xi_{g}\|,\|\eta_{h}\|\leq 1\) _such that_ \(k(g,h)=\langle\xi_{g},\eta_{h}\rangle\)_, for all_ \(g,h\in\Gamma\)_._

For an elementary proof of the equivalence of (i) and (ii), we refer to [25, Theorem 8.7 and Corollary 8.8]. A proof of the equivalence of (ii) and (iii) can be found e.g.
in [5, Theorem D.4], which we now sketch: Assuming that \(\|\mathcal{S}_{k}\|_{\mathrm{cb}}\leq 1\), Wittstock's factorization theorem gives a factorization of \(\mathcal{S}_{k}\) through \(\mathcal{B}(\mathcal{H})\), for some Hilbert space \(\mathcal{H}\), which allows to construct appropriate \(\xi_{g}\) and \(\eta_{h}\). For the converse implication, the map \(\mathcal{S}_{k}\) is factorized through \(\mathcal{B}(\ell^{2}(\Gamma)\otimes\mathcal{H})\) as \(\mathcal{S}_{k}(T)=V^{*}(T\otimes\mathbf{1}_{\mathcal{H}})W\) for the contractions \(V\delta_{g}:=\delta_{g}\otimes\xi_{g}\) and \(W\delta_{h}:=\delta_{h}\otimes\eta_{h}\). The same works when tensoring with \(\mathbf{1}_{M_{n}}\), for arbitrary \(n\in\mathbb{N}\), which shows complete contractivity of \(\mathcal{S}_{k}\). We are mainly interested in Schur multipliers \(\mathcal{S}_{k}\) induced by a function \(\varphi:\Gamma\to\mathbb{C}\), i.e. \(k(g,h):=\varphi(gh^{-1})\). We call such a Schur multiplier a _Schur multiplier with symbol_\(\varphi\) and slightly abuse notation to denote it by \(\mathcal{S}_{\varphi}\). It is easy to see that \(\mathcal{S}_{\varphi}\big{|}_{C^{\mathsf{s}}_{\lambda}(\Gamma)}=\mathcal{M}_{\varphi}\). Indeed, let \(f\in\mathrm{C}^{\mathsf{s}}_{\lambda}(\Gamma)\) and \(\{\delta_{g}\}_{g\in\Gamma}\) be an orthonormal basis for \(\ell^{2}(\Gamma)\). Then the matrix associated to \(f\) (viewed as an element of \(\mathcal{B}(\ell^{2}(\Gamma))\)) is a Toeplitz matrix in the following sense: \[\langle\delta_{g},f\delta_{h}\rangle=\langle\delta_{g},\sum_{\gamma\in \Gamma}f_{\gamma}\lambda_{\gamma}(\delta_{h})\rangle=\sum_{\gamma\in\Gamma} \langle\delta_{g},f_{\gamma}\delta_{\gamma h}\rangle=f_{gh^{-1}}\] It follows that the matrix associated to \(\mathcal{M}_{\varphi}(f)\) is the following Toeplitz matrix: \[\langle\delta_{g},\mathcal{M}_{\varphi}(f)\delta_{h}\rangle=\varphi(gh^{-1}) f_{gh^{-1}},\] which shows that \(\mathcal{S}_{\varphi}((f_{gh^{-1}})_{g,h\in\Gamma})=((\mathcal{M}_{\varphi}(f))_ {g,h})_{g,h\in\Gamma}\). The following "transference theorem" goes back to [4]: **Proposition 2.5**.: _Let \(\varphi:\Gamma\to\mathbb{C}\) be a function. The multiplier on \(\mathrm{C}^{\mathsf{s}}_{\lambda}(\Gamma)\) with symbol \(\varphi\) is completely bounded if and only if the Schur multiplier on \(\mathcal{B}(\ell^{2}(\Gamma))\) with symbol \(\varphi\) is completely bounded, and we have:_ \[\|\mathcal{M}_{\varphi}\|_{\mathrm{cb}}=\|\mathcal{S}_{\varphi}\|_{\mathrm{cb}}\] For a proof, see [26, Theorem 6.4]. We will apply a similar argument as the one given there in our proof of Lemma 3.5. Let us point out that Proposition 2.4 shows that it is necessary for the function \(\varphi:\Gamma\to\mathbb{C}\) to be bounded in order to induce a Schur multiplier, since \(\sup_{g,h\in\Gamma}|\varphi(gh^{-1})|=\sup_{g,h\in\Gamma}|\langle\xi_{g},\eta_ {h}\rangle|\). By Proposition 2.5, this is also a necessary condition for \(\varphi\) to induce a multiplier on \(\mathrm{C}^{\mathsf{s}}_{\lambda}(\Gamma)\). **Corollary 2.6**.: _Let \(\varphi\in\ell^{2}(\Gamma)\) and assume that \(\mathcal{M}_{\varphi}\) is a completely bounded multiplier on \(\mathrm{C}^{\mathsf{s}}_{\lambda}(\Gamma)\). Then the following holds:_ \[\|\mathcal{M}_{\varphi}\|\leqslant\|\mathcal{M}_{\varphi}\|_{\mathrm{cb}}=\| \mathcal{S}_{\varphi}\|_{\mathrm{cb}}\leqslant\|\varphi\|_{\ell^{2}(\Gamma)}\] Proof.: The first inequality is trivial and the second equality is precisely Proposition 2.5. For the last inequality, assume w.l.o.g. 
that \(\|\mathcal{S}_{\varphi}\|_{\mathrm{cb}}\leqslant 1\). Then, by Proposition 2.4, there are elements \(\xi_{g},\eta_{h}\in\overline{\mathrm{B}}_{1}^{\mathcal{H}}\) in the closed unit ball of some Hilbert space such that \(\varphi(gh^{-1})=\langle\xi_{g},\eta_{h}\rangle\) which implies that \(\|\varphi\|_{\ell^{\infty}(\Gamma)}=\sup_{g,h\in\Gamma}|\varphi(gh^{-1})|\leqslant 1\). Since \(\|\varphi\|_{\ell^{\infty}(\Gamma)}\leqslant\|\varphi\|_{\ell^{2}(\Gamma)}\), we obtain that \(\|\mathcal{S}_{\varphi}\|_{\mathrm{cb}}\leqslant\|\varphi\|_{\ell^{2}(\Gamma)}\). ### The lattice point counting problem Let \(X\subset\mathbb{R}^{d}\) be a compact convex \(d\)-dimensional body. We denote the number of \(\mathbb{Z}^{d}\)-lattice points in \(\Lambda X\) by \(\mathcal{N}_{X}(\Lambda):=\#\Lambda X\cap\mathbb{Z}^{d}\) and the \(d\)-dimensional volume of \(\Lambda X\) by \(\mathcal{V}_{X}(\Lambda)\). The _lattice point counting problem_ (which originated with the Gauss circle problem) is to find the asymptotics of the error term \(|\mathcal{N}_{X}(\Lambda)-V_{X}(\Lambda)|\) when approximating the number of lattice points in \(\Lambda X\) by its volume, or equivalently of the so-called _discrepancy_ \[\Delta_{X}(\Lambda):=\,\frac{|\mathcal{N}_{X}(\Lambda)-V_{X}(\Lambda)|}{V_{X}( \Lambda)}, \tag{7}\] as \(\Lambda\to\infty\). See e.g. [15] for a survey on the lattice point counting problem and the monograph [18]. To abbreviate notation for the asymptotic behavior of the discrepancy of the lattice point counting problem, recall that, for \(f,g\) real functions on \(\mathbb{R}^{d}\), we write \(f=\mathcal{O}(g)\) if \(\limsup_{|x|\to\infty}\left|\frac{f(x)}{g(x)}\right|<\infty\), and \(f=\Omega(g)\) if \(\limsup_{|x|\to\infty}\left|\frac{f(x)}{g(x)}\right|>0\). We now assume that \(X\) be strictly convex and that \(\operatorname{\overline{B}}_{\frac{\sqrt{d}}{2}}(0)\subset\overset{\circ}{X}\). Observe that the distance of any two distinct \(\mathbb{Z}^{d}\)-lattice points is \(\sqrt{d}\), so any closed \(\frac{\sqrt{d}}{2}\)-neighborhood of any point in \(\mathbb{R}^{d}\) contains at least one \(\mathbb{Z}^{d}\)-lattice point. With this one realizes that difference \(|\mathcal{N}_{X}(\Lambda)-V_{X}(\Lambda)|\) is bounded from above by the volume of a \(\frac{\sqrt{d}}{2}\)-neighborhood of the boundary of \(X\): \[|\mathcal{N}_{X}(\Lambda)-\mathcal{V}_{X}(\Lambda)|\leq\sqrt{d}\mathcal{A}(X) \Lambda^{d-1}, \tag{8}\] where \(\mathcal{A}(X)\) denotes the \((d-1)\)-dimensional surface area of \(X\). In the planar case (\(d=2\)), this observation is a special instance of a theorem by Jarnik and Steinhaus [35] for rectifiable Jordan curves and in the case \(d\geq 3\), this result is due to Wilson [39], see also [15, Section 3]. With this, one obtains the following asymptotics for the discrepancy: **Lemma 2.7**.: _The discrepancy of the lattice point counting problem for \(X\) tends to \(0\) with rate of convergence at most \(\Lambda^{-1}\) as \(\Lambda\to 0\), i.e.:_ \[\Delta_{X}(\Lambda)=\mathcal{O}(\Lambda^{-1})\] Proof.: Since \(\mathcal{V}_{X}(\Lambda)=\mathcal{V}_{X}(1)\Lambda^{d}\), we obtain from (8): \[\Delta_{X}(\Lambda)\leq\frac{\sqrt{d}\mathcal{A}(X)}{\mathcal{V}_{X}(1)} \Lambda^{-1}\] In the case where \(X=\operatorname{\overline{B}}_{1}\) is the closed unit ball, we set \(\Delta_{\mathrm{B}}(\Lambda):=\Delta_{\operatorname{\overline{B}}_{1}}(\Lambda)\). 
We write \(\operatorname{\overline{B}}_{\Lambda}^{\mathbb{Z}}:=\operatorname{\overline{ B}}_{\Lambda}\cap\mathbb{Z}^{d}\) for the set of \(\mathbb{Z}^{d}\)-lattice points in the closed ball of radius \(\Lambda\). We denote by \(\operatorname{\overline{L}}_{\Lambda}(n):=\operatorname{\overline{B}}_{ \Lambda}\cap\operatorname{\overline{B}}_{\Lambda}(n)\) the closed _lense_ in the ball of radius \(\Lambda\) with translation parameter \(n\) and by \(\operatorname{\overline{L}}_{\Lambda}^{\mathbb{Z}}(n):=\operatorname{ \overline{L}}_{\Lambda}(n)\cap\mathbb{Z}^{d}\) the set of \(\mathbb{Z}^{d}\)-lattice points contained in it. Furthermore, we write \(\mathcal{N}_{\mathrm{L}}(\Lambda,n):=\#\operatorname{\overline{L}}_{\Lambda} ^{\mathbb{Z}}(n)\) and \(\mathcal{V}_{\mathrm{L}}(\Lambda,n)\) for the volume of the lense \(\operatorname{\overline{L}}_{\Lambda}(n)\). We set \(\Delta_{\mathrm{L}}(\Lambda,n):=\frac{|\mathcal{N}_{\mathrm{L}}(\Lambda,n)- \mathcal{V}_{\mathrm{L}}(\Lambda,n)|}{\mathcal{V}_{\mathrm{L}}(\Lambda,n)}\). Note that this notation is slightly ambiguous since \(\Delta_{\mathrm{L}}(\Lambda,n)\neq\Delta_{\operatorname{\overline{L}}_{ \Lambda}(n)}(1)\) in general. This is due to the fact that \(\operatorname{\overline{L}}_{\Lambda}(n)\) is not equal to \(\Lambda\operatorname{\overline{L}}_{1}(n)=\operatorname{\overline{L}}_{ \Lambda}(\Lambda n)\). However, the asymptotics of \(\Delta_{\mathrm{L}}(\Lambda,n)\) can still be estimated: **Lemma 2.8**.: _The following asymptotics hold:_ \[\Delta_{\mathrm{L}}(\Lambda,n)=\mathcal{O}(\Lambda^{-1})\] Proof.: Similarly as in (8), we have the following estimate, for large enough \(\Lambda\): \[|\mathcal{N}_{\mathrm{L}}(\Lambda,n)-\mathcal{V}_{\mathrm{L}}(\Lambda,n)|\leq \sqrt{d}\mathcal{A}(\operatorname{\overline{L}}_{\Lambda}(n))\leq\sqrt{d} \mathcal{A}(\operatorname{\overline{B}}_{1})\Lambda^{d-1}\] Now, observe that we have the inclusions \(\operatorname{\overline{B}}_{\Lambda-\frac{\lfloor n\rfloor}{2}}(\frac{n}{2} )\subset\operatorname{\overline{L}}_{\Lambda}(n)\subset\operatorname{ \overline{B}}_{\Lambda}\). Since we have \(\mathcal{V}_{\mathrm{B}}(\Lambda-\frac{\lfloor n\rfloor}{2})=\mathcal{O}(( \Lambda-\frac{\lfloor n\rfloor}{2})^{d})=\mathcal{O}(\Lambda^{d})=\mathcal{V}_ {\mathrm{B}}(\Lambda)\), it follows that \(\mathcal{V}_{\mathrm{L}}(\Lambda,n)=\mathcal{O}(\Lambda^{d})\). Hence we obtain the claim. We end this section with a lower bound, which will be useful later on. For a proof we refer to [19, Satz 5.8 and Satz 5.9]. **Lemma 2.9**.: _The following asymptotic bounds for the discrepancy of the lattice point counting problem for the ball hold, if \(d\geq 5\):_ \[\Delta_{\mathrm{B}}(\Lambda)=\mathcal{O}(\Lambda^{-2})\,\text{ and }\,\Delta_{ \mathrm{B}}(\Lambda)=\Omega(\Lambda^{-2}).\] ## 3. Convergence of Spectral Truncations of the \(d\)-Torus The goal of this section is to prove that the maps \(\rho_{\Lambda}:\mathrm{C}^{\infty}(\mathbb{T}^{d})\to\mathrm{C}(\mathbb{T}^{d })^{(\Lambda)}\), given by the compression \(\rho_{\Lambda}(f):=P_{\Lambda}fP_{\Lambda}\), and \(\sigma_{\Lambda}:\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\to\mathrm{C}^{\infty} (\mathbb{T}^{d})\), given by \(\sigma_{\Lambda}(T):=\frac{1}{\mathcal{N}_{\mathrm{B}}(\Lambda)}\mathrm{Tr}( \left|\psi\right\rangle\left\langle\psi\right|\alpha(T))\), form a \(\mathrm{C}^{1}\)-approximate order isomorphism in the sense of Definition 1.2 (at least in low dimensions \(d\)). 
### A candidate for the \(\mathrm{C}^{1}\)-approximate order isomorphism We begin by checking unitality, positivity and \(\mathrm{C}^{1}\)-contractivity for the maps \(\rho_{\Lambda}\) and \(\sigma_{\Lambda}\). **Lemma 3.1**.: _The map \(\rho_{\Lambda}:\mathrm{C}^{\infty}(\mathbb{T}^{d})\to\mathrm{C}(\mathbb{T}^{d })^{(\Lambda)}\) is unital, positive and \(\mathrm{C}^{1}\)-contractive._ Proof.: Unitality is clear as \(P_{\Lambda}\mathbf{1}P_{\Lambda}\left(\sum_{n\in\overline{B}_{\Lambda}^{2}}a _{n}e_{n}\otimes s_{n}\right)=\sum_{n\in\overline{B}_{\Lambda}^{2}}a_{n}e_{n} \otimes s_{n}\), for all \(\sum_{n\in\overline{B}_{\Lambda}^{2}}a_{n}e_{n}\otimes s_{n}\in P_{\Lambda} \mathcal{H}\). Also positivity is obvious since \(\left\langle PaP\varphi,P\varphi\right\rangle_{P\mathcal{K}}=\left\langle aP \varphi,P\varphi\right\rangle_{\mathcal{K}}\geq 0\), for any Hilbert space \(\mathcal{K}\), any projection \(P\in\mathcal{B}(\mathcal{K})\) and any positive operator \(a\in\mathcal{B}(\mathcal{K})_{+}\). Contractivity in norm follows from Plancherel's theorem: \[\|P_{\Lambda}fP_{\Lambda}\|^{2}=\sup_{\begin{subarray}{c}\varphi\in P_{ \Lambda}\mathcal{H}\\ \left|\varphi\right|\leq 1\end{subarray}}\|P_{\Lambda}fP_{\Lambda}\varphi\|^{2} \leqslant\sum_{n\in\overline{B}_{\Lambda}^{2}}\left|\widehat{f}(n)\right|^{2 }\leqslant\sum_{n\in\mathbb{Z}^{d}}\left|\widehat{f}(n)\right|^{2}=\|f\|^{2}\] For contractivity in Lipschitz-norm, we first observe that \(\rho_{\Lambda}\) commutes with \([D,\cdot]\) in the following sense, which is an immediate consequence of the fact that \(D\) commutes with \(P_{\Lambda}\): \[[D_{\Lambda},\rho_{\Lambda}(f)]=P_{\Lambda}[D,f]P_{\Lambda}\] This immediately gives \(\|[D,\cdot]\|\)-contractivity: \[\|[D_{\Lambda},\rho_{\Lambda}(f)]\|=\|P_{\Lambda}[D,f]P_{\Lambda}\|\leqslant\|[ D,f]\|\] **Lemma 3.2**.: _The map \(\sigma_{\Lambda}:\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\to\mathrm{C}^{\infty} (\mathbb{T}^{d})\) is unital, positive and \(\mathrm{C}^{1}\)-contractive._ Proof.: Unitality is clear as, for all \(x\in\mathbb{T}^{d}\), we have: \[\sigma_{\Lambda}(\mathbf{1})(x)=\frac{1}{\mathcal{N}_{B}(\Lambda)}\mathrm{Tr }\left(\left|\psi\right\rangle\left\langle\psi\right|\alpha_{x}(\mathbf{1}) \right)=\frac{1}{\mathcal{N}_{B}(\Lambda)}\mathrm{Tr}\left(\left|\psi\right \rangle\left\langle\psi\right|\mathbf{1}\right)=1\] Positivity is also immediate from the definition. Namely, let \(T\in\mathrm{C}(\mathbb{T}^{d})_{+}^{(\Lambda)}\) be a positive operator on \(P_{\Lambda}\mathcal{H}\) and let \(P_{\Lambda}\mathcal{H}\ni\zeta\mapsto Q_{T}(\zeta):=\left\langle\zeta,T\zeta\right\rangle\) be its associated quadratic form. For \(\zeta=\sum_{n\in\overline{B}_{\Lambda}^{2}}e_{n}\), we obtain: \[0\leqslant\frac{1}{\mathcal{N}_{B}(\Lambda)}Q_{T}(\zeta)(x)=\frac{1}{ \mathcal{N}_{B}(\Lambda)}\mathrm{Tr}\left(\left|\psi\right\rangle\left\langle \psi\right|\left(e_{-(m-n)}t_{m-n}\right)_{m,n}\right)=\sigma_{\Lambda}(T)(x),\] for all \(x\in\mathbb{T}^{d}\). 
For contractivity, we compute: \[|\sigma_{\Lambda}(T)(x)|\leqslant\frac{1}{\mathcal{N}_{B}(\Lambda)}\operatorname{ Tr}(|\psi\rangle\langle\psi|)\|\alpha_{x}(T)\|=\|T\|\] For contractivity in Lipschitz-norm, we first observe that \(\sigma_{\Lambda}\) commutes with \([D,\cdot]\) in the following sense, which is an easy consequence of (6): \[[D,\sigma_{\Lambda}(T)] =-i\sum_{\mu=1}^{d}\frac{1}{\mathcal{N}_{\mathrm{B}}(\Lambda)} \sum_{n\in\overline{\mathrm{B}}_{\Lambda}^{x}}in_{\mu}t_{n}e_{n}\otimes \gamma^{\mu}\] \[=\sum_{\mu=1}^{d}\frac{1}{\mathcal{N}_{\mathrm{B}}(\Lambda)} \operatorname{Tr}\left(|\psi\rangle\langle\psi|\,\alpha\left(\left((n_{\mu}-m_ {\mu})t_{n-m}\right)_{n,m\in\overline{\mathrm{B}}_{\Lambda}^{x}}\right) \right)\otimes\gamma^{\mu}\] \[=\sigma_{\Lambda}\otimes\mathbf{1}([D_{\Lambda},T])\] This immediately gives \(\|[D,\cdot]\|\)-contractivity: \[\|[D,\sigma_{\Lambda}(T)]\|=\|\sigma_{\Lambda}\otimes\mathbf{1}\left([D_{ \Lambda},T]\right)\|\leqslant\|[D_{\Lambda},T]\|\] We now compute the compositions \(\sigma_{\Lambda}\circ\rho_{\Lambda}\) and \(\rho_{\Lambda}\circ\sigma_{\Lambda}\): **Lemma 3.3**.: _The two compositions \(\sigma_{\Lambda}\circ\rho_{\Lambda}:\mathrm{C}^{\infty}(\mathbb{T}^{d}) \to\mathrm{C}^{\infty}(\mathbb{T}^{d})\) and \(\rho_{\Lambda}\circ\sigma_{\Lambda}:\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)} \to\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\) are given respectively by the Fourier multiplier with symbol \(\mathfrak{m}_{\Lambda}\) and by Schur multiplication with the symbol \(\mathfrak{m}_{\Lambda}\):_ \[\sigma_{\Lambda}\circ\rho_{\Lambda} =\mathcal{F}_{\mathfrak{m}_{\Lambda}}\] \[\rho_{\Lambda}\circ\sigma_{\Lambda} =\mathcal{S}_{\mathfrak{m}_{\Lambda}}\] Proof.: Both identities are just simple computations: \[\sigma_{\Lambda}\circ\rho_{\Lambda}(f)(x) =\frac{1}{\mathcal{N}_{B}(\Lambda)}\operatorname{Tr}\left(|\psi \rangle\langle\psi|\,\alpha_{x}(P_{\Lambda}fP_{\Lambda})\right)\] \[=\frac{1}{\mathcal{N}_{B}(\Lambda)}\operatorname{Tr}\left(|\psi \rangle\langle\psi|\,\alpha_{x}\left(\left(\widehat{f}(k-l)\right)_{k,l\in \overline{\mathrm{B}}_{\Lambda}^{x}}\right)\right)\] \[=\frac{1}{\mathcal{N}_{B}(\Lambda)}\sum_{k,l\in\overline{ \mathrm{B}}_{\Lambda}^{x}}\widehat{f}(k-l)e^{(k-l)\cdot x}\] \[=\sum_{n\in\overline{\mathrm{B}}_{\Lambda}^{x}-\overline{ \mathrm{B}}_{\Lambda}^{x}}\frac{\mathcal{N}_{L}(\Lambda,n)}{\mathcal{N}_{B}( \Lambda)}\widehat{f}(n)e_{n}(x)\] \[\rho_{\Lambda}\circ\sigma_{\Lambda}(T) =\rho_{\Lambda}\left(\frac{1}{\mathcal{N}_{B}(\Lambda)} \operatorname{Tr}\left(|\psi\rangle\langle\psi|\,\alpha_{\bullet}(T)\right)\right)\] \[=\rho_{\Lambda}\left(\frac{1}{\mathcal{N}_{B}(\Lambda)}\sum_{k,l \in\overline{\mathrm{B}}_{\Lambda}^{x}}t_{k-l}e_{k-l}\right)\] \[=\rho_{\Lambda}\left(\sum_{n\in\overline{\mathrm{B}}_{\Lambda}^ {x}-\overline{\mathrm{B}}_{\Lambda}^{x}}\frac{\mathcal{N}_{L}(\Lambda,n)}{ \mathcal{N}_{B}(\Lambda)}t_{n}e_{n}\right)\] \[=\sum_{\mu=1}^{d}\left((1-\mathfrak{m}_{\Lambda}(k-l))\frac{k_{\mu}-l_{ \mu}}{\|k-l\|^{2}}\cdot(k_{\mu}-l_{\mu})t_{k-l}\right)_{k,l\in\overline{\mathbb{ B}}_{\Lambda}^{2}}\] \[=i\sum_{\mu,\nu=1}^{d}\mathcal{S}_{\mathfrak{w}_{\Lambda}^{\mu}} \left(((k_{\nu}-l_{\nu})t_{k-l})_{k,l\in\overline{\mathbb{B}}_{\Lambda}^{2}} \right)\cdot\delta^{\mu\nu}.\] The result now follows by combining the defining relations for the gamma-matrices as before with the expression (6) for the operator \([D_{\Lambda},T]\in\mathcal{B}(P_{\Lambda}\mathcal{H})\). 
Since the Schur and Fourier multipliers that appear in Lemma 3.4 involve gamma matrices and sums over \(\mu\), the transference theorem Proposition 2.5 does not apply directly. However, we can prove a variation on it which relates the cb-norms of the two linear maps that appear in Lemma 3.4: \[\mathcal{F}_{\mathfrak{w}_{\mathrm{A}}} :=\frac{i}{2}\sum_{\mu=1}^{d}\mathcal{F}_{\mathfrak{w}_{\mathrm{A} }^{\mu}}\otimes\{\gamma^{\mu},\cdot\}:[D,\mathrm{C}^{\infty}(\mathbb{T}^{d})] \rightarrow\mathrm{C}^{\infty}(\mathbb{T}^{d}); \tag{10}\] \[\mathcal{S}_{\mathfrak{w}_{\mathrm{A}}} :=\frac{i}{2}\sum_{\mu=1}^{d}\mathcal{S}_{\mathfrak{w}_{\mathrm{ A}}^{\mu}}\otimes\{\gamma^{\mu},\cdot\}:[D,\mathrm{C}(\mathbb{T}^{d})^{( \Lambda)}]\rightarrow\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}, \tag{9}\] Here we consider \([D,C^{\infty}(\mathbb{T}^{d})]\) and \([D,C(\mathbb{T}^{d})^{(\Lambda)}]\) as (dense subsets of) operator systems in \(\mathcal{B}(\mathrm{L}^{2}(\mathbb{T}^{d})\otimes V)\). **Lemma 3.5**.: _For the above two linear maps we have the following norm inequalities:_ \[\|\mathcal{S}_{\mathfrak{w}_{\mathrm{A}}}\|_{\mathrm{cb}}\leq\|\mathcal{F}_{ \mathfrak{w}_{\mathrm{A}}}\|_{\mathrm{cb}}=\|\mathcal{F}_{\mathfrak{w}_{ \mathrm{A}}}\|.\] Proof.: We vary on the proof of [26, Theorem 6.4]. First identify \(\mathrm{L}^{2}(\mathbb{T}^{d})\otimes V\cong\ell^{2}(\mathbb{Z}^{d})\otimes V\) using the Fourier basis \(\{e_{n}\}_{n\in\mathbb{Z}^{d}}\), and write \(\mathcal{H}:=\ell^{2}(\mathbb{Z}^{d})\otimes V\). Consider the unitary operator \(U\) defined on \(\mathcal{H}\otimes\mathcal{H}\) by a combination of a shift in Fourier space and a tensor flip in spinor space: \[U(e_{n}\otimes v\otimes e_{m}\otimes v^{\prime})=e_{n}\otimes v^{\prime} \otimes e_{n+m}\otimes v\] Recall that an elementary matrix \(E_{kl}\)\((k,l\in\mathbb{Z}^{d})\) acts on \(\mathcal{H}\) as: \[E_{kl}(e_{n}\otimes v)=\delta_{ln}e_{k}\otimes v,\] in contrast to a generator \(e_{k}\) in the group algebra \(\mathrm{C}^{*}(\mathbb{Z}^{d})=\mathrm{C}(\mathbb{T}^{d})\), which acts as \[e_{k}(e_{n}\otimes v)=e_{n+k}\otimes v.\] Note furthermore that under the identification \(\mathrm{L}^{2}(\mathbb{T}^{d})\cong\ell^{2}(\mathbb{Z}^{d})\) we have \[D(e_{n}\otimes v)=ne_{n}\otimes\gamma^{\mu}v,\qquad(n\in\mathbb{Z}^{d},v\in V). \tag{11}\] We then find that \[U(E_{kl}\otimes\mathbf{1}_{\mathcal{H}})U^{*} =E_{kl}\otimes e_{k-l}\] \[U(E_{kl}\gamma^{\mu}\otimes\mathbf{1}_{\mathcal{H}})U^{*} =E_{kl}\otimes e_{k-l}\gamma^{\mu}\] where \(\gamma^{\mu}\) acts of course on spinor space \(V\). Note that in view of Equation (11) we also have \(U([D,E_{kl}]\otimes\mathbf{1}_{\mathcal{H}})U^{*}=E_{kl}\otimes[D,e_{k-l}]\). 
Using this we may now show \[U\left((\mathcal{S}_{\mathfrak{w}_{\mathrm{A}}}\otimes\mathrm{ id})\big{(}[D,E_{kl}]\otimes\mathbf{1}_{\mathcal{H}}\big{)}\right)U^{*} =\sum_{\mu}U\left(\mathcal{S}_{\mathfrak{w}_{\mathrm{A}}}(\gamma^{ \mu}(k-l)_{\mu}E_{kl})\otimes\mathbf{1}_{\mathcal{H}}\right)U^{*}\] \[=i\sum_{\mu}U\left(\mathfrak{w}_{\mathrm{A}}^{\mu}(k-l)(k-l)_{\mu }E_{kl}\otimes\mathbf{1}_{\mathcal{H}}\right)U^{*}\] \[=i\sum_{\mu}E_{kl}\otimes\mathfrak{w}_{\mathrm{A}}^{\mu}(k-l)(k-l )_{\mu}e_{k-l}\] \[=E_{kl}\otimes\mathcal{F}_{\mathfrak{w}_{\mathrm{A}}}([D,e_{k-l}])\] \[=(\mathrm{id}\otimes\mathcal{F}_{\mathfrak{w}_{\mathrm{A}}}) \left(U([D,E_{kl}]\otimes\mathbf{1}_{\mathcal{H}})U^{*}\right).\] This extends by linearity to arbitrary \(x=\sum_{k,l\in\mathbbm{B}_{\mathrm{A}}^{x}}t_{k-l}E_{kl}\in C(\mathbb{T}^{d})^{ (\Lambda)}\) to yield \[U\left((\mathcal{S}_{\mathfrak{w}_{\mathrm{A}}}\otimes\mathrm{id})\big{(}[D,x ]\otimes\mathbf{1}_{\mathcal{H}}\big{)}\right)U^{*} =(\mathrm{id}\otimes\mathcal{F}_{\mathfrak{w}_{\mathrm{A}}}) \left(U([D,x]\otimes\mathbf{1}_{\mathcal{H}})U^{*}\right)\] From this it follows at once that \(\|\mathcal{S}_{\mathfrak{m}_{\Lambda}}\|_{\mathrm{cb}}\leq\|\mathcal{F}_{ \mathfrak{m}_{\Lambda}}\|_{\mathrm{cb}}\). Finally, supposing that \(\mathcal{F}_{\mathfrak{m}_{\Lambda}}\) is a bounded linear map its norm and cb-norm coincide because its range is given by a commutative C*-algebra, namely \(\mathrm{C}(\mathbb{T}^{d})\) (_cf._[25, Theorem 3.9]). Our task is thus reduced to the computation of the norm of the map \(\mathcal{F}_{\mathfrak{w}_{\Lambda}}:[D,\mathrm{C}^{\infty}(\mathbb{T}^{d})] \to\mathrm{C}^{\infty}(\mathbb{T}^{d})\) given in Equation (9). ### Estimating the norm of the map \(\mathcal{F}_{\mathfrak{w}_{\Lambda}}\) In this subsection, we assume \(d\geq 2\). In order to estimate \(\|\mathcal{F}_{\mathfrak{w}_{\Lambda}}\|\), we eventually need some information about the quotient \(\mathfrak{m}_{\Lambda}(n)=\frac{\mathcal{N}_{\mathrm{L}}(\Lambda,n)}{\mathcal{ N}_{\mathrm{B}}(\Lambda)}\). We introduce the following quantity which seems much more tractable: \[\theta_{\Lambda}(\xi):=\frac{\mathcal{V}_{\mathrm{L}}(\Lambda,\xi)}{\mathcal{ V}_{\mathrm{B}}(\Lambda)},\] for all \(\Lambda>0\), and \(\xi\in\mathbb{R}^{d}\). We call the convolution kernel induced by \(\theta_{\Lambda}\) via the following equation the _continuous spectral Fejer kernel_: \[K_{\Lambda}^{\mathrm{csp}}(x):=\sum_{n\in\mathbb{Z}^{d}}\theta_{\Lambda}(n)e^{ in\cdot x}, \tag{12}\] for all \(x\in\mathbb{R}^{d}\). With this, we can decompose the map \(\mathcal{F}_{\mathfrak{w}_{\Lambda}}:[D,\mathrm{C}^{\infty}(\mathbb{T}^{d})] \to\mathrm{C}^{\infty}(\mathbb{T}^{d})\) as follows: \[\begin{split}\mathcal{F}_{\mathfrak{w}_{\Lambda}}([D,f])& =\frac{i}{2}\sum_{\mu=1}^{d}\mathcal{F}_{\mathfrak{w}_{\Lambda}^{ \mu}}\otimes\{\gamma^{\mu},\cdot\}([D,f])\\ &=\frac{i}{2}\sum_{\mu=1}^{d}\left(\mathcal{F}_{(1-\theta_{ \Lambda})\frac{n_{\mu}}{|n|^{2}}}+\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_ {\Lambda})\frac{n_{\mu}}{|n|^{2}}}\right)\otimes\{\gamma^{\mu},\cdot\}([D,f] )\\ &=i(f-K_{\Lambda}^{\mathrm{csp}}*f)+\frac{i}{2}\sum_{\mu=1}^{d} \mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{|n|^{2}} }\otimes\{\gamma^{\mu},\cdot\}([D,f])\end{split} \tag{13}\] Note that, since \(\mathfrak{w}_{\Lambda}^{\mu}(n)=0\), if \(n=0\), we are only considering \((1-\theta_{\Lambda}(n))\frac{n_{\mu}}{|n|^{2}}\) and \((\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{|n|^{2}}\), for \(n\neq 0\). 
We see that we shall obtain an appropriate estimate for \(\|\mathcal{F}_{\mathfrak{w}_{\Lambda}}\|\) if we can prove that \(K_{\Lambda}^{\mathrm{csp}}\) is a good kernel (in view of Lemma 2.3) and if we can appropriately estimate the Fourier multiplier \(\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{|n|^{2}}}\) on \(\mathrm{C}^{\infty}(\mathbb{T}^{d})\). #### 3.3.1. The continuous spectral Fejer kernel Recall that we are assuming \(d\geq 2\). For all \(x\in\mathbb{R}^{d}\), we set \(\theta\left(\frac{x}{2\Lambda}\right):=\theta_{\Lambda}(x)\). **Lemma 3.6**.: _For all \(x\in\mathbb{R}^{d}\backslash\{0\}\), the following holds:_ \[\widetilde{\theta}(x)=2\frac{\Gamma(\frac{d}{2}+1)}{\pi^{\frac{d+1}{2}}}\|x\|^ {-d}J_{\frac{d}{2}}(\pi\|x\|)^{2},\] _where \(J_{\frac{n}{2}}\) denotes the Bessel function, for \(n\) a non-negative integer._ Proof.: It is clear from the definition that \(\theta\) is radially symmetric, non-negative, continuous and compactly supported. Hence, we may combine Corollary A.2 and Lemma B.1 to obtain the following expression for \(\widetilde{\theta}_{\Lambda}\): \[\widetilde{\theta}(x)=\frac{2\pi^{\frac{d}{2}-1}}{B\left(\frac{d+1}{2},\frac{1 }{2}\right)}\sum_{j=0}^{\infty}(-1)^{j}\frac{s^{2j}}{(2j)!}\frac{\Gamma\left(j+ \frac{1}{2}\right)}{\Gamma\left(j+\frac{d}{2}\right)}\int_{0}^{1}B_{1-r^{2}} \left(\frac{d+1}{2},\frac{1}{2}\right)r^{2j+d-1}dr,\] with \(s:=2\pi\|x\|\). We begin by computing the integral: \[\int_{0}^{1}B_{1-r^{2}}\left(\frac{d+1}{2},\frac{1}{2}\right)r^{2j+ d-1}dr\] \[=\frac{1}{2}\int_{0}^{1}B_{u}\left(\frac{d+1}{2},\frac{1}{2} \right)(1-u)^{j+\nicefrac{{d}}{{2}}-1}du\] \[=-\frac{1}{2j+d}\left(\left[(1-u)^{j+\nicefrac{{d}}{{2}}}B_{u} \left(\frac{d+1}{2},\frac{1}{2}\right)\right]_{u=0}^{1}-\int_{0}^{1}(1-u)^{j+ \nicefrac{{d}}{{2}}}\frac{d}{du}\int_{0}^{u}t^{\frac{d-1}{2}}(1-t)^{-\frac{1} {2}}dtdu\right)\] \[=\frac{1}{2j+d}B\left(\frac{d+1}{2},j+\frac{d+1}{2}\right)\] Here we applied the change of variables \(u:=1-r^{2}\) in the first step and integration by parts together with the definition of the incomplete Beta function in the second step. 
With this and by applying the well known facts that \(B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\) and \(\Gamma(\frac{1}{2})=\sqrt{\pi}\), we obtain the following expression for \(\widetilde{\theta}\): \[\widetilde{\theta}(x) =\frac{2\pi^{\frac{d}{2}-1}}{B\left(\frac{d+1}{2},\frac{1}{2} \right)}\sum_{j=0}^{\infty}s^{2j}\frac{1}{2j+d}\frac{(-1)^{j}}{(2j)!}\frac{ \Gamma\left(j+\frac{1}{2}\right)}{\Gamma\left(j+\frac{d}{2}\right)}B\left( \frac{d+1}{2},j+\frac{d+1}{2}\right)\] \[=2\pi^{\frac{d}{2}-1}\frac{\Gamma(\frac{d}{2}+1)}{\Gamma(\frac{d +1}{2})\Gamma(\frac{1}{2})}\sum_{j=0}^{\infty}s^{2j}\frac{1}{2j+d}\frac{(-1)^{ j}}{(2j)!}\frac{\Gamma\left(j+\frac{1}{2}\right)}{\Gamma\left(j+\frac{d}{2} \right)}\frac{\Gamma(\frac{d+1}{2})\Gamma(j+\frac{d+1}{2})}{\Gamma(j+d+1)}\] \[=2\pi^{\frac{d-3}{2}}\Gamma(\frac{d}{2}+1)\sum_{j=0}^{\infty}s^{ 2j}\frac{1}{2j+d}\frac{(-1)^{j}}{(2j)!}\frac{\Gamma\left(j+\frac{1}{2}\right) }{\Gamma\left(j+\frac{d}{2}\right)}\frac{\Gamma(j+\frac{d+1}{2})}{\Gamma(j+d+ 1)}\] Setting \(t:=\frac{s}{2}=\pi\|x\|\) and rearranging, we obtain: \[\widetilde{\theta}(x) =2\frac{2^{d}\Gamma(\frac{d}{2}+1)}{\pi^{\frac{d}{2}+1}\|x\|^{d} }\frac{1}{\pi^{\nicefrac{{d}}{{2}}}}\left(\frac{t}{2}\right)^{d}\sum_{j=0}^{ \infty}t^{2j}\frac{(-1)^{j}}{(2j)!}\frac{\Gamma\left(j+\frac{1}{2}\right)}{ \Gamma(j+d+1)}\frac{2^{2j}}{2j+d}\frac{\Gamma(j+\frac{d+1}{2})}{\Gamma\left(j +\frac{d}{2}\right)}\] With Lemma B.3 we obtain: \[\widetilde{\theta}(x) =2\frac{2^{d}\Gamma(\frac{d}{2}+1)}{\pi^{\frac{d}{2}+1}\|x\|^{d} }\frac{\pi^{\nicefrac{{d}}{{2}}}}{2^{d}}\frac{1}{\Gamma(\frac{1}{2})}\left( \frac{t}{2}\right)^{d}\sum_{j=0}^{\infty}t^{2j}\frac{(-1)^{j}}{(2j)!}\frac{ \Gamma\left(j+\frac{1}{2}\right)}{\Gamma(j+d+1)}\frac{\Gamma(2j+d+1)}{\Gamma( j+\frac{d}{2}+1)^{2}}\] \[=2\frac{\Gamma(\frac{d}{2}+1)}{\pi^{\frac{d+1}{2}}\|x\|^{d}}J_{ \frac{d}{2}}(t)^{2}\] **Corollary 3.7**.: _The function \(\widetilde{\theta}\) is bounded on \(\mathbb{R}^{d}\) with \(|\widetilde{\theta}(x)|=\mathcal{O}(\|x\|^{-d-1})\) as \(\|x\|\to\infty\), that is, there exists a constant \(C>0\) such that the following holds:_ \[|\widetilde{\theta}(x)|\leq C(1+|x|)^{-d-c}, \tag{14}\] _for all \(x\in\mathbb{R}^{d}\)._ Proof.: Standard estimates on the asymptotics of the Bessel function (cf. e.g. [12, Appendix B.6]) give the following: \[J_{\frac{d}{2}}(r)^{2}=\begin{cases}2\frac{r^{d}}{2^{d}\Gamma(\frac{d}{2}+1)^{2}} +\mathcal{O}(r^{d+1}),\text{ as }r\to 0\\ \frac{2}{\pi r}\cos(r-\frac{d-1}{4}\pi)^{2}+\mathcal{O}(r^{-\frac{5}{2}}), \text{ as }r\to\infty.\end{cases}\] Combined with Lemma 3.6, this gives \[\lim_{\|x\|\to 0}\widetilde{\theta}(x)=2\frac{\pi^{\frac{d-1}{2}}}{2^{d} \Gamma(\frac{d}{2}+1)}\] and \[\widetilde{\theta}(x)=\mathcal{O}(\|x\|^{-d-1}),\text{ as }\|x\|\to\infty.\] Boundedness of \(\widetilde{\theta}\) is clear from continuity of \(J_{\frac{d}{2}}\). **Proposition 3.8**.: _The continuous spectral Fejer kernel (12) is a good kernel._ Proof.: It is clear that the function \(\mathbb{R}^{d}\ni\xi\mapsto\theta(\xi)\) is continuous, radially symmetric, non-increasing, has support in \(\widetilde{\mathbb{B}}_{1}\) and attains its supremum \(\sup_{\xi\in\mathbb{R}^{d}}|\theta(\xi)|=1\) at \(\xi=0\). Together with Corollary 3.7 we see that we can apply Lemma 2.2 to the convolution kernel \(K_{\Lambda}\) induced by the function \(\theta\) to see that it is a good kernel. Moreover, \(K_{\Lambda}^{\mathrm{csp}}=K_{\Lambda}\). #### 3.3.2. 
Comparing multipliers induced by a discrete and continuous version of a symbol **Proposition 3.9**.: _For \(d=2,3\), there exists \(\varepsilon_{\Lambda}\to 0\) (as \(\Lambda\to\infty\)) which bounds the norm of the linear map \(\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{|n|^{2}} }\otimes\{\gamma^{\mu},\cdot\}\), (with \((\theta_{\Lambda}(n)-\mathfrak{m}_{\Lambda}(n))\frac{n_{\mu}}{|n|^{2}}\) understood to be \(0\), if \(n=0\)):_ \[\left\|\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{| n|^{2}}}\otimes\{\gamma^{\mu},\cdot\}([D,f])\right\|\leq\varepsilon_{\Lambda} \|[D,f]\|\] Proof.: It is clear that \[\left\|\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{| n|^{2}}}\otimes\{\gamma^{\mu},\cdot\}\right\|\leq\left\|\mathcal{F}_{(\theta_{ \Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{|n|^{2}}}\right\|_{\mathrm{cb} }2\left\|\gamma^{\mu}\right\|\leq 2\left\|\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{ \Lambda})\frac{n_{\mu}}{|n|^{2}}}\right\|_{\mathrm{cb}}.\] Since \(\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{|n|^{2}}}\) has its range in the commutative C*-algebra \(\mathrm{C}(\mathbb{T}^{d})\) its cb-norm coincides with its norm. We then apply the estimate (Corollary 2.6) of the norm of a Fourier multiplier in terms of the \(\ell^{2}\)-norm of its symbol, _i.e._: \[\left\|\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{| n|^{2}}}\right\|\leq\left\|(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{ \mu}}{\|n\|^{2}}\right\|_{\ell^{2}(\mathbb{Z}^{d})}.\] For the right-hand side we then have \[\left\|(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{ \|n\|^{2}}\right\|_{\ell^{2}(\mathbb{Z}^{d})}^{2} =\sum_{n\in\mathbb{Z}^{d}}\left|(\theta_{\Lambda}(n)-\mathfrak{m}_ {\Lambda}(n))\frac{n_{\mu}}{\|n\|^{2}}\right|^{2}\] \[\leq\sum_{n\in\mathbb{B}_{2\Lambda}^{2}}\left|\frac{\mathcal{V}_{ \mathrm{L}}(\Lambda,n)}{\mathcal{V}_{\mathrm{B}}(\Lambda)}-\frac{\mathcal{N}_{ \mathrm{L}}(\Lambda,n)}{\mathcal{N}_{\mathrm{B}}(\Lambda)}\right|^{2}\frac{1}{ \|n\|^{2}}\] \[\leq\sum_{n\in\mathbb{B}_{2\Lambda}^{2}}\left(\frac{\mathcal{V}_{ \mathrm{L}}(\Lambda,n)}{\mathcal{N}_{\mathrm{B}}(\Lambda)}(\Delta_{\mathrm{L}}( \Lambda,n)+\Delta_{\mathrm{B}}(\Lambda))\right)^{2}\frac{1}{\|n\|^{2}}\] where \(\Delta_{\rm L}(\Lambda,n):=\frac{|\mathcal{V}_{\rm L}(\Lambda,n)-\mathcal{N}_{\rm L }(\Lambda,n)|}{\mathcal{V}_{\rm L}(\Lambda,n)}\) and \(\Delta_{\rm B}(\Lambda):=\frac{|\mathcal{V}_{\rm B}(\Lambda)-\mathcal{N}_{\rm B }(\Lambda)|}{\mathcal{V}_{\rm B}(\Lambda)}\) denote the respective discrepancies of the lattice point counting problem of the lense and of the ball as introduced in Subsection 2.4. Note that we have \(\frac{\mathcal{V}_{\rm L}(\Lambda,n)}{\mathcal{N}_{\rm B}(\Lambda)}\leq\frac{ \mathcal{V}_{\rm B}(\Lambda)}{\mathcal{N}_{\rm B}(\Lambda)}\) which is bounded and asymptotically constant and it follows from Lemma 2.7 and Lemma 2.8 that \((\Delta_{\rm L}(\Lambda,n)+\Delta_{\rm B}(\Lambda)))^{2}=\mathcal{O}( \Lambda^{-2})\). 
Moreover, we have the following estimate, for \(d=2\): \[\sum_{n\in\overline{\rm B}_{\Lambda}^{2}\setminus\{0\}}\frac{1}{ \|n\|^{2}} \leq\mathcal{N}_{\rm B}(1)-1+\sum_{n_{1},n_{2}=1}^{\Lambda}\frac{1}{n_{1}^ {2}+n_{2}^{2}}\] \[\leq 4+\sum_{n_{1},n_{2}=1}^{\Lambda}\frac{1}{2n_{1}n_{2}}\] \[=4+\frac{1}{2}\left(\sum_{n_{1}=1}^{\Lambda}\frac{1}{n_{1}} \right)\left(\sum_{n_{2}=1}^{\Lambda}\frac{1}{n_{2}}\right)\] \[=\mathcal{O}(\log(\Lambda)^{2}),\] where the second inequality follows immediately from the fact that \((n_{1}-n_{2})^{2}\geq 0\) and hence \(n_{1}^{2}+n_{2}^{2}\geq 2n_{1}n_{2}\). For \(d\geq 3\), we have: \[\sum_{n\in\overline{\rm B}_{\Lambda}^{2}\setminus\{0\}}\frac{1}{ \|n\|^{2}} \sim\mathcal{N}_{\rm B}(1)-1+\int_{1<|x|\leq\Lambda}\frac{1}{\|x\|^{2}}dx\] \[=\mathcal{N}_{\rm B}(1)-1+\int_{1}^{\Lambda}\frac{1}{r^{2}}r^{d-1 }dr\] \[=\mathcal{N}_{\rm B}(1)-1+\frac{1}{d-2}(\Lambda^{d-2}-1), \tag{15}\] where "\(\sim\)" denotes asymptotic behavior for large \(\Lambda\). Together, this gives the following asymptotics: \[\left\|(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{ \|n\|^{2}}\right\|_{\ell^{2}(\mathbb{Z}^{d})}^{2} =\mathcal{O}(\Lambda^{-2})\sum_{n\in\overline{\rm B}_{\Lambda}^{ 2}}\frac{1}{\|n\|^{2}}\] \[=\begin{cases}\mathcal{O}\left(\left(\frac{\log(\Lambda)}{ \Lambda}\right)^{2}\right),\text{ if }d=2\\ \mathcal{O}(\Lambda^{d-4}),\text{ if }d\geq 3\end{cases}\] This shows that \(\left\|\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{ \|n\|^{2}}}\right\|\leq\left\|(\theta_{\Lambda}-\mathfrak{m}_{\Lambda}) \frac{n_{\mu}}{\|n\|^{2}}\right\|_{\ell^{2}(\mathbb{Z}^{d})}\to 0\) as \(\Lambda\to\infty\), if \(d=2,3\). _Remark 3.10_.: Let us consider what happens in \(d=4\) and \(5\). First, in general one may argue as in [14, Satz 9] that \(|\mathcal{N}_{\rm B}(\Lambda)-\mathcal{V}_{\rm B}(\Lambda)|=\mathcal{O}( \Lambda^{\frac{d(d-1)}{d+1}})\). If we assume that \(\Delta_{\rm L}(\Lambda,n)=\mathcal{O}(\Delta_{\rm B}(\Lambda))\), then (15) implies that \[\left\|(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{\|n\|^{2}} \right\|_{\ell^{2}(\mathbb{Z}^{d})}^{2}=\mathcal{O}\left(\Lambda^{2\left( \frac{d(d-1)}{d+1}-d\right)}\right)\sum_{n\in\overline{\rm B}_{\Lambda}^{2}} \frac{1}{\|n\|^{2}}\] \[=\mathcal{O}\left(\Lambda^{\frac{d^{2}-5d-2}{d+1}}\right)\] \[=\begin{cases}\mathcal{O}(\Lambda^{-\frac{6}{5}}),\text{ if }d=4\\ \mathcal{O}(\Lambda^{-\frac{1}{5}}),\text{ if }d=5\end{cases}\] In particular, this would show that \(\left\|\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{ \|n|^{2}}}\right\|\leq\left\|(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_ {\mu}}{\|n|^{2}}\right\|_{\ell^{2}(\mathbb{Z}^{d})}\to 0\) as \(\Lambda\to\infty\), if \(d=4,5\). We leave the question whether indeed \(\Delta_{\mathrm{L}}(\Lambda,n)=\mathcal{O}(\Delta_{\mathrm{B}}(\Lambda))\) holds for future research, but emphasize that \(\Delta_{\mathrm{L}}(\Lambda,n)\) is not the discrepancy as defined in (7), since \(\mathcal{N}_{\mathrm{L}}(\Lambda,n)\) and \(\mathcal{V}_{\mathrm{L}}(\Lambda,n)\) are the quantities obtained from considering the intersection of rescaled balls \(\widetilde{\mathrm{L}}(\Lambda,n)=\widetilde{\mathrm{B}}_{\Lambda}\cap \widetilde{\mathrm{B}}_{\Lambda}(n)\), rather than from rescaling the lense itself. 
**Corollary 3.11**.: _If \(d\geq 6\), the \(\ell^{2}\)-norm \(\left\|(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{\|n|^{2}} \right\|_{\ell^{2}(\mathbb{Z}^{d})}^{2}\) does not converge to \(0\), as \(\Lambda\to\infty\)._ Proof.: From (15) and Lemma 2.9 we obtain: \[\left\|(\theta_{\Lambda}-\mathfrak{m}_{\Lambda})\frac{n_{\mu}}{\|n\|^{2}} \right\|_{\ell^{2}(\mathbb{Z}^{d})}^{2}=\Omega(\Lambda^{-4})\Lambda^{d-2}= \Omega(\Lambda^{d-6})\] ### Convergence of spectral truncations in low dimensions We now have all the ingredients needed to prove our main theorem: **Theorem 3.12**.: _For \(d=1,2,3\), spectral truncations of the \(d\)-torus \(\mathbb{T}^{d}\) converge._ Proof.: The case \(d=1\) was already proven in [36], but with the machinery developped we can shed some more light on it. In this case, we may assume that \(\Lambda=N\in\mathbb{N}_{0}\). Observe that we have: \[\mathcal{F}_{\mathfrak{m}_{N}}([D,f])=\frac{i}{2}\mathcal{F}_{(1-\mathfrak{m} _{N})\frac{n}{|n|^{2}}}\otimes\{\mathbf{1},\cdot\}([D,f])=f-K_{N}^{\mathrm{ Fejer}}\ast f\] where \(\mathfrak{m}_{N}(n)=\left(\frac{2N+1-|n|}{2N+1}\right)_{+}\), and where \(K_{N}^{\mathrm{Fejer}}\) is the _Fejer kernel_ on the circle. (As always, we interprete \((1-\mathfrak{m}_{\Lambda}(n))\frac{n}{|n|^{2}}\) to be \(0\), if \(n=0\).) It is well-known that \(K_{N}^{\mathrm{Fejer}}\) is a good kernel, which, by Lemma 2.3, implies that there exists a sequence of positive real numbers \(\delta_{N}\) such that we obtain the following Lipschitz estimate: \[\|\mathcal{F}_{\mathfrak{m}_{N}}([D,f])\|=2\|f-K_{N}^{\mathrm{Fejer}}\ast f\| \leq 2\delta_{N}\|[D,f]\|\to 0,\] for all \(f\in\mathrm{C}^{\infty}(\mathbb{T}^{1})\), as \(N\to\infty\). Setting \(\gamma_{N}:=2\delta_{N}\), we obtain the following estimates from Lemma 3.5: \[\|\mathcal{S}_{\mathfrak{m}_{N}}\|_{\mathrm{cb}}\leq\|\mathcal{F}_{\mathfrak{ m}_{N}}\|_{\mathrm{cb}}=\|\mathcal{F}_{\mathfrak{m}_{N}}\|\leq\gamma_{N}\to 0,\] as \(N\to\infty\). This is a shortcut compared to the original proof in [36] where the Lipschitz norm of the Schur multiplier was estimated without making use of its relation to the Lipschitz norm of the Fourier multiplier. In view of Lemma 3.4, this shows that the maps \(\sigma_{N}\circ\rho_{N}\) on \(\mathrm{C}^{\infty}(\mathbb{T}^{1})\) and \(\rho_{N}\circ\sigma_{N}\) on \(\mathrm{C}(\mathbb{T}^{1})^{(N)}\) approximate the respective identities in Lipschitz norm, if \(d=1\). Together with the fact that the maps \(\rho_{\Lambda}\) and \(\sigma_{\Lambda}\) are both unital, positive and \(\mathrm{C}^{1}\)-contractive (Lemma 3.1, Lemma 3.2), this shows that \((\rho,\sigma)\) is a C\({}^{1}\)-approximate order isomorphism (Definition 1.2). Now, we can apply [36, Theorem 5] to obtain the claim that spectral truncations of \(\mathbb{T}^{1}\) converge. In the cases \(d=2,3\), we need a bit more preparation. In view of (13), it is clear that the following estimate holds: \[\|\mathcal{F}_{\mathfrak{w}_{\Lambda}}([D,f])\|\leq 2\|f-K_{\Lambda}^{\rm csp} \ast f\|+\sum_{\mu=1}^{d}\|\mathcal{F}_{(\theta_{\Lambda}-\mathfrak{m}_{ \Lambda})\frac{n_{\mu}}{|n|^{2}}}\otimes\{\gamma^{\mu},\cdot\}([D,f])\|\] The fact that \(K_{\theta_{\Lambda}}\) is a good kernel (Proposition 3.8) together with Lemma 2.3 gives a Lipschitz estimate \(2\delta_{\Lambda}\to 0\) of the first norm and Proposition 3.9 gives a Lipschitz estimate \(d\varepsilon_{\Lambda}\to 0\) of the second norm. 
Hence, setting \(\gamma_{\Lambda}:=2\delta_{\Lambda}+d\varepsilon_{\Lambda}\), we obtain a similar estimate as in the case \(d=1\) above (applying Lemma 3.5): \[\|\mathcal{S}_{\mathfrak{w}_{\Lambda}}\|_{\rm cb}\leq\|\mathcal{F}_{ \mathfrak{w}_{\Lambda}}\|_{\rm cb}=\|\mathcal{F}_{\mathfrak{w}_{\Lambda}}\| \leq\gamma_{\Lambda}\to 0\] The rest of the proof is analogous to the argument given above for the one-dimensional case. _Remark 3.13_.: From Corollary 3.11, we see that our methods cannot provide an analogous version of Theorem 3.12, if \(d\geq 6\). ## 4. Structure Analysis of the Operator System C\((\mathbb{T}^{d})^{(\Lambda)}\) ### C*-envelope and propagation number Recall [13] (see also [25, Chapter 15]) that a C\({}^{*}\)_-extension_ of a unital operator system \(E\) is a unital C*-algebra \(A\) together with an injective completely positive map \(\iota:E\to A\) such that C*\((\iota(E))=A\). A C*-extension \(A\) of \(E\) is called the C*_-envelope_ and denoted by C\({}_{\rm env}^{*}(E)\) if, for every unital C*-algebra \(B\) and every unital completely positive map \(\phi:A\to B\), the map \(\phi\) is a complete order injection if the composition \(\phi\circ\iota\) is. Recall furthermore from [8] that the _propagation number_\({\rm prop}(E)\) is the smallest positive integer \(n\) such that \((\iota(E))^{\circ n}\subseteq{\rm C}_{\rm env}^{*}(E)\) is a C*-algebra, where \(F^{\circ n}={\rm span}\{f_{1}\cdots f_{n}\,:\,f_{i}\in F\}\), for a unital operator subsystem \(F\) of a unital C*-algebra \(A\). For \(p\in\overline{\rm B}_{\Lambda}^{\mathbb{Z}}+\overline{\rm B}_{\Lambda}^{ \mathbb{Z}}\), we define the following operator in C\((\mathbb{T}^{d})^{(\Lambda)}\subset\mathcal{B}(P_{\Lambda}\mathcal{H})\): \[T_{p}:=\sum_{n\in\overline{L}_{\Lambda}^{\mathbb{Z}}(p)}E_{n-p,n}=\sum_{n\in \overline{L}_{\Lambda}^{\mathbb{Z}}(-p)}E_{n,n+p}, \tag{16}\] where \(E_{k,l}\in\mathcal{B}(P_{\Lambda}\mathcal{H})\) is the _matrix unit_ given by \(\langle e_{m},E_{k,l}e_{n}\rangle=\delta_{mk}\delta_{ln}\), for \(k,l,m,n\in\overline{\rm B}_{\Lambda}^{\mathbb{Z}}\). It is not hard to check that \(\{T_{p}\}_{p\in\overline{\rm B}_{\Lambda}^{\mathbb{Z}}+\overline{\rm B}_{ \Lambda}^{\mathbb{Z}}}\) is a basis for the operator system C\((\mathbb{T}^{d})^{(\Lambda)}\). With the preparations of Appendix C, we are in position to treat the C*-envelope and propagation number of the operator system C\((\mathbb{T}^{d})^{(\Lambda)}\). **Proposition 4.1**.: _The C*-envelope and the propagation number of \({\rm C}(\mathbb{T}^{d})^{(\Lambda)}\) are given by \({\rm C}_{env}^{*}({\rm C}(\mathbb{T}^{d})^{(\Lambda)})=\mathcal{B}(P_{\Lambda }\mathcal{H})\) and \({\rm prop}({\rm C}(\mathbb{T}^{d})^{(\Lambda)})=2\)._ Proof.: The matrix order structure on C\((\mathbb{T}^{d})^{(\Lambda)}\) is the one inherited from the inclusion into \(\mathcal{B}(P_{\Lambda}\mathcal{H})\). It remains to show that the inclusion \(C(\mathbb{T}^{d})^{(\Lambda)}\hookrightarrow\mathcal{B}(P_{\Lambda}\mathcal{H})\) is a C*-extension, i.e. that it generates \(\mathcal{B}(P_{\Lambda}\mathcal{H})\). Indeed, if this is the case, it is clear that \(\mathcal{B}(P_{\Lambda}\mathcal{H})\simeq{\rm C}_{env}^{*}(C(\mathbb{T}^{d})^ {(\Lambda)})\) since \(\mathcal{B}(P_{\Lambda}\mathcal{H})\) is simple. We will see that \(\mathcal{B}(P_{\Lambda}\mathcal{H})\) is in fact spanned by respective products of two basic operators (16). To this end, let \(p,q\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}+\overline{\mathrm{B}}_{ \Lambda}^{\mathbb{Z}}\). 
Then, the following holds: \[T_{p}T_{q} =\left(\sum_{n\in\overline{L}_{\Lambda}^{\mathbb{Z}}(p)}E_{n-p,n }\right)\left(\sum_{n\in\overline{L}_{\Lambda}^{\mathbb{Z}}(-q)}E_{n,n+q}\right)\] \[=\sum_{n\in\overline{L}_{\Lambda}^{\mathbb{Z}}(p)\cap\overline{L }_{\Lambda}^{\mathbb{Z}}(-q)}E_{n-p,n+q}\] \[=\sum_{n\in\overline{L}_{\Lambda}^{\mathbb{Z}}(p+q)\cap\overline{ L}_{\Lambda}^{\mathbb{Z}}(q)}E_{n-p-q,n},\] where we used the fact that \(\left(\overline{L}_{\Lambda}(p)\cap\overline{L}_{\Lambda}(-q)\right)+q= \overline{L}_{\Lambda}(p+q)\cap\overline{L}_{\Lambda}(q)\) which can be easily checked. As a special case, for \(l,k\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}+\overline{\mathrm{B}}_{ \Lambda}^{\mathbb{Z}}\) such that \(l+k\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}+\overline{\mathrm{B}}_{ \Lambda}^{\mathbb{Z}}\), we obtain: \[T_{-k}T_{l+k}=\sum_{n\in\overline{L}_{\Lambda}^{\mathbb{Z}}(l)\cap\overline{L} _{\Lambda}^{\mathbb{Z}}(l+k)}E_{n-l,n} \tag{17}\] Note that this generalizes the formula given in the proof of [8, Proposition 4.2] where \(d\) was equal to \(1\). 1 Footnote 1: Moreover, this formula may be interpreted in a similar way: The operator \(T_{-k}T_{l+k}\) can be regarded as “matrix” (with multi-indexed entries) which has \(0\)-entries everywhere except for the \(l\)-th “diagonal” where its entries are either \(1\) or \(0\) depending on the parameter \(k\). We need some elementary geometric observations. For \(\Lambda^{\prime}\geqslant 0\), let \(K_{\Lambda^{\prime}}:=\mathrm{co}\left(\overline{\mathrm{B}}_{\Lambda^{\prime}} ^{\mathbb{Z}}\right)\) denote the convex hull of the set of integer lattice points in the closed ball of radius \(\Lambda^{\prime}\). Note that \(K_{\Lambda^{\prime}}^{\mathbb{Z}}:=K_{\Lambda^{\prime}}\cap\mathbb{Z}^{d}= \overline{\mathrm{B}}_{\Lambda^{\prime}}^{\mathbb{Z}}\). Furthermore, \(K_{\Lambda^{\prime}}\) is a polytope which is symmetric under reflections along coordinate axes and diagonals (i.e. under changing signs of coordinates and exchanging coordinates). Clearly, all the extreme points of \(K_{\Lambda^{\prime}}\) have integer coordinates. Moreover, if \(x\in K_{\Lambda^{\prime}}\) is of norm \(\|x\|=\Lambda^{\prime}\) it is an extreme point, but not necessarily all extreme points of \(K_{\Lambda^{\prime}}\) are of norm \(\Lambda^{\prime}\) as can be seen in the case \(d=2\), \(\Lambda^{\prime}=3\) (cf. Figure 1). In order to prove the claim it is enough to write every rank-one operator \(E_{p,q}\in\mathcal{B}(P_{\Lambda}\mathcal{H})\) as a linear combination of products of the form (17), where \(p,q\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}\) and \(l,k\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}+\overline{\mathrm{B}}_{ \Lambda}^{\mathbb{Z}}\) such that \(l+k\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}+\overline{\mathrm{B}}_{ \Lambda}^{\mathbb{Z}}\). To this end, fix \(p,q\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}\) and set \(l:=q-p\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}+\overline{\mathrm{B}}_{ \Lambda}^{\mathbb{Z}}\). Set \(\Lambda^{\prime}:=\|q\|\). We claim that we can find an extreme point \(m\in\mathrm{ex}(K_{\Lambda})\) such that the following holds: \[K_{\Lambda^{\prime}}\cap(K_{\Lambda}-m+q)=\{q\}. \tag{18}\] To see this, note that \(q\) is an extreme point of \(K_{\Lambda^{\prime}}\) and that the smallest cone \([0,\infty)\cdot(K_{\Lambda^{\prime}}-q)+q\) which contains \(K_{\Lambda^{\prime}}\) is locally compact (trivially in this finite dimensional case), closed, convex, proper and has vertex \(q\). 
By Corollary C.3 we can find an extreme point \(m\in\mathrm{ex}(K_{\Lambda})\) such that \((K_{\Lambda}-m)\cap(K_{\Lambda^{\prime}}-q)=(K_{\Lambda}-m)\cap([0,\infty) \cdot(K_{\Lambda^{\prime}}-q)+q)=\{0\}\), which is equivalent to (18). Now, fix an extreme point \(m\in\mathrm{ex}(K_{\Lambda})\) such that (18) is satisfied and set \(k:=q-l-m=p-m\). Note that for this \(l\) and \(k\) the product (17) makes sense, i.e. \(k\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}+\overline{\mathrm{B}}_{ \Lambda}^{\mathbb{Z}}\) and \(l+k\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}+\overline{\mathrm{B}}_{ \Lambda}^{\mathbb{Z}}\). Furthermore, the following holds: \[\left(\overline{\mathrm{L}}_{\Lambda}^{\mathbb{Z}}(l)\cap\overline{\mathrm{L}} _{\Lambda}^{\mathbb{Z}}(l+k)\right)\cap K_{\Lambda^{\prime}}=\{q\}\] The fact that the point \(q\) is an element of the set on the left-hand side is clear since \(q-l=p\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}\) and \(q-(l+k)=m\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}\). The converse inclusion is clear from (18) together with \(\mathrm{C}_{\Lambda}^{\mathbb{Z}}(l+k)\subset\overline{\mathrm{B}}_{\Lambda}^{ \mathbb{Z}}(l+k)=K_{\Lambda}^{\mathbb{Z}}+q-m\), where \(q-m=l+k\) was used. This shows that, for \(l=q-p\) and \(k=p-m\), the rank-one operator \(E_{p,q}\) is a summand of \(T_{-k}T_{l+k}\) as in (17) where, for all the other rank-one operators \(E_{n-l,n}\) appearing in the sum, we have that \(\|n\|>\|q\|\), i.e.: \[E_{p,q}=T_{-k}T_{l+k}-\sum_{\begin{subarray}{c}n\in\mathrm{C}_{\Lambda}^{ \mathbb{Z}}(l+k)\cap\mathrm{C}_{\Lambda}^{\mathbb{Z}}(l)\\ |n|=\lfloor\gamma\rfloor\sigma\end{subarray}}E_{n-l,n} \tag{19}\] Moreover, for each \(E_{n-l,n}\) in the above sum, a similar expression can be obtained, and so forth. Hence, after finitely many steps this gives a finite linear combination of products of the form (17) for \(E_{p,q}\). Altogether, this proves that \(\mathcal{B}(P_{\Lambda}\mathcal{H})\subseteq\mathrm{span}\left\{T_{-k}T_{l+k} \,\middle|\,l,k,l+k\in\overline{\mathrm{B}}_{\Lambda}^{\mathbb{Z}}+\overline{ \mathrm{B}}_{\Lambda}^{\mathbb{Z}}\right\}\) which shows that \(\mathrm{C}_{\mathrm{env}}^{*}(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)})\cong \mathcal{B}(P_{\Lambda}\mathcal{H})\) and \(\mathrm{prop}(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)})\leqslant 2\). Realizing that \(E_{0,0}\notin\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)})\) it is clear that \(\mathrm{prop}(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}))>1\). This finishes the proof. We illustrate the procedure described in the above proof of expressing elementary matrices \(E_{p,q}\in\mathcal{B}(P_{\Lambda}\mathcal{H})\) in terms of products of basic operators of the form (17) in the following two examples. **Example 4.2**.: Let \(p,q\in\overline{\mathbb{B}}_{\Lambda}^{\mathbb{Z}}\) such that \(\|q\|=\Lambda\). Finding an \(m\in\operatorname{ex}(K_{\Lambda})\) such that (18) holds is particularly easy in this case, namely, set \(m:=-q\). Then set \(k:=p-m=p+q\) and, with \(l=q-p\), we obtain: \[E_{p,q}=T_{-k}T_{l+k}=T_{-p-q}T_{q-p+p+q}=T_{-p-q}T_{2q},\] according to (19), since there are no \(n\in\overline{\mathbb{B}}_{\Lambda}^{\mathbb{Z}}\) with \(\|n\|>\|q\|=\Lambda\). **Example 4.3**.: Let \(d=2\) and \(\Lambda=\sqrt{2}\), i.e. \(\overline{\mathbb{B}}_{\Lambda}^{\mathbb{Z}}\) consists of \(9\) points and \(K_{\Lambda}=\operatorname{co}\left(\overline{\mathbb{B}}_{\Lambda}^{\mathbb{Z} }\right)\) is the square with side length \(2\). 
For \(p=(0,0)\) and \(q=(1,0)\), we want to express the matrix unit \(E_{p,q}\) as a linear combination of products of basic operators (17) with \(l=q-p=(1,0)\). Set \(\Lambda^{\prime}:=\|q\|=1\). Now, find an extreme point \(m\in\operatorname{ex}(K_{\Lambda})\) such that (18) holds. A valid choice is e.g. \(m:=(-1,-1)\). Set \(k:=p-m=(1,1)\). Then we have: \[\overline{\operatorname{L}}_{\Lambda}^{\mathbb{Z}}(l)\cap\overline{ \operatorname{L}}_{\Lambda}^{\mathbb{Z}}(l+k)=\overline{\operatorname{L}}_{ \sqrt{2}}^{\mathbb{Z}}(1,0)\cap\overline{\operatorname{L}}_{\sqrt{2}}^{ \mathbb{Z}}(2,1)=\{(1,0),(1,1)\}\] Therefore: \[T_{-k}T_{l+k}=T_{(-1,-1)}T_{(2,1)}=\sum_{n\in\overline{\operatorname{L}}_{ \sqrt{2}}^{\mathbb{Z}}(1,0)\cap\overline{\operatorname{L}}_{\sqrt{2}}^{ \mathbb{Z}}(2,1)}E_{n-(1,0),n}=\underbrace{E_{(0,0),(1,0)}}_{=E_{p,q}}+E_{(0, 1),(1,1)}\] Note that \(\|(1,1)\|=\sqrt{2}>\|q\|=1\). By Example 4.2, we have \(E_{(0,1),(1,1)}=T_{(-1,-2)}T_{(2,2)}\). Altogether, we obtain: \[E_{(0,0),(1,0)}=T_{(-1,-1)}T_{(2,1)}-T_{(-1,-2)}T_{(2,2)}\] ### Dual In [11, Theorem 3.1] it was shown that the operator system of \((N\times N)\)-Toeplitz matrices \(\operatorname{C}(S^{1})^{(N)}\) is dual to the Fejer-Riesz system \(\operatorname{C}(S^{1})_{(N)}\) which consists of trigonometric polynomials of the form \(\sum_{n=-N+1}^{N-1}a_{n}e_{n}\). In fact, this was already stated in [8] but it was only shown in [11] that \(\operatorname{C}(S^{1})^{(N)}\cong\left(\operatorname{C}(S^{1})_{(N)}\right)^ {\operatorname{dual}}\) in the sense that there is a unital _complete_ order isomorphism. For the proof, the (operator valued) Fejer-Riesz theorem plays an essential role. Recall that the Fejer-Riesz theorem states that every non-negative Laurent polynomial \(P=\sum_{k=-N}^{N}a_{k}e_{k}\geq 0\) can be expressed as a square of an analytic polynomial \(Q=\sum_{k=0}^{N}b_{k}e_{k}\), i.e. \[P=Q^{*}Q.\] A generalization to the case where the coefficients \(a_{k}\) and \(b_{k}\) are operators on a Hilbert space is due to Rosenblum. See [10] for a survey on the operator-valued Fejer-Riesz theorem. In view of the duality result for \(S^{1}\), it would be natural to expect that the operator system \(\operatorname{C}(\mathbb{T}^{d})^{(\Lambda)}\) is dual to the operator system \(\operatorname{C}(\mathbb{T}^{d})_{(2\Lambda)}\) which consists of trigonometric polynomials on the \(d\)-torus of the form \(\sum_{n\in\overline{\mathbb{B}}_{2\Lambda}^{\mathbb{Z}}}a_{n}e_{n}\). However, this duality must fail even algebraically as soon as \(d\geq 2\). Indeed, it is clear that \(\dim\left(\operatorname{C}(\mathbb{T}^{d})_{(2\Lambda)}\right)=\#\overline{ \mathbb{B}}_{2\Lambda}^{\mathbb{Z}}=\mathcal{N}_{\operatorname{B}}(2\Lambda)\) and from the above considerations about a basis we conclude that \(\dim\left(\operatorname{C}(\mathbb{T}^{d})^{(\Lambda)}\right)=\#\left( \overline{\mathbb{B}}_{\Lambda}^{\mathbb{Z}}-\overline{\mathbb{B}}_{\Lambda}^{ \mathbb{Z}}\right)=\#\left(\overline{\mathbb{B}}_{\Lambda}^{\mathbb{Z}}+ \overline{\mathbb{B}}_{\Lambda}^{\mathbb{Z}}\right)\). In general, however, the inclusion \(\overline{\mathbb{B}}_{\Lambda}^{\mathbb{Z}}+\overline{\mathbb{B}}_{\Lambda}^ {\mathbb{Z}}\subset\overline{\mathbb{B}}_{2\Lambda}^{\mathbb{Z}}\) is strict, as the following examples demonstrate: **Example 4.4**.: If \(d=2\), we have that \((3,2)\in\overline{\mathbb{B}}_{4}^{\mathbb{Z}}\backslash\left(\overline{ \mathbb{B}}_{2}^{\mathbb{Z}}+\overline{\mathbb{B}}_{2}^{\mathbb{Z}}\right)\). 
If \(d\geq 3\), we have that \((1,\cdots,1)\in\overline{\mathbb{B}}_{2}^{\mathbb{Z}}\backslash\left(\overline{ \mathbb{B}}_{1}^{\mathbb{Z}}+\overline{\mathbb{B}}_{1}^{\mathbb{Z}}\right)\). In view of these remarks, a more promising candidate for the dual operator system of \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\) might be the operator system consisting of trigonometric polynomials on the \(d\)-torus of the form \(\sum_{n\in\mathbbm{B}_{\Lambda}^{z}+\mathbbm{B}_{\Lambda}^{z}}a_{n}e_{n}\). In order to generalize the proof of [11] to this setting, a multivariate version of the (operator-valued) Fejer-Riesz theorem would be needed. In fact, it would be sufficient to be able to express a Laurent polynomial in \(d\) variables \(P=\sum_{n\in\mathbbm{B}_{\Lambda}^{z}}a_{n}e_{n}\) as a finite sum of squares of analytic polynomials \(Q_{i}=\sum_{n\in\left(\mathbbm{B}_{\Lambda}^{z}\right)_{+}}b_{n}^{i}e_{n}\), where \(\left(\mathbbm{B}_{\Lambda}^{z}\right)_{+}=\mathbbm{B}_{\Lambda}\cap( \mathbb{Z}_{\geqslant 0})^{d}\). However, at least for \(d\geqslant 3\) such a result must fail, even if higher degrees of the analytic polynomials \(Q_{i}\) are admitted ([33]). For \(d=2\), Dritschel proved ([9, Theorem 4.1]) that if \(P\) is a Laurent polynomial of degree \((d_{1},d_{2})\) (with Hilbert space operators as coefficients), then \(P\) is a sum of at most \(2d_{2}\) squares of analytic polynomials each of degree at most \((d_{1},d_{2}-1)\). However, as it is stated in the conclusion of that article it is not clear by how much the number of summands and in particular the bound on the degree can be improved. Yet, there are examples for non-negative polynomials of degree \((d_{1},d_{2})\) which are not a finite sum of squares of analytic polynomials of degree \((d_{1},d_{2})\) ([24], [31], [32, Section 3.6]). Note that it is in fact true that a strictly positive trigonometric polynomial \(P>0\) in \(d\) variables is a finite sum of squares of analytic polynomials each of the same degree as \(Q\) ([10, Theorem 5.1]). We see that it is apparently not as straightforward to determine the operator system dual of \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\) as one might expect at first thought and we have to leave this to further research to be conducted. ## Appendix A The volume of the lense We express the volume \(\mathcal{V}_{\mathrm{L}}(\Lambda,\xi)\) of the lense \(\mathrm{L}(\Lambda,\xi)=\mathrm{B}_{\Lambda}\cap\mathrm{B}_{\Lambda}(\xi)\) in terms of the radius \(\Lambda\) and the translation parameter \(\xi\). Recall that \(B_{t}(a,b):=\int_{0}^{t}s^{a-1}(1-s)^{b-1}ds\), for \(0\leqslant t\leqslant 1\), is the _incomplete Beta function_, \(B(a,b):=B_{1}(a,b)\) is the _Beta function_ and \(I_{t}(a,b):=\frac{B_{t}(a,b)}{B(a,b)}\) is the _normalized incomplete Beta function_, where \(a\) and \(b\) may be any complex numbers with positive real parts. For \(0\leqslant h\leqslant\Lambda\), the _hyperspherical cap_ of height \(h\) in the ball of radius \(\Lambda\) is defined as follows: \[\mathrm{C}_{\Lambda}(h)=\left\{x=(x_{1},\cdots,x_{d})\in\mathbb{R}^{d}\,:\, \|x\|\leqslant\Lambda\text{ and }x_{d}>\Lambda-h\right\}\] Instead of the height \(h\), one could equivalently specify the radius of its base \(r\) or the colatitude angle \(\phi\) to define \(\mathrm{C}_{\Lambda}(h)\). 
It is easy to see that these three are related as follows: \[\Lambda-h =\Lambda\cos\phi\] \[r =\Lambda\sin\phi\] To compute the volume of \(\mathrm{C}_{\Lambda}(h)\), one can integrate the volume of the \((d-1)\)-dimensional ball \(\mathcal{V}_{B}^{d-1}(r)\) with radius \(r=\Lambda\sin\theta\) along the \(x_{d}\)-axis with height element \(\Lambda\cos\theta\), as \(\theta\) ranges from \(\phi\) to \(0\). This yields the following lemma: **Lemma A.1**.: _The volume \(\mathcal{V}_{\mathrm{C}}(\Lambda,h)\) of the hyperspherical cap \(\mathrm{C}_{\Lambda}(h)\) of height \(h\) in the \(d\)-dimensional ball of radius \(\Lambda\) is given as follows:_ \[\mathcal{V}_{\mathrm{C}}(\Lambda,h)=\frac{1}{2}\mathcal{V}_{\mathrm{B}}( \Lambda)I_{\sin^{2}\phi}\left(\frac{d+1}{2},\frac{1}{2}\right),\] _where \(\phi=\cos^{-1}\left(\frac{\Lambda-h}{\Lambda}\right)\) is the colatitude angle of \(\mathrm{C}_{\Lambda}(h)\)._ Proof.: The proof is as in [23] and requires the following identity: \[\int_{0}^{\phi}\sin^{d}\theta\mathrm{d}\theta =\frac{1}{2}B_{\sin^{2}\phi}\left(\frac{d+1}{2},\frac{1}{2}\right) \tag{20}\] This is obvious if \(\phi=0\). Taking the derivative of the right-hand side gives: \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}\phi}\int_{0}^{\sin^{2} \phi}t^{\frac{d-1}{2}}(1-t)^{-\frac{1}{2}}dt =\frac{1}{2}(\sin^{2}\phi)^{\frac{d-1}{2}}(1-\sin^{2}\phi)^{-\frac {1}{2}}2\sin\phi\cos\phi\] \[=\sin^{d}\phi\] \[=\frac{\mathrm{d}}{\mathrm{d}\phi}\int_{0}^{\phi}\sin^{d}\theta \mathrm{d}\theta,\] which proves (20). With this, we compute: \[\mathcal{V}_{\mathrm{C}}(\Lambda,h) =\int_{\theta=\phi}^{0}\mathcal{V}_{B}^{d-1}(\Lambda\sin\theta) \mathrm{d}\Lambda\cos\theta\] \[=\int_{0}^{\phi}\mathcal{V}_{B}^{d-1}(\Lambda\sin\theta)\Lambda \cos\theta\mathrm{d}\theta\] \[=\frac{\pi^{\frac{d-1}{2}}}{\Gamma(\frac{d-1}{2}+1)}\Lambda^{d} \int_{0}^{\phi}\sin^{d}\theta\mathrm{d}\theta\] \[=\frac{\pi^{\frac{d-1}{2}}}{\Gamma(\frac{d-1}{2}+1)}\Lambda^{d} \frac{1}{2}B_{\sin^{2}\phi}\left(\frac{d+1}{2},\frac{1}{2}\right)\] \[=\frac{1}{2}\frac{\pi^{\frac{d-1}{2}}}{\Gamma(\frac{d-1}{2}+1)} \Lambda^{d}\frac{\Gamma(\frac{d+1}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{d}{2} +1)}I_{\sin^{2}\phi}\left(\frac{d+1}{2},\frac{1}{2}\right)\] \[=\frac{1}{2}\frac{\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2}+1)} \Lambda^{d}I_{\sin^{2}\phi}\left(\frac{d+1}{2},\frac{1}{2}\right)\] \[=\frac{1}{2}\mathcal{V}_{\mathrm{B}}(\Lambda)I_{\sin^{2}\phi} \left(\frac{d+1}{2},\frac{1}{2}\right),\] where the second step is the computation of a Riemann-Stieltjes integral. Let \(\xi\in\mathbb{R}^{d}\) with \(\|\xi\|\leq 2\Lambda\). It is clear that the volume of the dense \(\mathrm{L}_{\Lambda}(\xi)\) is given by: \[\mathcal{V}_{\mathrm{L}}(\Lambda,\xi)=2\mathcal{V}_{\mathrm{C}}(\Lambda,h), \tag{21}\] where the height \(h\) is related to the translation parameter \(\xi\) by \(2(\Lambda-h)=\|\xi\|\), i.e. \(h=\Lambda-\frac{\|\xi\|}{2}\). 
Returning to the lens, we compute: \[\sin^{2}\phi=\sin^{2}\left(\cos^{-1}\left(\frac{\Lambda-h}{\Lambda}\right) \right)=\sin^{2}\left(\cos^{-1}\left(\frac{\|\xi\|}{2\Lambda}\right)\right)=1- \left(\frac{\|\xi\|}{2\Lambda}\right)^{2}\] This, together with (21) and Lemma A.1, immediately gives the following corollary, where we denote by \(t_{+}:=\begin{cases}t,\text{ if }t\geq 0\\ 0,\text{ if }t<0\end{cases}\) the positive part of the real number \(t\): **Corollary A.2**.: _For all \(\Lambda\geq 0\) and all \(\xi\in\mathbb{R}^{d}\), the following holds:_ \[\mathcal{V}_{\mathrm{L}}(\Lambda,\xi)=\mathcal{V}_{\mathrm{B}}(\Lambda)I_{t( \Lambda,\xi)_{+}}\left(\frac{d+1}{2},\frac{1}{2}\right),\] _for \(t(\Lambda,\xi):=1-\left(\frac{\|\xi\|}{2\Lambda}\right)^{2}\)._ ## Appendix B Some computations with special functions **Lemma B.1**.: _Let \(d\geq 2\) and let \(\varphi=\varphi_{0}(\|\cdot\|)\) be a radially symmetric Borel measurable function on \(\mathbb{R}^{d}\) with compact essential support. Then the following holds:_ \[\widehat{\varphi}(x)=\widetilde{\varphi}(x)=2\pi^{\frac{d-1}{2}}\sum_{j=0}^{ \infty}(-1)^{j}\frac{s^{2j}}{(2j)!}\frac{\Gamma\left(j+\frac{1}{2}\right)}{ \Gamma\left(j+\frac{d}{2}\right)}\int_{0}^{\infty}\varphi_{0}(r)r^{2j+d-1}dr,\] _for all \(x\in\mathbb{R}^{d}\), where \(s:=2\pi\|x\|\)._ Proof.: We make use of the following expression for the inverse Fourier transform of a radially symmetric function \(\varphi=\varphi_{0}(\|\cdot\|)\in\mathrm{L}^{1}(\mathbb{R}^{d})\): \[\widetilde{\varphi}(x)=\frac{2\pi}{\|x\|^{\frac{d}{2}-1}}\int_{0}^{\infty} \varphi_{0}(r)J_{\frac{d}{2}-1}(2\pi r\|x\|)r^{\frac{d}{2}}dr, \tag{22}\] where \(J_{\mu}\) denotes the Bessel function, for \(\mu>-\frac{1}{2}\), for which we take its series representation as a definition: \[J_{\mu}(t)=\frac{1}{\Gamma\left(\frac{1}{2}\right)}\left(\frac{t}{2}\right)^{ \mu}\sum_{j=0}^{\infty}(-1)^{j}\frac{t^{2j}}{(2j)!}\frac{\Gamma\left(j+\frac{1 }{2}\right)}{\Gamma(j+\mu+1)}, \tag{23}\] for all \(t\in\mathbb{R}\). A derivation of (22) can be found in [12, Appendix B.5], where one should keep in mind that \(\widetilde{d\sigma}=\widehat{d\sigma}\), for \(d\sigma\) the surface measure on the \((d-1)\)-dimensional Euclidean sphere. 
The series representation (23) can be found in [38, Section 3.1]. By plugging (23) into (22), substituting \(s:=2\pi\|x\|\) and applying \(\Gamma(\frac{1}{2})=\pi^{\frac{1}{2}}\), we obtain: \[\widetilde{\varphi}(x) =\frac{s}{\|x\|^{\frac{d}{2}}}\int_{0}^{\infty}\varphi_{0}(r)J_{ \frac{d}{2}-1}(rs)r^{\frac{d}{2}}dr\] \[=\frac{s}{\|x\|^{\frac{d}{2}}}\int_{0}^{\infty}\varphi_{0}(r) \left(\frac{1}{\Gamma\left(\frac{1}{2}\right)}\left(\frac{rs}{2}\right)^{ \frac{d}{2}-1}\sum_{j=0}^{\infty}(-1)^{j}\frac{(rs)^{2j}}{(2j)!}\frac{\Gamma \left(j+\frac{1}{2}\right)}{\Gamma\left(j+\frac{d}{2}\right)}\right)r^{\frac{ d}{2}}dr\] \[=\frac{s}{\|x\|^{\frac{d}{2}}}\frac{1}{\Gamma(\frac{1}{2})} \left(\frac{s}{2}\right)^{\frac{d}{2}-1}\sum_{j=0}^{\infty}(-1)^{j}\frac{s^{2 j}}{(2j)!}\frac{\Gamma\left(j+\frac{1}{2}\right)}{\Gamma\left(j+\frac{d}{2} \right)}\int_{0}^{\infty}\varphi_{0}(r)r^{2j+d-1}dr\] \[=2\pi^{\frac{d-1}{2}}\sum_{j=0}^{\infty}(-1)^{j}\frac{s^{2j}}{(2 j)!}\frac{\Gamma\left(j+\frac{1}{2}\right)}{\Gamma\left(j+\frac{d}{2}\right)} \int_{0}^{\infty}\varphi_{0}(r)r^{2j+d-1}dr.\] **Lemma B.2**.: _The square of the Bessel function has the following series representation, for \(k\) a non-negative integer and \(t\in\mathbb{R}\):_ \[J_{\frac{k}{2}}(t)^{2}=\frac{1}{\Gamma\left(\frac{1}{2}\right)}\left(\frac{t}{ 2}\right)^{k}\sum_{j=0}^{\infty}t^{2j}\frac{(-1)^{j}}{(2j)!}\frac{\Gamma\left( j+\frac{1}{2}\right)\Gamma(2j+k+1)}{\Gamma(j+k+1)\Gamma(j+\frac{k}{2}+1)^{2}}\] Proof.: The following series representation for \(J_{\frac{k}{2}}(t)^{2}\) can be found in [38, Section 5.41]: \[J_{\frac{k}{2}}(t)^{2}=\left(\frac{t}{2}\right)^{k}\sum_{j=0}^{\infty}(-1)^{j} \frac{t^{2j}}{j!}\left(\frac{1}{2}\right)^{2j}\frac{\Gamma(2j+k+1)}{\Gamma(j+k +1)\Gamma(j+\frac{k}{2}+1)^{2}}, \tag{24}\] for \(k\) a non-negative integer and for all \(t\in\mathbb{R}\). To fit this formula to our context, we remark that, for every non-negative integer \(j\), the following holds: \[\frac{\Gamma\left(j+\frac{1}{2}\right)\Gamma(j)}{\Gamma\left(\frac{1}{2} \right)\Gamma(2j)}=\left(\frac{1}{2}\right)^{2j-1} \tag{25}\] This can be easily proven e.g. by induction on \(j\) with the understanding that \(\frac{\Gamma(0)}{\Gamma(2\cdot 0)}=\lim_{t\searrow 0}\frac{\Gamma(t)}{ \Gamma(2t)}=2\). Plugging (25) into (24) we obtain: \[J_{\frac{k}{2}}(t)^{2} =\frac{1}{2}\left(\frac{t}{2}\right)^{k}\sum_{j=0}^{\infty}(-1)^ {j}\frac{t^{2j}}{j!}\frac{\Gamma\left(j+\frac{1}{2}\right)\Gamma(j)}{\Gamma \left(\frac{1}{2}\right)\Gamma(2j)}\frac{2j}{2j}\frac{\Gamma(2j+k+1)}{\Gamma(j +k+1)\Gamma(j+\frac{k}{2}+1)^{2}}\] \[=\frac{1}{\Gamma\left(\frac{1}{2}\right)}\left(\frac{t}{2} \right)^{k}\sum_{j=0}^{\infty}(-1)^{j}\frac{t^{2j}}{(2j)!}\frac{\Gamma\left(j+ \frac{1}{2}\right)\Gamma(2j+k+1)}{\Gamma(j+k+1)\Gamma(j+\frac{k}{2}+1)^{2}},\] where we used that \(\Gamma(j)=(j-1)!\), for integers \(j\geqslant 1\). **Lemma B.3**.: _The following identity holds, for all \(j\geqslant 0\) and \(d\geqslant 2\):_ \[\frac{2^{2j}}{2j+d}\frac{\Gamma(j+\frac{d+1}{2})}{\Gamma(j+\frac{d}{2})}= \frac{\pi^{\nicefrac{1}{2}}}{2^{d+1}}\frac{\Gamma(2j+d+1)}{\Gamma(j+\frac{ d}{2}+1)^{2}}\] Proof.: We apply the well-known property \(\frac{\Gamma(t+1)}{\Gamma(t)}=t\) and Legendre's duplication formula \(\frac{\Gamma(t)\Gamma(t+\frac{1}{2})}{\Gamma(2t)}=2^{1-2t}\pi^{\nicefrac{ 1}{2}}\), for all \(t>0\). 
With this, we compute the left-hand side of the claimed identity multiplied by \(\frac{\Gamma(j+\frac{d}{2}+1)^{2}}{\Gamma(2j+d+1)}\): \[2^{2j}\frac{\Gamma(2j+d)}{\Gamma(2j+d+1)}\frac{\Gamma(j+\frac{d+ 1}{2})}{\Gamma(j+\frac{d}{2})}\frac{\Gamma(j+\frac{d}{2}+1)^{2}}{\Gamma(2j+d+ 1)}\] \[=2^{2j}\frac{\Gamma(j+\frac{d+1}{2})\Gamma(j+\frac{d}{2}+1)}{ \Gamma(2j+d+1)}\frac{j+\frac{d}{2}}{2j+d}\] \[=2^{2j}2^{1-(2j+d+1)}\pi^{\nicefrac{1}{2}}\frac{1}{2}\] \[=\frac{\pi^{\nicefrac{1}{2}}}{2^{d+1}},\] and rearranging yields the claimed identity. ## Appendix C Some convex geometry In order to compute the propagation number of the operator system \(\mathrm{C}(\mathbb{T}^{d})^{(\Lambda)}\), some facts from convex geometry are required. Since the natural setting for these is locally convex spaces, we formulate all the required results in this abstract language even though we only make use of them in the finite dimensional case. See e.g. [30, Section 11] and [3, Chapter II] for much of the standard terminology. Throughout this section, let \(X\) be a Hausdorff locally convex space with continuous dual \(X^{\prime}\). Every linear functional \(l\) on \(X\) and every real number \(\alpha\in\mathbb{R}\) give rise to a hyperplane in \(X\) given by \(H_{l=\alpha}=\{x\in X\,|\,l(x)=\alpha\}\). Clearly, \(H_{l=\alpha}=\ker(l-\alpha)\) and the hyperplane \(H_{l=\alpha}\) is closed if and only if \(l\) is continuous, which is the only case we consider. Every hyperplane \(H_{l=\alpha}\) gives rise to an open positive and an open negative half-space denoted by \(H_{l>\alpha}\) and \(H_{l<\alpha}\) respectively and defined by \(H_{l>\alpha}:=\{x\in X\,|\,l(x)>\alpha\}\) and similarly for \(H_{l<\alpha}\). Their respective closures are called the closed positive and the closed negative half-space associated to \(H_{l=\alpha}\) and denoted by \(H_{l\geq\alpha}\) and \(H_{l\leq\alpha}\) respectively. Of course \(H_{l\geq\alpha}=\{x\in X\,|\,l(x)\geq\alpha\}\) and similarly for \(H_{l\leq\alpha}\). If \(K,L\subseteq X\) are two convex sets, it is said that they are _separated_ by the hyperplane \(H_{l=\alpha}\) if \(K\subseteq H_{l\geq\alpha}\) and \(L\subseteq H_{l\leq\alpha}\). The sets \(K\) and \(L\) are called _properly separated_ if additionally \(K\nsubseteq H_{l=\alpha}\) or \(L\nsubseteq H_{l=\alpha}\). The sets \(K\) and \(L\) are called _strictly separated_ by \(H_{l=\alpha}\) if \(K\subseteq H_{l>\alpha}\) and \(L\subseteq H_{l<\alpha}\). A hyperplane \(H_{l=\alpha}\) is called a _supporting hyperplane_ for a non-empty convex set \(K\subset X\), if \(K\subseteq H_{l\geq\alpha}\) and \(x\in H_{l=\alpha}\), for at least one \(x\in K\). A supporting hyperplane \(H_{l=\alpha}\) for \(K\) is called _non-trivial_ if \(K\nsubseteq H_{l=\alpha}\). Recall that a cone \(C\subseteq X\) is called _pointed_ if \(0\in C\), _salient_ if it does not contain any \(1\)-dimensional subspaces of \(X\) and _proper_ if \(C\cap(-C)=\{0\}\). We only consider convex cones. A convex pointed cone is salient if and only if it is proper. For \(x\in X\), we call a set \(C+x\) a _cone with vertex \(x\)_ if \(C\) is a pointed cone. A cone \(D\) with vertex \(x\) is called _proper_ if \(D-x\) is a proper cone. Let \(K\subset X\) be a convex set with \(0\notin K\). Set \(C:=[0,\infty)\cdot K\). Clearly, \(C\) is a convex pointed cone. Moreover, \(C\) is the smallest convex pointed cone which contains \(K\), in the sense that every convex pointed cone which contains \(K\) must contain \(C\). 
If \(C\) is a convex pointed cone, a subset \(B\subset C\) is called a _base_ (or _sole_ in [3, II, SS8.3]) if there exists a closed hyperplane \(H\not\ni 0\) such that \(B=H\cap C\) and such that \(C\) is the smallest convex pointed cone which contains \(B\). It is well-known that a convex subset \(B\) of a convex pointed cone \(C\) is a base if and only if, for every \(x\in C\backslash\{0\}\), there exists a unique pair \((\lambda,y)\in(0,\infty)\times B\) such that \(x=\lambda y\). The statement of the following lemma can be found in [3, II, SS7.2, Exercise 21a]. **Lemma C.1**.: _Let \(C\subset X\) be a locally compact, closed, convex, proper cone with vertex \(x\). Then there exists a closed supporting hyperplane \(H\) of \(C\) such that \(H\cap C=\{x\}\)._ Proof.: To simplify notation we assume, without loss of generality, that \(x=0\). Let \(U\subset X\) be a convex open neighborhood of \(0\) such that \(K:=\overline{U}\cap C\) is compact. We claim that \(C=[0,\infty)\cdot K\), i.e. \(C\) is the smallest convex pointed cone which contains \(K\). In fact, the inclusion \(C\supseteq[0,\infty)\cdot K\) is clear from the cone property. To see that \(C\subseteq[0,\infty)\cdot K\), let \(y\in C\). Since every \(0\)-neighborhood in a locally convex space is absorbent, there exists a positive scalar \(\lambda>0\) such that \(y\in\lambda U\). Hence, \(\frac{1}{\lambda}y\in U\cap\frac{1}{\lambda}C=U\cap C\subset K\) and therefore \(y=\lambda\frac{1}{\lambda}y\in\lambda K\subset[0,\infty)\cdot K\). By [3, II, SS7.1, Proposition 2], there is an open half-space \(H_{l<\alpha}\subset X\) such that \(0\in H_{l<\alpha}\cap K\subset U\cap C\). We may assume that the scalar \(\alpha\) is positive, otherwise pass to the functional \(-l\) instead. The boundary \(\partial H_{l<\alpha}=H_{l=\alpha}\) of this half-space is a closed hyperplane of \(X\) which does not contain \(0\). We claim that the cone \(C\) is the smallest closed convex pointed cone which contains \(H_{l=\alpha}\cap C\), i.e. \(C=[0,\infty)\cdot(H_{l=\alpha}\cap C)\). To see this, observe that the following inclusion holds: \[[0,\infty)\cdot(H_{l=\alpha}\cap C)=[0,\infty)\cdot(H_{l\leqslant\alpha}\cap C) \subseteq[0,\infty)\cdot K=C\] The converse inclusion \([0,\infty)\cdot(H_{l\leqslant\alpha}\cap C)\supseteq[0,\infty)\cdot K\) follows from the continuity and hence boundedness on the compact set \(K\) of the functional \(l\) by observing that there exists a positive real number \(\lambda>0\) such that \(\lambda H_{l\leqslant\alpha}=H_{l\leqslant\lambda\alpha}\supseteq K\). Now, set \(H:=H_{l=0}\). Then we have: \[H\cap C=H\cap[0,\infty)\cdot(H_{l=\alpha}\cap C)=(H\cap[0,\infty)\cdot H_{l= \alpha})\cap C=\{0\}\] So \(H\) is the desired closed supporting hyperplane for \(C\). **Lemma C.2**.: _Let \(K\subset X\) be a non-empty compact convex subset. Let \(l\in X^{\prime}\) be a continuous linear functional and \(H_{l=0}\) be the associated closed hyperplane through \(0\). Then there exists an extreme point \(x\in\operatorname{ex}(K)\) such that \(K-x\subset H_{l\geqslant 0}\)._ Proof.: By continuity of \(l\), the image \(l(K)\subset\mathbb{R}\) is bounded. Set \(\alpha:=\inf(l(K))\). In other words, \(\alpha\) is the largest real number such that \(K\subset H_{l\geqslant\alpha}\). In particular, \(H_{l=\alpha}\) is a closed supporting hyperplane of \(K\). By [3, II, SS7.1, Corollary to Proposition 1], the hyperplane \(H_{l=\alpha}\) contains an extreme point \(x\) of \(K\). 
Moreover, \(K-x\) is contained in the positive half-space \(H_{l\geqslant 0}\). Combining the previous two lemmas, we obtain the following immediate consequence: **Corollary C.3**.: _Let \(C\subset X\) be a locally compact, closed, convex, proper cone with vertex \(x\) and let \(K\subset X\) be a non-empty compact convex set. Then there exists an extreme point \(y\in\operatorname{ex}(K)\) and a closed hyperplane \(H\) which separates \(C-x\) and \(K-y\). Moreover, these two sets only intersect in \(0\), i.e. \((C-x)\cap(K-y)=\{0\}\)._
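Corollary C.3 can be instantiated concretely in the plane. The sketch below is an illustration added for clarity, not taken from the source; the example data and the functional \(l\) are our choices. It takes \(C=[0,\infty)^{2}\) with vertex \(x=0\), \(K\) a closed disk, and \(l(v)=v_{1}+v_{2}\); applying Lemma C.2 to \(-l\) selects the extreme point \(y\) of \(K\) maximizing \(l\), so that \(C-x\subset H_{l\geqslant 0}\) and \(K-y\subset H_{l\leqslant 0}\).

```python
import numpy as np

rng = np.random.default_rng(1)
l = np.array([1.0, 1.0])            # l(v) = v1 + v2; H = {l = 0} supports C at 0

# C = nonnegative quadrant: a locally compact, closed, convex, proper cone.
C = rng.random((5000, 2)) * 10.0    # random sample points of C

# K = closed unit disk centred at (3, 2); y = extreme point of K maximizing l.
center = np.array([3.0, 2.0])
y = center + l / np.sqrt(2.0)       # center + radius * l / ||l||

ang = 2 * np.pi * rng.random(5000)
rad = np.sqrt(rng.random(5000))     # uniform sample points of the disk
K = center + np.c_[rad * np.cos(ang), rad * np.sin(ang)]

assert np.all(C @ l >= 0)           # C - x = C  is contained in H_{l >= 0}
assert np.all((K - y) @ l <= 1e-9)  # K - y      is contained in H_{l <= 0}
# Hence H_{l=0} separates the two sets, and they can only meet inside
# H_{l=0} ∩ C = {0}, exactly as Corollary C.3 asserts.
print("separation verified on", len(C) + len(K), "sample points")
```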
2310.19680
Integrating Pre-trained Language Model into Neural Machine Translation
Neural Machine Translation (NMT) has become a significant technology in natural language processing through extensive research and development. However, the deficiency of high-quality bilingual language pair data still poses a major challenge to improving NMT performance. Recent studies have been exploring the use of contextual information from pre-trained language model (PLM) to address this problem. Yet, the issue of incompatibility between PLM and NMT model remains unresolved. This study proposes PLM-integrated NMT (PiNMT) model to overcome the identified problems. PiNMT model consists of three critical components, PLM Multi Layer Converter, Embedding Fusion, and Cosine Alignment, each playing a vital role in providing effective PLM information to NMT. Furthermore, two training strategies, Separate Learning Rates and Dual Step Training, are also introduced in this paper. By implementing the proposed PiNMT model and training strategy, we achieve state-of-the-art performance on the IWSLT'14 En$\leftrightarrow$De dataset. This study's outcomes are noteworthy as they demonstrate a novel approach for efficiently integrating PLM with NMT to overcome incompatibility and enhance performance.
Soon-Jae Hwang, Chang-Sung Jeong
2023-10-30T16:00:13Z
http://arxiv.org/abs/2310.19680v4
# Integrating Pre-trained Language Model into Neural Machine Translation ###### Abstract Neural Machine Translation (NMT) has become a significant technology in natural language processing through extensive research and development. However, the deficiency of high-quality bilingual language pair data still poses a major challenge to improving NMT performance. Recent studies have been exploring the use of contextual information from pre-trained language model (PLM) to address this problem. Yet, the issue of incompatibility between PLM and NMT model remains unresolved. This study proposes PLM-integrated NMT (PiNMT) model to overcome the identified problems. PiNMT model consists of three critical components, PLM Multi Layer Converter, Embedding Fusion, and Cosine Alignment, each playing a vital role in providing effective PLM information to NMT. Furthermore, two training strategies, Separate Learning Rates and Dual Step Training, are also introduced in this paper. By implementing the proposed PiNMT model and training strategy, we achieve state-of-the-art performance on the IWSLT'14 En\(\leftrightarrow\)De dataset. This study's outcomes are noteworthy as they demonstrate a novel approach for efficiently integrating PLM with NMT to overcome incompatibility and enhance performance. Neural Machine Translation, Pre-trained Language Model, Catastrophic Forgetting, Incompatibility, Fine-tuning, Distillation ## I Introduction Neural Machine Translation (NMT) has emerged as a prominent research topic in artificial intelligence and natural language processing over recent years. Particularly, Transformer model [1] utilizing the Attention mechanism has played a decisive role in substantially enhancing the performance of NMT. However, there remain several challenges in training NMT model. One primary challenge is the requirement of vast amounts of high-quality bilingual language pair data. Collecting and curating such data entail significant costs and time. As previous studies [2, 3] have shown, the absence of high-quality bilingual language pair data complicates the training of NMT model and leads to performance deterioration. Against this backdrop, Pre-trained Language Models (PLMs) such as ELMo [4], GPT [5], BERT [6], XLNet [7], BART [8], and T5 [9] acquire rich contextual information from readily available large-scale monolingual data. Leveraging this information, they undergo fine-tuning for downstream tasks and have achieved impressive results on key natural language processing benchmarks like GLUE [10] and SUPERGLUE [11]. Given the evident superior effects of fine-tuning PLMs for downstream tasks, we have delved into existing research to understand the potential integration of PLM into NMT model. Some explored methods include: initializing model parameters with PLM checkpoint instead of random initialization followed by fine-tuning [12, 13, 14, 15, 16]; indirectly employing PLM output in NMT through distillation [17, 18]; and directly utilizing PLM output as input for NMT model [14, 18, 19, 20]. However, incorporating PLM into NMT is not straightforward. Fine-tuning PLM resulted in relatively lower performance than directly using the output of a frozen PLM as NMT input [19]. The reason is the occurrence of Catastrophic Forgetting [21] during the process of transferring pre-existing knowledge from PLM to NMT. Opting not to fine-tune and leaving PLM frozen also led to decreased performance [14]. 
This drop is due to the incompatibility arising from differences in the training task, model structure, and domain of training data between PLM and NMT. For instance, while PLMs like BERT [6] operate with an encoder structure restoring masked monolingual language data, NMT models employ an encoder-decoder structure, translating source language data to target language data. In conclusion, a new strategy is needed to overcome the issues identified above. This paper presents a novel PLM-integrated NMT (PiNMT) Model as a solution to the previously identified challenges, effectively merging PLM and NMT. The PiNMT model is composed of three primary components: PLM Multi Layer Converter that effectively transforms the deep and rich multi-layer contextual information from PLM into information suitable for NMT; Embedding Fusion that addresses the complex fine-tuning issues of PLM; and Cosine Alignment, which prevents potential information loss during the information transfer process between the two models. Additionally, to enhance the efficiency and accuracy of model training, we introduce two strategies: Separate Learning Rates, applying different learning rates considering the complexity and scale between PLM and NMT model; and Dual Step Training, which further amplifies model performance through the use of bidirectional data. The code implementation is publicly accessible at the following repository1. Footnote 1: Available at: [https://github.com/vhch/PiNMT](https://github.com/vhch/PiNMT) Through these strategically designed approaches, our model exhibits a remarkable improvement on the IWSLT'14 En\(\leftrightarrow\)De dataset, showcasing a 5.16 BLEU score increase compared to the basic model. Notably, this result surpasses the previously highest-performing model on the same dataset by an additional 1.55 BLEU score, thereby solidifying its superior performance. ## II Related Work ### _Pretrained Language Model_ In the field of NLP, various PLMs have been proposed to exploit large-scale monolingual data across different languages and domains. Mikolov et al. [22] introduced two architectures, CBOW and Skip-Gram, effectively learning word vectors that reflect the context among words in a sentence. Although these methods efficiently learned context-reflecting vectors, they were unable to capture context-dependent meanings for polysemous words. To address this, Peters et al. [4] introduced ELMo, utilizing Bi-LSTM to generate dynamic word embeddings according to the given context, proving effective in various NLP tasks. Radford et al. [5] proposed GPT, based on Transformer decoder, which learned contextual information during sentence generation, showing outstanding results in text generation. Devlin et al. [6] introduced BERT, based on Transformer encoder, considering bidirectional context, and achieved remarkable performance in a range of NLP tasks including question answering, named entity recognition, etc. Building on these impressive performances, our research aims to utilize BERT, which considers bidirectional contextual information, to convey information to NMT. ### _Integrating PLM into NMT_ Numerous studies have been conducted on integrating PLM into NMT through various approaches. Ramachandran et al. [12] proposed initializing parts of NMT model with PLM and subsequently fine-tuning it. Ding et al. [23] suggested leveraging PLM embeddings in NMT. 
The pre-trained embeddings were kept static while being combined with additional embeddings, and only these new embeddings were trained, highlighting the significance of these supplementary embeddings. Yang et al. [17] introduced Asymptotic Distillation for transferring knowledge from PLM to NMT. A dynamic switch is employed to utilize the information from PLM in NMT dynamically. Additionally, the importance of differentiating learning rates between PLM and NMT during fine-tuning is conveyed through a proposed rate-scheduled learning. Zhu et al. [19] revealed in their preliminary exploration that using PLM output as an input to NMT proved to be more effective than initializing NMT with PLM parameters followed by fine-tuning. They also proposed BERT-fuse approach, which integrates an additional attention layer that interacts with PLM output in both the encoder and decoder. Weng et al. [18] incorporated a Dynamic Fusion Mechanism that considers information from all PLM layers in NMT Encoder. They also proposed a knowledge distillation paradigm for decoder, emphasizing the importance of utilizing multiple layers from PLM. Xu et al. [20] presented a methodology combining stochastic layer selection with bidirectional pre-training to effectively utilize multi-layers of PLM, underscoring the significance of bidirectional pre-training. Weng et al. [24] replaced NMT encoder with PLM and proposed a Layer-wise Coordination Structure to adjust the learning between PLM and NMT decoders. Subsequently, they introduced a segmented multi-task learning method for fine-tuning the pre-trained parameters, highlighting the need to reduce incompatibility between PLM and NMT. ## III Background ### _Neural Machine Translation_ The core principle of Neural Machine Translation involves learning the process of converting a given parallel sentence pair {x, y} from the source sequence x to the target sequence y. This transformation is facilitated through the Transformer model [1]. The overall structure of Transformer model is as follows: **Input Encoding**: The input sequences x, y first pass through an embedding layer, transforming them into continuous vector representations. Subsequently, position encoding, containing location information, is added to form the final input representation. \[E_{x}=\text{Emb}(x)+\text{Pos}(x) \tag{1}\] \[E_{y}=\text{Emb}(y)+\text{Pos}(y) \tag{2}\] **Encoder**: Multiple encoder layers process \(E_{x}\). The i-th encoder layer \(H_{E}^{i}\) comprises layer normalization (LN) [25], multi-head attention mechanism (MHA), and a feed-forward network (FFN) [1]. Each layer uses the output of the previous layer as input. \(S_{E}^{i}\) represents self-attention result of encoder layer. \[S_{E}^{i}=\text{LN}(H_{E}^{i-1}+\text{MHA}(H_{E}^{i-1},H_{E}^{i-1},H_{E}^{i-1})) \tag{3}\] \[H_{E}^{i}=\text{LN}(S_{E}^{i}+\text{FFN}(S_{E}^{i})) \tag{4}\] Here, \(H_{E}^{0}=E_{x}\). **Decoder**: Multiple decoder layers process \(E_{y}\). The i-th decoder layer \(H_{D}^{i}\) consists of LN, MHA, and FFN networks, and also includes attention relationship with the final output of encoder \(H_{E}^{N}\). \(S_{D}^{i}\) indicates self-attention result of decoder layer, which is used to calculate attention with the final output of encoder \(H_{E}^{N}\), obtaining \(C^{i}\). 
\[S_{D}^{i}=\text{LN}(H_{D}^{i-1}+\text{MHA}(H_{D}^{i-1},H_{D}^{i-1},H_{D}^{i-1})) \tag{5}\] \[C^{i}=\text{LN}(S_{D}^{i}+\text{MHA}(S_{D}^{i},H_{E}^{N},H_{E}^{N})) \tag{6}\] \[H_{D}^{i}=\text{LN}(C^{i}+\text{FFN}(C^{i})) \tag{7}\] Here, \(H_{D}^{0}=E_{y}\). **Output**: The final output of decoder is transformed into the probability distribution of the next token through a linear layer and softmax function. \[\text{Output}=\text{Softmax}(\text{Linear}(H_{D}^{N})) \tag{8}\] Here, \(H_{D}^{N}\) represents the output of the last layer of decoder. **Loss Function**: The training objective of NMT is to minimize the difference between the actual target and the model's prediction. The model's Output represents the probability distribution for each token of the target sequence. If the one-hot encoding of the actual target token is \(y_{true}\), then the cross-entropy loss is defined as follows: \[L_{CE}=-\sum y_{\text{true}}\log(\text{Output}) \tag{9}\] This loss function measures how close the model's predictions are to the actual target and updates the model's parameters to minimize this loss during training. ### _Pretrained Language Model_ In recent years, a variety of Pre-trained Language Models (PLMs) such as ELMo [4], GPT [5], BERT [6], XLNet [7], BART [8], and T5 [9], capable of leveraging large-scale monolingual data, have been proposed. There are mainly two methods for training PLMs. The first is the auto-regressive approach [5], where the model operates by predicting the k-th token \(z_{k}\) based on the given context \(z_{<k}\) (i.e., the sequence before the k-th token). This can be represented mathematically as \(PLM(z_{k}|z_{<k};\theta)\). The second method is the masked language modeling approach introduced by BERT [6]. In this method, random tokens are masked, and these masked tokens are predicted using the surrounding context information. Mathematically, this can be expressed as \(PLM(z_{m}|z_{-m};\theta)\). ## IV Approach In this study, we introduce a novel model called PLM-integrated NMT (PiNMT), which integrates PLM into NMT. PiNMT is composed of three components: PLM Multi Layer Converter, Embedding Fusion, and Cosine Alignment. Fig. 1 provides a detailed representation of PiNMT architecture, and our approach is designed to address the issues previously raised. Additionally, we describe two training strategies necessary for overcoming these issues: Separate Learning Rates and Dual Step Training. ### _PLM Multi Layer Converter (PMLC)_ PLMs are composed of multiple layers, and each layer captures a variety of contextual information [4, 26]. Previous research lacked a deep exploration of utilizing the multi-layered nature of PLM [2, 20]. We introduce a Converter technique to transform the multi-layer outputs of PLM into source embeddings suitable for NMT model. Additionally, we introduce a Dimensional Compression method to apply the high-dimensional information of PLM output to NMT model with parameter constraints, allowing NMT model to leverage PLM's information more effectively. **Converter** Converter transforms the multi-layer output of PLM into source embeddings suitable for NMT model. #### Iv-A1 Vanilla As seen in Fig. 1(a), only the final layer of PLM is used for output. This method is based on the theory that the information extracted from the model's final layer is the richest and has the highest contextual understanding [27]. 
The last layer of PLM typically captures complex characteristics of the input data and has the capacity to understand sophisticated linguistic features and nuances. #### Iv-A2 Residual Inspired by existing research [28], we introduce the concept of shortcut connections to PLM, utilizing their multi layer architecture. The central idea is to combine the outputs of all layers before the last layer with the output of the final layer. This method simply adds the values of existing layers without additional parameters or complex operations, thus being computationally efficient. The equation is as follows, where \(H_{PLM}^{i}\) represents the output of the i-th layer of PLM. \[\hat{H}=\sum_{i=1}^{M}H_{PLM}^{i} \tag{10}\] A key difference between our study and ResNet [28] lies in the implementation and scope of shortcut connections. ResNet [28] modified the foundational model by introducing shortcut connections to its intermediate layers. In contrast, our proposed method maximizes the use of information from all layers without affecting the intermediate structure of the model. This approach minimizes the risk of pre-trained information degradation while allowing for more comprehensive utilization of information. #### Iv-A3 Concat Linear We propose a novel approach to resolve incompatibility issues when integrating PLM with NMT model. Our method involves transforming the multi layers of PLM into the input for an NMT model using learnable parameters for additional training. This process effectively converts the multi layers of PLM, enhancing compatibility between the two models. Consequently, it efficiently integrates the robust language understanding capabilities of PLM into NMT model. Specifically, we concatenate the outputs of each layer of PLM and then use a Linear Layer to reduce the dimensions and produce the final output. \[H^{\prime}=[H_{PLM}^{1};H_{PLM}^{2};\dots;H_{PLM}^{M}] \tag{11}\] \[\hat{H}=WH^{\prime}+b \tag{12}\] This method is similar to the existing Linear Combination approach [29] but with several notable differences. The traditional approach combines each layer's output after passing through a linear layer without a bias term. In contrast, our method first concatenates each layer's output and then passes it through a linear layer that includes a bias term. Our proposal introduces a low-dimensional bias parameter to the model. The introduction of this bias parameter enhances the convergence speed during the learning process and significantly contributes to the overall performance improvement of the model. #### Ii-A4 Hierarchical As observed in the study by Vaswani et al. [1], structuring layers in a deep and complex manner can capture information more effectively. From this perspective, instead of using a simple linear layer, we design a deeper, more intricate Converter. Our objective is to propose a structure that merges nodes hierarchically and deeply, an idea inspired by the research on Hierarchical Aggregation [29]. The core concept introduces an aggregation (AGG) node \(\hat{H}^{i}\). This node combines information from either two or three layers using AGG function, depending on specific conditions. AGG function concatenates multiple input layers and forwards them to a Feed Forward Network (FFN). FFN's outcome connects back to the original inputs through a shortcut connection, and the final result is normalized through Layer Normalization (LN). 
\[\hat{H}^{i}=\begin{cases}\text{AGG}(H^{2i-1},H^{2i})&\text{if }i=1\\ \text{AGG}(H^{2i-1},H^{2i},\hat{H}^{i-1})&\text{if }i>1\end{cases} \tag{13}\] \[\text{AGG}(x,y,z)=\text{LN}(\text{FFN}([x;y;z])+x+y+z) \tag{14}\] While earlier studies [29] employed a structure that fed aggregation nodes back into the original backbone, in our approach we modify this to prevent the aggregation node from being re-supplied to the backbone. This adjustment stems from the risk of compromising the pre-trained information in PLM. **Dimensional Compression** The output from Converter encompasses deep, high-dimensional information acquired from large datasets. This high-dimensional data, although rich in meaning, is problematic due to its excessive dimensionality, especially when incorporated into an NMT model with parameter constraints. To address this, we suggest compressing the output dimensions of the Converter via a Linear layer. This enables NMT model to effectively leverage the information extracted from PLM. Mathematically, this can be represented as: \[H^{\prime}_{PLM}=WH_{PLM}+b \tag{15}\] Such a compressed output is then fed into NMT model, allowing for the efficient use of high-dimensional information while optimizing parameter efficiency.
Fig. 1: The architecture of the PiNMT model
Fig. 2: Converter
### _Embedding Fusion_ Embedding Fusion is an approach designed to overcome the limitations of fine-tuning PLM. Most PLMs are expansive, making direct fine-tuning a challenging task. Prior studies have frozen the parameters of PLM and integrated it with an NMT model. Extra Source Embeddings were added to this combined model, which was then fine-tuned to enhance performance [23]. This method essentially emulates the effects of directly fine-tuning PLM. It alleviates the incompatibility issues between PLM and NMT model, while preserving the pre-trained information. However, research on the effective utilization of Extra Source Embeddings remains scant. Hence, this study proposes an optimized method to harness the potential of Extra Source Embeddings. #### Iii-B1 Addition To maximize the combination of PLM output and Extra Source Embeddings, we apply a simple yet effective element-wise sum technique. Specifically, PLM output \(H_{PLM}\) and Extra Source Embeddings \(E_{x}\) are summed to generate a new embedding \(E^{\prime}_{x}\). This method preserves features from both sources and effectively combines their information. Formally, this can be represented as: \[E^{\prime}_{x}=H_{PLM}+E_{x} \tag{16}\] #### Iii-B2 Multiplication We employ an element-wise multiplication technique to more vividly model interactions between the two embeddings. This emphasizes the interdependency and relevance of each feature, resulting in an embedding that closely intertwines the characteristics of the two original embeddings. Mathematically, it is depicted as: \[E^{\prime}_{x}=H_{PLM}\odot E_{x} \tag{17}\] #### Iii-B3 Weighted Sum Mere combination might not adequately reflect the relative importance between two embeddings. To address this, we introduce a learnable weight to balance between the two embeddings. Specifically, the weight \(\gamma\) dynamically adjusts the importance of the two embeddings, striving for an optimal combination. This is mathematically captured as: \[E^{\prime}_{x}=\gamma H_{PLM}+(1-\gamma)E_{x} \tag{18}\] #### Iii-B4 Projection In the projection approach, each embedding undergoes a linear layer transformation before combining. 
This ensures both embeddings map onto the same feature space, facilitating efficient information amalgamation and adjustment. This can be mathematically represented as: \[E^{\prime}_{x}=(W_{1}H_{PLM}+b_{1})+(W_{2}E_{x}+b_{2}) \tag{19}\] #### Iii-B5 Concatenation The embeddings are concatenated. By directly merging features obtained from various sources through concatenation, the model can utilize the information from both embeddings. However, as the combined embedding might differ in dimensionality from the original space, a Linear Layer is employed for adjustments. The formula for this is: \[E^{\prime}_{x}=Linear([H_{PLM};E_{x}]) \tag{20}\] #### Iii-B6 Dynamic Switch Based on Dynamic switch [17], this method features the introduction of a context gate, crucial for the optimal integration of the two embeddings. The context gate, rooted in a sigmoid neural network layer, determines the importance of each element within the input vectors received from PLM and Extra Source Embeddings. It is given by the equation: \[g=\sigma(WH_{\text{PLM}}+UE_{x}+b) \tag{21}\] Using the computed value of \(g\), the two embeddings are dynamically combined. The value \(g\) adjusts the significance of each embedding, ensuring balanced information integration. This is performed through the equation: \[h=g\odot h_{\text{PLM}}+(1-g)\odot E_{x} \tag{22}\] While the previous research [17] applied Dynamic Switch to each individual encoder layer, this study focuses solely on Extra Source Embeddings. ### _Cosine Alignment_ In previous studies, methods have been proposed for the effective transfer of knowledge from large models to smaller ones through Distillation [17, 18, 30]. Among these, the methodology presented by Yang et al. [17] for distilling information from PLM to an NMT model minimizes the mean-squared-error loss between PLM output and the outputs of NMT encoder or decoder, thereby transferring PLM's knowledge. However, subsequent experimental results indicate that distilling from PLM to an NMT model in low-resource data scenarios either results in suboptimal performance enhancement or even degradation. A primary reason for this phenomenon appears to be the incompatibility between PLM and NMT model. To address this, Cosine Alignment is proposed. This approach adds cosine similarity between the output of PLM and the last layer of NMT model's decoder to the existing loss function. Since the outputs of PLM and NMT Decoder do not match in sequence length, the average value of each sequence is used. \[H_{PLM}=\{h^{1}_{PLM},\dots,h^{I}_{PLM}\} \tag{23}\] \[H_{D}=\{h^{1}_{D},\dots,h^{L}_{D}\} \tag{24}\] \[h_{PLM_{avg}}=\frac{1}{I}\sum_{i=1}^{I}h^{i}_{PLM} \tag{25}\] \[h_{D_{avg}}=\frac{1}{L}\sum_{l=1}^{L}h^{l}_{D} \tag{26}\] \[L_{similarity}=\text{cosine similarity}(h_{PLM_{avg}},h_{D_{avg}})=\frac{h_{PLM_{avg}}\cdot h_{D_{avg}}}{||h_{PLM_{avg}}||\times||h_{D_{avg}}||} \tag{27}\] Here, \(H_{PLM}\) represents the output of PLM with a sequence length of \(I\), and \(H_{D}\) represents the last layer of decoder's output with a sequence length of \(L\). The proposed \(L_{similarity}\) is combined with the traditional cross-entropy loss \(L_{CE}\) to define the final loss function: \[L=L_{CE}+\alpha L_{similarity} \tag{28}\] In this context, \(\alpha\) serves as a hyper-parameter that adjusts the weights between the two losses. ### _Separate Learning Rates_ PLMs are often large and intricate, and during fine-tuning, there's a risk of losing pre-trained information. 
Conversely, not fine-tuning PLM can cause incompatibility issues between PLM and NMT model. To mitigate these challenges, research has been proposed to set varying learning rates for different layers [17, 27, 31]. Previous studies have demonstrated the efficacy of this approach. In this study, we incorporate the training strategy suggested by Yang et al. [17] to implement Separate Learning Rates in our PiNMT model. We adjust the learning rate for PLM to be relatively lower compared to that of NMT model. Mathematically, this can be represented as follows: \[\eta^{PLM}=\rho\times\eta^{NMT} \tag{29}\] Where \(\eta^{PLM}\) denotes the learning rate of PLM, \(\eta^{NMT}\) denotes the learning rate of NMT model, and \(\rho\) indicates the relative coefficient between these learning rates. ### _Dual Step Training_ Many NMT models are trained solely on unidirectional data, which makes it challenging to harness bidirectional linguistic features effectively. However, recent studies [20, 32] have reported that implementing bidirectional training can substantially enhance NMT performance. In this study, we refer to the pre-existing training method [20] and apply Dual Step Training to PiNMT model. The core idea behind this approach is to invert the direction of unidirectional data, thereby augmenting it to create bidirectional data (e.g., from En \(\rightarrow\) De, we derive En + De \(\rightarrow\) De + En). Utilizing this newly formulated bidirectional data, we conduct a pre-training phase for NMT model. This pre-training facilitates the model in learning bidirectional linguistic attributes, thereby enhancing its generalization capabilities. Subsequently, we fine-tune the pre-trained NMT model with the original unidirectional data to optimize the model's performance for specific translation directions. ## V Dataset and Baseline Settings ### _Dataset_ To validate the efficacy of our proposed methodology, we evaluate it using the IWSLT'14 dataset [33] for the English\(\leftrightarrow\)German (En\(\leftrightarrow\)De) language pair. The IWSLT'14 English\(\leftrightarrow\)German dataset comprises a total of 160K parallel bilingual language pairs, allowing for a quantitative grasp of the model's performance. The distribution ratios of the training, validation, and testing data are detailed in Table I. ### _Evaluation_ For evaluation metrics, we adopt the commonly used tokenized BLEU Score [34]. Without the use of Dual Step Training, we set the beam search width to 4 and the length penalty to 0.6. When employing Dual Step Training, the beam search width is increased to 5, and the length penalty is set at 1.0. ### _Settings_ #### V-C1 Plm In our study, we choose BiBERT [20] as our PLM. The original BERT model [6] is pre-trained for a single language. However, BiBERT is concurrently trained on both English and German. Built upon the RoBERTa architecture [35], BiBERT model consists of 12 layers, has a model dimension of 768, and includes 12 attention heads. The training data for BiBERT combined and shuffled 145GB of German text and 146GB of English text from OSCAR [36]. For the text tokenization process, 67GB of randomly sampled English and German texts from the training dataset were used. Using WordPiece tokenizer [37], a total of 52K vocabulary was constructed. #### V-C2 Nmt For NMT model implementation, we utilize fairseq framework [38]. As the base model, we choose Transformer model [1] with transformer_iwslt_de_en settings. 
This model comprises 6 encoder-decoder layers, has a model dimension of 512, and includes 4 attention heads. Without employing Dimensional Compression, we set the model's dimension to match PLM output, which is 768. However, when applying Dimensional Compression, it is set to 512. Various parameters are used during the training phase to optimize the model's performance. We apply a label smoothing rate of 0.1 to the cross-entropy loss. The maximum tokens per batch are set at 2048, with an update frequency of 16. For learning rate scheduling, we opt for the inverse_sqrt method. The Beta values for the Adam optimizer are set at (0.9, 0.98), and the initial learning rate is established at 4e-4. The vocabulary construction for NMT follows BiBERT [20] implementation method. The encoder uses a vocabulary size of 52k, matching PLM. The decoder's vocabulary is built based on the IWSLT'14 data. Without using Dual Step Training, a vocabulary size of 8K is created from the target language data. Conversely, when leveraging Dual Step Training, we construct a 12k-sized English-German joint vocabulary. #### V-C3 PiNMT When applying Dimension Compression, both Concat Linear and Linear Combination methods are compressed using the existing linear layer as they already contain a linear layer. In experiments not using PMLC, PLM is utilized with Vanilla as the base model. For conveying information in Cosine Alignment to encoder, we do not use length averages and instead use the original distillation method. Conversely, when conveying information to decoder in distillation, we use length averages, similar to Cosine Alignment. The hyperparameter \(\alpha\) is set to 500.
\begin{table} \begin{tabular}{|c|c|} \hline **IWSLT’14 (En\(\leftrightarrow\)De)** & **Count** \\ \hline train & 160239 \\ valid & 7283 \\ test & 6750 \\ \hline \end{tabular} \end{table} TABLE I: Data Distribution
## VI Results and Analysis In this section, we review the proposed PiNMT model and two training strategies using the IWSLT'14 En\(\leftrightarrow\)De dataset. ### _How does Dimensional Compression affect performance?_ Significant findings can be observed in Table II. Compared to using Transformer that solely trains on NMT model 
Notably, models equipped with learnable parameters display more significant improvements than their counterparts. This enhancement can be attributed to the learnable parameters' effectiveness in addressing the model's incompatibility issues. By using vector-based methods, which deploy more parameters than scalar approaches, the model achieves exceptional performance. Likewise, Hierarchical approach, with its more profound layer structure, facilitates intricate learning, marking the highest performance among all the discussed strategies. In our subsequent experiments, we delve deeper into analyzing the PMLC, helping us ascertain the most effective strategy. ### _Performance Comparison of Embedding Fusion_ Embedding Fusion methods are evaluated using Vanilla as the base. Upon examining the results in Table IV, we observe performance enhancements in all methods, with the exception of Multiplication approach, compared to Vanilla. Multiplication emphasizes the interaction between the two embeddings. However, its relatively lower performance suggests that the Extra Source Embeddings provides novel information specialized for NMT, which has a comparatively lower correlation with PLM. The performance improvement noted in all techniques, excluding Multiplication, indicates that the additional learning of Extra Source Embeddings can serve as a solution to incompatibility while preserving the pre-trained information of PLM. The methods show higher performance in the order of Addition, Weighted Sum, and others, which had fewer parameters. This is due to the insufficient quantity of data required for training parameters that model the interaction between the two embeddings. Consequently, it is evident that choosing an effective model structure based on the amount of data is crucial. In subsequent experiments, Addition method is employed as Embedding Fusion. ### _Distillation vs. Cosine Alignment_ In Table V, the results on the left indicate that, generally, both Distillation and Cosine Alignment methods enhance performance in Transformer. However, applying Distillation to NMT decoder results in a performance decline. Moreover, the performance improvements compared to Vanilla model are not substantial for either method. The results on the right side of Table V show a performance deterioration in both the encoder and decoder when Distillation technique is applied to Vanilla model. Cosine Alignment, on the other hand, improves performance but only in the decoder. Both methods lead to performance degradation in the encoder, primarily because the encoder, already processing PLM's output, experiences an information collision. Upon closer examination of the reasons for the performance decrease with Distillation in the decoder for both Transformer and Vanilla models, and why it increases with Cosine Alignment, we find that Distillation conveys both magnitude and direction of PLM's output vectors to NMT model. In contrast, Cosine Alignment only conveys the vector's directional information. This characteristic alleviates compatibility issues between PLM and NMT model, allowing only the necessary information to be transmitted efficiently. ### _Is Separate Learning Rates Strategy Effective?_ We conduct experiments using Vanilla model with a dimension of 768 as the base. Upon reviewing the results in Table VI, it is evident that the magnitude of the \(\rho\) value significantly impacts performance. 
If the \(\rho\) value is too large, there's a risk of damaging the contextual information from PLM, potentially leading to a decline in performance. Conversely, if the \(\rho\) value is too small, incompatibility issues may arise between PLM and NMT model, resulting in potential performance degradation. Therefore, setting an appropriate \(\rho\) value is of paramount importance.
\begin{table} \begin{tabular}{|l|l|} \hline **Models** & **BLEU** \\ \hline Transformer (\(d_{model}\)=768) & 33.99 \\ Transformer (\(d_{model}\)=512) & 34.12 \\ Vanilla (\(d_{model}\)=768) & 37.64 \\ Vanilla (\(d_{model}\)=512) & 38.16 \\ \hline \end{tabular} \end{table} TABLE II: Dimensional Compression Effects on De\(\rightarrow\)En
\begin{table} \begin{tabular}{|l|l|l|l|} \hline **Models** & **BLEU** & **Models** & **BLEU** \\ \hline Vanilla (\(d_{model}\)=768) & 37.64 & Projection & 37.97 \\ Addition & 38.48 & Concatenation & 38.00 \\ Multiplication & 35.70 & Dynamic switch & 37.99 \\ Weighted Sum & 38.07 & & \\ \hline \end{tabular} \end{table} TABLE IV: Performance Comparison of Embedding Fusion on De\(\rightarrow\)En
\begin{table} \begin{tabular}{|l|l|l|l|} \hline **Models** & **BLEU** & **Existing Models** & **BLEU** \\ \hline Vanilla (\(d_{model}\)=512) & 38.16 & Linear Combination & 38.77 \\ Residual & 38.67 & ELMo & 38.66 \\ Concat Linear & 38.78 & Stochastic Layer Selection & 38.39 \\ Hierarchical & 38.96 & & \\ \hline \end{tabular} \end{table} TABLE III: Performance Comparison of PMLC on De\(\rightarrow\)En
Rate-scheduled learning method [17] demonstrated exemplary performance on extensive resources in past research but fails to replicate the same effect on the low-resource IWSLT'14. This suggests that different datasets, with their unique characteristics and sizes, may require distinct learning rate strategies. In the experiments, a \(\rho\) value of 0.01 yields the best results. Thus, this value will be employed in subsequent experiments. ### _What is an optimal PMLC for Combination?_ We establish our baseline by using Dimensional Compression to set the model's dimension at 512. Performance is analyzed by combining PMLC with Embedding Fusion (EF), Cosine Alignment (CA), and Separate Learning Rates (SLR). Our primary experiments focus on three PMLC approaches: Hierarchical, Linear Combination [29], and Concat Linear. Based on the results in Table VII, when applying Embedding Fusion and Cosine Alignment, Hierarchical approach witnesses a decline in performance. This suggests that the intricate structure of Hierarchical method can lead to excessive complexity when integrating additional techniques, making optimization challenging. On the other hand, both Linear Combination and Concat Linear methods have a relatively straightforward structure. This simplicity allows for more potential improvements in performance when implementing additional techniques. Notably, Concat Linear method consistently exhibits superior performance across various combinations. This indicates the inherent flexibility of Concat Linear approach, effectively integrating diverse forms of data and techniques. However, Linear Combination without bias term, despite some enhancements, is comparatively limited. This can be attributed to the bias term providing an additional degree of freedom to the model, enabling it to better capture specific data structures or patterns. Without the bias term, the model might not have the extra information it needs to detect subtle patterns, which could limit its performance gains. 
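To make the preferred configuration concrete, the following is a minimal PyTorch sketch of Concat Linear with Dimensional Compression (Eqs. 11, 12 and 15) combined with Embedding Fusion by Addition (Eq. 16). This is our illustration rather than the authors' released code; the 12-layer/768-dimension figures follow the BiBERT setting described above, and all module names are ours.

```python
import torch
import torch.nn as nn

class ConcatLinearConverter(nn.Module):
    """Concatenate all M PLM layer outputs and project them, with a bias
    term, down to the NMT model dimension (Concat Linear + compression)."""
    def __init__(self, num_layers=12, plm_dim=768, d_model=512):
        super().__init__()
        self.proj = nn.Linear(num_layers * plm_dim, d_model, bias=True)

    def forward(self, layer_outputs):          # list of M tensors [B, T, plm_dim]
        h = torch.cat(layer_outputs, dim=-1)   # [B, T, M * plm_dim]
        return self.proj(h)                    # [B, T, d_model]

class AdditionFusion(nn.Module):
    """Element-wise sum of the converted PLM output with trainable
    Extra Source Embeddings (Embedding Fusion by Addition)."""
    def __init__(self, vocab_size, d_model=512):
        super().__init__()
        self.extra = nn.Embedding(vocab_size, d_model)

    def forward(self, h_plm, src_tokens):      # h_plm: [B, T, d_model]
        return h_plm + self.extra(src_tokens)

# Shape check with dummy data (M = 12 layers, batch 2, source length 7):
layers = [torch.randn(2, 7, 768) for _ in range(12)]
tokens = torch.randint(0, 52_000, (2, 7))
out = AdditionFusion(vocab_size=52_000)(ConcatLinearConverter()(layers), tokens)
print(out.shape)  # torch.Size([2, 7, 512])
```

The resulting tensor would then replace the usual source embeddings at the NMT encoder input.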
In summary, this research demonstrates that comparatively simpler and more flexible models are better adapted for integrating and combining diverse techniques. It emphasizes the importance of balancing complexity and flexibility when considering technique integration. As a result, we have chosen Concat Linear approach for Converter. ### _Is Dual Step Training Strategy Effective in PiNMT?_ Using a Transformer Model with a dimension of 512 as the base, we analyze the results of applying Dual Step Training to PiNMT combined with Separate Learning Rates. Observing Table VIII, it is evident that even the sole application of Bidirectional Pre-training enhances the model's performance. This implies that the model benefits from recognizing the bidirectional characteristics of translation. Additionally, performance improvements are observable with Unidirectional Fine-tuning, indicating that deep learning across extensive data alone is insufficient. It emphasizes the need for learning tailored to specific domains. ### _Compared with Previous Work_ Table IX compares our research with various studies based on the IWSLT'14 En\(\leftrightarrow\)De dataset. BERT-Fuse [19] introduced a new attention layer to augment PLM interactions. UniDrop [39] consolidated multiple dropout strategies. R-Drop [40] employed a regularization technique utilizing dropout, BiBERT [20] harnessed PLM multi layers and engaged in bidirectional pre-training, and Bi-SimCut [41] enhanced performance by seamlessly integrating data augmentation and bidirectional pre-training. Compared to Transformer, our method showcases an improvement of 5.16 in the BLEU score, and it exceeds the prior peak performance set by Bi-SimCut by an additional 1.55 BLEU score. These results provide compelling evidence for the effective resolution of the identified challenges. ## VII Conclusion While PLMs were designed to understand and generate text within a single language, NMT models were tasked with translating between different languages. The inherent differences between these tasks led to incompatibility issues. To address these, we proposed PiNMT model, incorporating key components like PMLC, Embedding Fusion, and Cosine Alignment. We designed PiNMT model to leverage the rich contextual insights from PLM, all the while overcoming the challenges of their integration with NMT. In addition, to make model training more effective, we incorporated strategies like Separate Learning Rates and Dual Step Training. By adopting these methodologies, we achieved SOTA performance on the IWSLT'14 En\(\leftrightarrow\)De dataset. As with any research, our methodology can be further refined. Further tests considering diverse languages and scales, as well as extended research, are necessary. In conclusion, this study served as a foundational step in strengthening the linkage between PLM and NMT, laying critical groundwork for future advancements in translation models.
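For completeness, the Cosine Alignment objective of Eqs. (23)-(28) is compact enough to state in code. The sketch below is our paraphrase of the printed equations, not the authors' implementation; padding is ignored for brevity, the function name and signature are ours, and the sign and weighting of the similarity term mirror Eq. (28) exactly as given in the paper.

```python
import torch
import torch.nn.functional as F

def pinmt_loss(logits, target, h_plm, h_dec, alpha=500.0):
    """L = L_CE + alpha * L_similarity, following Eqs. (25)-(28).

    logits: [B, L, V] decoder predictions; target: [B, L] gold tokens;
    h_plm:  [B, I, D] PLM output; h_dec: [B, L, D] last decoder layer.
    """
    l_ce = F.cross_entropy(logits.transpose(1, 2), target)            # Eq. (9)
    h_plm_avg = h_plm.mean(dim=1)                                     # Eq. (25)
    h_dec_avg = h_dec.mean(dim=1)                                     # Eq. (26)
    l_sim = F.cosine_similarity(h_plm_avg, h_dec_avg, dim=-1).mean()  # Eq. (27)
    return l_ce + alpha * l_sim                                       # Eq. (28)
```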
2302.09682
Mimicking a Pathologist: Dual Attention Model for Scoring of Gigapixel Histology Images
Some major challenges associated with the automated processing of whole slide images (WSIs) includes their sheer size, different magnification levels and high resolution. Utilizing these images directly in AI frameworks is computationally expensive due to memory constraints, while downsampling WSIs incurs information loss and splitting WSIs into tiles and patches results in loss of important contextual information. We propose a novel dual attention approach, consisting of two main components, to mimic visual examination by a pathologist. The first component is a soft attention model which takes as input a high-level view of the WSI to determine various regions of interest. We employ a custom sampling method to extract diverse and spatially distinct image tiles from selected high attention areas. The second component is a hard attention classification model, which further extracts a sequence of multi-resolution glimpses from each tile for classification. Since hard attention is non-differentiable, we train this component using reinforcement learning and predict the location of glimpses without processing all patches of a given tile, thereby aligning with pathologist's way of diagnosis. We train our components both separately and in an end-to-end fashion using a joint loss function to demonstrate the efficacy of our proposed model. We employ our proposed model on two different IHC use cases: HER2 prediction on breast cancer and prediction of Intact/Loss status of two MMR biomarkers, for colorectal cancer. We show that the proposed model achieves accuracy comparable to state-of-the-art methods while only processing a small fraction of the WSI at highest magnification.
Manahil Raza, Ruqayya Awan, Raja Muhammad Saad Bashir, Talha Qaiser, Nasir M. Rajpoot
2023-02-19T22:26:25Z
http://arxiv.org/abs/2302.09682v1
# Mimicking a Pathologist: Dual Attention Model for Scoring of Gigapixel Histology Images ###### Abstract Some major challenges associated with the automated processing of whole slide images (WSIs) include their sheer size, different magnification levels and high resolution. Utilizing these images directly in AI frameworks is computationally expensive due to memory constraints, while downsampling WSIs incurs information loss and splitting WSIs into tiles and patches results in loss of important contextual information. We propose a novel dual attention approach, consisting of two main components, to mimic visual examination by a pathologist. The first component is a soft attention model which takes as input a high-level view of the WSI to determine various regions of interest. We employ a custom sampling method to extract diverse and spatially distinct image tiles from selected high attention areas. The second component is a hard attention classification model, which further extracts a sequence of multi-resolution glimpses from each tile for classification. Since hard attention is non-differentiable, we train this component using reinforcement learning and predict the location of glimpses without processing all patches of a given tile, thereby aligning with a pathologist's way of diagnosis. We train our components both separately and in an end-to-end fashion using a joint loss function to demonstrate the efficacy of our proposed model. We employ our proposed model on two different IHC use cases: HER2 prediction on breast cancer and prediction of Intact/Loss status of two MMR biomarkers, for colorectal cancer. We show that the proposed model achieves accuracy comparable to state-of-the-art methods while only processing a small fraction of the WSI at highest magnification. Computational pathology, Dual attention, Reinforcement learning, Automated slide scoring ## I Introduction Immunohistochemistry (IHC) refers to a way of "talking with cells" [1] and is a variant of immunostaining that has emerged as a _gold standard_ in clinical cancer diagnostics. In histopathology, IHC is widely used to detect and quantify the presence of a specific protein marker for detailed disease profiling. It allows detailed assessment of the expression of various biomarkers by utilizing stains produced as a consequence of antigen and antibody reactions in the histologically relevant regions of the examined tissue sections [2]. Assessment of these biomarkers is crucial for determining the risk, diagnosis, prognosis, response to therapy and progression of different cancers in patient management [3]. The human epidermal growth factor receptor 2 (HER2) gene has emerged as a key cell membrane biomarker and its expression is overamplified in approximately 20% of all invasive breast carcinoma (BC) cases [4]. Higher recurrence and poor survival rates have been linked with the overexpression of HER2 protein, which is responsible for accelerating the growth of malignant epithelial cells [5]. However, the development of anti-HER2 treatments has significantly improved prognosis for HER2-positive patients. As such, current clinical guidelines recommend that all patients with breast cancer undergo HER2 testing, which can be performed using IHC staining. Current diagnostic procedures involve the manual examination of IHC-stained HER2 slides by pathologists who assign a score (0-3+) to each slide. Fig. 1 contains examples of regions of interest (ROIs) from slides with different HER2 scores. 
Similarly, the Mismatch repair (MMR) status of tumors has emerged as another promising predictive biomarker for colorectal cancer, crucial for the selection of patients for immunotherapy. MutL homologue 1 (MLH1), postmeiotic segregation increased 2 (PMS2), mutS homologue 2 (MSH2) and mutS homologue 6 (MSH6) collectively form a panel of MMR proteins responsible for rectifying any errors made during the duplication of microsatellites, repetitive nucleotide sequences in the genome. Patients with microsatellite instability (MSI) or deficient DNA mismatch repair (dMMR) have discrepancies in the length of these repeat DNA sequences and have been shown to respond favorably to immunotherapy [6]. Routinely, MMR status is assessed by performing IHC staining for all four proteins. The expression of MMR biomarkers is quantified as either positive/intact if the DAB stain detected in the cancerous nuclei is of similar or stronger intensity to that in the normal tissue cells, or negative/loss if weak staining is detected in the tumor nuclei in comparison to normal tissue cells. Fig. 1 shows ROIs from slides with differing MMR marker status. Currently, a pathologist manually examines IHC-stained tissue slides under a microscope in search of desired patterns and abnormalities. This is a tedious and time-consuming process and is subject to inter- and intra-observer variability [7]. The advent of whole-slide image (WSI) scanners has enabled tissue glass slides to be scanned into digitized images, which initiated the adoption of Digital Pathology in routine clinical practice. The integration of WSIs into the mainstream pathology workflow has sprung forth a new and fast-expanding field of research, Computational Pathology (CPATH), which enables the quantitative assessment of slides, thus avoiding human bias and allowing precise and reproducible extraction of data from slides [8]. Recent years have seen an increased interest in applying deep learning (DL) techniques to CPATH problems. However, the computational and memory requirements for direct processing of multi-gigapixel WSIs at a high magnification exceed the capabilities of current hardware devices. One of the most common approaches for the automated analysis of histopathology WSIs is to exhaustively analyze patches extracted from the entire tissue or tumor area using the "sliding window" paradigm. This method has two disadvantages: first, not all patches are diagnostically relevant and second, patches extracted from the same spatial area can potentially provide redundant information to the model. Consequently, various patch-selection strategies and attention-based methods have been proposed to alleviate the above-mentioned problems. In order to decide upon which strategy to employ, we first need to understand a pathologist's diagnostic process. In 1967, King _et al._[9] raised the following question, "How does a pathologist make a diagnosis?". This question was first answered by Pena _et al._[10] who attempted to describe the processes by which a pathologist may reach a decision. Rather than using an exhaustive strategy wherein all possible data is examined and evaluated, analogous to the sliding window paradigm in computational pathology, an experienced pathologist may employ some "shortcuts". Initially, pathologists may use cognitive processes such as saccades to gather information from the given histopathology slide. 
One of the diagnostic strategies they proposed thereafter was multiple branching, wherein a conclusion is reached by investigating one of the potential paths for diagnosis, decided upon by preceding diagnostic inquiries. Another diagnostic strategy, pattern recognition, is used to identify histological patterns that match previously learned disease descriptions, a concept applied in supervised learning algorithms. The potential integration of AI-based models in routine clinical diagnostics depends, at least in part, on the model's ability to mimic the diagnostic procedures employed by pathologists on the WSIs. We propose a dual attention model that uses two attention mechanisms to alleviate the problems that arise from the disjoint patching dilemma, which leads to a loss of visual context, the information needed to understand the spatial organization of tissue components in the tumor microenvironment (TME). The proposed workflow aims to model the aforementioned processes employed by the pathologist in routine clinical practice by sequentially predicting and analyzing some of the diagnostically relevant regions using dual visual attention and reinforcement learning. We begin by employing a soft attention module on a high-level (low magnification) view of the WSI to identify potential ROIs in need of further inspection. This is analogous to a pathologist initially performing visual examination on the entire tissue slide at a low magnification and deciding upon areas in the slide which require further attention. Similar to a pathologist "zooming in" on some of the diagnostically relevant regions, we extract high-power fields (HPFs) from the most attentive areas identified by our soft attention module and pass them to the hard attention module. This module extracts a sequence of glimpses or patches at a higher magnification from the HPF in an attempt to model a sequence of saccades during visual examination. The location of each patch is decided upon by the preceding glimpses, like the multiple branching technique, and information from all the glimpses is collectively processed to come to a classification decision for the HPF in question. The process is repeated iteratively until all the HPFs are processed in order to assign a slide-level score to the WSI. Our main contributions are as follows: * We introduce a novel dual attention model for the automated scoring of gigapixel IHC stained images using soft and hard attention to extract the most informative regions of interest from the WSIs. * We propose a dynamic joint loss function to train the model in an end-to-end fashion and enable both attention modules to be trained simultaneously. * We validate our method on two separate IHC scoring problems, namely the prediction of HER2 Scores for breast cancer and the prediction of the Intact or Loss status for two MMR-markers in colorectal cancer. * We demonstrate that in both use cases, the proposed method achieves performance comparable to conventional methods while only analyzing a fraction of the WSI at the highest resolution. Our proposed method for HER2 outperforms 10+ existing methods by analyzing only 5% of the WSI regions at the highest magnification. ## II Related work Existing methods for automated processing of IHC stained histopathology WSIs can be broadly categorized into three groups: a) patch selective and patch exhaustive methods, b) reinforcement learning and hard attention, and c) automated IHC scoring. 
### _Patch Selective and Patch Exhaustive Methods_ A straightforward approach commonly employed by several existing methods uses the classical "sliding window" paradigm to exhaustively extract patches from the tissue or tumor area. In the absence of patch-level annotations, the underlying assumption is that the image-level label can be assigned to each image patch [11][12]. Fig. 1: Examples of regions of interest from (first row) breast cancer whole-slide images: a) HER2 Score 0, b) HER2 Score 1+, c) HER2 Score 2+, d) HER2 Score 3+ and (second row) colorectal cancer whole-slide images: e) Loss status of MLH1, f) Loss status of PMS2, g) Intact status of MLH1, h) Intact status of PMS2. However, this assumption turns a blind eye to the fact that not every patch extracted from the tissue, or even the tumor region, contributes equally to the task at hand. For example, a patch containing adipose tissue has little information about tumor grades. Recent years have seen an increased level of interest in the application of attention mechanisms to identify important tissue regions. Soft attention networks aim to learn the weighted importance of pixels in an image. Yang _et al._[13] used coarse region-level annotations to guide the model to focus on diagnostically relevant areas while suppressing noise for the purpose of breast cancer classification. A deep selective approach was used by Xu _et al._[14] which made use of two networks, a soft attention classification network and a decision network which was used to "decide" if a patch could potentially improve the discriminative ability of its sister network. Kong _et al._[15] proposed a model that successively uses attention maps to gradually "zoom in" to informative regions within the slide. Katharopoulos _et al._[16] proposed a novel attention sampling strategy whereby the authors used a down-sampled low resolution version of the original image to compute an attention distribution. The attention map corresponds to a probability distribution which was used to identify potential ROIs. Instead of extracting all patches at high resolution, the authors only extracted patches at high resolution from the original image at locations provided by the attention map. The extracted patches were used to predict the classification label of interest. Their proposed model predicted the presence of epithelial cells in H&E slides of colon cancer without having access to patch-level annotations. In a similar vein, Zhang _et al._[17] introduced a novel attention sampling method that uses a thumbnail of the original image to generate attention distributions at different magnifications. Based on the attention maps they sample patches from both different spatial locations as well as different magnifications. However, this approach involves creating multiple attention maps at different magnifications and is more computationally expensive, whereas the proposed approach only utilizes a single attention map. ### _Reinforcement Learning and Hard Attention_ The application of deep reinforcement learning (DRL) techniques in the fields of medical imaging and computational pathology is a subject of considerable ongoing research. Many papers have entertained the notion of performing classification tasks without having access to the entire image. 
Mnih _et al._[18] was among the seminal papers that modeled human visual perception by employing reinforcement learning concepts (REINFORCE) with a hard attention retina-inspired model on small handwritten digit images of \(28\times 28\), by the name of recurrent attention model (RAM). Ba _et al._[19] extended this work by proposing a deep recurrent attention model (DRAM) capable of recognizing multiple objects. Their proposed model incorporated contextual information and outperformed state-of-the-art CNNs on a house-number recognition dataset containing small images of size \(54\times 54\). BenTaieb _et al._[20] extended upon the RAM and applied it to computational pathology data with the aim of predicting the presence of cancer in lymph nodes. However, they too resorted to manual patch extraction. Xu _et al._[21] proposed a hybrid attention model wherein a soft attention mechanism was applied to the "glimpses" or patches extracted from an image using hard attention. Their model not only selected diagnostically relevant patches but also determined discriminative regions from within those patches, thereby managing to classify images while only using 15% of each image. Our approach employs soft attention rather than hard attention as the initial step. This not only allows for more interpretability regarding the relative importance of different areas in the WSI with regards to the classification task, it also allows us to sort and cluster tiles based on their attention values, to create masks for instance. ### _IHC Scoring_ Mukundan _et al._[22] proposed a set of image features inspired by the visual markers used to assess slides recommended for HER2 testing. These features included characteristic curves and rotation-invariant uniform local binary pattern curves. Rodner _et al._[23] sampled tiles centered around a point of interest determined by a single click "probe" in the data. The respective patches were used to calculate deep bilinear features, using CNNs. Saha _et al._[24] proposed a deep network for HER2 classification by way of cell membrane and nuclei segmentation using Trapezoidal LSTM networks. Pitkaaho _et al._[25] used a variant of the AlexNet architecture and achieved comparable results. However, they manually selected ROIs from HER2 images at a lower resolution. The proposed model extends the work of Qaiser _et al._ [12], which modelled the prediction of IHC scoring as a sequential learning problem. Conventionally, the histopathologist does not visually analyze each and every visual field to score the IHC slide; rather, they focus on a small number of visual fields or ROIs. The model employed DRL to effectively identify ROIs by way of a parametrized policy that prevented the model from revisiting locations it had already analyzed. However, the selection of these ROIs takes place in the context of a single image tile. This again necessitated the identification of tissue regions and consequently the manual and exhaustive extraction of image tiles from the identified tissue components using the sliding window paradigm. The proposed model aims to automate the tiling process and presents a cohesive end-to-end alternative, thereby overcoming the challenges inherent in patch-based models. ## III Methodology: The Dual Attention Method The proposed model consists of two main components which can function simultaneously to mimic a histopathologist examining a tissue slide under the microscope. 
In routine clinical practice, for a given tissue slide, a pathologist usually visualizes different tissue components at a lower magnification to localize diagnostically relevant HPFs and then observes the morphological differences in selected HPFs at a higher magnification. In our proposed model, the first component is a soft attention model which takes as input a high-level (low magnification) view of the entire WSI to determine various ROIs. Image tiles (at higher magnification) are extracted from the previously selected high-attention areas for classification purposes using a custom sampling method. The second component is the classification model, which uses these image tiles and further extracts a sequence of multi-resolution glimpses or patches from these tiles. The illustration of our proposed workflow is shown in Fig. 2. In the following sections we explain both the soft and hard attention modules and their respective loss functions, as well as the attention sampling procedure and the joint loss function. ### _Soft Attention Model_ Inspired by [16], the Soft Attention Model (SAM) \(f_{s}(\theta_{s})\), with learnable parameters \(\theta_{s}\), aims to discover potential ROIs in the input image that could prove conducive to the classification task at hand. Let the WSI be denoted as \(I\in R^{H\times W\times C}\) where \(H,W\) and \(C\) represent the height, width and channels of the image respectively. Due to computational and memory constraints it is not feasible to train the model with the entire WSI at the highest magnification. To reduce the computational complexity of the proposed framework we take as input a downsampled version of the WSI scaled by a factor \(s\), \(I_{0}\in R^{h\times w\times C}\) where \(h\ll H\) and \(w\ll W\). Let \(f_{e}(\cdot)\), parameterized by \(\theta_{e}\), represent a feature extractor network consisting of a convolutional neural network (CNN) with ReLU non-linearities that extracts discriminative features from the input image \(I_{0}\). Employing an attention mechanism \(f_{a}(\cdot)\) on the extracted features is similar to learning a probability distribution over the pixels of the image such that the probabilities always sum to 1, as shown in (1), \[\sum_{j=0}^{N}f_{a}(I_{0},\theta_{a})_{j}=1 \tag{1}\] where \(I_{0}\) denotes a downsampled image passed to the model and \(N=h\times w\) denotes the number of pixels. The probabilities correspond to the relevance of the areas for the classification task. The overall definition of the proposed soft attention model is given in (2). This results in an attention map \(A\in R^{h\times w}\) which highlights potentially informative regions in the image \(I_{0}\). \[f_{s}(I_{0},\theta_{s})=f_{a}(f_{e}(I_{0},\theta_{e}),\theta_{a}) \tag{2}\] ### _Attention Sampling_ The attention map \(A\) contains both high and low attention regions, where the attention probabilities indicate the relative importance of different regions within the image for the classification task. Fig. 3 shows attention maps for a single slide as training progresses. We sample a set of locations from the high attention regions without replacement in order to extract a set of representative image tiles from the WSI. A major challenge in this regard is that the attention map may contain false positives for high attention regions, e.g. tissue artifacts that are misclassified as ROIs due to the use of entropy. We attempt to solve this problem by adding noise to the attention distribution. 
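The soft attention component just described can be summarized in a short sketch. This is a minimal, hypothetical PyTorch rendering of equations (1)-(2), not the authors' code; the channel counts and layer depths are illustrative placeholders.

```python
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    """f_s = f_a(f_e(I_0)): a small CNN feature extractor followed by a
    softmax over all spatial positions, yielding an attention map A."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.f_e = nn.Sequential(                 # feature extractor (eq. 2)
            nn.Conv2d(in_channels, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.f_a = nn.Conv2d(16, 1, 1)            # one attention logit per pixel

    def forward(self, I0: torch.Tensor) -> torch.Tensor:
        b, _, h, w = I0.shape
        logits = self.f_a(self.f_e(I0)).view(b, -1)
        A = torch.softmax(logits, dim=1)          # probabilities sum to 1 (eq. 1)
        return A.view(b, h, w)                    # attention map over the pixels

# A = SoftAttention()(torch.rand(1, 3, 80, 64))  # A.sum() == 1 per image
```

The key design point is the softmax over all spatial positions, which turns the feature map into a single probability distribution over pixels rather than independent per-pixel scores.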
In contrast to the Gumbel-max method used in [16], our custom sampling method only introduces noise within the tissue regions, thereby ensuring no image tiles containing background information are used for downstream analysis. We created a noise vector \(\textbf{n}\in R^{h\times w}\) by generating random numbers sampled from a uniform distribution in the range \((low,high]\) where both \(low\) and \(high\in R\) and \(high>low\). The benefits of this scheme are two-fold: first, it minimizes the effects of the false positive regions on downstream tasks and secondly, it introduces diversity in the choice of informative tiles provided to the classifier during the training process. To extract the image tiles, we first normalize the attention distribution \(A\) to the range \((0,1]\), denoted as \(\bar{A}\), using min-max normalization. Fig. 2: An overall concept of the proposed dual attention method depicting the downsampled WSIs passed as input to the soft attention module and the resulting attention maps. The sampling method extracts image tiles from the locations depicted in the diagram using a mapping function. Each extracted image tile is passed to the hard attention module which extracts multi-resolution glimpses at 20\(\times\) and 40\(\times\). Employing our custom sampling method on two different use-cases necessitated the use of varied sampling methods. This is due to the fact that while tumor areas generally exist in the form of a "cluster", the HER2 positive score depends on the intensity of DAB staining whereas the loss/intact status of the MMR bio-markers is determined simply by detecting the presence of DAB staining. We employed our proposed framework using two different attention refining methods. #### III-B1 Tumor Attention Maps The tumor attention maps aim to identify tumor areas within the COMET-MMR slides with a reasonable degree of accuracy by detecting the presence of DAB stains. In this regard we used \(k\)-means clustering with the heuristic of partitioning the attention map into three categories, i.e. background, normal tissue and tumor areas. #### III-B2 Positive Attention Maps The positive attention maps were used to locate regions within the image with the highest probability of being HER2+ by using the intensity of DAB staining. We generated a mask \(m\) by using adaptive thresholding on the normalized attention map \(\bar{A}\) to extract highly probable image tiles. In order to focus the "noise" only on the relevant areas within the slide we multiplied the noise vector with the mask \(m\). Additionally, we also multiplied the normalized attention map with the mask, which gives us our final attention distribution \(F_{a}=(m\times n)+(m\times\bar{A})\). Thereafter we performed top-\(k\) sampling on our final attention maps with the targeted noise. This gives us a set of location indices. Let \(N\) be the desired number of tiles to extract from the WSI; then for each image \(I_{0}\) we initially sampled \(2\times N\) locations \(L\in R^{2N\times 2}\) from the high probability regions. Out of these \(2\times N\) locations some may be spatially close to each other, thereby contributing to data redundancy. To ensure that we obtain spatially distinct tiles we initially created an empty set to represent our "shortlisted" final locations, \(L_{F}\in R^{N\times 2}\). 
At each time-step, a location index \(L_{t}(x,y)\) is only added to \(L_{F}\) if the Euclidean distance \(d\) between the coordinates \(L_{t}\) and all coordinate pairs previously in \(L_{F}\) is greater than a certain threshold \(d_{t}\). We define a mapping function \(M(I,L_{F})\) that maps each coordinate pair \((x,y)\in L_{F}\) to a location \((X,Y)\) in the WSI \(I\in R^{H\times W\times C}\) using a pre-defined scaling factor \(s\) between the WSI \(I\) and the image \(I_{0}\in R^{h\times w\times C}\) [26]. The sampled image tiles of size \(2048\times 2048\) were resized to \(128\times 128\) and passed on to a second feature extractor network \(f_{f}\) to obtain the feature vectors. Similar to [16], the gradient of the attention distribution was derived by calculating the expectation of the sampled tiles based on their attention values. ### _Soft Attention Regularization_ The soft attention mechanism represents a multinomial distribution over the pixels in the image. Let \(E\) be the multinomial entropy function for the attention maps. To ensure the attention distribution does not focus on a small number of locations where it receives maximum reward, we used an entropy regularizer [16][17]. This attempts to resolve the exploration-exploitation problem, preventing the model from quickly assigning high probabilities to a few selected locations. Instead, it encourages the model to 'explore' various locations: \[L_{SA}=\beta E(f_{s}(I_{0};\theta_{s})) \tag{3}\] where \(\beta\) denotes the entropy regularizer and \(f_{s}\) denotes the soft attention mechanism. The above formulation results in a relatively uniform attention distribution due to the usage of entropy. ### _Hard Attention RL Model_ The tiles extracted from high attention regions are still large enough to contain irrelevant areas. Moreover, training a CNN directly with the tiles is still not feasible due to computational and memory constraints. Therefore, we proposed a hierarchical dual attention method that incorporates soft attention at the WSI level to extract relevant tiles and then hard attention at the tile level for the underlying task. Our soft attention module provides us with a set of \(N\) image tiles of size \(2048\times 2048\) at resolution \(40\times\) extracted from the WSI, and we used a retina-inspired hard attention mechanism [18] in our classification model to selectively analyze informative parts of a single image tile. The classifier used is inspired by [12]. Our model can be formalized by what is known as a Partially Observable Markov Decision Process (POMDP) in reinforcement learning literature. The image tile can be thought of as an environment and the neural networks collectively act as an agent, a decision-making entity that sequentially interacts with the environment. Considering a single image tile \(i\) sampled from \(I\), we extract multi-resolution patches from within the tile to reach a classification decision. At each time-step \(t\), the model receives a state from the environment, consisting of two multi-resolution glimpses centered around an individual location \(l_{t}=(x_{t},y_{t})\). The two glimpses \(g_{t}=(g_{0t},g_{1t})\in R^{hg\times wg\times C}\) of size \(128\times 128\) are extracted at resolutions \(40\times\) and \(20\times\) respectively, mimicking the peripheral vision around a single point of focus. 
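The sampling procedure above (noise injection, min-max normalization, top-\(k\) selection and the spatial-distinctness filter) lends itself to a compact sketch. The following is a hypothetical NumPy rendering under the paper's notation; the threshold values and helper name are illustrative, not the authors' implementation.

```python
import numpy as np

def sample_distinct_locations(A, tissue_mask, n_tiles=10, d_t=4.0,
                              low=0.0, high=1.0, seed=0):
    """Sample n_tiles spatially distinct high-attention locations from
    attention map A (h x w), restricting noise to the tissue mask."""
    rng = np.random.default_rng(seed)
    A_bar = (A - A.min()) / (A.max() - A.min() + 1e-8)        # min-max normalize
    noise = rng.uniform(low, high, size=A.shape)
    F_a = tissue_mask * noise + tissue_mask * A_bar           # final distribution
    # Take 2*n_tiles top candidates, then filter by pairwise Euclidean distance.
    flat = np.argsort(F_a.ravel())[::-1][: 2 * n_tiles]
    cand = np.stack(np.unravel_index(flat, A.shape), axis=1)  # (2n, 2) coords
    kept = []
    for loc in cand:
        if all(np.linalg.norm(loc - k) > d_t for k in kept):
            kept.append(loc)
        if len(kept) == n_tiles:
            break
    return np.array(kept)  # map to WSI coordinates via the scale factor s
```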
A CNN \(f_{g}\) parameterized by \(\theta_{g}\) acts as a nonlinear function which takes as input these two RGB images and aggregates the features extracted from these two glimpses to create a feature representation vector \(v_{gt}\). To capture both the semantic and spatial information, the features of the location coordinates \(v_{lt}\) are also extracted by way of a linear layer and combined with the feature vector \(v_{gt}\) to obtain \(v_{t}=v_{gt}\times v_{lt}\). In line with our objective to mimic a pathologist's visual attention we use multiple such glimpses, analogous to a sequence of saccades. An example of this sequence of glimpses can be found in Fig. 4. The backbone of the hard attention classification model is a recurrent neural network (RNN). In the absence of unrestricted access to the entire image tile \(i\), the RNN sequentially builds up an internal representation of the environment using glimpses. To retain information from earlier time-steps and to learn the spatial dependencies between the glimpses, we use a long short-term memory (LSTM) based RNN. At each time-step \(t\) the RNN \(f_{r}(\theta_{r})\), with learnable parameters \(\theta_{r}\), processes the aggregated features of the glimpses to update the parameters of its internal hidden states (memory units), \(h_{t}\). \[h_{t+1}=f_{r}(h_{t},v_{t};\theta_{r}). \tag{4}\] The CNN and RNN collectively act as an agent. Given the current state or "glimpse" of the environment, the agent predicts the location of the next glimpse \(l_{t+1}=(x,y)\) and a classification decision. Another CNN \(f_{c}\) was employed to embed contextual awareness in the model and to aid in the selection of the ROIs. This model, with learnable parameters \(\theta_{c}\), takes as input a down-scaled version of the original image tile, \(i_{d}\), to provide contextual information and assist in the selection of locations for the glimpses. To ensure that the model selects spatially distinct locations, an "Inhibition of Return" (IOR) mechanism, which suppresses the textural information of previously visited locations by blacking out those regions, was incorporated into the process. The outputs of the last hidden layer of the recurrent neural network and the context module are combined using the Hadamard product to obtain a feature vector. The location network \(f_{l}(\theta_{l})\) linearly transforms the resulting feature vector to predict the next location \(l_{t+1}=(x_{t+1},y_{t+1})\). In order to model human visual attention, the entire process is repeated iteratively for \(T\) time-steps on a single image tile. At the end of this sequence of saccades \((g_{t-1},g_{t},g_{t+1}...g_{T})\), referred to as an "episode" in RL literature, the model predicts the final classification label \(Y_{T}\) for the image tile \(i\) under consideration. ### _Hard Attention Loss function_ All the individual components of the retina-inspired hard attention model were trained in an end-to-end fashion where we optimized the loss obtained from each neural network present in the overall framework, i.e. the glimpse network \(f_{g}\), the recurrent model \(f_{r}\), the contextual CNN \(f_{c}\) and the location network \(f_{l}\). Similar to [12], in order to optimize the loss, the hard attention model learns a parameterized DRL policy \(\pi\) where each state is mapped to a set of actions by maximizing the sum of the expected reward of said actions while following the parameterized policy. 
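One step of this glimpse loop can be sketched as follows. This is a hypothetical, simplified PyTorch module illustrating the fusion of glimpse and location features and the recurrent update of equation (4); the layer sizes are placeholders, and the context CNN and IOR masking are omitted for brevity.

```python
import torch
import torch.nn as nn

class GlimpseAgent(nn.Module):
    """One simplified hard-attention step: fuse glimpse/location features,
    update the LSTM state, and propose the next glimpse centre."""
    def __init__(self, feat: int = 256):
        super().__init__()
        self.f_g = nn.Sequential(                  # glimpse feature extractor
            nn.Conv2d(6, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, feat),
        )
        self.f_loc = nn.Linear(2, feat)            # location features v_lt
        self.f_r = nn.LSTMCell(feat, feat)         # recurrent core (eq. 4)
        self.f_l = nn.Linear(feat, 2)              # location network

    def step(self, glimpses, loc, state=None):
        v_g = self.f_g(glimpses)          # two stacked RGB glimpses: 6 channels
        v_t = v_g * self.f_loc(loc)       # fuse semantic and spatial features
        h, c = self.f_r(v_t, state)       # h_{t+1} = f_r(h_t, v_t)
        return torch.tanh(self.f_l(h)), (h, c)  # next (x, y) in [-1, 1], state
```

In the full model the next location would additionally be conditioned on the context features of \(i_{d}\) and masked by the IOR mechanism before the glimpses \(g_{0t}\), \(g_{1t}\) at 40\(\times\) and 20\(\times\) are cropped around it.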
In our research study, the actions consist of the coordinates of the next location \(l_{t+1}=(x_{t+1},y_{t+1})\), from where the multi-resolution glimpses will be extracted, and the classification decision \(Y_{T}\), provided the input is the glimpse \(g_{t}=(g_{0t},g_{1t})\) at time-step \(t\) from location \(l_{t}=(x_{t},y_{t})\) and the downsampled version of the image tile \(i_{d}\). \[\pi((l_{t+1}(x_{t+1},y_{t+1}),Y_{T})|(v_{t},i_{d});\theta_{HA}). \tag{5}\] where \(\theta_{HA}=\{\theta_{g},\theta_{r},\theta_{c},\theta_{l}\}\). The reward is calculated using the following equation on all glimpses for a given tile. \[r_{t}=\begin{cases}1\text{ if }GT=Y_{t}\\ 0\text{ otherwise}\end{cases} \tag{6}\] where \(r_{t}\) and \(Y_{t}\) denote the reward and predicted score for each glimpse respectively and \(GT\) denotes the ground truth. The final reward \(R_{T}=\sum_{t=0}^{T}\gamma^{t-1}r_{t}\) is calculated by taking the sum of all rewards of the entire glimpse episode \((g_{t-1},g_{t},g_{t+1}...g_{T})\), where \(\gamma\) is a weighting (discount) factor, a concept commonly used in RL literature. The parameterized policy \(\pi\) is optimized by maximizing the model's expected reward, an objective that can be achieved by gradient ascent, the reverse of gradient descent. To maximize this objective the REINFORCE rule has been employed: in each episode, the gradients of the log-probabilities of the actions are weighted by the obtained rewards, so that actions leading to higher rewards are reinforced while actions with low rewards are suppressed. In order to reduce the high variance of the gradient estimates, a common drawback of the REINFORCE method, a baseline function \(B_{t}\) was introduced. \[\nabla L_{\theta}=\sum_{t=0}^{T}\nabla_{\theta}\log\pi((l_{t+1},Y_{T})|(v_{t},i_{d});\theta_{HA})(R_{t}-B_{t}) \tag{7}\] To prevent the model from extracting glimpses from the same location, the model was discouraged from visiting previously attended locations. This is achieved by essentially "blacking out" regions from where the previous glimpses have been extracted within the downsampled image tile \(i_{d}\), which provides contextual information to the model. An additional term was introduced where the sum of the overlapping bounding boxes for the glimpse locations was calculated. This term was incorporated into the final loss function to encourage the model to select spatially distinct locations. \[L_{bb}=\frac{1}{{}_{T}C_{2}}\sum_{t=0}^{T}\sum_{tb=t+1}^{T}b(g_{t})\cap b(g_{tb}) \tag{8}\] where \({}_{T}C_{2}\) is the number of possible combinations of \(T\) distinct objects taken 2 at a time and \(b\) denotes a function that calculates the bounding box around glimpse \(g\) at time-step \(t\). Fig. 3: (left to right) The first column shows the original MLH1 WSI. The remaining columns show the evolution of the attention maps along different epochs as training progresses for the given slide. Moreover, to penalize wrong predictions in accordance with their clinical significance, an additional task-specific regularization term was added, \(L_{s}=|Y_{t}-GT|\). \(L_{s}\) is a measure of the absolute difference between the ground truth values and the predicted values, considering both are numerical scores. Therefore, this particular component in the loss function will penalize the model based on how "far apart" the ground truth and predicted values are. Finally, these losses are combined as in (9), where \(\delta\) is used to control the effect of \(L_{s}\) and \(L_{bb}\). 
\[L_{HA}=L_{\theta}+\delta(L_{bb}+L_{s}) \tag{9}\] ### _Joint Loss Function_ We have trained our model with both components being trained separately and together, to demonstrate the flexibility of our proposed framework. In the absence of an explicit stopping criterion, we have proposed a method of training both components simultaneously. Since the retina-inspired classification model takes much longer to train than the soft attention model, the loss of the soft attention model is incorporated into the final loss function using a decaying coefficient. This allows us to train the model end-to-end with a dynamic loss function [27]. The joint loss function is as in (10), where \(\alpha\in R\) denotes a hyperparameter and \(e\) denotes the epoch number. \[L_{J}=L_{HA}+(\alpha^{e}\times L_{SA}) \tag{10}\] ## IV Experimental Results We have employed our proposed model on two separate use-cases, HER2 prediction for invasive breast carcinoma and the prediction of the Loss/Intact status of two MMR biomarkers, MLH1 and PMS2. ### _Prediction of HER2 Expression Status in Breast Cancer_ #### IV-A1 Dataset The experiments were carried out on the HER2 Scoring Contest dataset [28]. The HER2 challenge dataset is publicly available, covered by the Nottingham Research Ethics Committee 2 approval no. REC 2020313 (R&D reference 03HI01). The contest dataset comprised a total of 172 WSIs obtained from 86 invasive breast cancer cases. Each case contains an IHC-stained HER2 slide and the corresponding H&E slide. As in routine clinical practice, for our experiments we only take the IHC-stained slides into consideration for assigning HER2 scores. The training dataset consisted of 52 cases, with 13 cases belonging to each of the four scores, while 28 cases were reserved as part of the test dataset. On average the images were made up of approximately \(10^{10}\) pixels and were scanned using the Hamamatsu NanoZoomer C9600. The WSIs possessed a multi-resolution pyramidal structure in the range 4\(\times\) to 40\(\times\). The GT for each case was obtained from clinical reports marked by at least two histopathologists and included the slide-level HER2 score for each case \((0-3+)\). For our experiments, we performed four-fold cross-validation with the training dataset to create our training and validation data and have used the 28 slides in the contest testing dataset as the test data [28]. #### IV-A2 Pre-processing and Experimental Setup The WSIs were down-sampled by a factor of 32 to level 5 at resolution 2.5\(\times\) as \(I_{0}\). In order to prevent the attention model from focusing on any tissue artifacts present in the slide, a tissue mask was generated and applied to each image \(I_{0}\) using Otsu thresholding and morphological operations [29]. The soft attention model comprises four convolutional layers followed by rectified linear unit (ReLU) activation functions; the channels were progressively doubled in each subsequent layer. To reduce the size of the sampling space, max-pooling was applied to the output of the convolutional layers. Both the initial number of channels and the size of the max-pooling kernel were set to 8. A residual network with 8 layers and 32 channels was used for the extraction of features [16]. The parameters \(high\) and \(low\) for the noise vector were set to the range \([0,1]\). Since we aim to find the "most attentive" regions in a given WSI, the addition of negative noise could be considered counterproductive. 
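The loss terms of equations (7), (9) and (10) fit together in a few lines. The following is a hypothetical PyTorch-style sketch; the tensors and the value of \(\delta\) are placeholders, not the authors' settings.

```python
import torch

def reinforce_loss(log_probs, rewards, baseline):
    # Eq. (7): policy-gradient loss with a baseline; log_probs has shape (T,)
    # and holds log pi(a_t | s_t); rewards and baseline also have shape (T,).
    advantage = (rewards - baseline).detach()   # no gradient through (R - B)
    return -(log_probs * advantage).sum()

def hard_attention_loss(L_theta, L_bb, L_s, delta=0.1):
    # Eq. (9): delta controls the bounding-box and score regularizers.
    return L_theta + delta * (L_bb + L_s)

def joint_loss(L_HA, L_SA, epoch, alpha=0.5):
    # Eq. (10): the soft-attention term decays as training proceeds, since
    # the soft attention model converges faster than the hard attention one.
    return L_HA + (alpha ** epoch) * L_SA
```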
The minimum area threshold was set to 20 tiles, where initially we select 20 locations and then distill 10 spatially distinct tiles by identifying the tiles with the maximum Euclidean distance. To ensure a fair comparison, we have tried to keep the values of the hyperparameters of the classification model similar to the original configuration [12]. Each of the \(10\) coordinate pairs in \(L_{F}\) was mapped to a location in the original WSI using a scale factor \(s=32\). The image tiles \(i\) of size \(2048\times 2048\times 3\) were extracted at resolution \(40\times\). Fig. 4: The first image is an example of an image tile with the locations of six glimpses or patches shown with colored circles for the prediction of HER2 score 2+. The remaining images show the glimpses extracted at the circled locations at resolutions 40\(\times\) and 20\(\times\). Furthermore, we performed standard data augmentations: random rotations \((0,90,180,270)\) and horizontal and vertical flipping on the extracted image tiles. During training, in each mini-batch step \(40\) image tiles were extracted from four downsampled images of different HER2 scores \([0-3+]\), resulting in an equal representation of all scores in a single batch. Given a single image tile \(i\) from the batch, we extract six sequential multi-resolution glimpses of size \(128\times 128\times 3\) at magnification levels \(20\times\) and \(40\times\). The location of the first glimpse was selected randomly. The combined representation of the glimpse's image and location features was of size \(1\times 256\) and the hidden layers of the RNN contained \(256\) and \(128\) neurons. The Adam optimizer (Kingma & Ba) [30] and the StepLR scheduler were used for training the model. The learning rate for the soft attention module was initially set to \(0.0001\) and that of the hard attention module to \(0.001\). In our experiments we set the value of the regularizer to \(1\) and in the joint loss experiments we set the value of \(\alpha\) to \(0.5\). #### IV-A3 Results As in the HER2 scoring contest [28], we have evaluated the performance of the proposed method using three separate criteria, namely a) agreement points, b) weighted confidence and c) combined points. For the first assessment criterion of agreement points, a maximum of 15 points was awarded per case based on the variation between the ground truth HER2 labels and the predicted HER2 scores. The second assessment criterion of weighted confidence was used to evaluate the reliability of the predictions by weighing each predicted score with a corresponding confidence value. The third criterion, combined points, was calculated by taking the product of the two preceding criteria for each case. We have used agreement points as the main metric for the evaluation of the proposed model. For the 10 tiles extracted per WSI, the most dominant class was selected as the slide-level score. Mean aggregation was used to aggregate the results of the four folds to arrive at a final HER2 slide-level score for each WSI in the testing dataset. The confidence value was obtained by calculating the average probability values of the 10 tiles extracted for each WSI for each fold. The proposed model, as shown in Table I, achieved a total of 402.5 points. Our method outperformed most other approaches by a significant margin and achieved performance comparable to [12]. 
It is important to highlight that [12] extracted approximately 58,000 image tiles whereas our model achieved similar results while only extracting 510 image tiles (\(<1\%\) of the image tiles) with the help of attention mechanisms. ### _Prediction of MMR Marker Status in Colon Cancer_ #### IV-B1 Dataset This set of experiments was performed on an internal dataset, the COMET-MMR dataset, which comprises 72 WSIs obtained from patients with colorectal cancer. The COMET dataset was collected using the East Midlands Research Ethics Committee (reference 11/WM/0170). The slides were scanned using the Omnyx VL120 scanner and stained with 4 MMR markers. For this IHC study, we only considered MLH1 and PMS2. The dataset consisted of 14 negative (loss) cases and 58 positive (intact) cases for both MLH1 and PMS2. For our experiments, we performed five-fold cross-validation to create our training, validation and testing datasets. The GT for the WSI-level MMR scores was provided by an expert pathologist [35]. #### IV-B2 Pre-processing and Experimental Setup The same experimental setup mentioned previously in Section IV-A2 has been used for these experiments with slight modifications. The soft attention network consisted of a five-layer CNN, starting with 8 channels. Channels were doubled in each subsequent layer and, as before, ReLU activation functions were used. A max-pooling layer of size 3 was used to reduce the size and a softmax layer was used to assign probability values. The same networks mentioned previously were used for feature extraction. Unlike the HER2 challenge dataset, the IHC-stained slides present in the COMET-MMR dataset contained many visual artifacts, some of which escaped the preceding tissue masking steps, e.g. blood vessels and tissue folds. The soft attention model with multinomial entropy was susceptible to focusing on these diagnostically irrelevant structures. While these can be removed using manual masks, in order to show the robustness of the proposed model, it was trained with these artifacts present both during training and at test time. An example of such artifacts and the corresponding attention masks and GT tumor masks is shown in Fig. 5. Fig. 5: a) The predicted tumor mask overlaid on the WSI with the red box indicating the visual artifacts incorrectly included in the mask; b) The ground truth tumor mask. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \hline Teams & Points & W.C. & C.Pts \\ \hline **The proposed method** & **402.5** & **23.59** & **347.2** \\ \hline Learning Where to See [12] & **405** & **24.1** & **359.1** \\ \hline VISILAB-I (GoogleNet [31]) & 382.5 & **23.55** & **348** \\ \hline FSUJena [23] & 370 & 23 & 345 \\ \hline HUANGCH (AdaBoost) & 377.5 & 22.62 & 345.7 \\ \hline MTB NLP (AlexNet[32]) & 390 & 22.94 & 335.7 \\ \hline VISILAB-II (contour analysis) & 377.5 & 21.88 & 322 \\ \hline Team Indus (LeNet[33]) & **402.5** & 18.45 & 321.4 \\ \hline UC-CCSE [34] & 390 & 21.07 & 316 \\ \hline MUCS-III [25] & 390 & 20.43 & 300.8 \\ \hline MUCS-II (GoogleNet [31]) & 385 & 19.51 & 290.1 \\ \hline \hline \end{tabular} \end{table} TABLE I: HER2 comparative results (Points = agreement points, W.C. = weighted confidence, C.Pts = combined points). However, to reduce the probability of these structures being picked up by the attention sampling model, negative noise was introduced into our workflow: the parameters \(high\) and \(low\) for the noise vector were set to the range \([-2,1]\). 
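The \(k\)-means refinement used to isolate tumor regions from the attention maps (Section III-B1, and described further below) can be sketched briefly. This is a hypothetical scikit-learn snippet, not the paper's implementation: the 1-D attention values are clustered into three groups and the cluster with the highest center is kept as the tumor mask.

```python
import numpy as np
from sklearn.cluster import KMeans

def tumor_mask_from_attention(A: np.ndarray) -> np.ndarray:
    """Cluster attention values into background / normal tissue / tumor
    and return a boolean mask for the highest-attention cluster."""
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(A.reshape(-1, 1))
    tumor_label = np.argmax(km.cluster_centers_.ravel())
    return (km.labels_ == tumor_label).reshape(A.shape)
```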
Once we obtained the attention maps, \(k\)-means clustering with the number of clusters set to 3 was used to separate the distinct regions, namely high-attention areas, low-attention areas and the background region. From the high-attention areas, \(30\) locations were pooled to shortlist \(15\) spatially distinct tiles as \(L_{F}\). Unlike HER2, where there was no explicit stopping criterion for the attention model, for this dataset we used the Dice loss between the predicted tumor mask and the GT masks to decide when to stop the training of the attention model. #### IV-B3 Baseline Experiments _Random Sampling Method._ To evaluate the performance of our dual attention model we have proposed a baseline experiment by randomly extracting image tiles of size \(2048\times 2048\) at resolution 40\(\times\), without replacement, from the tissue area of the image. The tiles were extracted without using soft attention by using the Gumbel-max trick [36]. The image tiles were then passed to the RL-based hard attention component explained in Section III-D to obtain a classification prediction. The input images and the experimental setup of the RL-based hard attention classifier are the same as in the proposed model. _Sliding Window Method (weakly supervised learning)._ To further compare the performance of our proposed framework, we proposed a second, DAB-mask-based ResNet18 [37] baseline experiment. For each WSI, after the application of a tissue mask, a DAB mask was created by using a threshold of \(0.85\) on a grayscale image. After the application of both masks, patches of size \(224\times 224\) with a tissue area threshold of greater than \(0.35\) were extracted using the sliding window approach with no overlap. The patches were extracted at magnification \(20\times\) in order to incorporate more contextual information. #### IV-B4 Results We chose the F1-score as the main metric for the evaluation of our proposed method. For both the dual attention model and the Random Sampling baseline method we calculated the average probabilities of the 15 tiles selected per WSI to calculate the F1-scores and AUROC scores. For the Sliding Window baseline, for each slide, the top 15 probability values were used. Five-fold cross-validation was performed with the same folds used in all the experiments, to ensure a fair comparison. The comparison of our proposed model with the baseline experiments is shown in Table II, where our model outperforms the other techniques with F1-scores of 0.883 and 0.817 for the biomarkers MLH1 and PMS2, respectively. The second-best performing method is the Sliding Window method and the lowest performance was achieved by the Random Sampling method. The superior performance of our model can be attributed to the fact that we selectively process only those image tiles that are identified by the soft attention module as potential ROIs, whereas the poor performance of the Random Sampling method is due to the fact that the randomly sampled image tiles may include noise from non-tumor areas. This exhibits the advantage of using soft attention during the sampling process, given that the same input was provided to both frameworks. Regarding the AUROC scores, it can be seen from Table II that the proposed model achieves performance equivalent to the Sliding Window baseline model. The performance of the proposed model is comparable for the biomarker MLH1 whereas for PMS2 it lags behind by a margin of approximately 1%. 
The gap between the AUROC and F1-scores for this model can be explained by the fact that it ranks the test samples in the unbalanced dataset better at higher thresholds. The application of DAB masks as a pre-processing step for the Sliding Window baseline also brings about a significant advantage for the scoring of IHC-stained slides by reducing the amount of noise, as compared to the random-sampling procedure. It is important to note here that the Sliding Window baseline extracts patches from the entire image whereas our model uses only 15 tiles for a single image. The first of the two baselines, Random Sampling, aims to showcase the efficacy of the soft attention component of our proposed model and to exhibit the advantage of employing an attention mechanism in comparison to randomly extracting tiles. The second baseline, Sliding Window, follows the standard CPATH pipeline. The motivation behind the selection of this baseline was to demonstrate that we can achieve performance comparable to that of standard CPATH methods without needing to process the entire WSI, which is both computationally inefficient and time-consuming. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \hline Biomarker & Method & F1-Score & AUROC \\ \hline \multirow{3}{*}{MLH1} & Proposed Method & \(\textbf{0.88\pm 0.09}\) & \(\textbf{0.92\pm 0.09}\) \\ & Random Sampling & \(0.66\pm 0.06\) & \(0.67\pm 0.12\) \\ & Sliding Window & \(0.72\pm 0.07\) & \(\textbf{0.92\pm 0.04}\) \\ \hline \multirow{3}{*}{PMS2} & Proposed Method & \(\textbf{0.82\pm 0.08}\) & \(0.89\pm 0.09\) \\ & Random Sampling & \(0.62\pm 0.11\) & \(0.64\pm 0.04\) \\ \cline{1-1} & Sliding Window & \(0.71\pm 0.09\) & \(\textbf{0.93\pm 0.08}\) \\ \hline \end{tabular} \end{table} TABLE II: MMR Status Prediction: Comparison with Baselines ## V Conclusions In this study, we have presented a novel dual attention model that combines soft attention at the slide level and hard attention at the patch level for the automated scoring of IHC-stained WSIs. We model a pathologist's visual analysis of a WSI by first employing a soft attention mechanism at low resolution to identify areas in need of further investigation and then extracting a set of spatially distinct and diverse image tiles from these regions. Additionally, instead of processing these extracted tiles entirely, we further employ reinforcement learning to extract a sequence of saccades to score each tile. These scores are then aggregated to reach a slide-level classification decision. We trained the two attention modules both separately and together for two IHC datasets, where the binary prediction of the Loss/Intact status for the MLH1 and PMS2 biomarkers in colorectal cancer is dependent on the presence of DAB staining and HER2 scoring in breast cancer is associated with the intensity of DAB staining. Conventional WSI pre-processing pipelines employ the sliding window paradigm, where the standard approach is to independently extract and save the image patches. Not only is this a time-consuming task but it requires vast reserves of memory. We take inspiration from the principle of "working smarter, not harder", and achieve results comparable to the state-of-the-art methods while only processing a fraction of the WSI at the highest resolution. 
While the average WSI can be of the order of \(10^{10}\) pixels, we perform slide-level classification by attending to fewer than 20 glimpses of size 128 \(\times\) 128 each at the highest resolutions (40\(\times\) and 20\(\times\)) from a single slide, which amounts to less than 5% of the entire slide area, while achieving results comparable to conventional methods. The proposed method also attempts to address the memory issues inherent in all patch-based models by extracting a batch of image tiles in real time, performing the required analysis, updating the gradients accordingly and then discarding the image tiles without saving them, potentially saving memory. This can allow for the deployment of this method in labs with limited memory resources. Additionally, in standard WSI pipelines which extract hundreds of image tiles per slide, it is difficult to ascertain which tiles have contributed to the final score. In comparison, our selective attention mechanism, which identifies truly useful image tiles, can allow a pathologist to conveniently assess and corroborate the predicted results and can be incorporated into an assistive diagnostic tool. An open problem is how to select the optimal number of tiles extracted from each slide for slide-level classification.
2301.10683
Note on the conjugacy classes of elements and their centralizers for the free product of two groups
We describe the conjugacy classes of the elements of the free product of two groups and their centralizers and, as a consequence, we correct the calculation of the cyclic and periodic cyclic homology of the group ring of the free product of two groups given in a previous paper.
Dan Burghelea
2023-01-25T16:36:06Z
http://arxiv.org/abs/2301.10683v1
# Note on the conjugacy classes of elements and their centralizers for the free product of two groups ###### Abstract We describe the conjugacy classes of the elements of the free product of two groups and their centralizers and, as a consequence, we correct the calculation of the cyclic and periodic cyclic homology of the group ring of the free product of two groups given in a previous paper. ## 1 Introduction This Note was prompted by a mistake, pointed out by Markus Land, concerning the cyclic and periodic cyclic homology of the group ring of the free product, precisely Propositions II and IIp in the paper **The cyclic homology of the group rings** published in Comment. Math. Helv., 60 (1985) no 3, 354-365. The mistake was the result of a miscalculation of the centralizers of the conjugacy classes of elements of the free product of two groups. In this note we provide a correct description of them and, as a consequence, correct the statements of Propositions II and IIp in [1]. Consistent with the notation in [1], for a group \(G\) and an element \(x\in G\) one denotes by \(G_{x}:=\{y\in G\mid y\cdot x=x\cdot y\}\) the centralizer of \(x\), by \(\{x\}\) the subgroup generated by the element \(x\) and by \(N_{x}\) the quotient group \(N_{x}:=G_{x}/\{x\}\). These groups remain isomorphic for all \(x\) in the same conjugacy class. Denote by \(\langle G\rangle\) the set of conjugacy classes of elements of \(G\) and for \(x\in G\) write \(\hat{x}\) for the conjugacy class of \(x\). For the groups \(H\) and \(G\) one denotes the nontrivial elements by \(h\) and \(g\) and the identity elements by \(e_{H}\) and \(e_{G}\). Consider the free product \(P=H*G.\) Any element \(x\in P\) is representable (not uniquely) by a _word_ \(s_{1}s_{2}\cdots s_{r}\) with \(s_{i}\in H\sqcup G.\) The product of the elements \(x\) and \(x^{\prime}\), represented by the words \(s_{1}s_{2}\cdots s_{r}\) and \(s_{1}^{\prime}s_{2}^{\prime}\cdots s_{r^{\prime}}^{\prime}\), is representable by the concatenation \(s_{1}s_{2}\cdots s_{r}s_{1}^{\prime}s_{2}^{\prime}\cdots s_{r^{\prime}}^{\prime}.\) Modifying a word representation of an element \(x\in P\) by i) removing all elements \(s_{i}\) of the form \(e_{H}\) and \(e_{G},\) ii) replacing consecutive elements \(\cdots s_{i}s_{i+1}\cdots\) by their product when both lie in either \(H\) or \(G,\) leads to a smaller word representation of \(x\in P,\) the _reduced word_ representation, which is unique. (The empty word is the reduced representation of \(e_{P}\).) The reduced word representation \(s_{1}s_{2}\cdots s_{r}\) of the element \(x\in P\) is characterized by a) \(s_{i}\in(H\setminus e_{H})\sqcup(G\setminus e_{G})\) b) consecutive \(s_{i}\) and \(s_{i+1}\) belong to different groups. Consequently, a nontrivial element \(x\in P\) has a unique _reduced word_ representation of one of the following seven types: Type 1: \(w=\)\(h_{1}g_{1}h_{2}g_{2}\cdots h_{k}g_{k},\)\(k\geq 1,\) Type 2: \(w=\)\(h_{1}g_{1}h_{2}g_{2}\cdots h_{k}g_{k}\)\(h,\)\(k\geq 1,\) Type 3: \(w=g\ h_{1}g_{1}h_{2}g_{2}\cdots h_{k}g_{k},\quad k\geq 1,\) Type 4: \(w=g\ h_{1}g_{1}h_{2}g_{2}\cdots h_{k}g_{k}\ h,\ \ k\geq 1,\) Type 5: \(w=g,\) Type 6: \(w=h,\) Type 7: \(w=gh.\) Because of the uniqueness of the reduced word representation the following facts hold true: 1. if two nontrivial elements commute then they belong to the same type; as a consequence if \(x\) is represented by a reduced word of type 5 or type 6 then \(P_{x}=G_{x}\) or \(P_{x}=H_{x}.\) 2. 
any nontrivial element \(x\in P\) of type different from type 5 and type 6 is conjugate to an element of type 1. 3. the following proposition holds true **Proposition 1.1**: _Suppose \(w=h_{1}\ g_{1}\ h_{2}\ g_{2}\cdots h_{r}\ g_{r}\) and \(w^{\prime}=h^{\prime}_{1}\ g^{\prime}_{1}\ h^{\prime}_{2}\ g^{\prime}_{2}\cdots h^{\prime}_{r^{\prime}}\ g^{\prime}_{r^{\prime}}\) are two reduced words of type 1, representing elements \(x\) and \(x^{\prime}\) in \(P\) s.t. \(x\cdot x^{\prime}=x^{\prime}\cdot x\), and suppose \(c\) is the greatest common divisor of \(r\) and \(r^{\prime}.\) Then there exists a reduced word of type 1, \(w_{0}=h^{\prime\prime}_{1}\ g^{\prime\prime}_{1}\ h^{\prime\prime}_{2}\ g^{\prime\prime}_{2}\cdots h^{\prime\prime}_{c}\ g^{\prime\prime}_{c},\) such that \(w\) is the concatenation of \(r/c\) copies of \(w_{0}\) and \(w^{\prime}\) is the concatenation of \(r^{\prime}/c\) copies of \(w_{0}.\)_ **Observation 1.2**: _Items 1 and 2 above imply that \(\langle P\rangle=e_{P}\sqcup(\langle H\rangle\setminus e_{H})\sqcup(\langle G\rangle\setminus e_{G})\sqcup U\) with \(U=\{\hat{x}\in\langle P\rangle\mid\hat{x}\cap(e_{H}*G)=\emptyset,\hat{x}\cap(H*e_{G})=\emptyset\}\) and the centralizers in \(P\) of the elements in \(e_{H}*G\subset P\) resp. in \(H*e_{G}\subset P\) remain the same as the centralizers in \(G\) and \(H\) resp._ _Item 3 (i.e. Proposition 1.1) shows that for \(x\in\hat{x}\in U,\) the pair group-subgroup \((P_{x},\{x\})\) is isomorphic to the pair \((\mathbb{Z},k(x)\mathbb{Z}),\) hence \(N_{x}\simeq\mathbb{Z}_{k(x)}.\) Here \(k(x)\) is the largest integer \(k\) s.t. \(x=y^{k}.\)_ ## 2 Proof of Proposition 1.1 Let \(\mathbb{S}\) be a set of symbols. Let \(\mathcal{S}:=\{s_{1},s_{2},\cdots,s_{n}\}\) be an ordered set of symbols with \(s_{i}\in\mathbb{S}\) (i.e. a word with letters in \(\mathbb{S}\)), \(p<n\) and \(d=n-p.\) **Lemma 2.1**: _Suppose that the collection \(\mathcal{S}\) satisfies:_ 1. \(s_{i}=s_{d+i}\) _for_ \(i\leq p,\) 2. \(s_{i}=s_{i+p}\) _for_ \(i\leq d.\) _1. If \(n\) and \(p\) are relatively prime then all \(s_{i}\) are equal._ _2. If \(c\) is the greatest common divisor of \(n\) and \(p\) and \(\mathcal{S}^{\prime}=s_{1},s_{2},\cdots s_{c}\) is the ordered set of the first \(c\) symbols of \(\mathcal{S}\) then \(\mathcal{S}\) is the concatenation of \(n/c\) copies of \(\mathcal{S}^{\prime}.\)_ _Proof:_ For any \(r=1,2,\cdots,d\) define the subset \(\mathcal{S}(r)\subset\mathcal{S}\) consisting of all elements of \(\mathcal{S}\) indexed by \(r+kd\) for \(k=0,1,\cdots,\) namely \[\mathcal{S}(r):=\{s_{r},s_{r+d},s_{r+2d},\cdots s_{r+kd}\cdots\}.\] Note that the sets \(\mathcal{S}(r)\) are disjoint and their union is \(\mathcal{S}.\) Proof of item 1: For each \(\mathcal{S}(r)\) let \(k_{r}\) be the unique integer such that \(r+(k_{r}-1)d\leq p<r+k_{r}d.\) The first inequality guarantees that \[r+k_{r}d-p\leq d. \tag{1}\]
## 2 Proof of Proposition 1.1

Let \(\mathbb{S}\) be a set of symbols. Let \(\mathcal{S}:=\{s_{1},s_{2},\cdots,s_{r},\cdots,s_{n}\}\) be an ordered set of symbols with \(s_{i}\in\mathbb{S}\) (i.e. a word with letters in \(\mathbb{S}\)), let \(p<n\) and \(d=n-p.\)

**Lemma 2.1**: _Suppose that the collection \(\mathcal{S}\) satisfies:_

_(i) \(s_{i}=s_{d+i}\) for \(i\leq p,\)_

_(ii) \(s_{i}=s_{i+p}\) for \(i\leq d.\)_

_1. If \(n\) and \(p\) are relatively prime then all \(s_{i}\) are equal._

_2. If \(c\) is the greatest common divisor of \(n\) and \(p\) and \(\mathcal{S}^{\prime}=s_{1},s_{2},\cdots,s_{c}\) is the ordered set of the first \(c\) symbols of \(\mathcal{S}\), then \(\mathcal{S}\) is the concatenation of \(n/c\) copies of \(\mathcal{S}^{\prime}.\)_

_Proof:_ For any \(r=1,2,\cdots,d\) define the subset \(\mathcal{S}(r)\subset\mathcal{S}\) consisting of all elements of \(\mathcal{S}\) indexed by \(r+kd\) for \(k=0,1,\cdots,\) namely

\[\mathcal{S}(r):=\{s_{r},s_{r+d},s_{r+2d},\cdots,s_{r+kd},\cdots\}.\]

Note that the sets \(\mathcal{S}(r)\) are disjoint and their union is \(\mathcal{S}.\)

Proof of item 1: For each \(\mathcal{S}(r)\) let \(k_{r}\) be the unique integer such that \(r+(k_{r}-1)d\leq p<r+k_{r}d.\) The first inequality guarantees that

\[r+k_{r}d-p\leq d.\tag{1}\]

In view of hypothesis (i) all elements of \(\mathcal{S}(r)\) are equal, and in view of hypothesis (ii) the elements of the collections \(\mathcal{S}(r)\) and \(\mathcal{S}(r+k_{r}d-p)\) are equal. Consider the pairs of integers \((r_{i},\kappa_{i})\) with \(1\leq i\leq d\) defined inductively by:

a) \(r_{1}=1\) and \(\kappa_{1}:=k_{1},\)

b) \(r_{i+1}=1+(\kappa_{1}+\kappa_{2}+\cdots+\kappa_{i})d-ip\) and \(\kappa_{i+1}=k_{r_{i+1}}.\)

Note that in view of inequality (1) all \(r_{i}\leq d.\) Let \(\mathcal{S}_{i}:=\mathcal{S}(r_{i}).\) Since \(p\) and \(d\) are relatively prime, \(\mathcal{S}_{i}\) and \(\mathcal{S}_{j}\) for \(i\neq j\) can never be the same, since \(d\) cannot divide \(i-j.\) Then the sets \(\mathcal{S}_{i},\ i=1,2,\cdots,d\) provide a permutation of the sets \(\mathcal{S}(r),\ r=1,2,\cdots,d.\) Hypothesis (ii) implies that the elements of \(\mathcal{S}_{i}\) and \(\mathcal{S}_{i+1}\) are equal for any \(i\), hence all elements of \(\mathcal{S}\) are equal.

Proof of item 2: Consider the set of symbols \(\mathbb{S}^{\prime}=\mathbb{S}\times\mathbb{S}\times\cdots\times\mathbb{S},\) the \(c\)-fold cartesian product of \(\mathbb{S}\); clearly \(\mathcal{S}^{\prime}\in\mathbb{S}^{\prime}.\) Interpret \(\mathcal{S}\) as an ordered set of \(n/c\) symbols of \(\mathbb{S}^{\prime}.\) Clearly item 1 implies item 2.

To prove Proposition 1.1 consider the set of symbols \(\mathbb{S}=(H\setminus e_{H})\times(G\setminus e_{G})\) and write \(w=s_{1}\,s_{2}\,\cdots\,s_{n}\) and \(w^{\prime}=s^{\prime}_{1}\,s^{\prime}_{2}\,\cdots\,s^{\prime}_{p}.\) Since the concatenations \(ww^{\prime}\) and \(w^{\prime}w\) are the same one has

1. \(s^{\prime}_{i}=s_{i}\) for \(i\leq p,\)

2. \(s^{\prime}_{i}=s_{d+i}\) for \(i\leq p,\)

3. \(s_{i}=s_{i+p}\) for \(i\leq n-p=d,\)

which implies \(s_{i}=s_{d+i}\) for all \(i\leq p\) and \(s_{i}=s_{p+i}\) for \(i\leq d.\) In view of Lemma 2.1 this implies that \(w\) is the concatenation of exactly \(n/c\) copies of \(w_{0}=s_{1}s_{2}\cdots s_{c}\), and in view of the equality \(s_{i}=s^{\prime}_{i}\) for \(i\leq p,\) \(w^{\prime}\) is the concatenation of exactly \(p/c\) copies of \(w_{0}.\)
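Lemma 2.1 can also be checked mechanically; the short sketch below (our illustration, not part of [1]) verifies hypotheses (i) and (ii) for a concrete word and confirms the conclusion of item 2:

```python
from math import gcd

def check_lemma(S, p):
    """Verify hypotheses (i) and (ii) of Lemma 2.1 for the word S with the
    given p, then confirm S is n/c copies of its first c = gcd(n, p) symbols."""
    n = len(S)
    d = n - p
    assert all(S[i] == S[d + i] for i in range(p))   # hypothesis (i)
    assert all(S[i] == S[i + p] for i in range(d))   # hypothesis (ii)
    c = gcd(n, p)
    assert S == S[:c] * (n // c)                     # conclusion of item 2
    return S[:c]

print(check_lemma("abcabcabc", 6))  # n = 9, p = 6, c = 3 -> 'abc'
```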
## 3 Cyclic resp. periodic cyclic homology of the group ring \(R[P]\)

Let \(R\) be a commutative ring with unit and \(G\) a group. Recall that the reduced cyclic resp. periodic cyclic homology, \(\tilde{H}C_{*}(R[G])\) resp. \(P\tilde{H}C_{*}(R[G]),\) of the group ring \(R[G]\) is the cokernel of the split injective map \(HC_{*}(R[e_{G}])=HC_{*}(R)\to HC_{*}(R[G])\) resp. \(PHC_{*}(R[e_{G}])=PHC_{*}(R)\to PHC_{*}(R[G])\) induced by the inclusion of the trivial subgroup \(e_{G}\) in \(G.\) One refers to the cyclic resp. periodic cyclic homology of the group ring \(R[G]\) as the _unreduced_ version of these homologies. Clearly, the unreduced version is the direct sum of the reduced version with one copy of the cyclic resp. periodic cyclic homology of \(R.\)

As shown in [1] all these homologies, reduced or unreduced, say \(\mathcal{H}_{*}(R[G]),\) are graded \(R\)-modules which are direct sums of graded \(R\)-modules \(\mathcal{H}_{*}(R[G])_{\hat{x}}\) indexed by the conjugacy classes \(\hat{x}\in\langle G\rangle,\) referred to as the contribution of \(\hat{x},\)

\[\mathcal{H}_{*}(R[G])=\oplus_{\hat{x}\in\langle G\rangle}\mathcal{H}_{*}(R[G])_{\hat{x}}.\]

For each \(\hat{x}\neq e_{G}\) the contributions to the reduced and unreduced versions are the same, but for \(e_{G}\) the unreduced version is equal to the reduced version direct sum \(\mathcal{H}_{*}(R).\) For each conjugacy class \(\hat{x}\) one defines \(n(\hat{x}):=n(x),\) the order of the element \(x,\) and \(k(\hat{x}):=k(x),\) the largest \(k\) s.t. \(x=y^{k}\); clearly \(n(x)\) and \(k(x)\) are the same for all \(x\) in the same conjugacy class. Recall from [1] the notations:

1. \[K_{*}(R[G]):=\begin{cases}\oplus_{n\geq 0}H_{2n}(BG;R)&\text{if }*=\text{even}\\ \oplus_{n\geq 0}H_{2n+1}(BG;R)&\text{if }*=\text{odd}\end{cases}\qquad\tilde{K}_{*}(R[G]):=\begin{cases}\oplus_{n>0}H_{2n}(BG;R)&\text{if }*=\text{even}\\ \oplus_{n\geq 0}H_{2n+1}(BG;R)&\text{if }*=\text{odd}\end{cases}\]

2. for \(x\in G\) with \(n(x)=\infty\)

\[T_{*}(\hat{x};R)=T_{*}(x;R):=\lim\left(\cdots\to H_{*+2n}(BN_{x};R)\xrightarrow{S}H_{*+2n-2}(BN_{x};R)\to\cdots\right)\]

with \(S\) the Gysin homomorphism of the fibration \(B\{x\}=S^{1}\to BG_{x}\to BN_{x}\) where \(N_{x}=G_{x}/\{x\}\), which up to isomorphism depends only on the conjugacy class of \(x\) and is then denoted by \(T_{*}(\hat{x};R)\).

Recall from [1] that the contribution of \(\hat{x}\) when \(0\neq n(\hat{x})<\infty\) is

\[HC_{*}(R[G])_{\hat{x}}=H_{*}(B(N_{\hat{x}})\times BS^{1}\times K(\mathbb{Z}_{n(\hat{x})},1);R),\quad PHC_{*}(R[G])_{\hat{x}}=K_{*}(R[N_{\hat{x}}])\]

and when \(n(\hat{x})=\infty\) is

\[HC_{*}(R[G])_{\hat{x}}=H_{*}(B(N_{x});R),\quad PHC_{*}(R[G])_{\hat{x}}=T_{*}(\hat{x};R)\]

while for \(n(\hat{x})=0,\) hence \(\hat{x}=e_{G},\) is

\[HC_{*}(R[G])_{e_{G}}=H_{*}(B(G)\times BS^{1};R),\quad PHC_{*}(R[G])_{e_{G}}=K_{*}(R[G])\]

and

\[\tilde{HC}_{*}(R[G])_{e_{G}}=H_{*}(B(G)\times BS^{1}/\ast\times BS^{1};R),\quad P\tilde{HC}_{*}(R[G])_{e_{G}}=\tilde{K}_{*}(R[G]).\]

In particular one has

**Proposition 3.1**:

\[P\tilde{HC}_{*}(R[G])=\tilde{K}_{*}(R[G])\bigoplus\big(\oplus_{\hat{x}\in(\langle G\rangle^{\prime}\setminus e_{G})}K_{*}(R[N_{\hat{x}}])\big)\bigoplus\big(\oplus_{\hat{x}\in\langle G\rangle^{\prime\prime}}T_{*}(\hat{x};R)\big)\]

_where \(\langle G\rangle^{\prime}:=\{\hat{x}\in\langle G\rangle\mid n(\hat{x})<\infty\}\) and \(\langle G\rangle^{\prime\prime}=\{\hat{x}\in\langle G\rangle\mid n(\hat{x})=\infty\}.\)_

An equivalent form of this proposition is stated in [1], for a field of characteristic zero, as Theorem 1'. Let \(H\) and \(G\) be two groups and \(P=H*G\) their free product. Recall that \(B(H*G)=BH\vee BG,\) the base point union of the spaces \(BH\) and \(BG.\)
As an immediate consequence of Observation 1.2 (the description of the conjugacy classes of elements of \(P\) and of their centralizers) in the previous section one has

**Proposition 3.2**:

\[\tilde{HC}_{*}(R[H*G])=\tilde{HC}_{*}(R[H])\bigoplus\tilde{HC}_{*}(R[G])\bigoplus\big(\oplus_{\hat{x}\in U}H_{*}(B\mathbb{Z}_{k(\hat{x})};R)\big)\]

\[P\tilde{HC}_{*}(R[H*G])=P\tilde{HC}_{*}(R[H])\bigoplus P\tilde{HC}_{*}(R[G])\bigoplus\big(\oplus_{\hat{x}\in U}T_{*}(\hat{x};R)\big)\]

_where_

\[H_{*}(B(\mathbb{Z}_{k});R)=\begin{cases}R&\text{for }*=0\\ H_{1}(B(\mathbb{Z}_{k});R)&\text{for }*\text{ odd}\\ H_{2}(B(\mathbb{Z}_{k});R)&\text{for }*\text{ even}\neq 0\end{cases}\]

_and_

\[T_{*}(\hat{x};R)=\begin{cases}H_{1}(B(\mathbb{Z}_{k(x)});R)&\text{for }*\text{ odd}\\ H_{2}(B(\mathbb{Z}_{k(x)});R)&\text{for }*\text{ even.}\end{cases}\]

Note that if \(R\) is an algebra over a field of characteristic zero then \(H_{*}(B(\mathbb{Z}_{k});R)\) is concentrated in degree zero and isomorphic to \(R\), and for any \(\hat{x}\in U,\) \(T_{*}(\hat{x};R)\) vanishes.

To correct all inaccuracies in [1] we insert the following errata to [1].

Errata to the paper The cyclic homology of the group rings, Comment. Math. Helv., 60, 1985, no. 3, 354-365

In the paper **The cyclic homology of the group rings** published in Comment. Math. Helv., 60 (1985) no 3, 354-365, Propositions II and IIp, straightforward consequences of the main result, Theorem I, are not true as stated. The statements become correct provided the cyclic resp. periodic cyclic homology are replaced by their reduced versions and the ring \(R\) is a \(\mathbb{Q}\)-algebra, for instance a field of characteristic zero. The reduced cyclic resp. periodic cyclic homology of \(R[G]\) is the cokernel of the obvious split injective maps \(i_{G}:HC_{*}(R)\to HC_{*}(R[G])\) resp. \(i_{G}:PHC_{*}(R)\to PHC_{*}(R[G])\) induced by the inclusion \(e_{G}\in G.\)

Also, on line 1 of page 363, to make the statement correct one shall replace "\(P_{x}\neq\{x\}\)" by "\(N_{x}\neq\mathbb{Z}_{k(x)}\)", with \(\mathbb{Z}_{k}\) denoting the finite cyclic group of order \(k\) and \(k(x)\) the largest integer \(k\) s.t. \(x=y^{k}.\)

I thank **Markus Land** for bringing this to my attention and suggesting the use of reduced cyclic and periodic cyclic homology for group rings.

For an arbitrary commutative ring with unit \(R,\) Propositions II and IIp should be corrected as follows:

1. In Proposition II, \(HC_{*}\) has to be replaced by the reduced version \(\tilde{H}C_{*}\) and the sentence _"\(R_{\hat{\alpha}}=R\) regarded as a graded module concentrated in the degree zero"_ by _"\(R_{\hat{\alpha}}=H_{*}(B(\mathbb{Z}_{k(x)});R),\ x\in\hat{\alpha}.\)"_
2. In Proposition IIp, \(PHC_{*}\) has to be replaced by its reduced version, \(P\tilde{H}C_{*},\) and to the right side of the equality one should add \(\oplus_{\hat{x}\in U}T_{*}(\hat{x};R),\) with \(U=\{\hat{x}\in\langle H*G\rangle\mid\hat{x}\cap(e_{H}*G)=\emptyset,\ \hat{x}\cap(H*e_{G})=\emptyset\}\), where \(\langle\Gamma\rangle\) denotes the set of conjugacy classes of elements of the group \(\Gamma\) and \(e_{\Gamma}\) the neutral element of \(\Gamma.\) Note that

\[H_{*}(B(\mathbb{Z}_{k});R)=\begin{cases}R&\text{for }*=0\\ H_{1}(B(\mathbb{Z}_{k});R)&\text{for }*\text{ odd}\\ H_{2}(B(\mathbb{Z}_{k});R)&\text{for }*\text{ even}\neq 0\end{cases}\]

and for \(\hat{x}\in U\) and \(x\in\hat{x}\) the pair of group - subgroup \((G_{x},\{x\})\) is isomorphic to the pair \((\mathbb{Z},k(x)\mathbb{Z}),\) hence \(N_{x}=\mathbb{Z}_{k(x)},\) and

\[T_{*}(\hat{x};R)=\begin{cases}H_{1}(B(\mathbb{Z}_{k(x)});R)&\text{for }*\text{ odd}\\ H_{2}(B(\mathbb{Z}_{k(x)});R)&\text{for }*\text{ even.}\end{cases}\]
2308.02954
Choosing the Correct Generalized Inverse for the Numerical Solution of the Inverse Kinematics of Incommensurate Robotic Manipulators
Numerical methods for Inverse Kinematics (IK) employ iterative, linear approximations of the IK until the end-effector is brought from its initial pose to the desired final pose. These methods require the computation of the Jacobian of the Forward Kinematics (FK) and its inverse in the linear approximation of the IK. Despite all the successful implementations reported in the literature, Jacobian-based IK methods can still fail to preserve certain useful properties if an improper matrix inverse, e.g. Moore-Penrose (MP), is employed for incommensurate robotic systems. In this paper, we propose a systematic, robust and accurate numerical solution for the IK problem using the Mixed (MX) Generalized Inverse (GI) applied to any type of Jacobians (e.g., analytical, numerical or geometric) derived for any commensurate and incommensurate robot. This approach is robust to whether the system is under-determined (less than 6 DoF) or over-determined (more than 6 DoF). We investigate six robotics manipulators with various Degrees of Freedom (DoF) to demonstrate that commonly used GI's fail to guarantee the same system behaviors when the units are varied for incommensurate robotics manipulators. In addition, we evaluate the proposed methodology as a global IK solver and compare against well-known IK methods for redundant manipulators. Based on the experimental results, we conclude that the right choice of GI is crucial in preserving certain properties of the system (i.e. unit-consistency).
Jacket Demby's, Jeffrey Uhlmann, Guilherme N. DeSouza
2023-08-05T21:25:46Z
http://arxiv.org/abs/2308.02954v1
Choosing the Correct Generalized Inverse for the Numerical Solution of the Inverse Kinematics of Incommensurate Robotic Manipulators ###### Abstract Numerical methods for Inverse Kinematics (IK) employ iterative, linear approximations of the IK until the end-effector is brought from its initial pose to the desired final pose. These methods require the computation of the Jacobian of the Forward Kinematics (FK) and its inverse in the linear approximation of the IK. Despite all the successful implementations reported in the literature, Jacobian-based IK methods can still fail to preserve certain useful properties if an improper matrix inverse, e.g. Moore-Penrose (MP), is employed for incommensurate robotic systems. In this paper, we propose a systematic, robust and accurate numerical solution for the IK problem using the Mixed (MX) Generalized Inverse (GI) applied to any type of Jacobians (e.g., analytical, numerical or geometric) derived for any commensurate and incommensurate robot. This approach is robust to whether the system is under-determined (less than 6 DoF) or over-determined (more than 6 DoF). We investigate six robotics manipulators with various Degrees of Freedom (DoF) to demonstrate that commonly used GI's fail to guarantee the same system behaviors when the units are varied for incommensurate robotics manipulators. In addition, we evaluate the proposed methodology as a global IK solver and compare against well-known IK methods for redundant manipulators. Based on the experimental results, we conclude that the right choice of GI is crucial in preserving certain properties of the system (i.e. unit-consistency). Inverse kinematics, generalized matrix inverses, incommensurate robotic manipulators.

## I Introduction

Incommensurate robotic manipulators refer to robotic systems having a combination of prismatic (linear) and revolute (rotational) joints. Such manipulators combine variables expressed in different units: i.e. pose vectors \(\bar{D}\) with a combination of units of distance (meters, centimeters, etc.) and orientation (radians and degrees), and joint vectors \(\bar{Q}=[Q_{1},Q_{2},\ldots,Q_{n}]\), where \(Q_{i}=d_{i}\) for a prismatic joint, \(Q_{i}=\theta_{i}\) for a revolute joint, and \(n\) is the number of DoF [1, 2]. The joint \(\bar{Q}\) and pose \(\bar{D}\) vectors are linearly related by the equation \(\frac{\partial D}{\partial t}=J\,\frac{\partial Q}{\partial t}\Longleftrightarrow\frac{\partial Q}{\partial t}=J^{-1}\,\frac{\partial D}{\partial t}\), where \(J\) is the Jacobian matrix (e.g., analytical [3], geometric [3], numerical [4] or elementary transform sequence [5, 6]) and \(J^{-1}\) is any Generalized Inverse (GI) that can be employed to find the inverse of this matrix. In the context of articulated robotic manipulators, the FK is a highly non-linear mapping from the joint space to the pose space of the end-effector. While in most useful cases these functions are neither injective (one-to-one) nor surjective (onto), depending on the robot configuration - i.e. the sequence of prismatic and revolute joints, and the number of Degrees of Freedom (DoF) - the associated IK problem may be practically or even theoretically impossible to solve analytically [7, 8, 9, 10]. Therefore, in the past decades, several approximate methods have been developed for many instances of robots. These approximate methods can be divided into two distinct categories: data-driven and numerical approaches.
In this paper, we will focus on numerical approaches [7, 8, 9, 11]; the reader should refer to [12, 13] for other considerations and implementations of data-driven methods. Despite many successful implementations, numerical approaches for IK may fail if an improper matrix inverse is employed. In fact, numerical IK methods often resort to inverting a Jacobian matrix at each iteration of the process, and whenever this Jacobian becomes singular - either under-determined (less than 6 DoF) or over-determined (more than 6 DoF) - its inverse cannot be uniquely defined. So, typically a left or right pseudo-inverse or, more generically, the Moore-Penrose (MP) GI [14] is employed without regard to whether that GI can guarantee the required properties of the system. In particular, the MP is by far the most widely employed GI for robotics-related IK applications, even though it often fails to preserve critical properties, e.g., in the case of incommensurate manipulators that require consistency with respect to the change of units of certain joint variables [15, 16, 17]. In other words, an IK solution may exhibit unrecognized sensitivity to the choice of units when clearly the behavior of the system should not be affected by such choices [1, 2, 18, 19]. The problem can be understood by recognizing that the MP inverse provides rotational consistency, but not unit consistency [15, 16, 17, 20]. Therefore, an alternative GI, namely the Unit-Consistent (UC) inverse, represents the appropriate choice of GI in such cases [15, 17]. On the other hand, while the UC inverse is consistent with respect to arbitrary changes of units, it does not provide consistency with respect to rotations of the reference frame or of any frame prior to the frame where the unit change occurred - according to the D-H representation of the robot [21, 10, 22]. Thus, most IK problems will involve components with differing consistency requirements. When that is the case, one can use a Mixed (MX) GI [15, 16, 17] to selectively provide consistency with respect to arbitrary changes of both units and rotations of the coordinate system or prior frames.

The main contributions of this paper are: (1) bringing to light the need to investigate the robustness of commonly used GI's [8, 15] towards unit changes when computing inverse matrices in many robotic control problems, in particular the Jacobian for numerical IK methods; and (2) a systematic way of selecting the correct GI by inspection of the Denavit-Hartenberg (D-H) representation of the robot, while handling static (under/over-determined) and dynamic singularities of the Jacobian for arbitrary attenuation parameters \(\alpha\) (aka gain) controlling smoothness and rate of convergence, without slowing down the numerical IK solver [8, 9] and while guaranteeing unit-consistency requirements for incommensurate robotic manipulators.

## II Background and Related Work

As mentioned earlier, the reader should refer to [12, 13] for more details on data-driven approaches using: 1) Artificial Neural Networks (ANN), both in task-driven [23, 24, 25, 26, 27] and task-independent cases [13, 28, 29, 30]; 2) soft-computing methods - MLP, ANFIS, and GA [30]; 3) Quantum-behaved Particle Swarm Optimization (QPSO) [31]; etc. In this work, we focus on well-accepted numerical approaches for accurately approximating the IK for task-independent workspaces [7, 11]. Unfortunately, these approaches have been shown to fail in preserving unit-consistency in the case of incommensurate manipulators [15, 16, 17].
Robotic systems with incommensurate units are very common and refer to systems having different types of units [1, 2]. In such a system, a pose vector may combine positions with units of distance and orientations with units of angle, while a joint vector may also combine revolute joints with units of angle and prismatic joints with units of distance. For such systems to be considered stable, an arbitrary change in a specific unit should not affect the behavior of the whole system [1, 2, 18, 19]. Numerical IK approaches have frequently defined the MP inverse as the inverse for singular Jacobian matrices. However, the MP inverse does not always guarantee the stability of the robotic systems of interest, as it may produce inconsistent or erroneous results and lead researchers astray in the presence of incommensurate systems [15, 16, 17, 32, 33, 34]. In fact, Schwartz et al. [1, 2, 33, 35, 36] conducted several investigations on the effects of the units involved in the control of incommensurate robotic systems for which control algorithms use eigenvectors, eigenvalues or singular values of the Jacobian matrix. Those investigations concluded that the use of Singular Value Decomposition (SVD) in Jacobian-based kinematics methods yields erroneous or arbitrary solutions. One of the possibilities found in the literature to circumvent the inconsistencies of incommensurate systems is the use of user-defined units-adjusting weights [37, 38]. However, the choice of units-adjusting weights just adds another level of arbitrariness between systems and applications, as mentioned in [1]. In that sense, Uhlmann [15] developed two generalized inverses, the UC and MX inverses, applicable to incommensurate robotic systems, while Zhang and Uhlmann [16] used an incommensurate 3DoF manipulator to show that the MP inverse was affecting the end-effector trajectories and making the system unstable when the units were varied. They used the UC inverse and succeeded in achieving a more reliable control of the manipulator end-effector. Similarly, Zhang and Uhlmann [17] examined the UC and MX inverses with an incommensurate robotic system made of a rover and a robotic arm. They showed that the MP inverse failed to preserve consistency with respect to changes of units, while the UC inverse failed to preserve consistency with respect to changes in rotation of the coordinate frame. Interestingly, a combination of these two GI's in the MX inverse, based on the block matrix inverse definition, was able to provide a reliable behavior for the system with respect to both rotation and unit consistencies.

### _Analytical, geometric and numerical Jacobians_

The Jacobian matrix describes the relationship between the joint velocities and the corresponding end-effector velocities. When the end-effector can be expressed with a minimal representation in the operational space of the manipulator (e.g., manipulators with a small number of DoF), this matrix is obtained through differentiation of the Forward Kinematics (FK) function with respect to the joint variables. When that is the case, this matrix is termed the _Analytical Jacobian_ (\(J_{A}\)) [3]. Let \(\vec{D}\) and \(\vec{Q}\) be respectively the pose and joint vectors; \(J_{A}\) can be expressed by:

\[J_{A}=\frac{\partial\vec{D}}{\partial\vec{Q}}\tag{1}\]

Through the D-H methodology, one can easily find \(\vec{D}\) from the last column of the total transformation matrix.
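To make eq. (1) concrete, here is a minimal symbolic sketch for a planar 2R arm; the arm, the link-length symbols \(l_{1},l_{2}\), and the use of SymPy are our assumptions for illustration, not part of the paper:

```python
import sympy as sp

theta1, theta2 = sp.symbols('theta1 theta2')
l1, l2 = sp.symbols('l1 l2', positive=True)

# Pose vector D = [X, Y] of the end-effector of a planar 2R arm.
X = l1 * sp.cos(theta1) + l2 * sp.cos(theta1 + theta2)
Y = l1 * sp.sin(theta1) + l2 * sp.sin(theta1 + theta2)

# Eq. (1): the analytical Jacobian J_A = dD/dQ, by symbolic differentiation.
J_A = sp.Matrix([X, Y]).jacobian([theta1, theta2])
sp.pprint(sp.simplify(J_A))
```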
As the number of DoF \(n\) increases, \(J_{A}\) becomes difficult to derive and is often replaced by the _Geometric Jacobian_ (\(J_{G}\)) [3], which is easier to compute. Let \(\vec{J}_{P_{i}}\) and \(\vec{J}_{O_{i}}\), where \(i=1,\ldots,n\), be respectively the position and orientation (3x1) vector contributions in the Jacobian matrix. \(J_{G}\) can be defined by:

\[J_{G}=\begin{bmatrix}\vec{J}_{P_{1}}&\cdots&\vec{J}_{P_{n}}\\ \vec{J}_{O_{1}}&\cdots&\vec{J}_{O_{n}}\end{bmatrix}\tag{2}\]

\[\vec{J}_{P_{i}}=\begin{cases}\vec{z}_{i-1}&\text{for a prismatic joint}\\ \vec{z}_{i-1}\times(\vec{p}-\vec{p}_{i-1})&\text{for a revolute joint}\end{cases}\tag{3}\]

\[\vec{J}_{O_{i}}=\begin{cases}\vec{0}&\text{for a prismatic joint}\\ \vec{z}_{i-1}&\text{for a revolute joint}\end{cases}\tag{4}\]

where \(\vec{z}_{i-1}\) is given by the third column of the rotation matrix \(R_{i-1}^{0}\), \(\vec{p}\) is given by the first three elements of the fourth column of the transformation matrix \(T_{n}^{0}\), and \(\vec{p}_{i-1}\) is given by the first three elements of the fourth column of the transformation matrix \(T_{i-1}^{0}\).

Furthermore, as explained in [4, 11, 39], the _Numerical Jacobian_ (\(J_{N}\)) is also employed in IK solvers. By applying a small, individual perturbation \(\delta\) (e.g. \(\delta=0.01\)) to each joint variable in turn, each column \(J_{N_{c}}\) of \(J_{N}\) can be retrieved by:

\[J_{N_{c}}=\frac{\partial\vec{D}}{\partial Q_{c}}\approx\frac{f(\vec{Q}_{t}+\delta\,\vec{e}_{c})-\vec{D}_{t}}{\delta}\tag{5}\]

where \(\vec{e}_{c}\) is the unit vector along the \(c\)-th joint coordinate, the subscript \(c\) indicates the selected column of \(J_{N}\), and \(t\) the IK update step. As mentioned in [3], these three Jacobians may be different and yield completely different results. In this work, we investigated all the above-mentioned Jacobians to verify whether the sensitivity to a change of units is inherent to all or only some of them.
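As a concrete illustration of eq. (5), the following minimal Python sketch (ours; the planar 2R forward kinematics, link lengths, and perturbation size are assumptions for the example) builds \(J_{N}\) column by column with forward differences:

```python
import numpy as np

def fk_2r(q, l1=1.0, l2=0.5):
    """Forward kinematics of a planar 2R arm (assumed link lengths)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def numerical_jacobian(fk, q, delta=0.01):
    """Eq. (5): perturb each joint variable individually by delta."""
    d0 = fk(q)
    jn = np.zeros((d0.size, q.size))
    for c in range(q.size):
        dq = np.zeros_like(q)
        dq[c] = delta
        jn[:, c] = (fk(q + dq) - d0) / delta
    return jn

print(numerical_jacobian(fk_2r, np.array([0.3, 0.6])))
```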
### _Generalized Inverses_

#### II-B1 Moore-Penrose (MP) Inverse

MP is a uniquely determined GI that can be defined based on the Singular Value Decomposition (SVD) [40, 41, 42, 14] of the matrix to invert. Considering an arbitrary matrix \(A\) whose SVD is given by the equation \(A=USV^{*}\), where \(U\) and \(V\) are unit/orthonormal matrices containing the singular vectors, \(S\) is a diagonal matrix containing the singular values, and \(V^{*}\) is the conjugate transpose of \(V\), the MP Generalized Inverse \(A^{-P}\) is defined by \(A^{-P}=(USV^{*})^{-P}=VS^{-1}U^{*}\), where \(U^{*}\) is the conjugate transpose of the matrix \(U\), and \(S^{-1}\) is the inverse of the diagonal matrix \(S\). The property established by this equation implies that the MP inverse is applicable to problems defined in a certain Euclidean space for which the behavior of the system should be invariant with respect to arbitrary rotations of the coordinate frame. Unfortunately, the MP inverse has been shown to fail, as it does not always provide reliable and stable control in the case of incommensurate manipulators [16] when units are varied - it is hence said that the MP inverse does not satisfy unit consistency.

#### II-B2 Unit-Consistent (UC) Inverse

UC is a GI defined with the property of invariance with respect to the choice of units used on different state variables, e.g., miles, kilometers, meters, centimeters, etc. It can be expressed based on the Unit-Invariant Singular Value Decomposition (UI-SVD) [15]. Hence, the UI-SVD of \(A\) is given by \(A=DA^{\prime}E=D(USV^{*})E\), where \(D\) and \(E\) are diagonal scale matrices resulting from the scaling decomposition [43] of \(A\), which takes into account the scale due to the change of units, and \(A^{\prime}\) is a matrix satisfying row and column product constraints. From this equation, the formula of the UC inverse can be expressed by \(A^{-U}=\left(D(USV^{*})E\right)^{-U}=E^{-1}VS^{-1}U^{*}D^{-1}\). While the UC inverse is consistent with respect to arbitrary changes of units, it may also fail to provide consistent inverses in the presence of rotations of the reference frame or of any frame defined in the D-H representation of the robot that appears earlier than the frame where the unit change occurred. In those cases, we need to systematically provide both rotation and unit consistencies through another GI (i.e. the MX inverse [15]).

#### II-B3 Mixed (MX) Inverse

MX is an inverse that selectively provides invariance with respect to arbitrary changes of units as well as with respect to rotations [15]. The MX inverse is derived using the concept of block matrix inverse, where variables requiring unit consistency are block-partitioned in the top left of \(A\), and the variables requiring rotation consistency are block-partitioned in the bottom right of \(A\). This partitioning is expressed as: \(A=\begin{bmatrix}A_{W}&A_{X}\\ A_{Y}&A_{Z}\end{bmatrix}\), where \(A_{W}\) is the block of variables requiring unit consistency, \(A_{Z}\) is the block of variables requiring rotation consistency, and \(A_{X}\) and \(A_{Y}\) are blocks of variables requiring both rotation and unit consistencies. Once the variables of \(A\) have been partitioned, the block-matrix inverse is applied to \(A\) to compute its MX inverse:

\[A^{-M}=\begin{bmatrix}(A_{W}-A_{X}A_{Z}^{-P}A_{Y})^{-U}&-A_{W}^{-U}A_{X}(A_{Z}-A_{Y}A_{W}^{-U}A_{X})^{-P}\\ -A_{Z}^{-P}A_{Y}(A_{W}-A_{X}A_{Z}^{-P}A_{Y})^{-U}&(A_{Z}-A_{Y}A_{W}^{-U}A_{X})^{-P}\end{bmatrix}\tag{6}\]

#### II-B4 Other Inverses

However, there are other inverse Jacobian methods used in the literature, as summarized in Table I. The reader should refer to [44, 45, 46, 9, 14, 47, 48] for more details about these methods.

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Method & Abbreviation & Reference & Year \\ \hline \hline Moore-Penrose & MP & [14] & 1955 \\ \hline Error Damping & ED & [44] & 1988 \\ \hline Filtered Jacobian & JF & [45] & 1991 \\ \hline Damped Jacobian & JD & [46] & 1994 \\ \hline Selective Damping & SD & [47] & 2005 \\ \hline Improved Error Damping & IED & [48] & 2011 \\ \hline Singular Value Filtering & SVF & [9] & 2012 \\ \hline Unit-Consistent & UC & [15] & 2018 \\ \hline Mixed & MX & [15] & 2018 \\ \hline \end{tabular}
\end{table}
TABLE I: Investigated inverse Jacobian methods

In general, robotics applications that require matrix inversion of the Jacobian will employ the two-sided inverse whenever the Jacobian matrix is square and non-singular. However, when that is not the case - i.e. for temporarily singular configurations of the robot and in the cases of over- (\(>\)6DoF) or under- (\(<\)6DoF) determined systems, as illustrated later - left/right pseudo-inverses and the MP inverse are typically a substitution for the two-sided inverse. As explained before, this can represent a problem for systems requiring properties such as unit and/or rotation invariance [2, 15]. Moreover, even if no particular need for the UC inverse is identified for the corresponding (i.e. 6DoF) robot, providing an implementation using the true (two-sided) inverse is not a solution, as robots undergo singular configurations during a trajectory towards the goal. In this paper, the three GI's described in subsections II-B1, II-B2 and II-B3 are exploited to create a systematic solution for the IK computation of any arbitrary incommensurate serial robot using a Jacobian-based IK solver such as in [4, 8, 9]. The keyword here is "systematically": a task that, so far, had been done manually by human inspection, and its automation is one of the main contributions of this work.
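To make the unit-consistency property concrete, here is a minimal numpy sketch (ours; the function names, the scaling construction assuming a matrix with no zero entries, and the test matrix are our assumptions, not the authors' implementation) of a UC inverse via the scaling decomposition and of the MX block formula of eq. (6):

```python
import numpy as np

def uc_pinv(A):
    """Sketch of a UC inverse: factor A = D A' E with diagonal scale matrices
    D, E chosen so log|A'| has zero row and column means, then invert as
    E^-1 pinv(A') D^-1 (assumes A has no zero entries)."""
    A = np.asarray(A, dtype=float)
    L = np.log(np.abs(A))
    g = L.mean()
    d = np.exp(L.mean(axis=1) - g / 2)   # diagonal of D
    e = np.exp(L.mean(axis=0) - g / 2)   # diagonal of E
    core = A / np.outer(d, e)            # A' in A = D A' E
    return np.diag(1 / e) @ np.linalg.pinv(core) @ np.diag(1 / d)

def mx_pinv(A, k):
    """Sketch of eq. (6): assumes the variables needing unit consistency
    occupy the leading k rows and k columns (block A_W); A_Z gets the MP
    inverse.  Schur complements are assumed dense for uc_pinv above."""
    W, X = A[:k, :k], A[:k, k:]
    Y, Z = A[k:, :k], A[k:, k:]
    P = np.linalg.pinv
    SW = W - X @ P(Z) @ Y                # Schur complement of A_Z
    SZ = Z - Y @ uc_pinv(W) @ X          # Schur complement of A_W
    return np.block([[uc_pinv(SW), -uc_pinv(W) @ X @ P(SZ)],
                     [-P(Z) @ Y @ uc_pinv(SW), P(SZ)]])

# Unit-consistency check: re-expressing a joint in mm instead of m scales a
# column of J; the UC solution scales back exactly, the MP one does not.
J = np.array([[1.0, 2.0, 0.5],
              [0.3, 1.5, 2.5]])
E = np.diag([1.0, 1.0, 1000.0])
print(np.allclose(uc_pinv(J @ E), np.linalg.inv(E) @ uc_pinv(J)))                 # True
print(np.allclose(np.linalg.pinv(J @ E), np.linalg.inv(E) @ np.linalg.pinv(J)))   # False
```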
The proposed inverse solution is also compared to other well-accepted inverse Jacobian methods mentioned in subsection II-B4.

## III Proposed Approach

All the main steps followed in this approach can be found in the pseudo-code presented in Algorithm 1. This algorithm, which is based on the Jacobian-based IK solver framework, mainly takes the initial joint configuration \(\vec{Q}_{0}\), the final desired pose \(\vec{D}_{final}\), the \(DH\) parameters, the desired pose error \(\epsilon_{r}\), the Jacobian type \(J_{type}\), and the inverse Jacobian method \(invJ_{method}\) as input parameters to compute the IK solution. In this paper, we focus on three important aspects of this algorithm, embedded in line 7 and expressed in the motion control equation \(\Delta\vec{Q}_{t}=\vec{J}_{t}^{-1}\cdot\alpha_{t}(\vec{D}_{final}-\vec{D}_{t})\): the attenuation parameter \(\alpha_{t}\), the inverse Jacobian \(\vec{J}_{t}^{-1}\), and the Jacobian \(J_{t}\) on which that inverse depends. The attenuation parameter \(\alpha_{t}\) is selected in \([0,1]\) to smooth the path of the end-effector and control the convergence of the algorithm. The inverse Jacobian (\(J_{t}^{-1}\)) denotes the use of a GI, as listed in Table I, to find the inverse Jacobian matrix. Most of the time, \(J_{t}^{-1}\) is calculated using the MP (\(J_{t}^{-P}\)) inverse, hence \(J_{t}^{-1}\Leftarrow J_{t}^{-P}\). As previously explained in Section II, the MP inverse cannot always guarantee reliable IK solutions in the presence of incommensurate systems. So here, we investigate the use of the MX (\(J_{t}^{-M}\)) inverse to achieve more reliable and stable IK solutions through the use of the D-H methodology.
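A minimal Python rendering of this update loop (our sketch of the Algorithm 1 framework, not the authors' code; `fk`, `jac` and `ginv` stand for any forward kinematics, Jacobian, and generalized inverse, e.g. the helpers sketched earlier):

```python
import numpy as np

def ik_solve(fk, jac, ginv, q0, d_final, alpha=1.0, eps=1e-3, max_iter=500):
    """Iterate Delta Q_t = J_t^{-1} * alpha * (D_final - D_t) until the
    pose error falls below eps or max_iter is reached."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        err = d_final - fk(q)
        if np.linalg.norm(err) < eps:
            break
        q = q + ginv(jac(q)) @ (alpha * err)
    return q

# e.g., with the hypothetical 2R helpers sketched above:
# q = ik_solve(fk_2r, lambda q: numerical_jacobian(fk_2r, q),
#              np.linalg.pinv, [0.3, 0.6], np.array([0.8, 0.9]))
```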
In the D-H, a sequence of coordinate frames is attached to each of the robot joints. Each coordinate frame moves with respect to the previous coordinate frames, but it is stationary with respect to motions of the following frames. Since every movement is either a rotation or a translation with respect to the \(Z\)-axis and there is no movement with respect to the other two axes, we can define a systematic way to assign joint variables to the blocks \(A_{W}\), \(A_{X}\), \(A_{Y}\) and \(A_{Z}\) for the computation of the MX inverse, based on the sequence of prismatic and revolute joints according to the D-H table, as depicted in line 2 of Algorithm 1. We formulate the following rule of thumb for the use of the MX inverse: all revolute joints appearing before a prismatic joint of interest whose \(Z\)-axes are not parallel need to be included in \(A_{W}\). That is because if the \(Z\)-axis of a revolute joint prior to a prismatic joint is not parallel to that of the prismatic joint, a rotation caused by the revolute joint will affect the prismatic joint, which will violate unit consistency unless both are placed in the \(A_{W}\) block partitioning, where they can be handled by the UC inverse. On the other hand, if the two \(Z\)-axes are parallel, the revolute joint will not affect the prismatic joint, and hence they need to be placed in \(A_{Z}\), so that they can be handled by the MP inverse. This rule of thumb needs to be checked continuously, as dynamic configurations of the robot can lead to temporary alignment of the \(Z\)-axes even when they are not explicit in the D-H table.
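One way to mechanize this rule of thumb is sketched below (our illustration, not the authors' implementation; the joint-type encoding and the base-frame z-axis inputs, e.g. taken from the current FK, are assumptions):

```python
import numpy as np

def partition_joints(joint_types, z_axes, tol=1e-9):
    """Assign joint indices to the UC block (A_W) or the MP block (A_Z).
    joint_types: e.g. ['R', 'R', 'P', 'R'];  z_axes: unit z-axis of each
    joint expressed in the base frame at the current configuration."""
    n = len(joint_types)
    uc = set()
    for i in range(n):
        for j in range(i + 1, n):
            if joint_types[i] == 'R' and joint_types[j] == 'P':
                # parallel axes <=> |z_i . z_j| = 1
                if abs(abs(np.dot(z_axes[i], z_axes[j])) - 1.0) > tol:
                    uc.add(i)   # this revolute joint affects the prismatic one
                    uc.add(j)
    mp = [i for i in range(n) if i not in uc]
    return sorted(uc), mp

# 4DoF SCARA (RRPR), all z-axes parallel: everything stays in the MP block.
z_up = np.array([0.0, 0.0, 1.0])
print(partition_joints(['R', 'R', 'P', 'R'], [z_up] * 4))  # -> ([], [0, 1, 2, 3])
```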
## IV Experimental Results and Discussion

In this section, we present and discuss the experiments that were performed when using the rule of thumb introduced above. For these experiments, the numerical IK algorithm presented in Section III was applied to various configurations and motions of six manipulators (five incommensurate and one commensurate): 1) a 3DoF planar arm; 2) a 4DoF SCARA (Selective Compliance Articulated Robot Arm) arm; 3) a 5DoF (modified) Stanford arm; 4) a 6DoF Stanford arm; 5) a 7DoF GP66+1 arm; and 6) a 7DoF WAM arm. Their D-H parameters are presented in Table II. These arm robots were chosen to illustrate various cases of the application of the MX inverse in the computation of the inverse Jacobian while correctly/incorrectly selecting the variables to be included in each block partition. Correct selections will achieve unit-invariant behaviors of the robot end-effector, while incorrect ones will lead to inappropriate use of MP and UC inverses inside the MX inverse, which will lead to uncontrollable situations (oscillations) when the units are varied from \(m\) to \(mm\).

TABLE II: D-H Parameters of the serial robots used in the experiments. The angles \(\theta\) and \(\alpha\) are expressed in degrees. The variables \(d\) and \(a\) are shown in millimeters (\(mm\)) - but they were changed in the implementation to match the different choices of units.

### _Sanity test_

In this experiment, we investigated the behavior of all the inverse methods described in Section II-B for the 3DoF (2RP) robot. That is, the Geometric Jacobian \(J_{G}\) was computed at the joint configuration \(\bar{Q}=[\theta_{1}=30^{o},\theta_{2}=30^{o},d_{3}=-0.7m]\) while the unit of the prismatic joint \(d_{3}\) was varied from \(m\) to \(mm\). Table III presents the computed inverse Jacobians \(J^{-1}\) with eight decimal places to emphasize their differences. This table clearly shows how only the highlighted UC and MX inverses provide consistent results across all units. As explained in Section III, applying the proposed MX rule of thumb for this manipulator happens to reduce the MX inverse to the UC inverse. We made the same observation when replacing \(J_{G}\) with \(J_{A}\) or \(J_{N}\), hence revealing that the unit-consistency issues are inherent to all Jacobian types commonly used in IK solvers.

### _Cases in which the MP, UC or MX must be used_

We explored several motions for each of the incommensurate robots using the MP, UC and MX GI's; however, due to space limitations, we will only provide results for one motion in this paper. We provide experimental results for an extensive list of motions for all manipulators on our website.1

Footnote 1: [http://vigir.missouri.edu/demphys/publications/GI2023/index.html](http://vigir.missouri.edu/demphys/publications/GI2023/index.html)

#### IV-B1 Case in which the MP must be used

When the axes of movement of the joint variables are parallel to each other, our MX rule of thumb establishes that the rotational joints before the prismatic joint where the change of units occurs, and the prismatic joint itself, need to be handled by the MP only. In fact, in such a configuration the change of units does not affect the end-effector path when using the MP inverse. This case is illustrated by the use of a 4DoF SCARA manipulator whose configuration involves all the joint \(Z\)-axes being parallel to each other. So, for this 4DoF serial manipulator with a RRPR configuration, since the axis of translation of the linear joint is aligned (parallel) with the axes of rotation of the revolute joints, the linear joint is not affected by these rotations, and hence it does not need to be included in the \(A_{W}\) block, and any change of units will be adequately handled by the MP inverse. So, given the Jacobian \(J\) for this 4DoF robot:

\[J=\begin{bmatrix}A_{W}&A_{X}\\ A_{Y}&A_{Z}\end{bmatrix}=\begin{bmatrix}\frac{\partial X}{\partial\theta_{1}}&\frac{\partial X}{\partial\theta_{2}}&\frac{\partial X}{\partial d_{3}}&\frac{\partial X}{\partial\theta_{4}}\\ \frac{\partial Y}{\partial\theta_{1}}&\frac{\partial Y}{\partial\theta_{2}}&\frac{\partial Y}{\partial d_{3}}&\frac{\partial Y}{\partial\theta_{4}}\\ \frac{\partial Z}{\partial\theta_{1}}&\frac{\partial Z}{\partial\theta_{2}}&\frac{\partial Z}{\partial d_{3}}&\frac{\partial Z}{\partial\theta_{4}}\\ \frac{\partial\phi_{x}}{\partial\theta_{1}}&\frac{\partial\phi_{x}}{\partial\theta_{2}}&\frac{\partial\phi_{x}}{\partial d_{3}}&\frac{\partial\phi_{x}}{\partial\theta_{4}}\\ \frac{\partial\phi_{y}}{\partial\theta_{1}}&\frac{\partial\phi_{y}}{\partial\theta_{2}}&\frac{\partial\phi_{y}}{\partial d_{3}}&\frac{\partial\phi_{y}}{\partial\theta_{4}}\\ \frac{\partial\phi_{z}}{\partial\theta_{1}}&\frac{\partial\phi_{z}}{\partial\theta_{2}}&\frac{\partial\phi_{z}}{\partial d_{3}}&\frac{\partial\phi_{z}}{\partial\theta_{4}}\end{bmatrix}\tag{7}\]

(here \(\phi_{x},\phi_{y},\phi_{z}\) denote the orientation components of the pose) and the fact that all the variables should be in the bottom-right block \(A_{Z}\), so that they are handled by the MP inverse, the block partitioning for this robot becomes: \(A_{W}=[0]\), a 1x1 matrix of zeros; \(A_{X}\), a 1x4 matrix of zeros; \(A_{Y}=[0\;0\;0\;0\;0\;0]^{T}\), a 6x1 matrix of zeros; and \(A_{Z}\), the entire 6x4 original Jacobian matrix in (7). Then, the MX inverse \(J^{-M}\) can be computed by \(J^{-M}=\begin{bmatrix}0&0\\ 0&A_{Z}^{-P}\end{bmatrix}\). This final \(J^{-M}\) inverse Jacobian matrix is a 5x7 matrix with one row and one column of zeros. Even though our ultimate goal is to always apply the MX inverse, we still provide the results obtained from the application of the MP and UC inverses. Figure 1 shows the paths of the end-effector when the units of the linear joint in the 4DoF robot are varied from \(m\) to \(mm\), still for the same motion. Because the axes of rotation of the revolute joints are parallel to the axis of translation of the linear joint, the behavior of the end-effector is the same when the units are varied, as presented in Sub-Figures 1a, 1c, 1e, and 1g, where the attenuation factor \(\alpha=1\). We can observe that changes of units do not affect the paths of the end-effector and the system is handled consistently by simply using the MP inverse. Next, we studied the effects of the attenuation parameter \(\alpha\) on the path followed by the end-effector under different units, still with the MP inverse.
Sub-Figures 1b, 1d, 1f, and 1h show these results, and as can be observed, while \(\alpha\) can be effectively used to make the path smoother, the paths followed for each choice of \(\alpha\) are exactly the same despite the change of units. We also applied the UC inverse to the same motion of the 4DoF robot. As before, Figure 2 shows the paths of the end-effector when the units are varied from \(m\) to \(mm\). Not surprisingly, the behavior of the robot is quite different and unpredictable when the units are varied, as presented in Sub-Figures 2a, 2c, 2e, and 2g for an attenuation factor \(\alpha=1\). That is, the UC inverse is mishandling variables that are not affected by the change of units of the linear joint. As can be seen, a simple change of units causes the robot to follow quite different paths. We also studied the effects of the attenuation parameter \(\alpha\) on the path followed by the end-effector under different units, still using the UC inverse.

TABLE III: Inverse Jacobians \(J^{-1}\) of the 3DoF (2RP) robot computed with each inverse method while the unit of \(d_{3}\) is varied over \(m\), \(dm\), \(cm\) and \(mm\) (numeric entries not recovered).

Sub-Figures 2b, 2d, 2f, and 2h show these results. Now, also as expected, the attenuation parameter cannot correct the negative effect on the path followed by the robot, since the UC is unable to provide consistency when the units are changed. The last GI used was the MX inverse with the same motion of the 4DoF robot. As we explained above, in this case the MX inverse reduces to the MP inverse.
Indeed, we observe the exact same results obtained for the MP and presented in Figure 1 - and the reader can check this fact in the corresponding figures on our website. Our website also presents a complete table that summarizes the overall performance of the proposed IK solver for \(\alpha=1\), by presenting the input parameters provided to the algorithm, the number of iterations, and the final end-effector pose error with respect to the desired pose for the reported 4DoF motion, while using all three GI's (MP, UC and MX) along with all the units, from \(m\) to \(dm\), to \(cm\), and finally to \(mm\).

Fig. 1: Behavior of the trajectories of the end-effector of the 4DoF robot when varying the units while using the MP inverse with (1a), (1c), (1e), and (1g) the attenuation parameter \(\alpha=1\) and (1b), (1d), (1f), and (1h) multiple values of \(\alpha\).

Fig. 2: Behavior of the trajectories of the end-effector of the 4DoF robot when varying the units while using the UC inverse with (2a), (2c), (2e), and (2g) the attenuation parameter \(\alpha=1\) and (2b), (2d), (2f), and (2h) multiple values of \(\alpha\).

#### IV-B2 Case in which the UC must be used

When the axes of the joint variables are not parallel to each other, the MX rule of thumb establishes that the rotational joints before the prismatic joint where the change of units occurs, together with that prismatic joint, need to be handled by the UC inverse, as this will provide unit consistency. The remaining variables must be handled by the MP inverse. This case is illustrated by the use of a 3DoF planar manipulator with a RRP configuration and the \(Z\)-axis of the prismatic joint not parallel to the others - an inspection of the two early rotations will show that all variables are affected by them. So, given the Jacobian for this 3DoF robot:

\[J=\begin{bmatrix}A_{W}&A_{X}\\ A_{Y}&A_{Z}\end{bmatrix}=\begin{bmatrix}\frac{\partial X}{\partial\theta_{1}}&\frac{\partial X}{\partial\theta_{2}}&\frac{\partial X}{\partial d_{3}}\\ \frac{\partial Y}{\partial\theta_{1}}&\frac{\partial Y}{\partial\theta_{2}}&\frac{\partial Y}{\partial d_{3}}\end{bmatrix}\tag{8}\]

and the fact that all the variables should be in the top-left block \(A_{W}\), so that they are handled by the UC inverse, the block partitioning for \(J\) becomes: \(A_{W}=J\), the entire 2x3 matrix; \(A_{X}=[0\;0]^{T}\), a 2x1 matrix of zeros; \(A_{Y}=[0\;0\;0]\), a 1x3 matrix of zeros; and \(A_{Z}=[0]\), a 1x1 matrix of zeros. Then, the MX inverse \(J^{-M}\) reduces to a simple UC inverse, as shown by \(J^{-M}=\begin{bmatrix}A_{W}^{-U}&0\\ 0&0\end{bmatrix}\). The resulting \(J^{-M}\) inverse Jacobian matrix is a 4x3 matrix with one row and one column of zeros. Once again, we provide the results obtained from applying the MP and UC inverses, before applying the MX inverse. For the MP inverse, the results presented in Figure 3 show that the behavior (path) of the robot is quite different and unpredictable when the unit of the linear joint in the 3DoF is varied from \(m\) to \(mm\). That is, since the MP inverse does not guarantee unit consistency in this case, a simple change of units causes the robot to follow quite different paths for an attenuation parameter \(\alpha=1\). Also, when studying the effects of \(\alpha\) on the path followed by the end-effector under different units, still with the MP inverse, the results show that the attenuation parameter \(\alpha\) cannot remedy the effects of the unit changes.
In other words, the simple use of a GI that does not guarantee unit consistency can cause anything from mild differences to severe oscillations in the trajectory of this particular robot end-effector. As before, we also applied the UC inverse to this same motion of the 3DoF robot. As expected, in Figure 4 the behavior of the robot is now exactly the same, no matter what units are used. That is, since the UC inverse does guarantee unit consistency, no matter what units one chooses (\(m\), \(dm\), \(cm\), or \(mm\)), the robot still follows the exact same path. In this case, the attenuation parameter \(\alpha\) can also reliably control the path followed by the end-effector under different units - i.e. it can help smooth the path and it has no negative effect on the path followed by the robot.

Fig. 3: Behavior of the trajectories of the end-effector of the 3DoF robot when varying the units while using the MP inverse with (3a), (3c), (3e), and (3g) the attenuation parameter \(\alpha=1\) and (3b), (3d), (3f), and (3h) multiple values of \(\alpha\).

Fig. 4: Behavior of the trajectories of the end-effector of the 3DoF robot when varying the units while using the UC inverse with (4a), (4c), (4e), and (4g) the attenuation parameter \(\alpha=1\) and (4b), (4d), (4f), and (4h) multiple values of \(\alpha\).

Finally, the MX inverse was applied to the same motion of the 3DoF robot. As we explained at the beginning of this section, the MX inverse reduces to the UC inverse. So, we expected and did observe the exact same results as for the UC inverse presented in Figure 4. Once again, the reader can check this fact, as well as obtain more information about the input parameters provided to the algorithm, the number of iterations, and the final error in the position of the end-effector with respect to the desired position, in the corresponding figures and tables on our website.

#### IV-B3 Case of a System Involving a Mix of the Two Previous Cases

In this case, we explore the 5DoF RRPRR (modified) Stanford manipulator with a non-full-rank Jacobian to evaluate the proposed rule of thumb when the MX inverse does not reduce to either the MP or UC inverses. Here, since the prismatic joint appears in the middle, between two pairs of revolute joints, and since the \(Z\)-axes of the prior revolute joints are not aligned with the \(Z\)-axis of the prismatic one, a combination of MP and UC in the MX inverse is required - i.e. the MX does not reduce to either the MP or the UC. Therefore, the use of either one exclusively leads to inconsistencies and uncontrollable paths, and only the MX can provide stable motion under any choice of the attenuation parameter. More specifically, an inspection of the two early rotations of this RRPRR manipulator shows that both \(\theta_{1}\) and \(\theta_{2}\) affect the linear (prismatic) joint \(d_{3}\), which in turn affects the \(X\), \(Y\), and \(Z\) coordinates.
So, the block partitioning of the MX inverse applied to the Jacobian of this 5DoF robot becomes as follows. Given the Jacobian matrix \(J\):

\[J=\begin{bmatrix}A_{W}&A_{X}\\ A_{Y}&A_{Z}\end{bmatrix}=\begin{bmatrix}\frac{\partial X}{\partial\theta_{1}}&\frac{\partial X}{\partial\theta_{2}}&\frac{\partial X}{\partial d_{3}}&\frac{\partial X}{\partial\theta_{4}}&\frac{\partial X}{\partial\theta_{5}}\\ \frac{\partial Y}{\partial\theta_{1}}&\frac{\partial Y}{\partial\theta_{2}}&\frac{\partial Y}{\partial d_{3}}&\frac{\partial Y}{\partial\theta_{4}}&\frac{\partial Y}{\partial\theta_{5}}\\ \frac{\partial Z}{\partial\theta_{1}}&\frac{\partial Z}{\partial\theta_{2}}&\frac{\partial Z}{\partial d_{3}}&\frac{\partial Z}{\partial\theta_{4}}&\frac{\partial Z}{\partial\theta_{5}}\\ \frac{\partial\phi_{x}}{\partial\theta_{1}}&\frac{\partial\phi_{x}}{\partial\theta_{2}}&\frac{\partial\phi_{x}}{\partial d_{3}}&\frac{\partial\phi_{x}}{\partial\theta_{4}}&\frac{\partial\phi_{x}}{\partial\theta_{5}}\\ \frac{\partial\phi_{y}}{\partial\theta_{1}}&\frac{\partial\phi_{y}}{\partial\theta_{2}}&\frac{\partial\phi_{y}}{\partial d_{3}}&\frac{\partial\phi_{y}}{\partial\theta_{4}}&\frac{\partial\phi_{y}}{\partial\theta_{5}}\\ \frac{\partial\phi_{z}}{\partial\theta_{1}}&\frac{\partial\phi_{z}}{\partial\theta_{2}}&\frac{\partial\phi_{z}}{\partial d_{3}}&\frac{\partial\phi_{z}}{\partial\theta_{4}}&\frac{\partial\phi_{z}}{\partial\theta_{5}}\end{bmatrix}\tag{9}\]

(with \(\phi_{x},\phi_{y},\phi_{z}\) the orientation components of the pose) and the fact that the variables requiring unit consistency should be in \(A_{W}\), so that they are handled by the UC inverse, we have:

\[A_{W}=\begin{bmatrix}\frac{\partial X}{\partial\theta_{1}}&\frac{\partial X}{\partial\theta_{2}}&\frac{\partial X}{\partial d_{3}}\\ \frac{\partial Y}{\partial\theta_{1}}&\frac{\partial Y}{\partial\theta_{2}}&\frac{\partial Y}{\partial d_{3}}\\ \frac{\partial Z}{\partial\theta_{1}}&\frac{\partial Z}{\partial\theta_{2}}&\frac{\partial Z}{\partial d_{3}}\end{bmatrix},\qquad A_{X}=\begin{bmatrix}\frac{\partial X}{\partial\theta_{4}}&\frac{\partial X}{\partial\theta_{5}}\\ \frac{\partial Y}{\partial\theta_{4}}&\frac{\partial Y}{\partial\theta_{5}}\\ \frac{\partial Z}{\partial\theta_{4}}&\frac{\partial Z}{\partial\theta_{5}}\end{bmatrix},\]

\[A_{Y}=\begin{bmatrix}\frac{\partial\phi_{x}}{\partial\theta_{1}}&\frac{\partial\phi_{x}}{\partial\theta_{2}}&\frac{\partial\phi_{x}}{\partial d_{3}}\\ \frac{\partial\phi_{y}}{\partial\theta_{1}}&\frac{\partial\phi_{y}}{\partial\theta_{2}}&\frac{\partial\phi_{y}}{\partial d_{3}}\\ \frac{\partial\phi_{z}}{\partial\theta_{1}}&\frac{\partial\phi_{z}}{\partial\theta_{2}}&\frac{\partial\phi_{z}}{\partial d_{3}}\end{bmatrix},\qquad A_{Z}=\begin{bmatrix}\frac{\partial\phi_{x}}{\partial\theta_{4}}&\frac{\partial\phi_{x}}{\partial\theta_{5}}\\ \frac{\partial\phi_{y}}{\partial\theta_{4}}&\frac{\partial\phi_{y}}{\partial\theta_{5}}\\ \frac{\partial\phi_{z}}{\partial\theta_{4}}&\frac{\partial\phi_{z}}{\partial\theta_{5}}\end{bmatrix}.\]

Based on this partitioning, the MX inverse \(J^{-M}\) can be computed by replacing each block in equation (6). In this case, the MX is not reduced to either MP or UC but needs the combination of both generalized inverses to produce unit-invariant IK solutions.
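Running the hypothetical `partition_joints` helper sketched in Section III on this arm reproduces the partition above (the z-axes below are illustrative stand-ins for a configuration in which neither early revolute axis is parallel to the prismatic axis):

```python
import numpy as np

# 5DoF (modified) Stanford arm, RRPRR; illustrative base-frame z-axes only.
types = ['R', 'R', 'P', 'R', 'R']
z = [np.array([0.0, 0.0, 1.0]),   # theta_1
     np.array([0.0, 1.0, 0.0]),   # theta_2
     np.array([1.0, 0.0, 0.0]),   # d_3 (not parallel to either axis above)
     np.array([0.0, 1.0, 0.0]),   # theta_4
     np.array([0.0, 0.0, 1.0])]   # theta_5
print(partition_joints(types, z))  # -> ([0, 1, 2], [3, 4]): A_W gets theta_1, theta_2, d_3
```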
The resulting \(J^{-M}\) is a 5x6 matrix. Once again, we provide the results obtained from applying the MP and UC inverses, before applying the MX inverse. The results presented in Figure 5 show that the behavior (path) of the robot end-effector is different when the unit of the linear joint in the 5DoF is varied from \(m\) to \(mm\). That is, for a robotic system that requires a mix of unit and rotation consistency, the MP inverse alone cannot guarantee a reliable behavior of the system. The same observation is made when the attenuation parameter \(\alpha\) is varied, as depicted in Sub-Figures 5b, 5d, 5f and 5h.

Fig. 5: Behavior of the trajectories of the end-effector of the 5DoF robot when varying the units while using the MP inverse with (5a), (5c), (5e), and (5g) the attenuation parameter \(\alpha=1\) and (5b), (5d), (5f), and (5h) multiple values of \(\alpha\).

We also applied the UC inverse to the same motion of the 5DoF robot. As depicted in Figure 6, the behavior of the robot is still quite different when the units are varied from \(m\) to \(mm\), but also when the attenuation \(\alpha\) is varied. That is, the UC inverse is unable to provide unit consistency in this case, and \(\alpha\) cannot remedy the effects of the unit changes, as depicted in Sub-Figures 6b, 6d, 6f and 6h.

Fig. 6: Behavior of the trajectories of the end-effector of the 5DoF robot when varying the units while using the UC inverse with (6a), (6c), (6e), and (6g) the attenuation parameter \(\alpha=1\) and (6b), (6d), (6f), and (6h) multiple values of \(\alpha\).

Finally, we applied the MX inverse to this same motion of the 5DoF robot. As we explained at the beginning of this section, the MX inverse does not reduce to either the MP inverse or the UC inverse. Here, it combines both inverses using the block-matrix inverse, as shown in equation (6), to provide reliable results. In Figure 7, we observe that the behavior of the robot end-effector is exactly the same as the units are varied from \(m\) to \(mm\). The results also show that the attenuation parameter \(\alpha\) is able to smooth the end-effector path and remedy the effects of the unit changes, as depicted in Sub-Figures 7b, 7d, 7f and 7h.

#### IV-B4 Other Case Studies

Many other motions were tested for three additional case studies and are included on our website, for example: 1) the 6DoF serial manipulator with a RRPRRR configuration involving non-singular configurations, and therefore with a full-rank Jacobian. Here too, an inspection of the two early revolute joints shows that both \(\theta_{1}\) and \(\theta_{2}\) affect the prismatic joint \(d_{3}\), which in turn could potentially affect the \(X\), \(Y\), and \(Z\) coordinates. So, the block partitioning of the MX inverse applied to the Jacobian of this 6DoF robot becomes a combination of all four non-zero blocks \(A_{W}\), \(A_{X}\), \(A_{Y}\), and \(A_{Z}\). However, since this Jacobian is non-singular in this case, we will have \(J^{-1}=J^{-P}=J^{-U}=J^{-M}\), which makes the use of any of the generalized inverses equivalent (and apparently unnecessary) unless the 6DoF robotic arm reaches a singular configuration in its joint space (i.e. \(\det(J)=0\)); 2) the same 6DoF while starting at a singular configuration. Based on the dynamic rule of thumb, the MX inverse is reduced to the MP inverse at the singular position and is computed with the formula given by equation (6) at any other position. The results show that the MX inverse is equivalent to the MP inverse in this case; 3) the 7DoF serial manipulator with a RRPRRRR configuration with a non-full-rank Jacobian matrix. In this case also, only the MX inverse, with a combination of the MP and UC inverses, is able to provide unit and rotation consistency when the units are varied from \(m\) to \(mm\). Due to space limitations, complete example motions for these additional cases are provided on our website.
### _Comparison with Other Inverse Jacobian Methods_

We also compared the MX GI with commonly used inverse Jacobian methods as global Jacobian-based IK solvers. That is, all the Jacobian types presented in Section II-A and all the inverse Jacobian methods presented in Section II-B were implemented for three redundant robotic manipulators: a 3DoF (2RP), a 7DoF (7R), and a 7DoF (2RP4R), with their D-H parameters found in Table II.

Fig. 7: Behavior of the trajectories of the end-effector of the 5DoF robot when varying the units while using the MX inverse with (7a), (7c), (7e), and (7g) the attenuation parameter \(\alpha=1\) and (7b), (7d), (7f), and (7h) multiple values of \(\alpha\).

For each manipulator, 1000 random initial joint configurations and final poses were generated based on a uniform probability distribution between each joint's limits. Comparisons were performed based on the percentage of solutions found (%**Sol**), the average computation time (\(\overline{t}\)) in milliseconds (\(ms\)), the average position error (\(\overline{\epsilon_{P}}\)) in millimeters (\(mm\)), the average orientation error (\(\overline{\epsilon_{O}}\)) in degrees (\(deg\)), and the average number of iterations (\(\overline{iter}\)) for each unit when a solution was found. To evaluate the consistency of the system behaviors when the units are varied, we also compared the percentage of identical paths (%**IPs**) found across all the investigated units. For the results presented in this section, the maximum number of iterations in the search for an IK solution is set to 500, and the accepted position and orientation errors are respectively set to \(1mm\) and \(1degree\). For the 3DoF (2RP), as can be seen from the results presented in Table IV, the MX inverse, which reduces to the UC inverse in this case, guarantees that all the solutions found within the maximum number of iterations are unit-consistent across all the investigated units and Jacobian types. However, for all the other inverse Jacobian methods, not only is the %**Sol** different from one unit to another, but the %**IPs** is also not 100%, which clearly shows that they do not satisfy unit-consistency requirements in this case. For the 7DoF (7R), the results can be seen in Table V. Applying the proposed rule of thumb to this commensurate 7DoF manipulator, where all the joints are revolute, reduces the MX inverse to the MP inverse.
In this case also, the MX inverse (reduced to the MP inverse) guarantees all the solutions found within the maximum number of iterations to be unit-consistent across all the investigated units, with the %**IPs** being 100%.

[Table V: per-unit IK comparison (%**Sol**, \(\overline{t}\), \(\overline{\epsilon_{P}}\), \(\overline{\epsilon_{O}}\), \(\overline{iter}\)) across Jacobian types for the units \(m\), \(dm\), \(cm\), and \(mm\); the table was garbled during extraction.]

Here, we also observed that all the other inverse Jacobian methods did not guarantee all solutions to be unit-consistent, as the %**IPs** was not 100% even when the %**Sol** was 100% for each unit of a specific inverse method. For the 7DoF (2RP4R), the use of the MX inverse requires a combination of the MP and UC inverses. As can be seen in Table VI, only the MX inverse guarantees all the solutions found within the maximum number of iterations to be unit-consistent across all the investigated units, with the %**IPs** being 100%, while the other inverse methods do not.

## V Conclusion

This research investigated the most commonly used Jacobian types and Generalized Inverses to compute the inverse Jacobian matrix for numerical IK solvers. It showed that the most relevant Jacobian-based IK approaches fail to preserve the same system behavior when the units are varied for incommensurate robotic manipulators. While the Moore-Penrose (MP) inverse is applicable to systems that require rotation consistency, it may fail for systems that require unit consistency, where the Unit-Consistent (UC) inverse is applicable instead. With the goal of always applying the MX inverse, which combines the MP and UC inverses to achieve unit- and rotation-invariant robotic systems, a new dynamic rule of thumb based on the Denavit-Hartenberg (D-H) methodology was introduced to determine which specific variables to place in each block of the Mixed (MX) inverse. This new rule was validated by conducting multiple experiments with six different manipulators and various configuration cases. Furthermore, we investigated the effects of the attenuation parameter (gain) \(\alpha\) used in the numerical IK solvers, both on the accuracy of the final IK solution and on the smoothness of the end-effector path between the initial and final poses. For future work, we would like to further investigate the integration of the proposed approach to reliably provide unit-consistency in cases of kinematic singularities and multiple solutions for IK solvers. In addition, the proposed approach will be extended to Pseudoinverse-based Path Planning (PPP) and Repetitive Motion Planning (RMP), where commonly used GIs may also alter the system behavior when units are varied for incommensurate robotic manipulators.
It is common practice to reformulate PPP and RMP schemes as Quadratic Programming (QP) problems; another research direction will investigate whether these methods also suffer from the same problem. Finally, GIs are ubiquitous in many robotics and machine learning problems; another potential research direction will conduct a broad investigation to survey, identify, and address the issues related to the use of GIs in various subjects of these areas.
2307.03217
Quantification of Uncertainty with Adversarial Models
Quantifying uncertainty is important for actionable predictions in real-world applications. A crucial part of predictive uncertainty quantification is the estimation of epistemic uncertainty, which is defined as an integral of the product between a divergence function and the posterior. Current methods such as Deep Ensembles or MC dropout underperform at estimating the epistemic uncertainty, since they primarily consider the posterior when sampling models. We suggest Quantification of Uncertainty with Adversarial Models (QUAM) to better estimate the epistemic uncertainty. QUAM identifies regions where the whole product under the integral is large, not just the posterior. Consequently, QUAM has lower approximation error of the epistemic uncertainty compared to previous methods. Models for which the product is large correspond to adversarial models (not adversarial examples!). Adversarial models have both a high posterior as well as a high divergence between their predictions and that of a reference model. Our experiments show that QUAM excels in capturing epistemic uncertainty for deep learning models and outperforms previous methods on challenging tasks in the vision domain.
Kajetan Schweighofer, Lukas Aichberger, Mykyta Ielanskyi, Günter Klambauer, Sepp Hochreiter
2023-07-06T17:56:10Z
http://arxiv.org/abs/2307.03217v2
# Quantification of Uncertainty with Adversarial Models ###### Abstract Quantifying uncertainty is important for actionable predictions in real-world applications. A crucial part of predictive uncertainty quantification is the estimation of epistemic uncertainty, which is defined as an integral of the product between a divergence function and the posterior. Current methods such as Deep Ensembles or MC dropout underperform at estimating the epistemic uncertainty, since they primarily consider the posterior when sampling models. We suggest Quantification of Uncertainty with Adversarial Models (QUAM) to better estimate the epistemic uncertainty. QUAM identifies regions where the whole product under the integral is large, not just the posterior. Consequently, QUAM has lower approximation error of the epistemic uncertainty compared to previous methods. Models for which the product is large correspond to adversarial models (not adversarial examples!). Adversarial models have both a high posterior as well as a high divergence between their predictions and that of a reference model. Our experiments show that QUAM excels in capturing epistemic uncertainty for deep learning models and outperforms previous methods on challenging tasks in the vision domain. ## 1 Introduction Actionable predictions typically require risk assessment based on predictive uncertainty quantification (Apostolakis, 1991). This is of utmost importance in high stake applications, such as medical diagnosis or drug discovery, where human lives or extensive investments are at risk. In such settings, even a single prediction has far-reaching real-world impact, thus necessitating the most precise quantification of the associated uncertainties. Furthermore, foundation models or specialized models that are obtained externally are becoming increasingly prevalent, also in high stake applications. It is crucial to assess the robustness and reliability of those unknown models before applying them. Therefore, the predictive uncertainty of given, pre-selected models at specific test points should be quantified, which we address in this work. We consider predictive uncertainty quantification (see Fig. 1) for deep neural networks (Gal, 2016; Hullermeier and Waegeman, 2021). According to Vesely and Rasmuson (1984); Apostolakis (1991); Helton (1993); McKone (1994); Helton (1997), predictive uncertainty can be categorized into two types. First, _aleatoric_ (Type A, variability, stochastic, true, irreducible) uncertainty refers to the variability when drawing samples or when repeating the same experiment. Second, _epistemic_ (Type B, lack of knowledge, subjective, reducible) uncertainty refers to the lack of knowledge about the true model. Epistemic uncertainty can result from imprecision in parameter estimates, incompleteness in modeling, or indefiniteness in the applicability of the model. Epistemic uncertainty can be reduced by more data, better models, or more knowledge about the problem, while aleatoric uncertainty cannot be reduced. We follow Helton (1997) and consider epistemic uncertainty as the imprecision or variability of parameters that determine a distribution. Vesely and Rasmuson (1984) calls this epistemic uncertainty "parameter uncertainty", which results from an imperfect learning algorithm or from insufficiently many training samples. 
Consequently, we consider uncertainty quantification as characterizing a stochastic model of the world, where aleatoric uncertainty is the stochasticity of the model and epistemic uncertainty is the uncertainty about model parameters. Quantifying predictive uncertainty, especially for deep learning models, is an active area of research. Classical uncertainty quantification methods such as Bayesian Neural Networks (BNNs) (MacKay, 1992; Neal, 1996) are challenging for deep learning, since (i) the Hessian or maximum-a-posteriori (MAP) is difficult to estimate and (ii) regularization & normalization techniques cannot be treated (Antorán et al., 2022). Epistemic neural networks (Osband et al., 2021) add a variance term (the epinet) to the output only. Bayes By Backprop (Blundell et al., 2015) and variational neural networks (Oleksiienko et al., 2022) work only for small models as they require considerably more parameters. Monte-Carlo (MC) dropout (Gal and Ghahramani, 2016) casts applying dropout during inference as sampling from an approximate distribution. MC dropout was generalized to MC dropconnect (Mobiny et al., 2021). Deep Ensembles (Lakshminarayanan et al., 2017) are often the best-performing uncertainty quantification method (Ovadia et al., 2019; Wursthorn et al., 2022). Masksembles or Dropout Ensembles combine ensembling with MC dropout (Durasov et al., 2021). Stochastic Weight Averaging approximates the posterior over the weights (Maddox et al., 2019). Single forward pass methods are efficient, as they aim to capture epistemic uncertainty through the distribution or distances of latent representations (Bradshaw et al., 2017; Liu et al., 2020; Mukhoti et al., 2021; van Amersfoort et al., 2021), but were found to have lower performance under distribution shifts (Postels et al., 2021). For further methods see Abdar et al. (2021) and Gawlikowski et al. (2021). Current uncertainty quantification methods such as Deep Ensembles (Lakshminarayanan et al., 2017) or MC dropout (Gal and Ghahramani, 2016) underperform at estimating the epistemic uncertainty, since they primarily consider the posterior when sampling models. Thus, they are prone to miss important posterior modes, where the integrand of the integral defining the epistemic uncertainty is large. To identify those posterior modes, we introduce Quantification of Uncertainty with Adversarial Models (QUAM) for uncertainty quantification. QUAM searches for those posterior modes via adversarial models and uses them to reduce the approximation error when estimating the integral defining the epistemic uncertainty. Adversarial models are characterized by a large value of the integrand of the integral defining the epistemic uncertainty. Thus, they considerably differ from the reference model's prediction at a test point while having a similarly high posterior probability. Consequently, they are counterexamples of the reference model that predict differently for a new input, but explain the training data equally well. Fig. 1 shows examples of adversarial models which assign different classes to a test point, but agree on the training data. Our main contributions are:

* We introduce QUAM as a framework for uncertainty quantification. QUAM approximates the integrals that define the epistemic uncertainty substantially better than previous methods, since it reduces the approximation error of the integral estimator.
* We introduce the concept of adversarial models for estimating posterior integrals with non-negative integrands.
For a given test point, adversarial models have considerably different predictions than a reference model while having similarly high posterior probability.
* We introduce a new setting for uncertainty quantification, where the uncertainty of a given, pre-selected model is quantified.

Figure 1: Adversarial models. For the red test point, the predictive uncertainty is high as it is far from the training data. High uncertainties are detected by different adversarial models that assign the red test point to different classes, although all of them explain the training data equally well. As a result, the true class remains ambiguous.

## 2 Current Methods to Estimate the Epistemic Uncertainty

### Definition of Predictive Uncertainty

Predictive uncertainty quantification is about describing a stochastic model of the world, where aleatoric uncertainty is the stochasticity of the model and epistemic uncertainty is the uncertainty about the parameters of the model. We consider two distinct settings of predictive uncertainty quantification. **(a)** First, the expected predictive uncertainty at a new test point when selecting a model given a training dataset (Gal, 2016; Hullermeier and Waegeman, 2021). This definition of uncertainty comprises uncertainty about which model will be selected (epistemic) and the prediction uncertainty of the selected model (aleatoric). In this setting, epistemic uncertainty is the uncertainty about which parameters will be selected. **(b)** Second, the predictive uncertainty of a given, pre-selected model at a new test point. This definition of uncertainty comprises uncertainty about the true model of the world (epistemic) and prediction uncertainty of the given, pre-selected model (aleatoric). In this setting, epistemic uncertainty is the uncertainty about the parameters of the true model that produced the training data (Apostolakis, 1991; Helton, 1997). As an example, assume we have initial data from an epidemic, but we do not know the exact infection rate, which is a parameter of a prediction model. The goal is to predict the number of infected persons at a specific time in the future, where each time point is a test point. In setting (a), we are interested in the uncertainty of test point predictions of all models using infection rates that explain the initial data. If all potential models agree for a given new test point, the prediction of any of those models can be trusted; otherwise, we cannot trust the prediction regardless of which model is selected in the end. In setting (b), we have selected a specific infection rate from the initial data as the parameter for our model to make predictions. We refer to this model as the given, pre-selected model. However, we do not know the true infection rate of the epidemic. All models with infection rates that are consistent with the initial data are likely to be the true model. If the likely models agree with the given, pre-selected model for a given new test point, the prediction of the model can be trusted. **Measuring Predictive Uncertainty.** We consider the predictive distribution of a single model \(p(\mathbf{y}\mid\mathbf{x},\mathbf{w})\), which is a stochastic model of the world. Depending on the task, the output distribution of this stochastic model can be a categorical distribution for classification or a Gaussian distribution for regression.
The Bayesian framework offers a principled way to treat the uncertainty about the parameters through the posterior \(p(\mathbf{w}\mid\mathcal{D})\propto p(\mathcal{D}\mid\mathbf{w})p(\mathbf{w})\) for a given dataset \(\mathcal{D}\). The Bayesian model average (BMA) predictive distribution is given by \(p(\mathbf{y}\mid\mathbf{x},\mathcal{D})=\int_{\mathcal{W}}p(\mathbf{y}\mid\mathbf{x},\tilde{ \mathbf{w}})p(\tilde{\mathbf{w}}\mid\mathcal{D})\mathrm{d}\tilde{\mathbf{w}}\). Following Gal (2016), Depeweg et al. (2018), Smith and Gal (2018), Hullermeier and Waegeman (2021), the uncertainty of the BMA predictive distribution is commonly measured by the entropy \(\mathrm{H}\left[p(\mathbf{y}\mid\mathbf{x},\mathcal{D})\right]\), which can be decomposed into an aleatoric and epistemic part. This entropy is equal to the posterior expectation of the cross-entropy between the predictive distribution of potential models and the BMA, which corresponds to setting (a). The expected cross-entropy is also applicable to setting (b). A more detailed discussion about the entropy and cross-entropy as measures of uncertainty is given in Sec. B.1.1 in the Appendix. In the following, we formalize how to measure the notions of uncertainty in setting (a) and (b) using the expected cross-entropy over the posterior. Setting (a): Expected uncertainty when selecting a model.We estimate the predictive uncertainty at a test point \(\mathbf{x}\) when selecting a model \(\tilde{\mathbf{w}}\) given a training dataset \(\mathcal{D}\). The total uncertainty is the expected cross-entropy between the predictive distribution of candidate models \(p(\mathbf{y}\mid\mathbf{x},\tilde{\mathbf{w}})\) and the BMA predictive distribution \(p(\mathbf{y}\mid\mathbf{x},\mathcal{D})\), where the expectation is with respect to the posterior: \[\int_{\mathcal{W}} \mathrm{CE}(p(\mathbf{y}\mid\mathbf{x},\tilde{\mathbf{w}})\;;\;p(\mathbf{y}\mid \mathbf{x},\mathcal{D}))\;p(\tilde{\mathbf{w}}\mid\mathcal{D})\;\mathrm{d}\tilde{\mathbf{ w}}\;=\;\mathrm{H}\left[p(\mathbf{y}\mid\mathbf{x},\mathcal{D})\right] \tag{1}\] \[=\;\int_{\mathcal{W}}\mathrm{H}\left[p(\mathbf{y}\mid\mathbf{x},\tilde{ \mathbf{w}})\right]\;p(\tilde{\mathbf{w}}\mid\mathcal{D})\;\mathrm{d}\tilde{\mathbf{w}}\;+ \;\mathrm{I}(Y\;;\;W\mid\mathbf{x},\mathcal{D})\] \[=\;\underbrace{\int_{\mathcal{W}}\mathrm{H}\left[p(\mathbf{y}\mid\mathbf{ x},\tilde{\mathbf{w}})\right]\;p(\tilde{\mathbf{w}}\mid\mathcal{D})\;\mathrm{d} \tilde{\mathbf{w}}}_{\text{aleatoric}}\;+\;\underbrace{\int_{\mathcal{W}}\mathrm{KL} (p(\mathbf{y}\mid\mathbf{x},\tilde{\mathbf{w}})\parallel p(\mathbf{y}\mid\mathbf{x},\mathcal{D})) \;p(\tilde{\mathbf{w}}\mid\mathcal{D})\;\mathrm{d}\tilde{\mathbf{w}}}_{\text{ epistemic}}\;.\] The aleatoric uncertainty characterizes the uncertainty due to the stochasticity of the predictive distribution of the candidate model \(p(\mathbf{y}\mid\mathbf{x},\tilde{\mathbf{w}})\). The epistemic uncertainty characterizes the uncertainty due to the mismatch between the predictive distribution of candidate models and the BMA predictive distribution. See Appendix Sec. B.1 for a more detailed derivation. Setting (b): Uncertainty of a given, pre-selected model.We estimate the predictive uncertainty of a given, pre-selected model \(\mathbf{w}\) at a test point \(\mathbf{x}\). We assume that the dataset \(\mathcal{D}\) is produced according to the true distribution \(p(\mathbf{y}\mid\mathbf{x},\mathbf{w}^{*})\) parameterized by \(\mathbf{w}^{*}\). 
The posterior \(p(\tilde{\mathbf{w}}\mid\mathcal{D})\) is an estimate of how likely \(\tilde{\mathbf{w}}\) matches \(\mathbf{w}^{*}\). For epistemic uncertainty, we should measure the difference between the predictive distributions under \(\mathbf{w}\) and \(\mathbf{w}^{*}\), but \(\mathbf{w}^{*}\) is unknown. Therefore, we measure the expected difference between the predictive distributions under \(\mathbf{w}\) and \(\tilde{\mathbf{w}}\). In accordance with Apostolakis (1991) and Helton (1997), the total uncertainty is therefore the expected cross-entropy between the predictive distributions of a given, pre-selected model \(\mathbf{w}\) and candidate models \(\tilde{\mathbf{w}}\), as those could be the true model \(\mathbf{w}^{*}\) according to the posterior: \[\int_{\mathcal{W}}\mathrm{CE}(p(\mathbf{y}\mid\mathbf{x},\mathbf{w})\;;\;p(\mathbf{y}\mid\mathbf{x},\tilde{\mathbf{w}}))\;p(\tilde{\mathbf{w}}\mid\mathcal{D})\;\mathrm{d}\tilde{\mathbf{w}}\;=\;\underbrace{\mathrm{H}\left[p(\mathbf{y}\mid\mathbf{x},\mathbf{w})\right]}_{\text{aleatoric}}\;+\;\underbrace{\int_{\mathcal{W}}\mathrm{KL}(p(\mathbf{y}\mid\mathbf{x},\mathbf{w})\parallel p(\mathbf{y}\mid\mathbf{x},\tilde{\mathbf{w}}))\;p(\tilde{\mathbf{w}}\mid\mathcal{D})\;\mathrm{d}\tilde{\mathbf{w}}}_{\text{epistemic}}\;. \tag{2}\] The aleatoric uncertainty characterizes the uncertainty due to the stochasticity of the predictive distribution of the given, pre-selected model \(p(\mathbf{y}\mid\mathbf{x},\mathbf{w})\). The epistemic uncertainty characterizes the uncertainty due to the mismatch between the predictive distribution of the given, pre-selected model and the predictive distribution of candidate models that could be the true model. See Appendix Sec. B.1 for a more detailed derivation. ### Estimating the Integral for Epistemic Uncertainty Current methods for predictive uncertainty quantification suffer from underestimating the epistemic uncertainty. The epistemic uncertainty is given by the respective terms in Eq. (1) for setting (a) and Eq. (2) for our new setting (b). To estimate these integrals, almost all methods use gradient descent on the training data. Thus, posterior modes that are hidden from the gradient flow remain undiscovered and the epistemic uncertainty is underestimated. An illustrative example is depicted in Fig. 2. Posterior expectations as in Eq. (1) and Eq. (2) that define the epistemic uncertainty are generally approximated using Monte Carlo integration. A good approximation of posterior integrals through Monte Carlo integration requires capturing all large values of the non-negative integrand (Wilson and Izmailov, 2020), which is not only large values of the posterior, but also large values of the KL-divergence. Variational inference (Graves, 2011; Blundell et al., 2015; Gal and Ghahramani, 2016) and ensemble methods (Lakshminarayanan et al., 2017) estimate the posterior integral based on models with high posterior. Posterior modes may be hidden from gradient descent based techniques as they only discover mechanistically similar models. Two models are mechanistically similar if they rely on the same input attributes for making their predictions, that is, they are invariant to the same input attributes (Lubana et al., 2022). However, gradient descent will always start by extracting input attributes that are highly correlated to the target as they determine the steepest descent in the error landscape.
These input attributes create a large basin in the error landscape into which the parameter vector is drawn via gradient descent. Consequently, other modes further away from such basins are almost never found. Thus, the epistemic uncertainty is underestimated. Caldeira and Nord (2020) found that neither BNNs, Concrete Dropout, nor Deep Ensembles performed well in estimating the epistemic uncertainty for samples far from the training distribution. Another reason that posterior modes may be hidden from gradient descent is the presence of different labeling hypotheses. If there is more than one way to explain the training data, gradient descent will use all of them as they give the steepest error descent. Other work focuses on MCMC sampling according to the posterior distribution, which is approximated by stochastic gradient variants (Welling and Teh, 2011; Chen et al., 2014) for large datasets and models. Those are known to face issues in efficiently exploring the highly complex and multimodal parameter space and escaping local posterior modes. There are attempts to alleviate the problem (Li et al., 2016; Zhang et al., 2020). However, those methods do not explicitly look for important posterior modes, where the output distribution of sampled models contributes strongly to the approximation of the posterior integral, and thus have large values for the KL-divergence.

## 3 Adversarial Models to Estimate the Epistemic Uncertainty

**Intuition.** The epistemic uncertainty in Eq. (1) for setting (a) compares possible models with the BMA. Thus, the BMA is used as reference model. The epistemic uncertainty in Eq. (2) for our new setting (b) compares models that are candidates for the true model with the given, pre-selected model. Thus, the given, pre-selected model is used as reference model. If the reference model makes some prediction at the test point, and if other models (the adversaries) make different predictions while explaining the training data equally well, then one should be uncertain about the prediction. Adversarial models are plausible outcomes of model selection, while having a different prediction at the test data point than the reference model. In court, the same principle is used: if the prosecutor presents a scenario but the advocate presents alternative equally plausible scenarios, the judges become uncertain about what happened and rule in favor of the defendant. We use adversarial models to identify locations where the integrand of the epistemic uncertainty in Eq. (1) or Eq. (2) is large. These locations are used to construct a mixture distribution that is used for mixture importance sampling to estimate the desired integrals in Eq. (1) or Eq. (2). Using the mixture distribution for sampling, we aim to considerably reduce the approximation error of the estimator. **Mixture Importance Sampling.** We estimate the integrals of epistemic uncertainty in Eq. (1) and in Eq. (2). In the following, we focus on setting (b) with Eq. (2), but all results hold for setting (a) with Eq. (1) as well.
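Before turning to the estimator, the quantities in Eq. (1) and Eq. (2) can be made concrete with a small numpy sketch; it assumes a finite set of equal-weight posterior samples (as in a Deep-Ensembles-style approximation) with categorical predictive distributions at a single test point, and the function name is our own.

```python
import numpy as np

def epistemic_uncertainty(probs, ref=None, eps=1e-12):
    """probs: (S, C) categorical predictions of S posterior samples at one
    test point; ref: (C,) prediction of the reference model for setting (b).
    Returns the epistemic term of Eq. (1) if ref is None, else of Eq. (2)."""
    if ref is None:                               # setting (a): mutual information
        bma = probs.mean(axis=0)                  # Bayesian model average
        total = -(bma * np.log(bma + eps)).sum()  # H[p(y | x, D)]
        aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()
        return total - aleatoric                  # I(Y ; W | x, D)
    # setting (b): expected KL(reference || sampled candidate models)
    kls = (ref * (np.log(ref + eps) - np.log(probs + eps))).sum(axis=1)
    return kls.mean()
```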
Most methods sample from a distribution \(q(\tilde{\mathbf{w}})\) to approximate the integral: \[v\ =\ \int_{\mathcal{W}}\mathrm{KL}(p(\mathbf{y}\mid\mathbf{x},\mathbf{w})\parallel p(\mathbf{y}\mid\mathbf{x},\tilde{\mathbf{w}}))\ p(\tilde{\mathbf{w}}\mid\mathcal{D})\ \mathrm{d}\tilde{\mathbf{w}}\ =\ \int_{\mathcal{W}}\frac{u(\mathbf{x},\mathbf{w},\tilde{\mathbf{w}})}{q(\tilde{\mathbf{w}})}\ q(\tilde{\mathbf{w}})\ \mathrm{d}\tilde{\mathbf{w}}\, \tag{3}\] where \(u(\mathbf{x},\mathbf{w},\tilde{\mathbf{w}})=\mathrm{KL}(p(\mathbf{y}\mid\mathbf{x},\mathbf{w})\parallel p(\mathbf{y}\mid\mathbf{x},\tilde{\mathbf{w}}))p(\tilde{\mathbf{w}}\mid\mathcal{D})\). As with Deep Ensembles or MC dropout, posterior sampling is often approximated by a sampling distribution \(q(\tilde{\mathbf{w}})\) that is close to \(p(\tilde{\mathbf{w}}\mid\mathcal{D})\). Monte Carlo (MC) integration estimates \(v\) by \[\hat{v}=\frac{1}{N}\sum_{n=1}^{N}\frac{u(\mathbf{x},\mathbf{w},\tilde{\mathbf{w}}_{n})}{q(\tilde{\mathbf{w}}_{n})}\,\qquad\tilde{\mathbf{w}}_{n}\sim q(\tilde{\mathbf{w}}). \tag{4}\] If the posterior has different modes, the estimate under a unimodal approximate distribution has high variance and converges very slowly (Steele et al., 2006). Thus, we use mixture importance sampling (MIS) (Hesterberg, 1995). MIS utilizes a mixture distribution instead of the unimodal distribution in standard importance sampling (Owen and Zhou, 2000). Furthermore, many MIS methods iteratively enhance the sampling distribution by incorporating new modes (Raftery and Bao, 2010). In contrast to the usually applied iterative enrichment methods, which find new modes by chance, we have a much more favorable situation. We can explicitly search for posterior modes where the KL divergence is large, as we can cast it as a supervised learning problem. Each of these modes determines the location of a mixture component of the mixture distribution.

Figure 2: Softmax outputs (black) of individual models of Deep Ensembles (a) and MC dropout (b), as well as their average output (red) on a probability simplex. Left, right and top corners denote 100% probability mass at the blue, orange and green class in (c) respectively. Models were selected on the training data, and evaluated on the new test point (red) depicted in (c). The background color denotes the maximum likelihood of the training data that is achievable by a model having an output distribution (softmax values) equal to the respective location on the simplex. We see that Deep Ensembles and MC dropout fail to find models predicting the orange class, although there would be likely models that do so. Details on the experimental setup are given in the Appendix, Sec. C.2.

**Theorem 1**.: _The expected mean squared error of importance sampling with \(q(\tilde{\mathbf{w}})\) can be bounded by_ \[\mathrm{E}_{q(\tilde{\mathbf{w}})}\left[\left(\hat{v}\ -\ v\right)^{2}\right]\leqslant\mathrm{E}_{q(\tilde{\mathbf{w}})}\left[\left(\frac{u(\mathbf{x},\mathbf{w},\tilde{\mathbf{w}})}{q(\tilde{\mathbf{w}})}\right)^{2}\right]\frac{4}{N}. \tag{5}\] Proof.: The inequality Eq. (5) follows from Theorem 1 in Akyildiz and Miguez (2021), when considering \(0\leqslant u(\mathbf{x},\mathbf{w},\tilde{\mathbf{w}})\) as an unnormalized distribution and setting \(\varphi=1\). Approximating only the posterior \(p(\tilde{\mathbf{w}}\mid\mathcal{D})\) as done by Deep Ensembles or MC dropout is insufficient to guarantee a low expected mean squared error, since the sampling variance cannot be bounded (see Appendix Sec.
B.2). **Corollary 1**.: _With constant \(c\), \(\mathrm{E}_{q(\tilde{\mathbf{w}})}\left[\left(\hat{v}\ -\ v\right)^{2}\right] \leqslant 4c^{2}/N\) holds if \(u(\mathbf{x},\mathbf{w},\tilde{\mathbf{w}})\leqslant c\ q(\tilde{\mathbf{w}})\)._ Consequently, \(q(\tilde{\mathbf{w}})\) must have modes where \(u(\mathbf{x},\mathbf{w},\tilde{\mathbf{w}})\) has modes even if the \(q\)-modes are a factor \(c\) smaller. The modes of \(u(\mathbf{x},\mathbf{w},\tilde{\mathbf{w}})\) are models \(\tilde{\mathbf{w}}\) with both high posterior and high KL-divergence. We are searching for these modes to determine the locations \(\tilde{\mathbf{w}}_{k}\) of the components of a mixture distribution \(q(\tilde{\mathbf{w}})\): \[q(\tilde{\mathbf{w}})\ =\ \sum_{k=1}^{K}\alpha_{k}\ \mathcal{P}(\tilde{\mathbf{w}} \ ;\tilde{\mathbf{w}}_{k},\mathbf{\theta})\, \tag{6}\] with \(\alpha_{k}=1/K\) for \(K\) such models \(\tilde{\mathbf{w}}_{k}\) that determine a mode. Adversarial model search finds the locations \(\tilde{\mathbf{w}}_{k}\) of the mixture components, where \(\tilde{\mathbf{w}}_{k}\) is an adversarial model. The reference model does not define a mixture component, as it has zero KL-divergence to itself. We then sample from a distribution \(\mathcal{P}\) at the local posterior mode with mean \(\tilde{\mathbf{w}}_{k}\) and a set of shape parameters \(\mathbf{\theta}\). The simplest choice for \(\mathcal{P}\) is a Dirac delta distribution, but one could use e.g. a local Laplace approximation of the posterior (MacKay, 1992), or a Gaussian distribution in some weight-subspace (Maddox et al., 2019). Furthermore, one could use \(\tilde{\mathbf{w}}_{k}\) as starting point for SG-MCMC chains (Welling and Teh, 2011; Chen et al., 2014; Zhang et al., 2020, 2022). More details regarding MIS are given in the Appendix in Sec. B.2. In the following, we propose an algorithm to find those models with both high posterior and high KL-divergence to the output distribution of the reference model. Adversarial Model Search.Adversarial model search is the concept of searching for a model that has a large distance / divergence to the reference predictive distribution and at the same time a high posterior. We call such models "adversarial models" as they act as adversaries to the reference model by contradicting its prediction. A formal definition of an adversarial model is given by Def. 1: **Definition 1**.: _Given are a reference conditional probability model \(p(.\mid.,\mathbf{w})\) from a model class parameterized by \(\mathbf{w}\), a divergence or distance measure \(\mathrm{D}(.;.)\) for probability distributions, \(\ \gamma>0\), \(\Lambda>0\), a dataset \(\mathcal{D}\), and a new test data point \(\mathbf{x}\). Then a model with parameters \(\tilde{\mathbf{w}}\) that satisfies the inequalities \(\mid\log p(\mathbf{w}\mid\mathcal{D})-\log p(\tilde{\mathbf{w}}\mid\mathcal{D})\mid \leqslant\gamma\) and \(\ \mathrm{D}(p(.\mid\mathbf{x},\mathbf{w});p(.\mid\mathbf{x},\tilde{\mathbf{w}}))\geq\Lambda\) is called an \((\gamma,\Lambda)-\)adversarial model._ Adversarial model search corresponds to the following optimization problem: \[\max_{\mathbf{\delta}\in\Delta}\ \mathrm{D}(p(\mathbf{y}\mid\mathbf{x},\mathbf{w})\ ;\ p(\mathbf{y}\mid\mathbf{x},\mathbf{w}\ +\ \mathbf{\delta}))\quad\text{s.t.}\ \log p(\mathbf{w}\mid\mathcal{D})\ -\ \log p(\mathbf{w}\ +\ \mathbf{\delta}\mid\mathcal{D})\ \leqslant\ \gamma. 
\tag{7}\] We are searching for a weight perturbation \(\mathbf{\delta}\) that maximizes the distance \(\mathrm{D}(.\ ;\.)\) to the reference distribution without decreasing the log posterior more than \(\gamma\). The search for adversarial models is restricted to \(\mathbf{\delta}\in\Delta\), for example by only optimizing the last layer of the reference model or by bounding the norm of \(\mathbf{\delta}\). This optimization problem can be rewritten as: \[\max_{\mathbf{\delta}\in\Delta}\ \mathrm{D}(p(\mathbf{y}\mid\mathbf{x},\mathbf{w})\ ;\ p(\mathbf{y}\mid\mathbf{x},\mathbf{w}\ +\ \mathbf{\delta}))\ +\ c\ (\log p(\mathbf{w}\ +\ \mathbf{\delta}\mid\mathcal{D})\ -\ \log p(\mathbf{w}\mid\mathcal{D})\ +\ \gamma). \tag{8}\] where \(c\) is a hyperparameter. According to the _Karush-Kuhn-Tucker (KKT) theorem_(Karush, 1939; Kuhn and Tucker, 1950; May, 2020; Luenberger and Ye, 2016): If \(\mathbf{\delta}^{*}\) is the solution to the problem Eq. (7), then there exists a \(c^{*}\geq 0\) with \(\nabla_{\mathbf{\delta}}\mathcal{L}(\mathbf{\delta}^{*},c^{*})=\mathbf{0}\) (\(\mathcal{L}\) is the Lagrangian) and \(c^{*}\ (\log p(\mathbf{w}\ |\ \mathcal{D})-\log p(\mathbf{w}+\mathbf{\delta}^{*}\ |\ \mathcal{D})-\gamma)=0\). This is a necessary condition for an optimal point according to Theorem on Page 326 of Luenberger and Ye (2016). We solve this optimization problem by the penalty method, which relies on the KKT theorem (Zangwill, 1967). A penalty algorithm solves a series of unconstrained problems, solutions of which converge to the solution of the original constrained problem (see e.g. Fiacco and McCormick (1990)). The unconstrained problems are constructed by adding a weighted penalty function that measures the violation of the constraints to the objective function. At every step, the weight of the penalty is increased, thus the constraints are less violated. If exists, the solution to the constraint optimization problem is an adversarial model that is located within a posterior mode but has a different predictive distribution compared to the reference model. We summarize the adversarial model search in Algorithm 1. Further implementation details are given in the Appendix Sec. C.1. ``` 0: Adversarial model \(\tilde{\mathbf{w}}\) with maximum \(\mathrm{L}_{\text{adv}}\) and \(\mathrm{L}_{\text{pen}}\leqslant 0\) 0: Test point \(\mathbf{x}\), training dataset \(\mathcal{D}=\{(\mathbf{x}_{k},\mathbf{y}_{k})\}_{k=1}^{K}\), reference model \(\mathbf{w}\), loss function \(l\), loss of reference model on the training dataset \(\mathrm{L}_{\text{ref}}=\frac{1}{K}\sum_{k=1}^{K}l(p(\mathbf{y}\ |\ \mathbf{x}_{k},\mathbf{w}),\mathbf{y}_{k})\), minimization procedure MINIMIZE, number of penalty iterations \(M\), initial penalty parameter \(c_{0}\), penalty parameter increase scheduler \(\eta\), slack parameter \(\gamma\), distance / divergence measure \(\mathrm{D}(\ ;\ ;\ )\). 
1:\(\tilde{\mathbf{w}}\leftarrow\mathbf{w};\ \ \hat{\mathbf{w}}\leftarrow\mathbf{w};\ \ c\gets c_{0}\) 2:for\(m\gets 1\) to \(M\)do 3:\(\mathrm{L}_{\text{pen}}\leftarrow\frac{1}{K}\sum_{k=1}^{K}l(p(\mathbf{y}\ |\ \mathbf{x}_{k},\tilde{\mathbf{w}}),\mathbf{y}_{k})\ -\ (\mathrm{L}_{\text{ref}}\ +\ \gamma)\) 4:\(\mathrm{L}_{\text{adv}}\leftarrow-\mathrm{D}(p(\mathbf{y}\ |\ \mathbf{x},\mathbf{w})\ ;\ p(\mathbf{y}\ |\ \mathbf{x},\tilde{\mathbf{w}}))\) 5:\(\mathrm{L}\leftarrow\mathrm{L}_{\text{adv}}+c\ \mathrm{L}_{\text{pen}}\) 6:\(\tilde{\mathbf{w}}\leftarrow\text{MINIMIZE}(\mathrm{L}(\tilde{\mathbf{w}}))\) 7:if\(\mathrm{L}_{\text{adv}}\) larger than all previous and \(\mathrm{L}_{\text{pen}}\leqslant 0\)then 8:\(\hat{\mathbf{w}}\leftarrow\tilde{\mathbf{w}}\) 9:\(c\leftarrow\eta(c)\) 10:return\(\hat{\mathbf{w}}\) ``` **Algorithm 1** Adversarial Model Search (used by QUAM)

## 4 Experiments

In this section, we compare previous uncertainty quantification methods and our method QUAM in a set of experiments. First, we assess the considered methods on a synthetic benchmark, on which it is feasible to compute a ground truth epistemic uncertainty. Then, we conduct challenging out-of-distribution (OOD) detection, adversarial example detection, misclassification detection and selective prediction experiments in the vision domain. We compare the following methods: (1) QUAM, (2) cSG-HMC (Zhang et al., 2020), (3) an efficient Laplace approximation (Daxberger et al., 2021), (4) MC dropout (Gal and Ghahramani, 2016) and (5) Deep Ensembles (Lakshminarayanan et al., 2017) on their ability to estimate the epistemic uncertainty. These baseline methods, especially Deep Ensembles, are persistently among the best performing uncertainty quantification methods across various benchmark tasks (Filos et al., 2019; Ovadia et al., 2019; Caldeira and Nord, 2020; Ashukha et al., 2020; Band et al., 2022). ### Epistemic Uncertainty on Synthetic Dataset We evaluated all considered methods on the two-moons dataset, created using the implementation of Pedregosa et al. (2011). To obtain the ground truth uncertainty, we utilized full-batch Hamiltonian Monte Carlo (HMC) (Neal, 1996). HMC is regarded as the most precise algorithm to approximate posterior expectations (Izmailov et al., 2021), but necessitates extreme computational expense to be applied to models and datasets of practical scale. The results are depicted in Fig. 3. QUAM most closely matches the uncertainty estimate of the ground truth epistemic uncertainty obtained by HMC and excels especially in the regions further away from the decision boundary, such as the top left and bottom right of the plots. All other methods fail to capture the epistemic uncertainty in those regions, as gradient descent on the training set fails to capture posterior modes with alternative predictive distributions in those parts and misses the important integral components. Experimental details and results for the epistemic uncertainty as in Eq. (2) are given in the Appendix Sec. C.3. ### Epistemic Uncertainty on Vision Datasets We benchmark the ability of different methods to estimate the epistemic uncertainty of a given, pre-selected model (setting (b) as in Eq. (2)) in the context of (i) out-of-distribution (OOD) detection, (ii) adversarial sample detection, (iii) misclassification detection and (iv) selective prediction. In all experiments, we assume to have access to a pre-trained model on the in-distribution (ID) training dataset, which we refer to as the reference model.
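Returning to Algorithm 1 above, the penalty method it applies can be sketched in a few lines of PyTorch. This is illustrative only, not the paper's implementation: the optimizer, hyperparameter defaults, cross-entropy training loss, and per-batch evaluation of the penalty (rather than the full training loss) are all assumptions, with the KL divergence to the frozen reference prediction playing the role of D.

```python
import copy
import torch
import torch.nn.functional as F

def adversarial_model_search(ref_model, x_test, train_loader, loss_ref,
                             gamma=0.05, c0=1.0, eta=2.0, n_penalty=5, lr=1e-3):
    """Illustrative sketch of Algorithm 1 (penalty method)."""
    with torch.no_grad():
        p_ref = F.softmax(ref_model(x_test), dim=-1)     # frozen reference output
    adv = copy.deepcopy(ref_model)                       # candidate model w-tilde
    best, best_div = copy.deepcopy(ref_model), -float("inf")
    c = c0
    for _ in range(n_penalty):                           # outer penalty loop
        opt = torch.optim.Adam(adv.parameters(), lr=lr)
        for xb, yb in train_loader:                      # inner minimization
            opt.zero_grad()
            # L_pen: violation of the posterior constraint, approximated here
            # per batch by the training loss exceeding L_ref + gamma
            l_pen = F.cross_entropy(adv(xb), yb) - (loss_ref + gamma)
            # L_adv: negative divergence D to the reference prediction at x_test
            log_p_adv = F.log_softmax(adv(x_test), dim=-1)
            div = (p_ref * (p_ref.clamp_min(1e-12).log() - log_p_adv)).sum()
            (-div + c * l_pen).backward()
            opt.step()
        with torch.no_grad():
            # keep the best feasible model found so far (checked on the last
            # batch for brevity; Algorithm 1 tracks this per minimization step)
            if div.item() > best_div and l_pen.item() <= 0:
                best_div, best = div.item(), copy.deepcopy(adv)
        c = eta * c                                      # increase penalty weight
    return best
```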
The epistemic uncertainty is expected to be higher for OOD samples, as they can be assigned to multiple ID classes, depending on the utilized features. Adversarial samples indicate that the model is misspecified on those inputs; thus we expect a higher epistemic uncertainty, the uncertainty about the model parameters. Furthermore, we expect higher epistemic uncertainty for misclassified samples than for correctly classified samples. Similarly, we expect the classifier to perform better on a subset of more certain samples. This is tested by evaluating the accuracy of the classifier on retained subsets of a certain fraction of samples with the lowest epistemic uncertainty (Filos et al., 2019; Band et al., 2022). We report the AUROC for classifying the ID vs. OOD samples (i), the ID vs. the adversarial examples (ii), or the correctly classified vs. the misclassified samples (iii), using the epistemic uncertainty as the score to distinguish the two classes respectively. For the selective prediction experiment (iv), we report the AUC of the accuracy vs. fraction of retained samples, using the epistemic uncertainty to determine the retained subsets. **MNIST.** We perform OOD detection on the FMNIST (Xiao et al., 2017), KMNIST (Clanuwat et al., 2018), EMNIST (Cohen et al., 2017) and OMNIGLOT (Lake et al., 2015) test datasets as OOD datasets, using the LeNet (LeCun et al., 1998) architecture. The test dataset of MNIST (LeCun et al., 1998) is used as the ID dataset. We utilize the aleatoric uncertainty of the reference model (see aleatoric uncertainty in Eq. (2)) as a baseline to assess the added value of estimating the epistemic uncertainty of the reference model. The results are listed in Tab. 1. QUAM outperforms all other methods on this task, with Deep Ensembles being the runner-up method on all dataset pairs. Furthermore, we observed that only the epistemic uncertainties obtained by Deep Ensembles and QUAM are able to surpass the performance of using the aleatoric uncertainty of the reference model. **ImageNet-1K.** We conduct OOD detection, adversarial example detection, misclassification detection and selective prediction experiments on ImageNet-1K (Deng et al., 2009). As OOD dataset, we use ImageNet-O (Hendrycks et al., 2021), which is a challenging OOD dataset that was explicitly created to be classified as an ID dataset with high confidence by conventional ImageNet-1K classifiers. Similarly, ImageNet-A (Hendrycks et al., 2021) is a dataset consisting of natural adversarial examples, which belong to the ID classes of ImageNet-1K, but are misclassified with high confidence by conventional ImageNet-1K classifiers. Furthermore, we evaluated the utility of the uncertainty score for misclassification detection of predictions of the reference model on the ImageNet-1K validation dataset. On the same dataset, we evaluated the accuracy of the reference model when only predicting on fractions of samples with the lowest epistemic uncertainty. All ImageNet experiments were performed on variations of the EfficientNet architecture (Tan and Le, 2019).

Figure 3: Epistemic uncertainty as in Eq. (1). Yellow denotes high epistemic uncertainty, purple denotes low epistemic uncertainty. HMC is considered to be the ground truth for the expected epistemic uncertainty. The estimate of QUAM is closest to the ground truth. All other methods underestimate the epistemic uncertainty in the top left and bottom right corner, as all models sampled by those predict the same class with high confidence for those regions.
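The AUROC numbers reported in the tables below can be computed directly from per-sample epistemic uncertainty scores. A minimal scikit-learn sketch, where the score arrays are placeholders for whichever method is being evaluated:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder scores: per-sample epistemic uncertainty, higher = more uncertain.
scores_id = np.random.rand(1000)    # e.g., scores on ImageNet-1K validation images
scores_ood = np.random.rand(1000)   # e.g., scores on ImageNet-O images

labels = np.concatenate([np.zeros_like(scores_id), np.ones_like(scores_ood)])
scores = np.concatenate([scores_id, scores_ood])
auroc = roc_auc_score(labels, scores)  # OOD detection AUROC
print(f"AUROC: {auroc:.3f}")
```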
Recent work by Kirichenko et al. (2022) showed that typical ImageNet-1K classifiers learn desired features of the data even if they rely on simple, spurious features for their prediction. Furthermore, they found, that last layer retraining on a dataset without the spurious correlation is sufficient to re-weight the importance that the classifier places on different features. This allows the classifier to ignore the spurious features and utilize the desired features for its prediction. Similarly, we apply QUAM on the last layer of the reference model. We compare against cSG-HMC applied to the last layer, MC dropout and Deep Ensembles. MC dropout was applied to the last layer as well, since the EfficientNet architectures utilize dropout only before the last layer. Two versions of Deep Ensembles were considered. First, Deep Ensembles aggregated from pre-trained EfficientNets of different network sizes (DE (all)). Second, Deep Ensembles of retrained last layers on the same encoder network (DE (LL)). While the latter is a more fair comparison to the other methods, the former represents a beneficial scenario for Deep Ensembles: ensembling not just over various parametrizations, but different model architectures. We further utilize the aleatoric uncertainty of the reference model (see aleatoric uncertainty in Eq. (2)) as a baseline to assess the additional benefit of estimating the epistemic uncertainty of the reference model. The Laplace approximation was not feasible to compute on our hardware, even only for the last layer. The results are listed in Tab. 2. Furthermore, we show the ROC curve of misclassification detection as well as the curve of the accuracy over retained samples for the selective prediction experiment in Fig. 4. We observe that using the epistemic uncertainty provided by Deep Ensembles on the last layer has the worst performance throughout all experiments. While Deep Ensembles composed of multiple trained models performed second best on most tasks, MC dropout outperforms it on OOD detection on the ImageNet-O dataset. QUAM outperforms all other methods on all tasks we evaluated, except for ImageNet-A, where it performed on par with Deep Ensembles composed of multiple trained models. Details about all experiments and additional results are given in the Appendix Sec. C.4. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(\mathcal{D}_{\text{ood}}\) & Reference & cSG-HMC & Laplace & MCD & DE & QUAM \\ \hline FMNIST & \(.986_{\pm.005}\) & \(.977_{\pm.004}\) & \(.978_{\pm.004}\) & \(.978_{\pm.005}\) & \(.988_{\pm.001}\) & \(\textbf{.994}_{\pm.001}\) \\ KMNIST & \(.966_{\pm.005}\) & \(.957_{\pm.005}\) & \(.959_{\pm.006}\) & \(.956_{\pm.006}\) & \(.990_{\pm.001}\) & \(\textbf{.994}_{\pm.001}\) \\ EMNIST & \(.888_{\pm.007}\) & \(.869_{\pm.012}\) & \(.877_{\pm.011}\) & \(.876_{\pm.008}\) & \(.924_{\pm.003}\) & \(\textbf{.937}_{\pm.008}\) \\ OMNIGLOT & \(.973_{\pm.003}\) & \(.963_{\pm.004}\) & \(.963_{\pm.003}\) & \(.965_{\pm.003}\) & \(.983_{\pm.001}\) & \(\textbf{.992}_{\pm.001}\) \\ \hline \hline \end{tabular} \end{table} Table 1: MNIST results: AUROC using the epistemic uncertainty of a given, pre-selected model (as in Eq. (2)) as a score to distinguish between ID (MNIST) and OOD samples. Additionally, we report the AUROC when using the aleatoric uncertainty of the reference model (Reference). 
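The selective prediction metric reported below (AUC of accuracy vs. fraction of retained samples) can be sketched as follows; `epistemic` and `correct` are assumed per-sample arrays for the reference model's predictions on the evaluation set.

```python
import numpy as np

def selective_prediction_auc(epistemic, correct, n_points=100):
    """AUC of accuracy vs. retained fraction: keep the samples with the lowest
    epistemic uncertainty and measure accuracy on each retained subset."""
    order = np.argsort(epistemic)                 # most certain samples first
    correct = np.asarray(correct, dtype=float)[order]
    fracs = np.linspace(0.01, 1.0, n_points)
    accs = [correct[: max(1, int(f * len(correct)))].mean() for f in fracs]
    return np.trapz(accs, fracs)                  # area under the retention curve
```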
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(\mathcal{D}_{\text{ood}}\) // Task & Reference & cSG-HMC & MCD & DE (LL) & DE (all) & QUAM \\ \hline ImageNet-O & \(.626_{\pm.004}\) & \(.677_{\pm.005}\) & \(.680_{\pm.003}\) & \(.562_{\pm.004}\) & \(.709_{\pm.005}\) & \(\textbf{.753}_{\pm.011}\) \\ ImageNet-A & \(.792_{\pm.002}\) & \(.799_{\pm.001}\) & \(.827_{\pm.002}\) & \(.686_{\pm.001}\) & \(\textbf{.874}_{\pm.004}\) & \(\textbf{.872}_{\pm.003}\) \\ \hline \hline Misclassification & \(.867_{\pm.007}\) & \(.772_{\pm.011}\) & \(.796_{\pm.014}\) & \(.657_{\pm.009}\) & \(.780_{\pm.009}\) & \(\textbf{.904}_{\pm.008}\) \\ Selective prediction & \(.958_{\pm.003}\) & \(.931_{\pm.003}\) & \(.935_{\pm.006}\) & \(.911_{\pm.004}\) & \(.950_{\pm.002}\) & \(\textbf{.969}_{\pm.002}\) \\ \hline \hline \end{tabular} \end{table} Table 2: ImageNet-1K results: AUROC using the epistemic uncertainty of a given, pre-selected model (as in Eq. (2)) to distinguish between ID (ImageNet-1K) and OOD samples. Furthermore we report the AUROC when using the epistemic uncertainty for misclassification detection and the AUC of the accuracy over fraction retained predictions curve on the ImageNet-1K validation dataset. We also report results for all experiments, using the aleatoric uncertainty of the reference model instead of the epistemic uncertainty (Reference). ## 5 Conclusion We have introduced QUAM, a novel method that quantifies predictive uncertainty using adversarial models. Adversarial models identify important posterior modes that are missed by previous uncertainty quantification methods. We conducted various experiments on deep neural networks, for which epistemic uncertainty is challenging to estimate. On a synthetic dataset, we highlighted the strength of our method to capture epistemic uncertainty. Furthermore, we conducted experiments on large-scale benchmarks in the vision domain, where QUAM outperformed all previous methods. Searching for adversarial models is computationally expensive and has to be done for each new test point. However, more efficient versions can be utilized. One can search for adversarial models while restricting the search to a subset of the parameters, e.g. to the last layer as done for the ImageNet experiments, to the normalization parameters, or to the bias weights. Furthermore, there are a lot of advances for efficient fine-tuning of large models (Houlsby et al., 2019; Hu et al., 2021). Utilizing those for more efficient versions of our algorithm is an interesting direction for future work. Additionally, in the classification setting, one could search for adversarial models only for a subset of classes with highest probability assigned by the reference model. Nevertheless, high stake applications justify this effort to obtain the best estimate of predictive uncertainty for each new test point. Furthermore, QUAM is applicable to quantify the predictive uncertainty of any single given model, regardless of whether uncertainty estimation was considered during the modeling process. This allows to assess the predictive uncertainty of foundation models or specialized models that are obtained externally. ## Acknowledgements We would like to thank Angela Bitto-Nemling, Daniel Klotz and Sebastian Lehner for helpful discussions and feedback during all stages of this research project. The ELLIS Unit Linz, the LIT AI Lab, the Institute for Machine Learning, are supported by the Federal State Upper Austria. IARAI is supported by Here Technologies. 
We thank the projects AI-MOTION (LIT-2018-6-YOU-212), DeepFlood (LIT-2019-8-YOU-213), Medical Cognitive Computing Center (MC3), INCONTROL-RL (FFG-881064), PRIMAL (FFG-873979), S3AI (FFG-872172), DL for GranularFlow (FFG-871302), EPILEPSIA (FFG-892171), AIRI FG 9-N (FWF-36284, FWF-36235), ELISE (H2020-ICT-2019-3 ID: 951847), Stars4Waters (HORIZON-CL6-2021-CLIMATE-01-01). We thank Audi.JKU Deep Learning Center, TGW LOGISTICS GROUP GMBH, Silicon Austria Labs (SAL), FILL Gesellschaft mbH, Anyline GmbH, Google, ZF Friedrichshafen AG, Robert Bosch GmbH, UCB Biopharma SRL, Merck Healthcare KGaA, Verbund AG, GLS (Univ. Waterloo), Software Competence Center Hagenberg GmbH, TUV Austria, Frauscher Sensonic and the NVIDIA Corporation.

Figure 4: Misclassification detection and selective prediction on the ImageNet-1K validation dataset. (a) ROC curves using the epistemic uncertainty of a given, pre-selected model (as in Eq. (2)) to distinguish between the model's correctly and incorrectly classified samples. (b) Accuracy of the given, pre-selected model on subsets composed of samples that exhibit the lowest epistemic uncertainty. QUAM outperforms all other methods in both experiments.
2303.07586
Teacher-Student Knowledge Distillation for Radar Perception on Embedded Accelerators
Many radar signal processing methodologies are being developed for critical road safety perception tasks. Unfortunately, these signal processing algorithms are often poorly suited to run on embedded hardware accelerators used in automobiles. Conversely, end-to-end machine learning (ML) approaches better exploit the performance gains brought by specialized accelerators. In this paper, we propose a teacher-student knowledge distillation approach for low-level radar perception tasks. We utilize a hybrid model for stationary object detection as a teacher to train an end-to-end ML student model. The student can efficiently harness embedded compute for real-time deployment. We demonstrate that the proposed student model runs at speeds 100x faster than the teacher model.
Steven Shaw, Kanishka Tyagi, Shan Zhang
2023-03-14T02:12:00Z
http://arxiv.org/abs/2303.07586v1
# Teacher-Student Knowledge Distillation for Radar Perception on Embedded Accelerators ###### Abstract Many radar signal processing methodologies are being developed for critical road safety perception tasks. Unfortunately, these signal processing algorithms are often poorly suited to run on embedded hardware accelerators used in automobiles. Conversely, end-to-end machine learning (ML) approaches better exploit the performance gains brought by specialized accelerators. In this paper, we propose a teacher-student knowledge distillation approach for low-level radar perception tasks. We utilize a hybrid model for stationary object detection as a teacher to train an end-to-end ML student model. The student can efficiently harness embedded compute for real-time deployment. We demonstrate that the proposed student model runs at speeds 100x faster than the teacher model. ## 1 Introduction With the steady advances in autonomous driving, advanced safety features using one or more sensors are highly desirable. In order to avoid collisions and unintended braking maneuvers, it is crucial to detect potential road obstacles accurately. Although camera and LiDAR-based object detection have been studied in the literature [1, 2], it is only recently that interest in radar-based object detection using ML methods has begun, primarily because of radar's low cost, long-range detection capability, and robustness to poor weather conditions. Traditionally, automotive radar-based object detection is performed through peak detection using simple local thresholding methods such as the Constant False-Alarm Rate (CFAR) algorithm [3]. With the breakthroughs of ML in numerous applications [4, 5, 6, 7], radar-based object perception using ML has attracted attention [8, 9, 10, 11, 12, 13]. In [8, 9], point cloud radar data was used for object detection. Low-level radar spectra-based object perception was studied in [10, 11, 12, 13]. Radar data can be represented in many forms, such as point cloud, compressed data cube (CDC), or low-level spectra. Point cloud data is a detection-level representation retaining the least amount of the original information. The CDC [14] consists of a subsection of beam vectors (BVs) that exceed a CFAR threshold. Unlike point cloud and CDC, low-level spectra retain all of the information returned from the radar. In this work, we propose an ML-based stationary object detection system using low-level spectra data. In [8, 9, 10, 11, 12, 13], large and mostly non-stationary objects such as pedestrians, cars, bikes, and trucks were considered for perception tasks. In contrast, our work focuses on detecting stationary debris objects in the driving path. Labeling low-level radar spectrum data is typically expensive. Large-scale deep models in [14, 10, 11, 12, 13] often require a large amount of labeled data. With limited low-level radar spectrum data, a hybrid system can be used for perception tasks, e.g., object detection. However, many of the constituent traditional signal processing techniques cannot make use of hardware acceleration. Hybrid systems and large-scale deep models have computation efficiency requirements which make practical deployment to embedded systems a great challenge. The essential factors for computing hardware deployed in next-generation electric automobiles are cost and power efficiency. To achieve these goals, hardware accelerators such as matrix multiplication accelerators (MMAs) are becoming increasingly popular.
However, even with MMAs, deploying hybrid systems or large-scale ML models on devices with limited resources, such as automotive platforms, is still challenging. One solution to avoid performance loss is to take advantage of lightweight ML models that can efficiently run on MMAs. Since we typically develop hybrid systems or large-scale deep models for perception tasks, knowledge distillation (KD) from these models (teacher models) to student models can enable fast inference on embedded systems [15]. In this paper, we propose what we believe is the first teacher-student KD approach for stationary object detection using low-level radar spectrum data, where detection knowledge is transferred from a block-by-block (BB) hybrid teacher model to a lightweight end-to-end ML student model intended for deployment to an embedded hardware accelerator. Overall, our main contributions are as follows. First, customized modules in the BB hybrid teacher model, i.e., the non-trainable blocks, effectively take advantage of radar-specific properties. Second, the BB hybrid teacher model does automated labeling purely on low-level radar data and requires minimal to no human labeling effort. Third, the proposed student model is a direct embedded implementation with low power and memory usage. Fourth, the teacher-student model works entirely on low-level radar datasets for the stationary object detection problem, i.e., no other sensor is used to make the detection decisions. Lastly, new evaluation methods for the teacher-student model are proposed and justified for their effectiveness.

## 2 Problem formulation

In our teacher-student framework shown in Figure 1, knowledge from a hybrid teacher is transferred to an end-to-end deep-learning-based student. Since we are focusing on stationary object detection, the radar spectra can be reduced to range azimuth spectrum maps. Therefore, the input spectrum data for the teacher and student models are range azimuth maps. The teacher model processes these range azimuth maps, creating labeled training data. The student model is then trained in a supervised fashion to mimic the same detection task that the teacher model performed. The student can later be easily deployed on an embedded platform, runs much faster and more efficiently, and is product-ready for the automotive market. Following [16], [15], we modify the idea of KD by changing the teacher model. In our work, the teacher is a BB hybrid model containing trainable and non-trainable parametric blocks. By combining traditional radar knowledge and a deep learning approach in a BB hybrid model, we can significantly reduce the data needed for training [17] and the time-consuming data labeling efforts in prior end-to-end deep learning studies [10, 11, 13].

Figure 1: Teacher-Student framework.

Figure 2: Cartesian and polar representation of debris detection for in-lane, out-lane objects.

## 3 Proposed Algorithm

The teacher model contains sophisticated radar processing algorithms, some of which are trainable (e.g., a multi-layer perceptron (MLP)) and some non-trainable (e.g., interpolation, convolution-based signal processing, feature extraction, and post-processing). The non-trainable algorithms cannot benefit from the MMA core on embedded devices, which is designed to run fast matrix multiplication calculations as in Figure 3. In our study, we observed that the teacher model's non-trainable components take the majority of the processing time, making it unsuitable to deploy on embedded devices.
The teacher model takes as input a range-azimuth map, obtained by processing low-level time-series radar data using traditional radar signal processing methods [3], together with host vehicle speed information. The teacher model uses the host vehicle speed for input spectrum interpolation; data interpolation is needed for the purpose of manual feature extraction. We assume that we have \(N\) range-azimuth samples. We use \(\mathbf{X}_{i}\in\mathbb{R}^{464\times 256}\) to denote the range-azimuth map at time \(i\), where \(464\) is the number of range bins, \(256\) is the number of azimuth bins, and \(i=1,\cdots,N\). The output of the teacher model is a probability vector containing the probabilities of in-lane objects at each range bin over time. After applying a decision boundary to the probability vector, we obtain an in-lane object prediction vector \(\mathbf{y}_{i}\in\mathbb{R}^{464\times 1}\) at time \(i\), \(i=1,\cdots,N\). We use \(\mathbf{Y}\in\mathbb{R}^{N\times 464}\) to denote the output decisions of the teacher model for all data samples, where \(\mathbf{Y}_{i,j}\in\{0,1\}\) is the detection decision at range bin \(j\), \(j=0,\cdots,463\): \(\mathbf{Y}_{i,j}=1\) denotes that there is an obstacle at range bin \(j\) at time \(i\), and \(\mathbf{Y}_{i,j}=0\) otherwise. The teacher model performs radar processing tasks involving ego-lane determination using specified geometry information, interpolation of the spectrum data, feature extraction with accumulated information across time, and an MLP detection network, to produce labels for the student model. The student model must learn all of these tasks using only the input spectrum data and the provided labels. Furthermore, the student model is able to provide high detection confidence without feature accumulation over time. Although the raw input to the student model has the same shape as for the teacher, the first step taken by the student is to narrow its focus to the middle angular bins. We denote the input range-azimuth map for the student model by \(\widetilde{\mathbf{X}}_{i}^{s}\in\mathbb{R}^{464\times 30}\), \(i=1,\cdots,N\), where \(30\) is the number of azimuth bins we select. These 30 angular bins ensure capture of the widest portion of the ego lane, including a small buffer. Angular bin selection significantly improves the training time. We now present the network structure of the student model. The student model comprises five convolutional layers with ReLU activations and a Sigmoid activation for the final convolutional layer. Deploying fully connected layers on embedded MMA accelerators is typically so restricted as to be impractical; the usual solution is to utilize 1x1 convolution kernels to emulate fully connected layers. In our case, we use a 3x1 convolutional kernel in the final layer. Since objects usually span multiple range bins, using kernels with an extended receptive field in the range direction allows the model to use information from neighboring range bins to make better predictions. Figure 3 details the teacher-student paradigm along with the architectural details of each block. Figure 3: Teacher-Student embedded architecture. The timings for each block (in milliseconds) demonstrate the advantage of the student algorithm.
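The following Keras sketch illustrates a network of this shape. Only the activations, the final 3x1 kernel, and the \(464\times 30\times 3\) CoordConv-augmented input are fixed by the description above; the filter widths and the azimuth-collapsing layout are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_student(n_range=464, n_azimuth=30):
    # Input: range-azimuth spectrum plus the two CoordConv channels.
    inp = tf.keras.Input(shape=(n_range, n_azimuth, 3))
    x = inp
    for filters in (16, 32, 32, 64):                  # assumed widths
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    # Fifth ReLU layer collapses the azimuth axis (assumed layout).
    x = layers.Conv2D(64, (1, n_azimuth), activation="relu")(x)
    # Final layer: 3x1 kernel along range, per-range-bin sigmoid probability.
    x = layers.Conv2D(1, (3, 1), padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inp, layers.Reshape((n_range,))(x))
```

Only convolutions appear, so every layer maps onto the MMA's fast matrix-multiplication path.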
The collected radar dataset is extremely imbalanced: there are 464 range bins but typically just one in-lane object, which may span only a few range bins. Therefore, we propose a weighted mean squared error (WMSE) to boost the loss values at range bins where in-lane objects are located by the teacher model, given as \[\text{WMSE}=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=0}^{463}w_{i,j}(\hat{\mathbf{Y}}_{i,j}-\mathbf{Y}_{i,j})^{2}, \tag{1}\] where \(\hat{\mathbf{Y}}\in\mathbb{R}^{N\times 464}\) is the predicted probability score matrix, \(\hat{\mathbf{Y}}_{i,j}\) is the predicted score and \(\mathbf{Y}_{i,j}\) is the ground truth for time frame \(i\) and range bin \(j\). Here \(w_{i,j}=1\) when \(\mathbf{Y}_{i,j}=0\) and \(w_{i,j}=\frac{|\mathbf{Y}^{-}|}{|\mathbf{Y}^{+}|}\) when \(\mathbf{Y}_{i,j}=1\), for \(i=1,\cdots,N\) and \(j=0,\cdots,463\), where \(|\mathbf{Y}^{-}|\) denotes the number of negative samples and \(|\mathbf{Y}^{+}|\) the number of positive samples. In the following, we describe the training process of the student model. The input range-azimuth maps are processed by the teacher algorithm to produce the corresponding labels, which are used to train the student model in a supervised setting. Knowledge distillation during training from the teacher to the student model is illustrated in Figure 1. In order to discriminate between in-lane and out-of-lane objects, the student model must learn the ego-lane geometry, which is a function of range as shown in Figure 2. However, learning this function using solely convolutional layers presents a problem because convolutions are translationally invariant. To address this, we implement a modified version of the CoordConv algorithm [18] by appending two additional channels that label the range and azimuth, respectively. This allows the convolution kernels to utilize their location on the range-azimuth map in order to learn the lane-width function. In our study, we found that CoordConv also allows the student to better model the apparent difference in signal strength between near and far objects. For an input range-azimuth map \(\widetilde{\mathbf{X}}_{i}^{s}\in\mathbb{R}^{464\times 30}\), \(i=1,\cdots,N\), we add two more channels, the range CoordConv and azimuth CoordConv channels, denoted by \(\mathbf{R}\in\mathbb{R}^{464\times 30}\) and \(\mathbf{A}\in\mathbb{R}^{464\times 30}\), respectively. The augmented input data is denoted by \(\mathbf{X}_{i}^{s}\in\mathbb{R}^{464\times 30\times 3}\), \(i=1,\cdots,N\), with \(\mathbf{X}_{i}^{s}=[\mathbf{x}_{i}^{s_{0}}\ \mathbf{x}_{i}^{s_{1}}\ \mathbf{x}_{i}^{s_{2}}]\), where \(\mathbf{x}_{i}^{s_{0}}=\widetilde{\mathbf{X}}_{i}^{s}\), \(\mathbf{x}_{i}^{s_{1}}=\mathbf{R}\), and \(\mathbf{x}_{i}^{s_{2}}=\mathbf{A}\). \(\mathbf{R}\) and \(\mathbf{A}\) are given as: \[\mathbf{R}[n,:]=\frac{n}{R_{\text{max}}}, \tag{2}\] \[\mathbf{A}[:,m]=2\left|\frac{m-\frac{A_{\text{max}}}{2}}{A_{\text{max}}}\right|, \tag{3}\] where \(n\), \(n=0,\cdots,463\), is the index of the range bin, \(m\), \(m=0,\cdots,29\), is the index of the azimuth bin, \(R_{\text{max}}\) is the maximum range index, \(A_{\text{max}}\) is the maximum azimuth index, and \(|\cdot|\) denotes the absolute value operation.
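A compact NumPy sketch of Eqs. (1)-(3) follows; variable names are ours, and the flat averaging over all entries in the loss is an assumption about the normalization.

```python
import numpy as np

def coordconv_channels(n_range=464, n_azimuth=30):
    """Range and azimuth CoordConv channels R and A of Eqs. (2)-(3)."""
    r_max, a_max = n_range - 1, n_azimuth - 1
    n = np.arange(n_range)
    m = np.arange(n_azimuth)
    R = np.tile((n / r_max)[:, None], (1, n_azimuth))                        # Eq. (2)
    A = np.tile(2 * np.abs((m - a_max / 2) / a_max)[None, :], (n_range, 1))  # Eq. (3)
    return R, A

def wmse(y_true, y_pred):
    """Weighted MSE of Eq. (1): positives up-weighted by |Y-|/|Y+|."""
    n_pos = np.count_nonzero(y_true == 1)
    n_neg = y_true.size - n_pos
    w = np.where(y_true == 1, n_neg / max(n_pos, 1), 1.0)
    return float(np.mean(w * (y_pred - y_true) ** 2))
```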
The teacher model must look at multiple time steps before predicting the debris detection probability with reasonable confidence. In addition, the teacher model's interpolation step requires a minimum critical speed in order to make predictions. To avoid both of these issues, the student model is only trained on samples for which the teacher was able to positively identify at least one debris object at some range. The intuition is to not penalize the student for extending the maximum distance at which debris can be detected, and to allow the student to fill in the gaps where the teacher could not make predictions due to slow host speed. The same intuitions underpinning this selective training technique have implications for the evaluation metrics, as discussed in the next section. ## 4 Experimental Results Given the lack of a public low-level automotive radar dataset for early stationary object detection, we collected our own dataset. Our data collection vehicle has a high-definition radar mounted on the front bumper. Since it is costly to collect enough open-environment data with stationary in-lane objects, the data were collected in a controlled environment with a set of debris objects, including a dehumidifier, a tire with and without a rim, a wooden pallet, and a stationary car, next to repeated structures such as guardrails and signposts. The objects were placed 300 m from the host vehicle before the beginning of each collection run. We also consider different rotations of each stationary object in order to get a complete characterization. The data was split into 70% training, 20% validation, and 10% testing, along with five-fold validation. We used the TensorFlow library on a Windows i7 CPU to train the machine learning model. The setup of our experiment requires particular metrics for evaluating performance. We have modified the recall metric and call it the R-score. The R-score represents the percentage of true positives predicted by the student relative to the corresponding number of positives predicted by the teacher. The R-score can also allow a \(\pm 1\) range-bin offset. We use \(R_{0}\) to denote the score without range-bin offset allowance, and \(R_{1}\) to denote the score with a \(\pm 1\) allowance in the range-bin direction. Note that the \(R_{0}\) score is equivalent to recall, \(\frac{tp}{tp+fn}\). \(R_{0}\) and \(R_{1}\) are given as \[R_{0}=\frac{\sum_{i=1}^{N}\sum_{j=1}^{M}\mathbf{Y}_{i,j}\hat{\mathbf{Y}}_{i,j}^{\prime}}{\sum_{i=1}^{N}\sum_{j=1}^{M}\mathbf{Y}_{i,j}}, \tag{5}\] \[R_{1}=\frac{\sum_{i=1}^{N}\sum_{j=1}^{M}\mathbf{Y}_{i,j}\,\text{max}[\hat{\mathbf{Y}}_{i,j-1}^{\prime},\hat{\mathbf{Y}}_{i,j}^{\prime},\hat{\mathbf{Y}}_{i,j+1}^{\prime}]}{\sum_{i=1}^{N}\sum_{j=1}^{M}\mathbf{Y}_{i,j}}, \tag{6}\] where \(\mathbf{Y}\) is the ground truth and \(\hat{\mathbf{Y}}^{\prime}\) is the matrix of predicted labels of the student model after a decision boundary is applied. The student can extend the maximum range at which objects are detected; if all teacher samples were used as ground truth, this extended-range performance of the student would be judged as false positives. Therefore, we have modified the precision score, \(\frac{tp}{tp+fp}\), and call it the P-score. The \(P_{1}\) precision score allows a \(\pm 1\) range-bin offset for false-positive identification, while the \(P_{0}\) precision score is evaluated without any offset allowance. The P-score and the true negative rate, specificity \(=\frac{tn}{tn+fp}\), are only evaluated on samples for which the teacher model has at least one positive prediction of an object at some range.
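A NumPy sketch of the two R-scores of Eqs. (5)-(6) follows; the zero-padding at the range-axis boundaries is our choice, and the teacher labels are assumed to contain at least one positive, as guaranteed by the selective training above.

```python
import numpy as np

def r_scores(Y, Y_hat):
    """R0 (plain recall vs. teacher labels) and R1 (+/-1 range-bin tolerance).
    Y and Y_hat are {0,1} arrays of shape (N, 464)."""
    pos = Y.sum()                                    # assumed > 0 by selective training
    r0 = (Y * Y_hat).sum() / pos                     # Eq. (5)
    padded = np.pad(Y_hat, ((0, 0), (1, 1)))         # zero-pad the range axis
    widened = np.maximum.reduce([padded[:, :-2], padded[:, 1:-1], padded[:, 2:]])
    r1 = (Y * widened).sum() / pos                   # Eq. (6)
    return r0, r1
```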
Table 1 shows results from the evaluation sets used in the experiment. The student extended the maximum range of debris detection by an average of 47 meters over the evaluation sets. The student scored an average of 0.91 for the \(R_{1}\) recall score, an average of 0.93 for the \(P_{1}\) precision score, and an average of 0.99 for the specificity score. The student also reduced the processing time on the desktop CPU benchmarks by an average factor of 36 over the evaluation sets. This speed-up is even greater on the embedded device, where the student runs 100x faster than the teacher; see Figure 3. ## 5 Conclusion The student model was successful in learning the necessary mappings from the teacher, and it exhibits many additional benefits over the teacher. The teacher's processing speed is dependent on the host vehicle's speed: as the host speed increases, the teacher produces more interpolated frames to process. In contrast, the student's processing time is independent of host speed, allowing for constant runtime and memory usage, which are highly desirable attributes for embedded deployment. The student also exceeds the teacher's performance in the low-speed regime, where the teacher fails to make predictions. Through selective training, the student can take what it learns from the teacher's higher-speed examples and apply it to the low-speed samples. The student model vastly simplifies deployment onto embedded accelerators, all while running faster, using less memory and less power, and extending the maximum range of detections.
2306.02220
Kramers-Wannier Duality and Random Bond Ising Model
We present a new combinatorial approach to the Ising model incorporating arbitrary bond weights on planar graphs. In contrast to existing methodologies, the exact free energy is expressed as the determinant of a set of ordered and disordered operators defined on vertices and dual vertices respectively, thereby explicitly demonstrating the Kramers-Wannier duality. The implications of our derived formula for the random bond Ising model are further elucidated.
Chaoming Song
2023-06-04T00:22:59Z
http://arxiv.org/abs/2306.02220v1
# Kramers-Wannier Duality and Random Bond Ising Model ###### Abstract We present a new combinatorial approach to the Ising model incorporating arbitrary bond weights on planar graphs. In contrast to existing methodologies, the exact free energy is expressed as the determinant of a set of ordered and disordered operators defined on vertices and dual vertices respectively, thereby explicitly demonstrating the Kramers-Wannier duality. The implications of our derived formula for the random bond Ising model are further elucidated. **Introduction:** The well-established duality between order and disorder phases observed in the two-dimensional Ising model was initially exploited by Kramers and Wannier to pinpoint its criticality [1], predating Onsager's celebrated solution for the free energy [2; 3; 4]. Furthermore, Kadanoff and Ceva illustrated that the correlation function can be derived by contemplating the disorder operator defined on the dual graph [5], providing substantial insight into the underlying physics [6]. However, the standard methodologies used for the computation of the free energy--either algebraic or combinatorial [7]--exhibit the Kramers-Wannier (KW) duality only in the final form, after extensive calculations. Despite a tremendous volume of work over the last century devoted to the exact solution of the Ising model, a formula for calculating its free energy with manifest KW duality is still absent in the literature. This gap limits the utility of the exact free energy in broader contexts. For example, in the case of the random-bond Ising model (RBIM) employed for understanding spin glasses, deriving an explicit free energy remains challenging. It is worth mentioning that the combinatorial approach, pioneered by Kac and Ward [8], provides an alternative pathway to derive Onsager's free energy. The Kac-Ward methodology hinges on the path-integral form on a planar graph \(G=(V,E)\) with \(n=|V|\) and \(m=|E|\), which is based on an elegant identity \[\zeta_{F}(G,u)^{-1}\equiv\prod_{[p]}\left(1-(-1)^{w(p)}u^{l(p)}\right)=Z_{\rm{Ising}}^{2}(1-u^{2})^{m}, \tag{1}\] where \(u=\tanh(\beta J)\) represents the coupling constant. This identity draws an analogy with Riemann's zeta function, where the Euler product is over all prime cycles, with \(l(p)\) and \(w(p)\) representing the length and winding number of the prime cycle \(p\), respectively. The subscript \(F\) highlights the fermionic character of the Ising model, which assigns a negative weight to odd numbers of windings. Initially conjectured by Feynman [9; 10], identity (1) was later formally proved by Sherman [11; 12; 9] and Burgoyne [10]. Based on Eq. (1), Kac and Ward demonstrated that \[\zeta_{F}(G,u)^{-1}=\det(I_{2m}-uT_{KW}), \tag{2}\] where the Kac-Ward (KW) operator \(T_{KW}\) is a \(2m\times 2m\) matrix, further elucidated in the subsequent discussion. A variant of the combinatorial formulation, mapping the Ising model to the dimer model using Pfaffians, was developed by Green and Hurst [13] and later expanded by others [14; 15; 16; 17]. This approach essentially corresponds to a skew-symmetric version of Eq. (2) via a similarity transformation. More recently, a resurgence of the combinatorial approach [18; 19; 20] has focused on the discrete version of the conformal invariance of the critical Ising model on planar graphs [21]. The combinatorial approach can be seamlessly generalized to accommodate an arbitrary set of bond weights \(\mathbf{u}=\{u_{e}\,|\,e\in E\}\), thereby presenting a robust numerical tool for probing the RBIM.
However, Eq. (2) reveals little physical insight. For example, the KW duality indicates that \(\zeta_{F}\) should remain invariant (up to some prefactor) under the transformation \(u\to u^{*}=(1-u)/(1+u)\) on the dual graph \(G^{*}\), i.e., \(\zeta_{F}(G,u)\sim\zeta_{F}(G^{*},u^{*})\). Regrettably, the manifestation of the KW duality only emerges after the resolution of the determinant, a process that is impractical for many disordered systems. This hidden symmetry within Eq. (2) remains far from obvious and poses a significant challenge; it was only recently proven that Eq. (2) indeed satisfies KW duality under general conditions [22; 23]. Consequently, despite its elegance, the combinatorial approach is primarily utilized as a numerical tool. To the best of the author's knowledge, no explicit free energy formula demonstrating manifest KW duality for an arbitrary planar graph and weight set has yet been identified. In this letter, we propose a new free-energy formula for the Ising model with arbitrary bond weights on planar graphs. Contrasting with existing methodologies, our formula is expressed as the determinant of the summation of local ordered and disordered operators, each defined on the vertices \(V\) and dual vertices \(V^{*}\), thereby explicitly exhibiting the KW duality. In addition, it establishes a tangible connection with non-local ordered and disordered operators, offering insights into the nature of the duality. We elucidate the implications of our formula in the context of the RBIM. **Ihara zeta function:** To hint at the existence of the manifest dual formula, we initiate our discussion with a warm-up exercise by considering the bosonic counterpart of Eq. (1), \[\zeta_{B}(u)^{-1}=\prod_{[p]}\left(1-u^{l(p)}\right). \tag{3}\] This definition, known as the Ihara zeta function [24; 25], serves as a p-adic analogue of the Selberg zeta function that counts the number of closed geodesics on a hyperbolic surface. Analogous to Eq. (2), we express \[\zeta_{B}(u)^{-1}=\det(I_{2m}-uT), \tag{4}\] where \(T\) denotes the \(2m\times 2m\) Hashimoto edge adjacency operator [26], in analogy to the KW operator \(T_{KW}\). Specifically, \(T\) acts on the space of oriented edges \(\{E,\bar{E}\}\), where an edge \(\bar{e}\in\bar{E}\) symbolizes the directional inverse of a corresponding edge \(e\in E\). The matrix element \(T_{e^{\prime},e}=1\) only if the oriented edge \(e^{\prime}\) follows \(e\) backtracklessly, meaning that the terminal vertex of \(e\) is the starting vertex of \(e^{\prime}\) and \(e^{\prime}\neq\bar{e}\). The equivalence of Eq. (3) and Eq. (4) can be demonstrated directly by applying the logarithm and matching the power expansion term by term. For a regular graph of degree \(q+1\), the Ihara zeta function displays self-duality under the transformation \(u\to q/u\), which mirrors Riemann's functional equation. However, like its fermionic counterpart, Eq. (4) does not explicitly reveal this duality. Intriguingly, a second formula exists, as proposed in Ihara's original paper [24], \[\zeta_{B}(u)^{-1}=(1-u^{2})^{m-n}\det(I_{n}-uA+u^{2}Q), \tag{5}\] where \(A\) is the adjacency matrix, and \(Q\) is the diagonal matrix of vertex degrees diminished by one. Assuming the graph is regular, that is, \(Q=qI_{n}\), it becomes evident that \(\zeta_{B}^{-1}(G,u)\sim\zeta_{B}^{-1}(G,q/u)\). Consequently, Eq. (5) presents a manifestly dual formula for the bosonic zeta function. The derivation of Eq. (5) from Eq. (4) provides illuminating insights.
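Before turning to that derivation, Eqs. (4) and (5) are easy to verify numerically on a small example. The following sketch (ours, for illustration) checks them on the complete graph \(K_{4}\), which is 3-regular, so \(q=2\):

```python
import itertools
import numpy as np

n = 4
edges = list(itertools.combinations(range(n), 2))      # K4: m = 6 edges
oriented = edges + [(b, a) for (a, b) in edges]        # 2m = 12 oriented edges

# Hashimoto operator: T[e', e] = 1 iff e' follows e without backtracking.
T = np.zeros((12, 12))
for i, (a, b) in enumerate(oriented):                  # e  = a -> b
    for j, (c, d) in enumerate(oriented):              # e' = c -> d
        if b == c and d != a:
            T[j, i] = 1.0

A = np.ones((n, n)) - np.eye(n)                        # adjacency matrix of K4
Q = 2.0 * np.eye(n)                                    # vertex degree minus one

u = 0.3
lhs = np.linalg.det(np.eye(12) - u * T)                                     # Eq. (4)
rhs = (1 - u**2) ** (6 - n) * np.linalg.det(np.eye(n) - u * A + u**2 * Q)   # Eq. (5)
assert np.isclose(lhs, rhs)
```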
Here, we present a streamlined approach based on the original proof by Bass [27; 28; 29]. We introduce the matrix \(S=T+J\), where \(J_{e^{\prime},e}=\delta_{e^{\prime},\bar{e}}\), so that \(S_{e^{\prime},e}\) enumerates all successors \(e^{\prime}\) following \(e\), including the inverse \(\bar{e}\). A key observation is the factorization \(S=Y^{t}X\), where \(X\) and \(Y\) are \(n\times 2m\) matrices with \(X_{v,e}=1\) if \(v\) is the starting vertex and \(Y_{v,e}=1\) if \(v\) is the terminal vertex of the oriented edge \(e\). Leveraging this factorization, we obtain \[\det(I_{2m}-uT)=\det\left((I_{2m}+uJ)-uS\right)=(1-u^{2})^{m}\det\left(I_{n}-uX(I_{2m}+uJ)^{-1}Y^{t}\right)=(1-u^{2})^{m-n}\det\left(I_{n}-uA+u^{2}Q\right),\] where the second equality ensues from the generalized matrix determinant lemma and \(\det(I_{2m}+uJ)=(1-u^{2})^{m}\). The third equality employs the identity \((I_{2m}+uJ)^{-1}=(1-u^{2})^{-1}(I_{2m}-uJ)\), while noting \(A=XY^{t}\) and \(Q=XJY^{t}-I_{n}\). This completes the proof of Eq. (5). The critical component of this proof is the use of the generalized matrix determinant lemma, predicated on the factorability of \(S\), which can be reinterpreted as an index theorem over a chain complex [30]. Figure 1: (a) The embedding of both \(G\) and its dual \(G^{*}\). The quadrilateral \(q\) is delineated by a vertex \(v\) and a neighboring dual vertex \(v^{*}\), along with their respective edges. The relationships \(\theta_{L}+\theta_{R}^{*}=\theta_{R}+\theta_{L}^{*}=\pi/2\) are satisfied. (b) The local order and disorder operators \(d_{v}\) and \(d_{v^{*}}^{\dagger}\) for quadrilaterals. Each operator acts as a curl operator around the vertex \(v\) and the dual vertex \(v^{*}\), respectively. Consider a more generalized setup involving an arbitrary set of weights \(\mathbf{u}=\{u_{e}\,|\,e\in E\}\) assigned to the edges. By employing a similar approach, we obtain \[\zeta_{B}(\mathbf{u})^{-1}=\prod_{e\in E}(1-u_{e}^{2})\det\left(I_{n}-\tilde{A}(\mathbf{u})+\tilde{D}(\mathbf{u})\right), \tag{6}\] where \(\tilde{A}\) represents the weighted adjacency matrix defined as \(\tilde{A}_{v,v^{\prime}}=\frac{u_{vv^{\prime}}}{1-u_{vv^{\prime}}^{2}}\), and \(\tilde{D}\) denotes the weighted degree matrix defined as \(\tilde{D}_{v,v}=\sum_{(v,v^{\prime})\in E}\frac{u_{vv^{\prime}}^{2}}{1-u_{vv^{\prime}}^{2}}\). Specifically, for \(\pm J\) disorders, i.e., \(u_{e}=u\tau_{e}\) with \(\tau_{e}=\pm 1\), Equation (6) simplifies to \[\zeta_{B}(u)^{-1}=(1-u^{2})^{m-n}\det(I_{n}-uA^{\prime}+u^{2}Q),\] where \(A^{\prime}\) includes entries of \(0\) and \(\pm 1\) to account for the bond disorders. It becomes evident that \(A^{\prime}+I\) serves as the adjacency matrix of a percolation model, thereby mapping the Ihara zeta function with \(\pm J\) disorder to a percolation problem. **Manifestly dual formula:** Consider the fermionic case represented by Eq. (1), where the winding number is suitably defined only after immersion into a surface with a given spin structure. This marks a notable distinction from its bosonic counterpart in Eq. (3) and creates a host of technical challenges when applying an approach similar to the one used for the bosonic zeta function. For ease of discussion, our examination is confined to a planar graph \(G\) embedded in the plane; however, extending this discussion to surfaces of higher genus is straightforward. To explicitly reveal the KW duality, we embed the dual graph \(G^{*}\) over \(G\).
In this arrangement, each vertex of \(G\) is located inside a face of \(G^{*}\), and vice versa, and each edge in \(G\) intersects the corresponding edge in \(G^{*}\). For technical convenience, we require these intersections to be perpendicular; note that the embedding need not be isoradial in general. Figure 1a illustrates such an embedding, where red and blue represent \(G\) and \(G^{*}\), respectively. A crucial element in our construction involves the quadrilaterals (gray domain in Fig. 1a). Each quadrilateral is formed by two neighboring edges in both \(G\) and \(G^{*}\), along with a vertex pair \((v,v^{*})\), where \(v\in V\) and \(v^{*}\in V^{*}\). We denote the angles associated with the left and right edges of \(v\) and \(v^{*}\) as \(\theta_{L}\) and \(\theta_{R}\), and \(\theta_{L}^{*}\) and \(\theta_{R}^{*}\), respectively. These angles satisfy the relations \(\theta_{L}+\theta_{R}^{*}=\theta_{R}+\theta_{L}^{*}=\pi/2\), as depicted in Fig. 1a. The collection of \(2m\) quadrilaterals, which collectively tile the entire plane, plays a critical role in our new formulation. The Kac-Ward operator \(T_{KW}\) in Eq. (2) is defined similarly to Hashimoto's edge adjacency operator \(T\); however, we must introduce a phase change between two consecutive edges to account for the fermionic nature. Specifically, we impose a gauge transformation \[(T_{KW})_{e^{\prime},e}=e^{i\alpha(e^{\prime},e)/2}=ie^{-i\beta(\bar{e},e^{\prime})/2}\] if edge \(e^{\prime}\) follows \(e\) without backtracking, where \(\alpha(e^{\prime},e)\) is the exterior angle from \(e\) to \(e^{\prime}\), and \(\beta(\bar{e},e^{\prime})\) is the interior angle between \(e^{\prime}\) and the inverse \(\bar{e}\) (Fig. 2a). This condition ensures that the summation of half exterior angles contributes a total \(\pi\) phase change over a cycle, effectively capturing the fermionic sign in Eq. (1).
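To make the last statement explicit (our paraphrase, not from the original text): along a closed path \(p\) the exterior angles sum to \(2\pi w(p)\), where \(w(p)\) is the winding (turning) number of the tangent, so the accumulated gauge phase is \[\prod_{k}e^{i\alpha(e_{k+1},e_{k})/2}=\exp\Big(\frac{i}{2}\sum_{k}\alpha(e_{k+1},e_{k})\Big)=e^{i\pi w(p)}=(-1)^{w(p)},\] which is precisely the fermionic sign in Eq. (1); by Whitney's theorem this also equals \(-(-1)^{c(p)}\), with \(c(p)\) the number of self-crossings of \(p\).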
Following an approach similar to that applied to the bosonic zeta function, we introduce the gauged successor operator \(S^{\prime}=T_{KW}-iJ\), appending an additional element between each edge \(e\) and its inverse \(\bar{e}\) with weight \(ie^{-i\pi}=-i\). However, \(S^{\prime}\) is no longer factorable, preventing us from directly applying the matrix determinant lemma in this scenario. To tackle this issue, we follow the strategy outlined in Refs. [22; 23], introducing an operator \(Q\) between neighboring edges \(e\) and \(e^{\prime}\) that share a common starting vertex \(v\). Specifically, for each quadrilateral \(q\), the operator \(Q\) maps its right edge \(e\) to the left edge \(e^{\prime}\) while inducing a phase shift \(Q_{e,e^{\prime}}=e^{i(\theta_{L}+\theta_{R})/2}\). As illustrated in Fig. 2b, \(Q_{e^{\prime},e^{\prime\prime}}e^{-i\beta(e^{\prime\prime},e)/2}=e^{-i\beta(e^{\prime},e)/2}\) if \(e^{\prime}\neq\bar{e}\). However, the fermionic nature possesses a non-trivial monodromy, resulting in a branch cut after a rotation of \(2\pi\). This observation leads to the discontinuity \[S^{\prime}-QS^{\prime}=-2iJ. \tag{7}\] As the operator \(Q\) acts on the edges attached to the quadrilaterals, it admits a natural factorization \(Q=L^{t}R\), where \(L\) and \(R\) are \(2m\times 2m\) matrices associating each quadrilateral \(q\) with its left and right edges, respectively; we assign the weights \(L_{q,e}=e^{i\theta_{L}(q)/2}\) and \(R_{q,e}=e^{i\theta_{R}(q)/2}\). Building on this factorization, we find \[\det(I-Q)\det(I-uT_{KW})=\det(I+iuJ-Q(I-iuJ))=(1+u^{2})^{m}\det(I-R(I-iuJ)(I+iuJ)^{-1}L^{t})=(1+u^{2})^{m}\det\left(I-\frac{1-u^{2}}{1+u^{2}}RL^{t}-\frac{2u}{1+u^{2}}Re^{-i\pi/2}JL^{t}\right),\] where the first equality follows from Eq. (7), and the second stems from the generalized matrix determinant lemma and \(\det(I+iuJ)=(1+u^{2})^{m}\). We then introduce the discrete curl operator \(d=RL^{t}\), acting on the space of quadrilaterals, with elements \[d_{q^{\prime},q}=e^{i(\theta_{L}(q)+\theta_{R}(q^{\prime}))/2},\] when the quadrilaterals \(q^{\prime}\) and \(q\) share a common edge with \(q^{\prime}\) positioned counterclockwise next to \(q\). This definition implies that \(d\) decomposes into a set of operators \(d_{v}\), each acting on the quadrilaterals around vertex \(v\); therefore \(d=\sum_{v\in V}d_{v}\), as depicted in Fig. 1b. Similarly, the dual operator \(d_{*}=\sum_{v^{*}\in V^{*}}d_{v^{*}}\) is the summation of the operators \(d_{v^{*}}\) around the dual vertices \(v^{*}\). Observe that \((d_{*}^{\dagger})_{q^{\prime},q}=e^{-i(\pi/2-(\theta_{L}(q)+\theta_{R}(q^{\prime}))/2)}\) holds if \(q\) possesses a left edge \(e\) and \(q^{\prime}\) a right edge \(\bar{e}\) (see Fig. 1). This yields \(d_{*}^{\dagger}=Re^{-i\pi/2}JL^{t}\), which acts on the quadrilaterals around dual vertices clockwise. Taken together, we obtain \[\zeta_{F}^{-1}=2^{-n}(1+u^{2})^{m}\det\left(I_{2m}-\frac{1-u^{2}}{1+u^{2}}d-\frac{1-{u^{*}}^{2}}{1+{u^{*}}^{2}}d_{*}^{\dagger}\right), \tag{8}\] where we employ the identities \(\det(I-Q)=(1-(-1))^{n}=2^{n}\) and \(\frac{1-{u^{*}}^{2}}{1+{u^{*}}^{2}}=\frac{2u}{1+u^{2}}\). Drawing parallels with Eq. (6), we can generalize Eq. (8) to incorporate a set of bond weights \(\mathbf{u}\) as follows: \[\zeta_{F}^{-1}(G,\mathbf{u})=2^{-n}\prod_{e\in E}(1+u_{e}^{2})\det\left(I_{2m}-D(\mathbf{u})-D_{*}^{\dagger}(\mathbf{u}^{*})\right), \tag{9}\] where \(D=R\frac{1-\mathbf{u}^{2}}{1+\mathbf{u}^{2}}L^{t}\) and \(D_{*}\) represent the weighted curl operators around the vertices in \(V\) and \(V^{*}\), respectively. These operators permit a natural decomposition \[D(\mathbf{u})=\sum_{v}D_{v}(\mathbf{u}),\quad D_{*}(\mathbf{u}^{*})=\sum_{v^{*}}D_{v^{*}}(\mathbf{u}^{*}). \tag{10}\] The determinant in Eq. (9) is manifestly symmetric under the dual transformation, thereby reinstating the KW duality for an arbitrary set of bond weights [22; 23]: \[2^{-|V|}\prod_{e\in E}(1+u_{e})\zeta_{F}(G,\mathbf{u})=2^{-|V^{*}|}\prod_{e\in E}(1+u_{e}^{*})\zeta_{F}(G^{*},\mathbf{u}^{*}). \tag{11}\]
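As a quick worked check of the identity just invoked (our computation): with \(u^{*}=(1-u)/(1+u)\), \[\frac{1-{u^{*}}^{2}}{1+{u^{*}}^{2}}=\frac{(1+u)^{2}-(1-u)^{2}}{(1+u)^{2}+(1-u)^{2}}=\frac{4u}{2(1+u^{2})}=\frac{2u}{1+u^{2}},\] so the two coefficients in Eq. (9) are exactly interchanged under \(u\leftrightarrow u^{*}\). On the self-dual square lattice, setting \(u=u^{*}\) gives \(u^{2}+2u-1=0\), i.e., \(u_{c}=\sqrt{2}-1\), recovering the Kramers-Wannier critical coupling.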
**Order and disorder operators:** Equation (9) shows that the fermionic zeta function \(\zeta_{F}^{-1}=\det(I-H(\mathbf{u}))\) constitutes the characteristic polynomial of a non-Hermitian Hamiltonian: \[H\equiv D(\mathbf{u})+D_{*}^{\dagger}(\mathbf{u}^{*})=\sum_{v\in V}D_{v}(\mathbf{u})+\sum_{v^{*}\in V^{*}}D_{v^{*}}^{\dagger}(\mathbf{u}^{*}).\] Assuming \(\pm J\) disorders, \(u_{e}=u\tau_{e}\) with \(\tau_{e}=\pm 1\), this simplifies to \[H=\frac{1-u^{2}}{1+u^{2}}\sum_{v\in V}d_{v}+\frac{2u}{1+u^{2}}\sum_{v^{*}\in V^{*}}\tilde{d}_{v^{*}}^{\dagger}, \tag{12}\] where \((\tilde{d}_{v^{*}})_{q,q^{\prime}}=e^{i(\theta_{L}(q)+\theta_{R}(q^{\prime}))/2}\tau_{e}\) for two quadrilaterals sharing the dual vertex \(v^{*}\) and edge \(e\). Evidently, only the dual curl operator \(D_{*}\) carries the disorder, while the curl operator \(D\) remains unaffected by it. Thus, we interpret \(D_{v}\) and \(D_{v^{*}}\) as local order and disorder operators respectively, echoing the nonlocal disorder operator introduced by Kadanoff and Ceva [5]. To establish the connection explicitly, consider a defect that changes a line \(\mathcal{L}\) of ferromagnetic bonds to antiferromagnetic, i.e., \(\tau_{e}=-1\) only for \(e\in\mathcal{L}\). The corresponding correlation function of nonlocal disorder operators involves a shift in the free energy: \[\Delta F=-\frac{kT}{2}\ln\det(I-H^{\prime}G), \tag{13}\] where \(G=(I-H_{0})^{-1}\) represents the Green's function of the ferromagnetic Hamiltonian \(H_{0}=\frac{1-u^{2}}{1+u^{2}}d+\frac{1-{u^{*}}^{2}}{1+{u^{*}}^{2}}d_{*}^{\dagger}\), and the defect operator is \(H^{\prime}=\frac{2u}{1+u^{2}}\sum_{v^{*}\in\mathcal{L}}(\tilde{d}_{v^{*}}-d_{v^{*}})\). It becomes apparent that nonlocal disorder operators correspond to a line integral over the local disorder operator \(D_{v^{*}}\). Consequently, the KW duality presented in Eq. (9) reflects an exact interchange of local order and disorder operators under the duality transformation. **Implications for the RBIM:** We now turn to the implications of our new formula for the RBIM. For technical convenience, our discussion primarily focuses on \(\pm J\) disorder on a square lattice, i.e., \(P(\tau)=p\,\delta(\tau-1)+(1-p)\,\delta(\tau+1)\); a straightforward generalization applies to arbitrary disorder. As we demonstrated earlier, the free energy of \(\pm J\) bond disorder is dictated by the spectrum of Eq. (12), where only the disorder operator \(\tilde{d}_{*}\) accounts for the bond disorders \(\tau_{e}\). In the absence of disorder (\(p=1\)), it is straightforward to diagonalize \(H\) via the Fourier transform, which recovers Onsager's free energy. Considering a scenario where \(p\) is close to \(1\), we can apply the Dyson series expansion to Eq. (13). This approach enables us to determine the critical coupling \(u_{c}(p)\) as a series expansion in the disorder probability \(1-p\). At leading order, we find \[u_{c}(p)=(\sqrt{2}-1)(1+2\sqrt{2}(1-p))+O((1-p)^{2}). \tag{14}\] This result aligns with the findings initially obtained using the replica trick [31]. We now turn our attention to the zero-temperature limit \(\beta\to\infty\) of Eq. (12). In this limit, \((1-u^{2})/(1+u^{2})\approx 2e^{-2\beta J}\) and \(2u/(1+u^{2})\approx 1\). Consequently, the disorder operators dominate the spectrum in this scenario. Direct computation yields \[\det(I-\tilde{d}_{v^{*}}^{\dagger})=1+W_{v^{*}}, \tag{15}\] where the frustration \(W_{v^{*}}\equiv\prod_{e\in P(v^{*})}\tau_{e}=\pm 1\) is defined as the product of the edge disorders around the plaquette of the corresponding dual vertex \(v^{*}\). It is clear that when the plaquette is frustrated, i.e., \(W_{v^{*}}=-1\), the determinant acquires a correction from the order operator \(d\) of order at most \(e^{-2\beta J}\). On average, there is a \(\frac{1-(2p-1)^{4}}{2}\) chance that \(W_{v^{*}}=-1\), which results in a lower bound on the ground state energy density: \[e/J\geq-2+\frac{1-(2p-1)^{4}}{2}. \tag{16}\] This finding is consistent with the results of Refs. [32; 33] obtained using geometric approaches.
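For concreteness, Eq. (15) can be verified directly in the square-lattice case (a worked special case, ours): there every quadrilateral has \(\theta_{L}=\theta_{R}=\pi/4\), so around a plaquette \(\tilde{d}_{v^{*}}^{\dagger}\) is a weighted 4-cycle with entries \(e^{-i\pi/4}\tau_{e}\), and \[\det(I-\tilde{d}_{v^{*}}^{\dagger})=1-\prod_{k=1}^{4}e^{-i\pi/4}\tau_{e_{k}}=1-e^{-i\pi}\prod_{e\in P(v^{*})}\tau_{e}=1+W_{v^{*}},\] using that the determinant of the identity minus a single weighted cycle is one minus the product of its weights.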
**Discussion:** In conclusion, we have unveiled a novel combinatorial approach to Ising models with arbitrary bond weights. In contrast to previous methods, our new formulation distinctly manifests the KW duality via order and disorder operators. We have presented preliminary implications for the RBIM at leading order, and our findings are consistent with results derived from alternative approaches. However, our method has the distinct advantage of integrating seamlessly with the standard framework of perturbative techniques, thereby simplifying the extension to higher-order terms. This also opens up the potential to employ non-perturbative methodologies for a more nuanced understanding of the phase diagram of the RBIM. Additionally, our approach can be directly applied to other planar graphs, such as triangular and hexagonal lattices. On the other hand, it has been suggested, based on a replica argument, that the RBIM may exhibit a disorder duality that would locate the multicritical point [34]. The exactness of this duality and its connection to our method remain unclear; we aim to explore these questions in future research. Our methodology can also be readily generalized to anyonic statistics by considering a non-half-integer phase shift, a topic we plan to discuss elsewhere. Further, a non-Abelian generalization seems feasible. These generalizations have close ties with parafermionic models [35]. Moreover, given that the Ihara zeta function generalizes to higher-dimensional objects [36], it is enticing to contemplate a similar higher-dimensional generalization of its fermionic counterpart. Such an extension may hold promise for a solution to the 3D Ising model.
2307.02109
Aeroacoustic investigation of airfoil at near stall conditions
This paper presents a detailed aeroacoustic investigation of a Controlled-Diffusion airfoil at near stall condition. The study aims at answering two research questions: identify the flow mechanism responsible for separation noise for an airfoil near stall conditions and whether the noise is generated by a dipole for airfoil close to stall and can be quantified by Amiet's diffraction theory. The study uses synchronized PIV, RMP and far-field microphone measurements to perform experiments at two chord based Reynolds numbers of about 150,000 and 250,000. The results show that when the airfoil is placed at a higher angle of attack, such as $15^{\circ}$, strong amplification of flow disturbance is seen, resulting in the rolling up of the shear layer in the aft-region of the airfoil, forming large coherent structures. While these rollers play a central role in the increase in noise due to flow separation, the flapping of shear layer does not contribute to the separation noise. The present study conclusively shows that separation noise is dipolar in nature, and that the quadrupolar contribution for low-speed airfoils at near-stall conditions can be neglected. However, the increase in flow disturbances measured close to the trailing-edge of the airfoil implies that the assumption of small amplitude disturbance is no longer valid, which is the central premise of the thin linearized airfoil theory. Outside the frequency range at which flow separation operates, Amiet's theory is able to predict the far-field noise even at high angles of attack.
Prateek Jaiswal, Jose Rendón, Stéphane Moreau
2023-07-05T08:36:09Z
http://arxiv.org/abs/2307.02109v1
# Aeroacoustic investigation of airfoil at near stall conditions ###### Abstract This paper presents a detailed aeroacoustic investigation of a Controlled-Diffusion airfoil at near stall condition. The study aims at answering two research questions: identify the flow mechanism responsible for separation noise for an airfoil near stall conditions and whether the noise is generated by a dipole for airfoil close to stall and can be quantified by Amiet's diffraction theory. The study uses synchronized PIV, RMP and far-field microphone measurements to perform experiments at two chord based Reynolds numbers of about 150,000 and 250,000. The results show that when the airfoil is placed at a higher angle of attack, such as \(15^{\circ}\), strong amplification of flow disturbance is seen, resulting in the rolling up of the shear layer in the aft-region of the airfoil, forming large coherent structures. While these rollers play a central role in the increase in noise due to flow separation, the flapping of shear layer does not contribute to the separation noise. The present study conclusively shows that separation noise is dipolar in nature, and that the quadrupolar contribution for low-speed airfoils at near-stall conditions can be neglected. However, the increase in flow disturbances measured close to the trailing-edge of the airfoil implies that the assumption of small amplitude disturbance is no longer valid, which is the central premise of the thin linearized airfoil theory. Outside the frequency range at which flow separation operates, Amiet's theory is able to predict the far-field noise even at high angles of attack. ## Nomenclature \begin{tabular}{l l} \(C\) & Airfoil chord \\ \(c_{0}\) & Speed of sound \\ \(C_{p}\) & Mean pressure coefficient \\ \(C_{prms}\) & Root-mean-square of the wall-pressure coefficient \\ \(\overline{E_{11}}\) & Pre-multiplied turbulent energy spectra \\ \(H\) & Boundary layer shape factor \\ \(M_{\infty}\) & Inlet Mach number \\ \(p_{\infty}\) & Inlet static pressure \\ \(p_{rms}\) & Root-mean-square of the wall pressure \\ \(p^{\prime}_{a}\) & Far-field acoustic pressure \\ \(p^{\prime}_{w}\) & Fluctuating wall pressure \\ \(R_{ij}\) & Second-order two-point zero-time-delay correlation \\ \(Re_{c}\) & Reynolds number based on the chord \\ \(S_{pp}\) & Far-field acoustic power spectral density \\ \(u_{i}\) & Fluctuating velocity component \\ \(U_{c}\) & Convective speed of wall-pressure fluctuations \\ \(U_{\infty}\) & Inlet velocity \\ \(U_{e}\) & Boundary layer edge velocity \\ \(U_{1},U_{2},U_{3}\) & Mean velocity in trailing-edge reference frame \\ \(\overline{u_{1}u_{1}},\overline{u_{2}u_{2}},\overline{-u_{1}u_{2}}\) & Root-mean-square of velocity fluctuations in trailing-edge reference frame \\ \(-\rho\,\overline{u_{1}u_{2}}_{max}\) & Maximum Reynolds shear stress \\ \(V_{x},V_{y}\) & Mean velocity in wind tunnel reference frame \\ \(x,y,z\) & Wind tunnel coordinate system \\ \(x_{1},x_{2},x_{3}\) & Coordinate system aligned with the airfoil trailing edge \\ \(x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3}\) & Coordinate system aligned with the airfoil leading edge \\ \(\delta_{95}\) & Boundary layer thickness based on 95\% of \(U_{e}\) \\ \(\delta^{*}\) & Boundary layer displacement thickness \\ \(\Lambda\) & Dimensionless radiation ratio \\ \end{tabular} ## I Introduction Airfoil trailing-edge noise is dominant in a host of engineering applications.
Several distinct mechanisms, collectively referred to as airfoil self-noise, are related to the scattering of pressure gusts past the airfoil trailing edge. Among them, noise due to flow separation is found to be dominant at high angles of attack, where large-scale flow separation may occur. This is particularly the case for some wind turbine architectures, such as the H-Darrieus type wind turbine [Venkatraman _et al._ 2023]. Therefore, accurate models are needed during the pre-design phase to estimate the acoustic noise of such machines, and, to achieve this, a better understanding of the noise generation mechanism is needed. Nevertheless, only a few comprehensive aeroacoustic studies have been performed for airfoils placed at high angles of attack [Kalyani, Moreau, and Ragni 2022; Lacagnina _et al._ 2019; Moreau and Roger 2009; Raus _et al._ 2022; Zang, Mayer, and Azarpeyvand 2021]. As such, the overall objectives of the present manuscript are to identify the dominant flow mechanism(s) responsible for separation noise, and to test the applicability of diffraction theory [Amiet 1976] to predict noise at high angles of attack. Numerically, Moreau and co-workers performed several high-fidelity incompressible simulations for an airfoil at high incidence almost a decade ago [Christophe and Moreau 2008; Moreau, Christophe, and Roger 2008; Moreau, Roger, and Christophe 2009]. In these simulations, an isolated airfoil installed in an open-jet anechoic wind tunnel (the test configuration) was simulated, as opposed to a full-scale wind turbine [Venkatraman _et al._ 2023]. As such, only the noise due to the boundary layer and its separation was studied. The far-field noise was quantified using both acoustic analogies [Curle 1955; Ffowcs Williams and Hall 1970] and Amiet's (1976) diffraction theory. Christophe and Moreau [2008] reported an over-prediction of the wall pressure by the LES, with the far-field acoustic spectra estimated by Amiet's (1976) model being 10 dB higher than the measurements. Notably, this disagreement was present only at low frequencies, where the noise due to separation is expected to be the dominant mechanism. Similarly, the semi-empirical models for far-field noise based on Amiet's (1976) theory, referred to as MODA (Bertagnolio _et al._, 2017), have been shown to yield poor results. However, the reason for this disagreement when predicting separation noise with Amiet's (1976) model is unknown and requires further investigation. More recently, compressible simulations have been performed by Turner and Kim (2022) to quantify the individual contributions of equivalent source types (dipole and quadrupole) from low-speed airfoils in near-stall conditions. They achieve this by subtracting the noise estimated by the solid formulation of Ffowcs Williams and Hawkings's (1969) acoustic analogy from the noise estimated by the permeable formulation. Turner and Kim (2022) show that the noise contribution of quadrupole sources is significant when an airfoil is placed at high incidence. However, while the porous formulation is complete, the solid formulation ignores the correlation between the dipole and quadrupole noise sources, which can lead to spurious directivity patterns, as already demonstrated by Spalart _et al._ (2019). Nevertheless, it is important to quantify the individual contributions of the various equivalent source types that may contribute to far-field noise. While equivalent noise sources are an important metric in aeroacoustics research, they are by no means unique.
This is because the multipole expansion (Goldstein, 1976) dictates that one equivalent image source can be replaced by another. For instance, a quadrupole can be expressed as two dipoles of equal strength in phase opposition. As such, correct identification of an equivalent noise source cannot by itself describe or confirm the precise flow mechanism behind separation noise, and it is therefore imperative to perform detailed flow quantification and analysis to understand the noise mechanism. Previously, Brooks, Pope, and Marcolini (1989) hypothesized that airfoil separation noise results from the interaction between turbulent structures in the shear layer and the airfoil trailing edge, as separated structures convected past the airfoil give rise to significant pressure fluctuations. However, previous experiments were unable to accurately identify the noise mechanism, as flow-field measurements were unavailable. More recently, using PIV and synchronized wall-pressure and hot-wire measurements, Lacagnina _et al._ (2019) identified three possible distinct noise generation mechanisms to explain noise generation by an airfoil close to stall. Importantly, all of these mechanisms were linked to instabilities in the shear layer and were localized in a region within the separated shear layer away from the wall. The separated shear layer may not only result in a substantial increase in the contribution of quadrupole noise (Turner and Kim, 2022), but may also invalidate the unsteady Kutta condition, because the latter relies on the flow leaving the airfoil trailing edge smoothly. Furthermore, separation noise is dominant for airfoils placed at high angles of attack. Therefore, the central premise of the thin-airfoil linearized theory may not hold for such cases, because the amplitude of the disturbance induced by the flow separation may not be small. Evidently, changes may occur in the resulting radiation ratio, and thus Amiet's (1976) radiation factor may not correctly quantify the hydrodynamic-to-acoustic conversion [see figures 3 and 12 of Roger and Moreau 2004, for instance]. Therefore, in the present manuscript, we ask the question: can the separation noise be fully quantified using a dipole source, such as the one outlined in Amiet's diffraction theory? If so, are there other possible mechanisms of noise generation that may explain noise generation by an airfoil close to stall? Furthermore, is the mechanism behind the separation noise universal? To this end, aeroacoustic measurements have been performed in the anechoic flow facility at Universite de Sherbrooke. In particular, planar PIV, wall-pressure, and far-field acoustic measurements have been performed. For the present study, a Controlled-Diffusion (CD) airfoil is used. These measurements have been performed at a fixed geometric angle of attack of \(15^{\circ}\). For the CD airfoil at this angle of attack, flow separation near the leading-edge region was reported by Christophe and Moreau (2008). As such, the present aeroacoustic investigation is performed to understand noise due to flow separation for an airfoil that is close to stall conditions (Kalyani, Moreau, and Ragni 2022). Comparing the flow and pressure characteristics between the present case and those reported earlier, where the boundary layer is fully attached near the trailing edge of the airfoil [see Jaiswal _et al._ 2020; Wu, Moreau, and Sandberg 2019, 2020, for instance], is expected to elucidate the true contribution of separation noise.
## II Experimental set-up and instrumentation The aero-acoustic measurements were performed in the anechoic wind tunnel at Universite de Sherbrooke (UdeS). The anechoic room is about \(7\times 5.5\times 4\) m\({}^{3}\). The open jet has a cross-section of \(50\times 30\) cm\({}^{2}\) and can achieve a maximum velocity of 40 m/s. As the temperature of the open jet can be controlled, all the measurements are performed at a constant free-stream density \(\rho\). The CD airfoil is held at a \(15^{\circ}\) geometric angle of attack by 4.25 mm thick laser-cut plexiglass plates, which reduce the uncertainty in the angle of attack when mounting the airfoil while providing good optical access. All the measurements are performed at free-stream velocities \(U_{\infty}\) of 16 m/s and 28 m/s, which respectively correspond to Mach numbers \(M_{\infty}\equiv U_{\infty}/c_{0}\simeq 0.05\) and \(M_{\infty}\simeq 0.08\) (\(c_{0}\) the speed of sound) and Reynolds numbers based on the airfoil chord length \(C\) and the free-stream velocity of \(Re_{c}\simeq 1.5\times 10^{5}\) and \(Re_{c}\simeq 2.5\times 10^{5}\). ### Planar PIV measurements setup Two-dimensional PIV measurements were performed on the suction side of the airfoil, as shown in figure 1. Three sCMOS cameras, with a 5.5 megapixel sensor each, were used to acquire images in dual-frame mode. An Nd:YAG dual-pulsed laser from LaVision was used for illumination. The light sheet for planar PIV was generated with a set of spherical lenses and a diverging cylindrical lens with a focal length of -20 mm. Tracer particles of about 1 \(\mu\)m were generated to seed the flow. The images were recorded for each case at an acquisition frequency of 2 Hz. Figure 1: Planar-PIV setup. The inter-frame time was increased until the cross-correlation coefficient remained between \(0.6\) and \(0.9\). The resulting inter-frame time meant that a particle image displacement of more than 20 pixels was achieved in the free stream. This ensures a low relative error (\(\sim 0.5\%\)) in the estimation of the particle image displacement. The data collected at a free-stream velocity of \(U_{\infty}=16\) m/s were processed using LaVision's Davis 8 software, while the \(U_{\infty}=28\) m/s case was processed with the newer Davis 10 software. The final vector calculations were performed on the computer clusters of the Digital Research Alliance of Canada. For the \(U_{\infty}=16\) m/s case, a total of 11 passes were used for the multi-grid scheme, starting with an initial window of \(128\times 128\) pixels down to a final window size of \(4\times 4\) pixels. In the iterative multi-grid scheme, an overlap of 75% and an elliptical weighting (elongated in the mean-flow direction) are used. The final window size was about \(0.0923\times 0.0923\) mm\({}^{2}\). In contrast, for the \(U_{\infty}=28\) m/s case, a reduced window overlap of 50% was used while keeping all the other parameters the same as for the \(U_{\infty}=16\) m/s case. This was done to accelerate the vector calculations and to reduce the size of the final vector field. \begin{table} \begin{tabular}{l l l l} \hline Parameters & Leading-edge (M1) & Mid-chord (M2) & Trailing-edge (M3) \\ \hline Number of images & 1800 & 1400 & 1800 \\ Interrogation window [pixel\({}^{2}\)] & \(4\times 4\) & \(4\times 4\) & \(4\times 4\) \\ Lens focal length [mm] & 200 & 50 & 200 \\ Final window size [mm\({}^{2}\)] & \(0.11\times 0.11\) & \(0.32\times 0.32\) & \(0.09\times 0.09\) \\ Maximum particle image displacement [pixel] & 20 & 24 & 20 \\ \hline \end{tabular} \end{table} Table 2: Parameters used for the Planar PIV boundary-layer measurements.
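For context, the \(\sim 0.5\%\) figure quoted above follows from simple arithmetic if one assumes the commonly cited correlation-peak uncertainty of about \(0.1\) pixel (our back-of-the-envelope estimate, not stated in the original): \[\varepsilon_{\text{rel}}\approx\frac{0.1\ \text{px}}{20\ \text{px}}=0.5\%.\]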
### Steady wall-pressure measurements Figure 2 shows the pinholes located along the chord of the airfoil, which are used to measure the mean wall-pressure coefficient. There are in total 21 probes on the suction and pressure sides of the airfoil; 18 of them are placed in the streamwise direction and the remaining 3 in the spanwise direction. Pinholes on the pressure side of the airfoil are labelled 4, 8, 10, 12 and 29, whereas those on the suction side are 1 to 6 at the leading edge, 7, 9 and 11 at mid-chord, and 21 to 28 at the trailing edge (see figure 2). Figure 2: Location of pinholes on the CD airfoil. Some of the pinhole locations on the suction side of the airfoil have been indicated. The differential pressure is measured using an array of miniature amplified low-pressure sensors in order to get a full reading along the airfoil chord [Neal 2010]. These miniature amplified low-pressure sensors have an accuracy of 0.25% Full Scale (FSS), with full-scale ranges between 248.84 and 1244.2 Pa. Details on the setup and acquisition can be found in Jaiswal [2020]. In the current paper, the wall differential pressures are normalized by the inlet free-stream dynamic pressure, which yields the mean pressure coefficient \(C_{p}\equiv(p-p_{\infty})/(0.5\,\rho\,U_{\infty}^{2})\), with \(p_{\infty}\) the inlet pressure. ### Unsteady wall-pressure measurements The pinholes on both sides of the airfoil are also connected to Remote Microphone Probes (RMPs) to record unsteady wall-pressure measurements [Moreau and Roger 2005; Perennes and Roger 1998]. For the present set of experiments, Knowles FG 23329-P07 miniature microphones were used. These microphones have a flat response over a large frequency range (\(0.1-10\) kHz) and a nominal sensitivity of 22.4 mV/Pa. The pinhole diameter of 0.5 mm ensures that spectral averaging is avoided well beyond 10 kHz (Grasso _et al._, 2019). As these microphones are connected remotely to the pinhole, a correction in phase and magnitude is needed. This is achieved following the methodology outlined by Jaiswal _et al._ (2020).
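The two conversions used in this section are straightforward; the following sketch (helper names are ours, with only the normalization and the 22.4 mV/Pa nominal sensitivity taken from the text) illustrates them, deliberately leaving out the remote-probe phase and magnitude correction.

```python
import numpy as np

def mean_cp(dp, rho, u_inf):
    """Mean pressure coefficient C_p = (p - p_inf) / (0.5 rho U_inf^2)."""
    return np.asarray(dp) / (0.5 * rho * u_inf**2)

def rmp_volts_to_pa(v, sensitivity=22.4e-3):
    """Convert RMP microphone output (V) to pressure (Pa) using the
    nominal sensitivity, before any remote-probe correction is applied."""
    return np.asarray(v) / sensitivity
```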
### Hot wire measurements Hot-wire anemometry (HWA) is used to investigate the spectral content of the velocity disturbances over the airfoil. The HWA probe is placed directly above RMP 26 (\(x/C=0.98\)), on the suction side of the airfoil. The hot-wire measurements were performed using a TSI 1210-T1.5 single-wire probe. The probe consists of a platinum-coated tungsten wire with a 0.0038 mm diameter and a 1.27 mm length, which satisfies the recommended wire length-to-diameter ratio of 200 (Ligrani and Bradshaw, 1987). The hot-wire probe was connected to a TSI IFA 300 anemometer operating in Constant Temperature Anemometry (CTA) mode. The output signals of this anemometer were recorded at a 25.6 kHz acquisition frequency using an NI 9234 24-bit module. In order to attenuate any unwanted parasitic noise, a low-pass filter at 1000 Hz was applied; based on previous wall-pressure measurements (Moreau and Roger, 2005), no substantial contribution of velocity disturbances beyond this frequency is expected for the \(15^{\circ}\) angle-of-attack case. Furthermore, the HWA measurements were performed only at \(U_{\infty}=16\) m/s. The total recording time for each of these point-wise measurements was about 60 seconds. For more details on the setup, the reader is referred to Jaiswal (2020). ### Acoustic measurements Far-field acoustic pressure was measured using Integrated Circuit Piezoelectric (ICP) microphones with a 1/2 inch diaphragm. The microphones are placed in the airfoil mid-chord plane. In total, 8 microphones were placed on a circular arc around the airfoil at a distance of 1.21 m (or about 10 times the chord length) to ensure they are at an acoustic far-field location. The microphones were calibrated using a B&K piston-phone, which ensures the calibration uncertainty is within 0.2 dB. ### Synchronized measurements In order to relate the near-field velocity disturbance field to the resultant far-field acoustic noise, synchronized velocity-pressure measurements have been performed, as previously done at a lower \(5^{\circ}\) angle of attack [Jaiswal _et al._ 2022]. Furthermore, the wall-pressure measurements were also synchronized to study the footprint of the velocity disturbances on the wall. To obtain the acoustic directivity pattern caused by the diffraction of unsteady gusts, the far-field microphones were synchronized with the RMPs. The near-field and far-field pressure measurements are time-resolved, in contrast to the PIV measurements, which have limited time resolution. As such, the acquisition frequencies for all the measurements performed were set to powers of two. In particular, the PIV measurements were performed at 2 Hz, while the unsteady near- and far-field pressures were recorded at an acquisition frequency of 65536 Hz (or \(2^{16}\) Hz). The synchronization between PIV and pressure measurements is achieved using the procedure outlined by Henning _et al._ [2008], where further details on the implementation can be found. ## III Results To ensure that the flow facility and installation do not dictate the overall flow dynamics [see Moreau and Roger 2005; Wu _et al._ 2016, for instance], the mean wall-pressure coefficient has first been compared. The results from two different facilities, in which the CD airfoil was tested within a 50 cm wide jet, show overall good agreement over most of the airfoil chord \(C\), as shown in figure 3. \((x,y)\) represents the fixed laboratory reference frame at the airfoil midspan, \(x\) being parallel to the jet axis and oriented with the flow; the origin of the reference frame is taken at the airfoil trailing edge. Repeatability tests at UdeS have also been performed [Kalyani, Moreau, and Ragni 2022]. Previous experimental and numerical studies on this airfoil [Christophe, Anthoine, and Moreau 2008; Kalyani, Moreau, and Ragni 2022; Moreau and Roger 2005] have reported an increase in low-frequency noise when it is placed at a high angle of attack. This observation is confirmed by the far-field microphone measurements shown in figure 4, which show an overall increase in low-frequency sound pressure levels when comparing the \(8^{\circ}\) and \(15^{\circ}\) cases for the two Reynolds numbers. As shown in figure 4, this is also consistent with the measurements by Moreau and Roger [2005] (open symbols). This noise increase is most likely linked to an overall increase in the r.m.s. levels of wall pressure close to the trailing-edge region, as shown in figure 3(b). As the overall goal of the present manuscript is to identify the flow mechanisms responsible for separation noise and to test the applicability of diffraction theory [Amiet 1976], the link from cause (flow disturbances) to effect (far-field noise) will be established with the help of the latter. Amiet's model and its extensions [Amiet 1976; Moreau and Roger 2009; Roger and Moreau 2005, 2012] rely on Curle's analogy combined with a compressible linearized Euler model for the wall-pressure fluctuations on an infinitely thin flat plate. Figure 3: (a) Mean wall-pressure coefficient (\(-C_{p}\)) in two different anechoic wind tunnels; (b) r.m.s of wall-pressure coefficient (\(C_{prms}\)) for RMPs 21, 23, 24 and 26 at \(U_{\infty}=16\) m/s. Figure 4: Sound pressure level at 1.2 m from the airfoil trailing edge on the suction side. Open circles for ECL measurements [Moreau and Roger 2005]. The PSD of the far-field acoustic pressure at any observer located at \(\mathbf{X}=(X_{1},X_{2},X_{3})\), for any angular frequency \(\omega\), generated by a flat plate of chord length \(C\) and span \(L\) then reads: \[S_{pp}(\mathbf{X},\omega)\,\approx\,\left(\frac{k\,C\,X_{2}}{4\pi S_{0}^{2}}\right)^{2}\frac{L}{2}\left|\mathcal{I}\left(\frac{\omega}{U_{c}},k\frac{X_{3}}{S_{0}}\right)\right|^{2}\Phi_{pp}(\omega)\,l_{z}\left(\omega,k\frac{X_{3}}{S_{0}}\right), \tag{1}\] where \(k\) is the acoustic wavenumber, \(S_{0}\) the corrected distance to the observer, \(\mathcal{I}\) the analytical radiation integral (or acoustic transfer function) given in Roger and Moreau (2005), \(U_{c}\) the streamwise convection velocity, \(\Phi_{pp}\) the wall-pressure spectrum, and \(l_{z}\) the spanwise coherence length. In summary, the wall-pressure field can be characterized by the PSD of the wall-pressure fluctuations, the convection velocity, and the spanwise correlation length. In order to explain the increase in the low-frequency far-field acoustic spectra, the statistical description of the incident wall-pressure field will be explored in the next section.
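To make the use of Eq. (1) concrete, the following sketch assembles the far-field PSD from the three measured inputs. This is a minimal illustration with our own function names; the radiation integral \(\mathcal{I}\) is lengthy and is therefore taken as a user-supplied callable, and the flow corrections to \(S_{0}\) are omitted.

```python
import numpy as np

def spp_amiet(omega, Phi_pp, l_z, U_c, I_rad, C, L, X, c0=340.0):
    """Far-field PSD of Eq. (1). Phi_pp(omega), l_z(omega, Kz) and
    I_rad(K1, Kz) are callables built from measurements and theory."""
    X1, X2, X3 = X
    k = omega / c0                        # acoustic wavenumber
    S0 = np.sqrt(X1**2 + X2**2 + X3**2)   # observer distance (flow corrections omitted)
    geom = (k * C * X2 / (4.0 * np.pi * S0**2)) ** 2 * L / 2.0
    return geom * np.abs(I_rad(omega / U_c, k * X3 / S0)) ** 2 \
        * Phi_pp(omega) * l_z(omega, k * X3 / S0)
```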
Amiet's model and its extension [Amiet 1976; Moreau and Roger 2009; Roger and Moreau 2005, 2012] relies on Curle's analogy combined with a compressible linearized Euler model for the wall-pressure fluctuations on an infinitely thin flat plate.

Figure 4: Sound pressure level at 1.2 m from the airfoil trailing edge on the suction side. Open circles for ECL measurements [Moreau and Roger 2005]

Figure 3: (a) Mean wall-pressure coefficient (\(-C_{p}\)) in two different anechoic wind tunnels; (b) r.m.s of wall-pressure coefficient (\(C_{prms}\)) for RMPs 21, 23, 24 and 26 at \(U_{\infty}=16\) m/s.

The PSD of the far-field acoustic pressure at any observer located at \(\mathbf{X}=(X_{1},X_{2},X_{3})\), for any angular frequency \(\omega\), generated by a flat plate of chord length \(C\) and span \(L\) then reads:

\[S_{pp}(\mathbf{X},\omega)\,\approx\,\left(\frac{k\,C\,X_{2}}{4\pi S_{0}^{2}}\right)^{2}\frac{L}{2}\left|\mathcal{I}\left(\frac{\omega}{U_{c}},k\frac{X_{3}}{S_{0}}\right)\right|^{2}\Phi_{pp}(\omega)\,l_{z}\left(\omega,k\frac{X_{3}}{S_{0}}\right), \tag{1}\]

where \(k\) is the acoustic wave number, \(S_{0}\) the corrected distance to the observer, \(\mathcal{I}\) the analytical radiation integral (or acoustic transfer function) given in Roger and Moreau (2005), \(U_{c}\) the streamwise convection velocity, \(\Phi_{pp}\) the wall-pressure spectrum and \(l_{z}\) the spanwise coherence length. In summary, the wall-pressure field can be characterized by the PSD of the wall-pressure fluctuations, the convection velocity and the spanwise correlation length. In order to explain the increase in the low-frequency far-field acoustic spectra, the statistical description of the incident wall-pressure field will be explored in the next section.

### Unsteady wall-pressure field

Figure 5 shows PSD measurements using RMPs on the suction side of the airfoil along its chord. The first two probes, located at the leading edge, show a rapid decay in spectral energy, most likely because of the laminar nature of the boundary layer. The humps and peaks observed at probe RMP 3 (\(x/C\simeq 0.09\)) can be linked to boundary-layer instabilities (Jaiswal _et al._, 2020), which are present due to the existence of a Laminar Separation Bubble (LSB). From RMP 5 (\(x/C\simeq 0.15\)) onward, the decay of the wall-pressure spectra is much slower than for the first three probes, suggesting a possible turbulent re-attachment. Near the mid-chord region (RMP 9), the wall-pressure statistics almost attain a \(-5\) slope at high frequencies, suggesting a mean attached turbulent boundary layer. On the aft part of the airfoil, an almost constant gradient in the wall-pressure spectra, of \(f^{-2.2}\), emerges in the mid-frequency range. Similar observations were made by Zang, Mayer, and Azarpeyvand (2021), who used the NACA-65-410 airfoil for their study at high angles of attack, and by Raus _et al._ (2022) on the oscillating NACA-0012 airfoil in the similar light-stall flow regime. In contrast, at low frequencies, the wall-pressure spectrum becomes flat to an extent that its slope is near zero for probes beyond RMP 9. As such, the classical \(f^{2}\) (Goody, 2004) scaling at low frequency is not observed. It is hypothesized that this change in slope is more linked to the presence of the jet, which predominantly contributes to the low frequency and interacts more with the airfoil at high angles of attack.
At higher frequencies, a constant spectral slope of \(f^{-5}\) emerges, which is consistent with previous studies made on the CD airfoil (Jaiswal, 2020). Overall, the spectra become statistically similar beyond \(0.85~C\).

Figure 5: PSD of wall-pressure fluctuations at \(Re_{c}=150000\) on the airfoil suction side (color transition from gray to black with increase in \(x/C\)): (a) RMP 1 to 9 (thick solid line to highlight RMP 3); (b) RMP 21-28 (black dotted lines for spanwise sensors. Black plus for LES (Christophe and Moreau, 2008), and grey circles for ECL measurements (Moreau and Roger, 2005)).

To quantify the effects of the mean pressure gradient on the wall-pressure fluctuations, differences in PSD between the airfoil at \(8^{\circ}\) and \(15^{\circ}\) angles of attack at \(Re_{c}\simeq 150000\) are plotted in figure 6 for the trailing-edge sensors \(21-25\) only, between \(10\) and \(1000\) Hz. An increase in spectral content is clearly shown for the \(15^{\circ}\) case compared to the \(8^{\circ}\) case (Jaiswal, 2020). The second quantity of interest, the convection velocity \(U_{c}\), was estimated by correlation analysis between two RMPs, \(23\) and \(24\), which are separated by a finite streamwise distance (about \(0.02~C\)). This was performed at several band-passed frequencies to obtain the convection velocity at the frequencies where the separation noise dominates. The results obtained are shown in figure 7. Open circles represent the \(15^{\circ}\) angle-of-attack case, while cross symbols stand for the \(8^{\circ}\) angle-of-attack case. The dashed line refers to the mean convection velocity (\(0.75~U_{\infty}\)) estimated by Moreau and Roger (2005) from the phase slope of two sensors at the trailing edge at \(15^{\circ}\) and \(16\) m/s. The present results are therefore consistent with the previous ECL measurements and also with the estimate provided by Grasso _et al._ (2019) based on the direct numerical simulation (Wu, Moreau, and Sandberg, 2020) for the \(8^{\circ}\) case (\(0.72~U_{\infty}\)). The lower value at \(400\) Hz is also consistent with that reported by Kalyani, Moreau, and Ragni [2022] for this frequency range (\(0.52~U_{\infty}\)). Note that the observed variations can be caused by the uncertainty in the estimation of \(U_{c}\), which should be a function of the total recording time of the signal, as shown in the appendix. In the low frequency range below 400 Hz, an increase in convection velocity is observed for the \(15^{\circ}\) angle-of-attack case compared to the \(8^{\circ}\) angle-of-attack case. This result is quite surprising, as an increase in adverse pressure gradient leads to a decrease in convection velocity [see, for instance, Schloemer 1967]. Therefore, this observation will be addressed in the subsequent sections.

Figure 6: PSD differences between the \(15^{\circ}\) and \(8^{\circ}\) cases for RMPs 21 and 25. Color transition from gray to black signals increase in \(x/C\).

Figure 7: Convection velocity (\(U_{c}\)) measured between RMPs 23 and 24. Dotted grey line is Moreau and Roger's (2005) mean estimate of \(U_{c}\) for the \(15^{\circ}\) angle of attack and 16 m/s case.

Nevertheless, at higher frequencies beyond 400 Hz, the convection velocity for the \(15^{\circ}\) angle-of-attack case does become lower than that for the \(8^{\circ}\) angle-of-attack case.
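A minimal sketch of this convection-velocity estimate (band-pass filtering two wall-pressure signals and locating the cross-correlation peak; the signal names, filter order and sensor spacing below are assumptions for illustration) might look like:

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def convection_velocity(p_up, p_down, dx, fs, f_lo, f_hi, order=4):
    """Estimate Uc from the cross-correlation of two band-passed
    wall-pressure signals separated by a streamwise distance dx."""
    b, a = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs)
    x = filtfilt(b, a, p_up)    # zero-phase filtering preserves the delay
    y = filtfilt(b, a, p_down)
    r = correlate(y, x, mode="full")
    lag = np.argmax(r) - (len(x) - 1)  # samples by which y trails x
    tau = lag / fs
    return dx / tau if tau > 0 else np.nan
```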
Finally, the convection velocity decreases with increasing frequency because, at low frequency, only the large eddies contribute to the pressure gusts (see, for instance, Schloemer 1967). At higher frequencies, the contribution from smaller eddies, which are close to the wall, becomes significant, resulting in lower convection velocities. Therefore, Schloemer's (1967) observation regarding the frequency dependence of the convection velocity is valid for both angles of attack. Lastly, quantifying the spanwise correlation length is anything but straightforward. Corcos (1964), under the assumption that the normalized cross-power spectral density can be represented by two separate dimensionless variables \(\omega\Delta x_{1}/U_{c}\) and \(\omega\Delta x_{3}/U_{c}\), showed that the two-dimensional coherence function can be written as follows:

\[\gamma(\Delta x_{1},\Delta x_{3},\omega)=\frac{\Phi_{pp}(\omega,\Delta x_{1},\Delta x_{3})}{\Phi_{pp}(\omega,0,0)}=A(\omega\Delta x_{1}/U_{c})\ B(\omega\Delta x_{3}/U_{c})\ \mathrm{e}^{-i\omega\Delta x_{1}/U_{c}} \tag{2}\]

The magnitude-squared coherence in the spanwise direction is obtained by multiplying \(\gamma(0,\Delta x_{3},\omega)\) by its complex conjugate, and is plotted in figure 8 (a). As can be seen in the latter, the coherence goes to zero beyond 1000 Hz, which makes the estimation of the associated length scales impossible. More importantly, a hump centred around \(\sim 100\) Hz is observed in the values of \(\gamma^{2}\). These results are consistent with the previous measurements by Moreau and Roger (2005) shown as symbols in figure 8 (a). Even though similar results were also reported by Kalyani, Moreau, and Ragni (2022) beyond 100 Hz, the oscillatory behavior observed in their figure 4 below 100 Hz is caused by a too short signal length, as shown in the appendix. Corcos (1964) was also the first to recognize the exponential decay nature of the wall-pressure correlation separated by a finite distance, as also evidenced in figure 8 (a). Invoking Corcos's (1964) observation of the exponential decay of the correlation, the functions \(A\) and \(B\) can be written as:

\[A(\omega\Delta x_{1}/U_{c})=\mathrm{e}^{-\omega\,\Delta x_{1}/(b_{2}\,U_{c})}\quad\mathrm{and}\quad B(\omega\Delta x_{3}/U_{c})=\mathrm{e}^{-\omega\,\Delta x_{3}/(b_{1}\,U_{c})} \tag{3}\]

where \(b_{1}\) and \(b_{2}\) are fitting parameters. Under the assumption of zero streamwise separation, the normalized cross-power spectral density can then be written as follows:

\[\gamma(0,\Delta x_{3},\omega)=\mathrm{e}^{-\omega\,\Delta x_{3}/(b_{1}\,U_{c})} \tag{4}\]

The spanwise correlation length \(l_{z}(\omega)\) can be estimated by:

\[l_{z}(\omega)=\int_{0}^{\infty}\gamma(0,\Delta x_{3},\omega)\,\mathrm{d}\Delta x_{3} \tag{5}\]

Plugging equation (4) into equation (5) yields:

\[l_{z}(\omega)=b_{1}\,U_{c}/\omega \tag{6}\]

The reader should be cautioned that Corcos's model (equation (6)) can lead to nonphysical values of the spanwise correlation length, as it relies on the assumption that the convection velocity is independent of frequency. Nevertheless, it provides a reasonable estimation of the correlation length and has been used in the past by several authors [Roger and Moreau 2004]. Therefore, Corcos's model (equation (6)) was used to estimate the spanwise correlation length. However, as the frequency at which the model should be applied is unclear, the frequency was chosen arbitrarily.
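A minimal sketch of this estimate (fitting the exponential decay of the measured spanwise coherence to obtain \(b_{1}\) and then applying equation (6); the input arrays are assumed to come from the spanwise RMPs, and the variable names are illustrative) could read:

```python
import numpy as np

def corcos_lz(gamma2, dz, f, uc):
    """Fit the Corcos constant b1 from measured spanwise coherence
    gamma2 (shape: n_freq x n_separations) at separations dz [m],
    then return l_z = b1 * Uc / omega, per equations (4)-(6)."""
    omega = 2 * np.pi * f
    lz = np.empty_like(f, dtype=float)
    for i, w in enumerate(omega):
        g = np.sqrt(np.clip(gamma2[i], 1e-12, None))
        # least squares on ln(gamma) = -(w / (b1 * Uc)) * dz
        slope = np.sum(dz * np.log(g)) / np.sum(dz**2)
        b1 = -w / (slope * uc[i])
        lz[i] = b1 * uc[i] / w            # equation (6)
    return lz
```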
The resulting lengths are shown as the solid black and broken grey lines for the \(15^{\circ}\) angle-of-attack and the \(8^{\circ}\) angle-of-attack cases, respectively. The resulting values for the constant \(b_{1}\) are 1.37 and 1.34 for the \(15^{\circ}\) and the \(8^{\circ}\) angle-of-attack cases, respectively. Although the estimate of Corcos's model predicts the high-frequency attenuation in spanwise correlation length, it over-predicts it at low frequency for the \(8^{\circ}\) angle-of-attack case.

Figure 8: Analysis of the spanwise wall-pressure at \(U_{\infty}=16\) m/s (a) Magnitude squared coherence (\(\gamma^{2}\)). Black circles for ECL measurements [Moreau and Roger 2005] for \(\gamma^{2}\) between RMP 26-27. (b) Estimation of spanwise correlation lengths.

This can be corrected by using the Efimtsov (1982) model (solid red line), which takes the boundary-layer thickness (\(\delta\)) and friction velocity (\(u_{\tau}\)) into account to re-scale the correlation length in the low-frequency range. The three empirical constants were set to 1.34, 19.5, and 13.5 in order to estimate the spanwise correlation length with the Efimtsov (1982) model. In order to experimentally estimate the spanwise correlation length, the spanwise coherence between several spanwise sensors (RMPs \(25-28\)) near the trailing edge (\(x/C=0.98\)) was calculated. The estimated values of the real part of the coherence were fitted with an exponential decay function for a given frequency. The exponential decay function was chosen based on the observations of Corcos (1964). Finally, the correlation length \(l_{z}\) was obtained by combining equations (4) and (5). The resulting values of the correlation lengths (\(l_{z}(f)\)) are represented by symbols (crosses and circles) in figure 8 (b). As in the case of the convection velocity, the spanwise correlation length also increases for the \(15^{\circ}\) angle-of-attack case compared to the \(8^{\circ}\) angle-of-attack case. As the wall pressure is an imprint of the turbulent flow convecting over the surface, what flow structures can explain such an increase in convection velocity? To answer this question, velocity field measurements were carried out using PIV and will be discussed in the following section.

### Velocity measurements

Two snapshots of wall-parallel velocity are plotted in figure 9. As can be seen, large rollers, similar to those reported by Jaiswal _et al._ (2022) at \(5^{\circ}\) incidence, are observed, which evidences the presence of large coherent structures.

Figure 9: Snapshots of instantaneous wall-parallel velocity at two time instants.

This is also consistent with the flow topology seen by Christophe and Moreau (2008) in their LES at \(15^{\circ}\). These structures are typically induced by instability within the separated shear layer. At some instants, even a fully separated boundary layer is observed, as shown in figure 9 (b). These instantaneous flow fields confirm large-scale separation and the passage of coherent rollers at the trailing edge, which are reminiscent of a Kelvin-Helmholtz flow type. Figures 10 and 11 show the mean boundary-layer statistics recorded by the first and the second camera, respectively. Figure 10 is plotted with respect to an observer sitting on the leading edge of the airfoil while in figure 11, the coordinate system of the velocity field is aligned with the wind-tunnel axis. As evidenced in table 2, the spatial resolution achieved by the first camera is about three times higher than that of the second camera.
Thus, further spatial filtering is expected in the results shown in figure 11. Figure 10 shows that, in a time-averaged sense, the mean flow becomes separated from RMP 3 (\(x/C=0.09\)) onward. This is consistent with figure 3, which shows a plateau in \(C_{p}\) between RMP 3 and RMP 6. More importantly, the separated region shown by the black dashed lines in figures 10 and 11 has a negligible r.m.s value of velocity disturbances (all Reynolds stresses close to zero). This confirms the laminar nature of the time-averaged separated flow region, which in the literature is commonly referred to as the LSB. The presence of such an LSB is characteristic of the flow past the CD airfoil at \(Re_{c}=150000\) and is consistent with the finding of Christophe and Moreau (2008), who also reported the presence of an LSB when the CD airfoil is placed at a \(15^{\circ}\) incidence. The LSB near the leading edge of the airfoil seems to deflect the mean flow away from the airfoil, which provides a possible explanation for the drop in mean loading reported in figure 3. The deflected mean flow and the resultant flow acceleration near the leading edge, at the point of inception of the LSB, can be evidenced from an increase in the length of the arrows in figure 10. In a time-averaged sense, the LSB seems to cover at least 30% of the airfoil chord. However, due to the limited field-of-view, the exact extent of the LSB could not be quantified. Overall, the mean flow topology presented in figures 10 and 11 shows that the flow in the leading-edge region of the CD airfoil at \(15^{\circ}\) angle of attack and \(Re_{c}\simeq 150000\) is laminar in nature. The LSB ensures that the flow transition occurs only after \(x/C>0.4\), as found in the previous LES (Christophe and Moreau, 2008). The mean boundary-layer statistics recorded by the third camera are shown in figure 12 for the case when the airfoil is placed at a \(15^{\circ}\) angle of attack with an inlet velocity of 16 m/s. Figure 12 (a) shows the mean wall-parallel velocity. Despite the large-scale flow separations observed in figure 9, the boundary layer near the trailing edge is fully attached in a time-averaged sense. As such, in the present pre-stall noise study, the time-averaged flow near the trailing edge of the CD airfoil is different from the one reported by Lacagnina _et al._ (2019), who reported a separated time-averaged flow near the trailing edge. The black dotted line is the iso-contour of the free-stream inlet velocity, which roughly corresponds to the overall extent of the boundary layer. The Reynolds stress tensor terms, \(\overline{u_{1}u_{1}}/U_{\infty}^{2}\), \(\overline{u_{2}u_{2}}/U_{\infty}^{2}\), and \(-\overline{u_{1}u_{2}}/U_{\infty}^{2}\), are shown in figures 12 (b), (c), and (d), respectively. Compared to the leading-edge region, the disturbances (quantified by the r.m.s of velocities) close to the trailing edge are substantially higher, which implies that the flow transitions to a fully turbulent boundary layer somewhere between 40 and 65% of the chord. Higher levels of r.m.s velocities are the sources of far-field noise [Ffowcs Williams and Hall 1970]. In particular, the elevated regions of r.m.s velocity do not have a clear peak but a broad region of elevated intensity. This is typical of flows that experience the presence of shear-layer instabilities [Jaiswal _et al._ 2022]. In order to understand the impact of the Reynolds number, the results of the measurements performed at 28 m/s are plotted in figure 13.
Upon comparison with figure 12, it shows similar overall behavior in the measured velocity field in the trailing-edge region. The overall extent of the boundary layer is similar to the 16 m/s case at \(x_{2}\simeq 0.32~C\) and close to the trailing-edge region (\(x_{2}\simeq 0.02~C\)), as shown in tables 3 and 4. Furthermore, for the 28 m/s case, the turbulence intensity appears to be much lower than in the 16 m/s case, resulting in more localized levels of iso-contours in figure 13 compared to those in figure 12. This is especially true for the cross-term \(-\overline{u_{1}u_{2}}/U_{\infty}^{2}\). A more quantitative comparison can be obtained by looking at the velocity profiles near the airfoil trailing edge, as shown in figure 14. The velocity profile at RMP 26 (\(x/C=0.98\)) shows that, when the CD airfoil is placed at \(15^{\circ}\) flow incidence and 16 m/s inlet velocity, the near-wall mean velocity is reduced compared to the inlet velocity. Similar observations have been made by Caiazzo _et al._ (2023) (see figure 4), who reported a decrease in the near-wall mean velocity as the mean pressure gradient increases. As such, we expect the boundary layer to grow faster in the streamwise direction near the trailing-edge region for the 28 m/s case compared with the 16 m/s case. This faster growth of the boundary layer for the 28 m/s case is captured in the shape factor, which remains smaller at both the RMP 21 and RMP 26 locations compared to 16 m/s (see the values in tables 3 and 4, for instance). Yet, both velocity cases have higher values of the shape factor compared to the case when the airfoil is fixed at \(8^{\circ}\) angle of attack and 16 m/s. Notably, a higher value of the shape factor indicates a flow close to separation [see figure 10 of Sanjose _et al._ 2019]. Therefore, as the flow speed increases, the probability of flow separation decreases. Nevertheless, the overall boundary layer extent is similar for the 28 m/s and 16 m/s cases, as evidenced in tables 3 and 4.

Figure 13: Contours of velocity statistics over the airfoil trailing edge for \(U_{\infty}=28\) m/s: (a) mean wall-parallel velocity \(U_{1}\) (black line corresponds to free-stream inlet velocity \(U_{\infty}\)), (b) \(\frac{\overline{u_{1}u_{1}}}{U_{\infty}^{2}}\) (black dashed lines indicate iso-values of 0.15 and 0.1); (c) \(\frac{\overline{u_{2}u_{2}}}{U_{\infty}^{2}}\), black dashed lines indicate iso-values of 0.045 and 0.025; (d) \(-\frac{\overline{u_{1}u_{2}}}{U_{\infty}^{2}}\), black dashed line indicates iso-values of 0.01. Coordinate system is aligned with the trailing edge.

As such, the Reynolds number based on the momentum thickness (\(\text{Re}_{\theta}\)) for the airfoil placed at \(15^{\circ}\) angle of attack is substantially higher than that of the \(8^{\circ}\) case near the trailing-edge region. The profiles of velocity statistics, namely \(\overline{u_{1}u_{1}}/U_{\infty}^{2}\), \(\overline{u_{2}u_{2}}/U_{\infty}^{2}\), and \(-\overline{u_{1}u_{2}}/U_{\infty}^{2}\), for the two velocity cases at \(15^{\circ}\) angle of incidence and the case when the airfoil is placed at \(8^{\circ}\) angle of attack and \(U_{\infty}=16\) m/s are compared in figures 14 (b-d). Generally, the velocity statistics are normalized with the friction velocity to remove any Reynolds number (\(Re_{\tau}\)) based effects.
However, the overall goal of figures 14 (b-d) is to demonstrate the levels of velocity disturbances with respect to the inlet velocity \(U_{\infty}\). Such a scaling inherently shows the applicability of the thin-airfoil linearized theory, which assumes that the velocity disturbances are small compared to the inlet velocity \(U_{\infty}\). While the peak levels of velocity statistics scale with the boundary-layer thickness \(\delta_{95}\), in absolute units (for instance in meters) they are much further away from the wall compared to the \(8^{\circ}\) angle-of-attack and 16 m/s case.

Figure 14: Comparison of velocity profiles at RMP 26 (\(x/C=0.98\)). Legend: Solid black and red lines correspond to the airfoil placed at \(15^{\circ}\) angle-of-attack and at an inlet velocity of 16 and 28 m/s respectively. Dotted grey line corresponds to the airfoil placed at \(8^{\circ}\) angle-of-attack and at an inlet velocity of 16 m/s.

More importantly, the profiles confirm that the r.m.s levels of velocity disturbances are elevated for the CD airfoil placed at the \(15^{\circ}\) angle-of-attack and 16 m/s case compared to the rest. With the exception of the wall-parallel disturbances (\(\overline{u_{1}u_{1}}/U_{\infty}^{2}\)) for the \(15^{\circ}\) angle-of-attack at 16 m/s case, the disturbances are at least an order of magnitude smaller in the rest of the cases tested. Previous studies [see figure 9 of Caiazzo _et al._ 2023, for instance] have reported an increase in r.m.s levels of velocity disturbances for wall-bounded flows subjected to mean adverse pressure gradients. Recently, Pargal [2023] showed that normalizing the wall-pressure spectra by the square of the maximum value of the Reynolds stress, denoted by \(|\overline{-u_{1}u_{2}}|_{max}^{2}\), leads to a collapse of the low-frequency spectra over a broad range of cases for boundary-layer flows subjected to arbitrary mean pressure gradients. This normalization holds true because, as first shown by Na and Moin [1998], the term \(p_{rms}/(-\rho\ \overline{u_{1}u_{2}}_{max})\) falls between 2 and 3 for boundary-layer flows. This was later confirmed by [Abe 2017; Le Floc'h _et al._ 2020] for canonical boundary-layer flows, and more recently by Caiazzo _et al._ [2023] for flows past an airfoil.
\begin{table}
\begin{tabular}{l l l l l l l l l l}
\hline \hline
AOA & \(U_{\infty}\) [m/s] & \(U_{e}\) [m/s] & \(\delta_{95}\) [mm] & \(\delta^{*}\) [mm] & \(\theta\) [mm] & \(H\) & Re\({}_{\theta}\) & \(-\overline{u_{1}u_{2}}_{max}\) [m\({}^{2}\)/s\({}^{2}\)] & \(\frac{p_{rms}}{\rho\,\overline{-u_{1}u_{2}}_{max}}\) \\
\hline
8\({}^{\circ}\) & 16 & 18.7 & 4.41 & 1.36 & 0.82 & 1.65 & 997 & 0.77 & 2.6 \\
15\({}^{\circ}\) & 16 & 17.81 & 28.97 & 12.73 & 5.06 & 2.51 & 5847 & 4.01 & 3.08 \\
15\({}^{\circ}\) & 28 & 32.8 & 28.61 & 11.53 & 5.25 & 2.19 & 11188 & 8.68 & \(-\) \\
\hline \hline
\end{tabular}
\end{table} Table 3: Boundary layer parameters at RMP 21 (\(x/C=0.8582\))

\begin{table}
\begin{tabular}{l l l l l l l l l l}
\hline \hline
AOA & \(U_{\infty}\) [m/s] & \(U_{e}\) [m/s] & \(\delta_{95}\) [mm] & \(\delta^{*}\) [mm] & \(\theta\) [mm] & \(H\) & Re\({}_{\theta}\) & \(-\overline{u_{1}u_{2}}_{max}\) [m\({}^{2}\)/s\({}^{2}\)] & \(\frac{p_{rms}}{\rho\,\overline{-u_{1}u_{2}}_{max}}\) \\
\hline
8\({}^{\circ}\) & 16 & 17.44 & 6.34 & 2.42 & 1.19 & 2.03 & 1350 & 1 & 2.77 \\
15\({}^{\circ}\) & 16 & 17.15 & 34.88 & 16.16 & 6.03 & 2.67 & 6712 & 3.76 & 2.22 \\
15\({}^{\circ}\) & 28 & 31.88 & 33.78 & 14.35 & 6.088 & 2.35 & 12577 & 7.46 & \(-\) \\
\hline \hline
\end{tabular}
\end{table} Table 4: Boundary layer parameters at RMP 26 (\(x/C=0.98\))

These observations are confirmed in tables 3 and 4 for the present case. Small deviations from the aforementioned values can be ascribed to measurement uncertainty and the presence of the open jet, which predominantly contributes to the low-frequency wall-pressure spectra and which is absent in the aforementioned data [1, 2, 3, 4]. More importantly, when the scaling proposed by Pargal [2023] is used to scale the wall-pressure spectra in figure 15, a collapse in the low-frequency range is achieved. This collapse is remarkable because the wall-pressure spectra exhibit a difference of 20 dB, as shown in figure 6, corresponding to an order of magnitude difference in wall-pressure fluctuations. As the PIV velocity measurements were not time-resolved, additional single-wire measurements were performed. As mentioned, single HWA measurements were performed close to the trailing edge (RMP 26) of the airfoil. These measurements were done for the airfoil placed at a \(15^{\circ}\) angle of attack and a fixed inlet velocity of \(U_{\infty}=16\) m/s. Figure 16 shows the pre-multiplied spectrogram \(f\times E_{11}/U_{e}^{2}\). The plot shows that the pre-multiplied energy spectrum peaks at about 100 Hz, away from the wall (\(0.4-0.6\times\delta_{95}\)), approximately the location where the peak in r.m.s. of velocity fluctuations was reported in figure 14.
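For reference, the integral parameters listed in tables 3 and 4 follow from their standard definitions. A minimal sketch of their evaluation from a measured mean profile (assuming a profile \(u(y)\) that increases monotonically up to the boundary-layer edge; function and variable names are illustrative) is:

```python
import numpy as np

def bl_parameters(y, u, nu):
    """Integral boundary-layer parameters from a mean velocity profile u(y):
    delta_95, displacement thickness, momentum thickness, H and Re_theta."""
    u_e = u[-1]                               # edge velocity
    delta95 = np.interp(0.95 * u_e, u, y)     # assumes u monotonic in y
    mask = y <= delta95
    yb, ub = y[mask], u[mask] / u_e
    delta_star = np.trapz(1.0 - ub, yb)       # displacement thickness
    theta = np.trapz(ub * (1.0 - ub), yb)     # momentum thickness
    H = delta_star / theta                    # shape factor
    re_theta = u_e * theta / nu
    return delta95, delta_star, theta, H, re_theta
```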
Figure 15: Power spectral density of the wall-pressure fluctuations at RMP 26 at fixed inlet velocity \(U_{\infty}=16\) m/s. The PSD has been normalized by the maximum value of the square of \(\overline{-u_{1}u_{2}}\), the square of the free-stream density \(\rho\), the local edge velocity \(U_{e}\) and the boundary-layer thickness \(\delta_{95}\). Legends: Dotted grey line for CD airfoil at an incidence of \(8^{\circ}\) while solid black line represents CD airfoil at an incidence of \(15^{\circ}\).

In summary, figures 14 to 16 show that large-scale flow disturbances may be present and confirm the instantaneous snapshots in figure 9. These large structures are in turn responsible for the elevated levels of r.m.s. velocity fluctuations and the peak in the pre-multiplied spectrogram. As such, modal decomposition could be useful to understand the hierarchy and the organization of the velocity disturbances close to the trailing edge.

## IV Modal analysis

The Proper Orthogonal Decomposition (POD) (Holmes _et al._, 2012) was employed to uncover the modes present in the velocity disturbance field. One benefit of using POD is that, unlike linear stability analysis, it does not require the velocity disturbances to be small. In the present paper, the POD was carried out using the snapshot approach of the algorithm developed by Sirovich (1987). For more information, please refer to the monograph by Holmes _et al._ (2012). The modal energy distribution of the measured velocity field is shown in figure 17. The spatial POD modes are used to identify the spatial organization of the velocity disturbance field and their associated energy levels (E\({}_{\rm r}\)), and are plotted in figure 18. In the present manuscript, only the spatial modes associated with the vertical velocity disturbances (\(E_{22}\)) are used because they are the principal drivers of wall-pressure fluctuations and far-field acoustics (Jaiswal _et al._, 2020).

Figure 16: Premultiplied 1-D velocity energy spectra \(\overline{E_{11}}\)\(\left(f\times E_{11}/U_{e}^{2}\right)\) as a function of frequency, over the airfoil at RMP 26.

Figure 17 (b) clearly shows that the first 12 modes contribute to approximately 40% of the total energy, although the cumulative energy for the 16 m/s case appears to be slightly lower compared to the 28 m/s case. The relative contributions for the 16 m/s and 28 m/s cases are shown for the first 12 modes. As can be seen, the relative energies of modes 3-4, 5-6, and 11-12 appear to be similar and may form modal pairs. However, upon inspection, this was found not to be the case (see figure 18 for example). Yet, the spatial organization appears to be similar between the 16 m/s and 28 m/s cases. Moreover, in these figures, the dashed black lines that represent the time-averaged location where the wall-parallel velocity is equal to the free-stream velocity \(U_{\infty}\) show that the spatial modes are distributed across the boundary layer. In contrast, the modal decomposition performed by Lacagnina _et al._ (2019) had shown that the spatial modes are uniquely present outside the time-averaged extent of the shear layer. In fact, the spatial distribution of the velocity disturbance field looks similar to the instantaneous field in figure 9 (a); it could be due to the passage of coherent structures and may correspond to disturbances in the frequency range of \(80-300\) Hz.
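A minimal sketch of the snapshot POD used here (implemented through an SVD of the mean-subtracted snapshot matrix, which is equivalent to Sirovich's method; the array shapes are assumptions for illustration):

```python
import numpy as np

def snapshot_pod(u2):
    """Snapshot POD (Sirovich 1987) of the wall-normal velocity field.
    u2: array (n_snapshots, n_points) of velocity snapshots.
    Returns spatial modes, temporal coefficients and relative energies."""
    X = u2 - u2.mean(axis=0)                      # remove the temporal mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt                                    # spatial modes (rows)
    coeffs = U * S                                # temporal coefficients a_k(t)
    energy = S**2 / np.sum(S**2)                  # modal energies (figure 17a)
    return modes, coeffs, energy
```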
More importantly, the spatial extent or wavelength of this modal pattern (mode 3) closely corresponds to the peak in the pre-multiplied spectrogram (figure 16). As such, it may be responsible for the hump in the low-frequency wall-pressure (figure 6) and far-field acoustic spectra (figure 4). To verify this, the correlation between mode 3 and the band-passed pressure will be performed next.

Figure 17: POD based modal decomposition of the measured velocity field in the trailing-edge region (camera 3) for an airfoil placed at \(15^{\circ}\) angle of attack. (a) Relative energy of individual POD modes; (b) Cumulative sum of POD modes. Legends: Red cross for inlet velocity of \(U_{\infty}=16\) m/s while the blue circles represent inlet velocity of \(U_{\infty}=28\) m/s.

## V Correlation analysis

Having characterized the velocity and pressure fields, the manuscript will now attempt to delineate the correlation between these two quantities of interest. Correlations between flow quantities are quantified using Pearson's correlation coefficient. The Pearson correlation coefficient at two different locations \((x_{1},x_{2},x_{3})\) and \(({x_{1}}^{\prime},{x_{2}}^{\prime},{x_{3}}^{\prime})\) is denoted by:

\[R_{\zeta\chi}(x_{1},{x_{1}}^{\prime},x_{2},{x_{2}}^{\prime},x_{3},{x_{3}}^{\prime})=\frac{\overline{\zeta(x_{1},x_{2},x_{3})\chi({x_{1}}^{\prime},{x_{2}}^{\prime},{x_{3}}^{\prime})}}{\sqrt{\overline{\zeta(x_{1},x_{2},x_{3})^{2}}}\times\sqrt{\overline{\chi({x_{1}}^{\prime},{x_{2}}^{\prime},{x_{3}}^{\prime})^{2}}}} \tag{7}\]

where \(\zeta(x_{1},x_{2},x_{3})\) and \(\chi({x_{1}}^{\prime},{x_{2}}^{\prime},{x_{3}}^{\prime})\) are the fluctuating components of the variables of interest.

Figure 18: Modes of the vertical velocity disturbances, \(E_{22}\), measured at the trailing-edge of the airfoil. Coordinate system is aligned with the trailing edge. Left plots (a) and (c) correspond to mode 3 while right ones are for mode 4. Top figures (a) and (b) correspond to \(U_{\infty}=16\) m/s while bottom figures correspond to \(U_{\infty}=28\) m/s. Black contour lines show the corresponding mean free-stream inlet velocity \(U_{\infty}\).

The Pearson correlation method is used in pattern recognition to quantify the similarity between patterns or features in data. For example, in time series data, the correlation between the values of two time series at different time points can be used to quantify the similarity between the patterns of the time series. However, correlation alone does not establish causation, as correlation cannot yield causal asymmetry and hence cannot separate the cause from the effect (Bossomaier _et al._, 2016). As such, the overall goal in the present section is to recognize patterns in the velocity disturbance field that are _similar_ to the ones measured in the time series of pressure signals recorded at the wall or at far-field locations. This can help identify the velocity disturbance patterns associated with separation noise. The causality is inferred through Amiet's (1976) equation (1) and Poisson's equation (see Grasso _et al._, 2019, for instance), which relate the velocity disturbances to the wall-pressure fluctuations.

### Wall and far-field pressure correlation analysis

To identify patterns in the measured time series of wall-pressure and far-field acoustic pressure, the correlation \(R_{p^{\prime}_{w},p_{a}}\) has been calculated.
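A minimal sketch of this correlation pipeline (zero-phase band-pass filtering followed by a time-lagged version of the normalized coefficient in equation (7); the filter order and variable names are assumptions) could read:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_zero_phase(x, fs, f_lo, f_hi, order=4):
    """Zero-phase band-pass filtering (filtfilt), as used before correlating."""
    b, a = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def xcorr_coeff(zeta, chi, max_lag):
    """Normalized cross-correlation R(tau) between two fluctuating signals."""
    z = zeta - zeta.mean()
    c = chi - chi.mean()
    norm = np.sqrt(np.mean(z**2) * np.mean(c**2))
    lags = np.arange(-max_lag, max_lag + 1)
    # mean of z[n] * c[n + k] over the overlapping samples, for each lag k
    r = np.array([np.mean(z[max(0, -k):len(z) - max(0, k)] *
                          c[max(0, k):len(c) - max(0, -k)]) for k in lags])
    return lags, r / norm
```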
To segregate the separation noise, both signals have been band-pass filtered between \(80-300\) Hz, where the separation noise dominates, and between \(600-2100\) Hz, where contributions from separation noise can be ignored (see figure 16). The results are shown in figures 19 (a-b). A negative correlation between the wall-pressure fluctuations \(p^{\prime}_{w}\) and the far-field acoustic ones \(p^{\prime}_{a}\) is measured when these signals have been band-pass filtered between \(80-300\) Hz.

Figure 19: Cross-correlation between filtered wall-pressure and far-field pressure for \(15^{\circ}\) angle-of-attack and \(U_{\infty}\)=16 m/s. (a) Cross-correlation for pressure signals filtered between 100-300 [Hz]; (b) Cross-correlation for pressure signals filtered between 600-2100 [Hz].

This can be caused by the passage of eddies at these frequencies near the trailing edge and their diffraction in the form of acoustic pressure at a far-field location (see also figure 16). The phase opposition between the near field and the far field is due to the dipole nature of the source term. In contrast, for the band-passed frequencies between \(600-2100\) Hz, no meaningful correlation is obtained. This already suggests a significant low-frequency contribution of the surface noise sources caused by the largest turbulent coherent structures.

### Correlation between POD modes and pressure

The temporal signal associated with mode 3 has been correlated with the band-pass filtered wall-pressure and far-field pressure signals. The frequency bands have been chosen to be between \(80-300\) Hz and \(600-2100\) Hz, as in figure 19. Once again, the band-pass filtering has been achieved using zero-phase digital filtering, which preserves the phase. Figures 20 (a) and (b) show the correlation between the third mode (\(R_{E_{22},p_{w}}\)) and the wall-pressure fluctuations measured by RMP 26 (\(x/C=0.98\)). In these plots, \(T_{f}\) corresponds to the time of flight of an acoustic signal emitted at the trailing edge of the airfoil to reach the far-field location where the noise is measured. The third mode of the wall-normal velocity fluctuations (\(E_{22}\)) and the recorded wall-pressure signals \(p_{w}^{\prime}\) show a meaningful correlation only in the band-pass frequency range of \(80-300\) Hz (figure 20 (a)), while the correlation drops to background noise levels for the higher frequency band \(600-2100\) Hz (figure 20 (b)). Similarly, for the correlation between mode 3 and the far-field acoustic pressure \(p_{a}^{\prime}\), meaningful results are only obtained for the lower frequency band \(80-300\) Hz. The only difference is that it takes the near-field hydrodynamic event a finite time to reach the far-field location, where the acoustic measurements are achieved. As such, before the cross-correlation was performed, the time series of the acoustic signal was shifted by the time of flight \(T_{f}\). More importantly, a phase opposition is seen in figure 20 (c) between mode 3 and the far-field acoustic pressure. This is not surprising, as the third mode of the wall-normal velocity disturbances (\(E_{22}\)) and the recorded wall-pressure signals are in phase (see figure 20 (c)) while the acoustic pressure and the wall-pressure field are in phase opposition (figure 19). To conclude, the near-field source terms are amplified when the airfoil is placed at high angles of attack through the induced flow separation.
The separated shear layer can induce Kelvin-Helmholtz like roller structures, the imprint of which is registered by the surface pressure probes. This results in an increased noise content at a frequency that is associated with the wavelength of these roller structures. Having characterized the near-field source terms and their correlation with the far-field acoustics, the diffracted acoustic pressure field around the airfoil is then quantified. Finally, attempts are made to identify the equivalent source images responsible for the separation noise.

Figure 20: Cross-correlation between the third mode of wall-normal velocity fluctuations (E\({}_{22}\)), filtered wall-pressure fluctuations \(p^{\prime}_{w}\) and far-field acoustic pressure \(p^{\prime}_{a}\). Legends: (_a_) and (_b_) \(R_{E_{22},p^{\prime}_{w}}\); (_c_) and (_d_) \(R_{E_{22},p^{\prime}_{a}}\). Left figures (_a_) and (_c_) correspond to pressure fluctuations band-pass filtered between \(80-300\) [Hz] while right figures correspond to pressure fluctuations band-pass filtered between \(600-2100\) [Hz].

## VI Far-field acoustic pressure analysis

The far-field acoustic pressure has been measured around the airfoil mid-chord to compare the influence of the angles of attack on the acoustic directivity patterns. This has been done at several frequencies, and hence at several Helmholtz numbers \(kC\), where \(k\) is the acoustic wavenumber. The results are shown in figure 21. While there is an overall increase in the absolute levels of the measured sound pressure levels, the overall sound directivity pattern is similar between the \(8^{\circ}\) and \(15^{\circ}\) angle-of-attack cases, where the former is known to emit noise through an equivalent dipole at the trailing edge (Wu, Moreau, and Sandberg, 2020). As such, classical dipole noise at the airfoil trailing edge seems to be the driver of separation noise. In contrast, at higher Mach numbers (\(M_{\infty}=0.3-0.4\)) than the ones reported in the present study, Turner and Kim (2022) had reported a significant contribution from quadrupole noise sources. To further investigate the overall contribution of the quadrupole noise generated due to the separated shear layers, the cross-correlation between two far-field microphones located on either side of the airfoil mid-chord was performed. To isolate the influence of separation noise, the far-field noise signals were band-pass filtered between 80 and 1000 Hz. The comparison at 16 m/s between the two angles of attack, \(8^{\circ}\) and \(15^{\circ}\), is shown in figure 22 (a). The clear phase opposition decisively demonstrates that the dominant noise source is dipolar in nature. To further reinforce these findings, the OverAll Sound Pressure Level (OASPL) as a function of the free-stream velocity \(U_{\infty}\) is shown in figure 22 (b). Once again, to isolate the overall influence of separation noise, the sound pressure levels have been integrated between \(80-1000\) Hz, where the separation noise dominates. The results clearly show that the OASPL due to separation noise follows the classical compact dipole scaling \(U_{\infty}^{6}\), which was first proposed by Curle (1955). Having shown that the separation noise can be represented by an equivalent compact dipole source, we now attempt to check whether it can be quantified using the diffraction theory outlined above in equation (1) (Amiet, 1976).
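A minimal sketch of how equation (1) can be evaluated is given below; the radiation integral \(\mathcal{I}\) of Roger and Moreau (2005) is deliberately left as a user-supplied callable rather than implemented, and the speed of sound is an assumed constant:

```python
import numpy as np

def amiet_spp(omega, phi_pp, l_z, radiation_integral,
              C, L, X2, X3, S0, Uc, c0=343.0):
    """Far-field PSD from equation (1). `radiation_integral` stands for the
    analytical transfer function I of Roger and Moreau (2005); it is NOT
    implemented here and must be supplied by the user."""
    k = omega / c0                                     # acoustic wavenumber
    I = radiation_integral(omega / Uc, k * X3 / S0)
    return ((k * C * X2 / (4 * np.pi * S0**2))**2
            * (L / 2) * np.abs(I)**2 * phi_pp * l_z)
```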
The success of Amiet's model and its extension relies on the fact that the response of the airfoil to an incident gust can be predicted using the linearized thin-airfoil theory. Following Moreau and Roger (2009); Roger and Moreau (2004), the dimensionless radiation ratio \(\Lambda\) is plotted in figure 23. Red dashed lines correspond to the frequency range for which the estimation of the spanwise correlation length was performed. The high-frequency region is limited to 1 kHz because the spanwise coherence for the \(15^{\circ}\) angle-of-attack case drops drastically below the measurement uncertainty beyond this frequency range. The radiation ratio quantifies the diffraction efficacy for an airfoil trailing edge that is subjected to an unsteady pressure gust.

Figure 21: Sound Pressure Level directivity measured at 1.21 m from the trailing-edge. Microphone locations are shown with respect to the wind tunnel axis. Solid black lines for airfoil at 15\({}^{\circ}\). Grey broken lines for airfoil at 8\({}^{\circ}\). (a) 100 Hz (\(kC=0.24\)) (b) 300 Hz (\(kC=0.74\)) (c) 500 Hz (\(kC=1.23\)) (d) 1000 Hz (\(kC=2.46\)).

To recall, the radiation ratio is defined as the ratio of the far-field and near-field spectra normalised by the spanwise length scales, the far-field observer distance and the airfoil span length, in the following manner:

\[\Lambda=\frac{S_{pp}}{\Phi_{pp}\times l_{z}}\times\frac{\mathbf{x}^{2}}{L} \tag{8}\]

As argued by Moreau and Roger (2009); Roger and Moreau (2004), a good collapse between
various cases should be expected for the same airfoil. Figure 23 compares the present measurements with the theoretical prediction with Amiet's model (solid lines) and the previous measurements at ECL (diamond and square symbols) as reported in Moreau and Roger (2009). The two theoretical curves stress the effect of directivity in \(\Lambda\), and the actual microphone position in the present experiment, \(258^{\circ}\) (on the airfoil pressure side to provide laser access on the suction side), reproduces the experimental trend better compared to the ECL measurements at about \(270^{\circ}\) (or equivalently \(90^{\circ}\) on the airfoil suction side). The 5 dB spread in the experimental data is consistent with the data of figure 16 (a) in Moreau and Roger (2009), and can be attributed to both the saturation of the Electret microphones at low frequencies shown in figure 5 and a less accurate calibration method in 2005. Yet, in both data sets, while a good collapse between the two angles of attack \(15^{\circ}\) and \(8^{\circ}\) is achieved between 500 and 2000 Hz, the collapse is relatively poor at lower frequencies (\(80-500\) Hz range) for the high incidence where the separation noise dominates. On the other hand, the newer \(8^{\circ}\) case shows a good match with Amiet's prediction.

## VII Discussion

The low-frequency content of the airfoil self-noise increases as the angle of attack is increased from \(8^{\circ}\) to \(15^{\circ}\).
This increase in noise is also accompanied by an increase in the amplitude of the wall-pressure and velocity disturbances. The wall-pressure field shows an increase in amplitude, in spanwise extent, and in the velocity at which pressure gusts convect past the trailing edge at frequencies where the separation noise is dominant. The genesis of the increased disturbances can be linked to the late transition of the boundary layer. In particular, as the angle of attack is increased, it was found that the LSB covers at least \(40\%\) of the airfoil chord, consistently with previous LES results (Christophe, 2011; Christophe, Anthoine, and Moreau, 2009), which leads to a delayed flow transition and re-attachment, somewhere between 40 and \(65\%\) of the chord. As such, the magnitude of the flow disturbances represented by the dimensionless Reynolds stress components \(\overline{u_{1}u_{1}}/U_{\infty}^{2}\), \(\overline{u_{2}u_{2}}/U_{\infty}^{2}\), and \(-\overline{u_{1}u_{2}}/U_{\infty}^{2}\) increases substantially compared to the airfoil at \(8^{\circ}\) angle of attack, especially close to the airfoil trailing edge. While the time-averaged flow is found to be attached, large-scale flow distortions in the form of rollers, reminiscent of a KH-type instability, are present. These roller structures are similar to the ones that were previously reported on the CD airfoil numerically by Christophe and Moreau (2008) at the same incidence and experimentally by Jaiswal _et al._ (2022) at a lower angle of attack. As the wall-normal spatial extent of these structures can be substantially larger than the mean boundary-layer thickness, they have access to higher-momentum flow. This explains why an increase in low-frequency convection velocity was observed despite a strong adverse pressure gradient in the trailing-edge region. The velocity fluctuations, \(-\overline{u_{1}u_{2}}\), increase steadily before eventually saturating close to the trailing-edge region of the airfoil. The peak values of \(-\overline{u_{1}u_{2}}\) are shown to scale the wall-pressure spectra for the two angles of attack. As such, it clearly demonstrates that the increase in the magnitude of the wall-pressure statistics can be linked to an increase in \(-\overline{u_{1}u_{2}}\). The amplification of flow disturbances, such as \(\overline{-u_{1}u_{2}}\), is known to yield a KH-type instability and vortex pairing in the shear layer, producing the observed roller structures (Huang and Ho 1990; Watmuff 1999; Yarusevych, Sullivan, and Kawall 2006, 2009). In summary, these rollers are present due to the late amplification (\(x>0.4C\)) of the LSB instability and its subsequent roll-up, which ensures that large eddies reach the trailing edge of the airfoil. Jaiswal _et al._ (2022) showed that these rollers have a large coherence in the spanwise direction. Furthermore, the mode associated with the roller structures correlates with the wall-pressure fluctuations at frequencies that correspond to the maximum levels of spanwise coherence. In addition, the spanwise coherence of the wall pressure and the HWA spectrogram peak at the same frequency. In the absence of any alternative frequency-centred activity in the flow, it may be concluded that these rollers are responsible for the increase in the spanwise correlation length. Finally, Lacagnina _et al._ (2019) had shown that flapping of the shear layer yields an increase in low-frequency noise. While the flapping of the LSB may result in flapping of the shear layer, no evidence for its contribution to separation noise, independent of the rollers, is found.
This is because no modal structures associated exclusively with shear-layer flapping were identified. The noise mechanism due to shear-layer flapping is thus not universally present for an airfoil at near-stall conditions. In the present case, the increase in flow disturbances and the associated rolling-up of the shear layer are the only dominant flow mechanisms that contribute to separation noise. Therefore, the question naturally arises: is the increase in the magnitude of \(-\overline{u_{1}u_{2}}\) sufficient to nullify Amiet's diffraction theory, which depends on the thin-airfoil linearized theory? To answer this question, the radiation ratio is calculated for cases with variable flow incidence at the same Reynolds number based on chord. The results confirm that the diffraction efficacy of an airfoil subjected to higher angles of attack is substantially attenuated at frequencies associated with separation noise. This is because the overall increase in sound pressure level is comparatively small compared to the rise in the spanwise correlation \(l_{z}\). In particular, the energy conversion from near-field pressure to far-field pressure should be more effective as \(l_{z}\) increases; however, this is not achieved. Furthermore, the roller structures imply that the unsteady Kutta condition may not be valid, as its validity hinges on the flow leaving the airfoil trailing edge smoothly. As such, the diffraction efficacy for an airfoil trailing edge that is subjected to an unsteady pressure gust due to flow separation is substantially attenuated. Nevertheless, the separation noise can be fully quantified using a compact dipole. The far-field microphones located on either side of the airfoil confirm this dipolar behaviour, along with the \(U_{\infty}^{6}\) scaling of the OASPL measured in the far field. Furthermore, the far-field acoustic pressure directivity pattern is similar for both the \(8^{\circ}\) and \(15^{\circ}\) angles of attack, which reinforces the dipolar directivity pattern. These observations partly explain why Christophe and Moreau (2008) obtained a more favourable estimate of the acoustic noise while using the Ffowcs Williams and Hall (1970) analogy compared to diffraction theory (Amiet 1976) at frequencies where the separation noise dominates.

## VIII Conclusions

The present paper is a detailed aeroacoustic investigation of a CD airfoil at near-stall conditions. This is achieved by placing the CD airfoil at high angles of attack in an open-jet anechoic wind tunnel. Two sets of experiments are performed, at \(Re_{c}\simeq 140,000\) and \(Re_{c}\simeq 245,000\) based on the airfoil chord, for an airfoil placed at a \(15^{\circ}\) angle of attack. For the airfoil at \(Re_{c}\simeq 140,000\), synchronized PIV, RMP and far-field microphone measurements were performed. The present study is driven by two fundamental research questions. 1) What is the mechanism responsible for separation noise for an airfoil at near-stall conditions, and is it universal? 2) Is the noise due to flow separation generated by a dipole for an airfoil close to stall? If so, can it be quantified by Amiet's (1976) diffraction theory? The present study shows that when the CD airfoil is placed at a higher angle of attack compared to \(8^{\circ}\), such as \(15^{\circ}\) in the present study, a strong amplification of the flow disturbances, up to an order of magnitude, is seen in the trailing-edge region.
In fact, the noise due to flow separation can be linked to the increase in flow disturbances, such as \(\overline{-u_{1}u_{2}}\), which scale the wall-pressure fluctuations. This increased Reynolds stress triggers the roll-up of the separated shear layer. These rollers are linked to the flow transition triggered by the Kelvin-Helmholtz instability. They are also linked to an increase in the spanwise coherence of the wall-pressure fluctuations as they convect past the trailing edge. The modal decomposition obtained by POD shows that the modes associated with these roller structures correlate with the near- and far-field pressure. This correlation is observed only at frequencies where the separation noise dominates, i.e., the frequency at which \(\overline{E_{11}}\) peaks. As such, the rollers and the associated Kelvin-Helmholtz type flow instability play a central role in the increase in noise due to flow separation. Lastly, in the present study, no contributions coming exclusively from the flapping of the shear layer were observed. The present study conclusively shows that separation noise is dipolar in nature; therefore, the quadrupole contribution for low-speed airfoils at near-stall conditions can be neglected, at least for flows up to a Mach number of about 0.1. Yet the increase in flow disturbances measured close to the trailing edge of the airfoil implies that the assumption of small-amplitude disturbances is no longer valid, which is the central premise of the thin-airfoil linearized theory used to estimate the response of the airfoil to an incoming pressure gust. Furthermore, the passage of large roller structures past the trailing edge may invalidate the unsteady Kutta condition. Yet outside the frequency range at which flow separation operates, Amiet's (1976) theory should be able to predict the far-field noise even at high angles of attack, as previously shown by Christophe, Anthoine, and Moreau (2009).

###### Acknowledgements.

The authors would like to acknowledge the help of Sidharth Krishnan Kalyani and Yann Pasco during the PIV measurements. The authors are thankful for computational time on the supercomputer Graham, managed by the Digital Research Alliance of Canada.

## Funding

This work was supported by the Canadian NSERC Discovery grant (no. RGPIN-2014-04111).

## Declaration of Interests

The authors report no conflict of interest.

## Data Availability Statement

Raw data of the PIV were processed on the Digital Research Alliance of Canada's HPC center. Derived data supporting the findings of this study are available from the first author upon reasonable request.

## Appendix

In this appendix, the influence of the total length of the RMP signals on the wall-pressure statistics is studied, particularly at low frequencies, where this parameter is known to define the lowest achievable frequency in power spectral densities. Figure 24 first shows that this total length is an important metric when it is used to estimate the convection velocity. This in part explains why previous studies (Kalyani, Moreau, and Ragni, 2022) have reported slightly different values of \(U_{c}\) and \(l_{z}\). However, the uncertainty in the estimation of \(U_{c}/U_{\infty}\) is less than 10%, which yields a marginal uncertainty in the radiation ratio, \(\Lambda\), when plotted on a logarithmic scale. In turn, this has no significant impact on the efficacy of diffraction theory for separation noise.
In order to understand the impact of the signal length on the spanwise correlation length (\(l_{z}\)), the spanwise coherence (\(\gamma^{2}\)) between the two spanwise probes 25 and 27 is plotted in figure 25. The results are also compared with the data reported by Kalyani, Moreau, and Ragni (2022). Figure 25 shows that the low-frequency oscillations, which represent the uncertainty in the estimate of \(\gamma^{2}\), are larger for cases where the signal length is truncated below 30 seconds. Consequently, Kalyani, Moreau, and Ragni (2022), who estimated the spanwise coherence (\(\gamma^{2}\)) with a signal length of 15 seconds, have a higher uncertainty in their estimate of \(\gamma^{2}\). Furthermore, Kalyani, Moreau, and Ragni (2022) took one-tenth of the number of points to estimate the PSD compared to the present case. As such, the low-frequency part of \(\gamma^{2}\) shows an erroneous double peak in their results (see Figure 4(a) of Kalyani, Moreau, and Ragni (2022)), which is absent in the present case as well as in the one reported earlier by Moreau and Roger (2005).

Figure 24: Convection velocity for the \(15^{\circ}\) angle of attack and \(U_{\infty}=16\) m/s case. Red circles correspond to the full signal length of 1 minute, while blue and black crosses denote signal lengths of 20 and 30 seconds, respectively.

Figure 25: \(\gamma^{2}\) between RMP 25 and RMP 27 for the \(15^{\circ}\) angle of attack and \(U_{\infty}=16\) m/s case. Legend for the signal lengths as in figure 24. The black dotted line corresponds to the data of Kalyani, Moreau, and Ragni (2022).
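As a companion to this appendix, the following minimal Python sketch illustrates how the record length enters a Welch-type estimate of the spanwise coherence. It is not the study's actual processing chain: the sampling rate, the synthetic probe signals standing in for RMPs 25 and 27, and the segment length `nperseg` are all assumptions chosen only to make the truncation effect visible.

```python
# Minimal sketch (assumed parameters, synthetic data) of how the record length
# affects the estimated spanwise coherence gamma^2 between two pressure probes.
import numpy as np
from scipy import signal

fs = 20_000            # sampling rate [Hz] (assumed)
duration_full = 60.0   # full record length [s], as in the present study

rng = np.random.default_rng(0)
t = np.arange(int(fs * duration_full)) / fs
# Two synthetic signals sharing a narrow-band component near 200 Hz.
common = np.sin(2 * np.pi * 200 * t) * rng.normal(1.0, 0.1, t.size)
p25 = common + rng.normal(size=t.size)
p27 = 0.8 * common + rng.normal(size=t.size)

for T_rec in (15.0, 30.0, 60.0):   # truncated record lengths [s]
    n = int(fs * T_rec)
    # Welch-based magnitude-squared coherence; shorter records average fewer
    # segments, hence larger low-frequency scatter of gamma^2.
    f, gamma2 = signal.coherence(p25[:n], p27[:n], fs=fs, nperseg=4096)
    print(f"T = {T_rec:5.1f} s: gamma^2 at 200 Hz ~ "
          f"{gamma2[np.argmin(np.abs(f - 200))]:.2f}")
```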
2305.14112
Assessing non-Oberbeck-Boussinesq effects of convection in cryogenic helium
The present study investigates the non-Oberbeck-Boussinesq (NOB) effects which arise due to the temperature dependence of material properties in cryogenic helium experiments of turbulent Rayleigh-B\'enard convection. They are manifest as a difference of the measured mean temperature at the center of the closed cell, $T_c$, from the arithmetic mean temperature obtained from the prescribed fixed and uniform temperatures at the top and bottom copper plates of the apparatus, $T_m = (T_{bot} + T_{top})/2$. Therefore, the material properties such as specific heat at constant pressure, dynamic viscosity, thermal conductivity, the isobaric expansivity, and the mass density are expanded into power series with respect to temperature up to the quadratic order with coefficients obtained from the software package HEPAK. A subsequent nonlinear regression that uses deep convolutional networks delivers a dependence of the strength of non-Oberbeck-Boussinesq effects in the pressure-temperature parameter plane. The strength of the NOB effects is evaluated via the deviation of the mean temperature profile $\xi_{NOB} = T_m - T_c$ from the top/bottom-symmetric Oberbeck-Boussinesq case $\xi_{NOB} = 0$. Training data for the regression task are obtained from 236 individual long-term laboratory measurements at different Rayleigh numbers which span 8 orders of magnitude.
Michal Macek, Georgy Zinchenko, Vera Musilova, Pavel Urban, Joerg Schumacher
2023-05-23T14:37:04Z
http://arxiv.org/abs/2305.14112v1
# Assessing non-Oberbeck-Boussinesq effects of convection in cryogenic helium

###### Abstract
The present study investigates the non-Oberbeck-Boussinesq (NOB) effects which arise due to the temperature dependence of material properties in cryogenic helium experiments of turbulent Rayleigh-Benard convection. They are manifest as a difference of the measured mean temperature at the center of the closed cell, \(T_{c}\), from the arithmetic mean temperature obtained from the prescribed fixed and uniform temperatures at the top and bottom copper plates of the apparatus, \(T_{m}=(T_{\rm bot}+T_{\rm top})/2\). Therefore, the material properties such as specific heat at constant pressure, dynamic viscosity, thermal conductivity, the isobaric expansivity, and the mass density are expanded into power series with respect to temperature up to the quadratic order with coefficients obtained from the software package HEPAK. A subsequent nonlinear regression that uses deep convolutional networks delivers a dependence of the strength of non-Oberbeck-Boussinesq effects in the pressure-temperature parameter plane. The strength of the NOB effects is evaluated via the deviation of the mean temperature profile \(\xi_{\rm NOB}\equiv T_{m}-T_{c}\) from the top/bottom-symmetric Oberbeck-Boussinesq case \(\xi_{\rm NOB}=0\). Training data for the regression task are obtained from 236 individual long-term laboratory measurements at different Rayleigh numbers which span 8 orders of magnitude.

## I Introduction

Controlled laboratory experiments of turbulent Rayleigh-Benard convection (RBC) are one pillar of turbulence research to obtain a deeper understanding of the physical transfer processes and their coupling to statistical properties and structures, both in the bulk and the boundary layers of buoyancy-driven flows [1; 2; 3; 4]. The highest Rayleigh numbers \(Ra\) for fluid flows at Prandtl numbers \(Pr\simeq 1\) are obtained in two gases, either compressed sulphur hexafluoride, \(\mathrm{SF}_{6}\), [5; 6] or cryogenic helium, \({}^{4}\)He, the latter of which is cooled down to a few Kelvin [7; 8; 9; 10; 11; 12; 14; 16]. While the Rayleigh number \(Ra\) quantifies the thermal driving of convective turbulence, the Prandtl number \(Pr\) is the ratio of molecular momentum to temperature diffusion. Together with a third parameter, the aspect ratio \(\Gamma=D/H\) of the exclusively used cylindrical closed vessels, with cell diameter \(D\) and cell height \(H\), these three dimensionless numbers determine the control parameters of the experiments and are subsequently used to quantify the response of the apparatus in the form of power laws of the turbulent momentum and heat transfer. The latter are quantified by the dimensionless Reynolds and Nusselt numbers, \(Re\) and \(Nu\) [2; 3]. The present study is focused on the experiments in cryogenic helium \({}^{4}\)He.

The Rayleigh-Benard convection model incorporates the Oberbeck-Boussinesq (OB) approximation [3; 4] which considers the working fluid as incompressible. In addition, the mass density field of the fluid, \(\rho(\mathbf{x},t)\), is taken as a linear function of the temperature field \(T(\mathbf{x},t)\) and given by \[\rho(\mathbf{x},t)=\rho_{\rm ref}[1-\alpha(T(\mathbf{x},t)-T_{\rm ref})]\quad\text{with}\quad\alpha=-\frac{1}{\rho}\frac{\partial\rho}{\partial T}\Big{|}_{p}\,, \tag{1}\] with the isobaric thermal expansion coefficient or expansivity \(\alpha\).
This dependence is incorporated in the volume forces and thus couples the temperature field to the momentum balance. Quantities \(\rho_{\rm ref}\) and \(T_{\rm ref}\) are reference magnitudes of density and temperature, respectively. One important consequence of the OB approximation is that statistical properties, such as mean profiles, in the lower and upper halves of the convection cell, including the corresponding viscous and thermal boundary layers, are symmetric with respect to the midplane at \(z=H/2\). Consequently, \[T_{c}:=\Big{\langle}T\left(z=\frac{H}{2}\right)\Big{\rangle}=\frac{T_{\rm bot}+T_{\rm top}}{2}=:T_{m}\,, \tag{2}\] in laboratory experiments with the prescribed fixed and uniform temperatures at the top and bottom, \(T_{\rm top}\) and \(T_{\rm bot}\).

Cryogenic RBC experiments at the highest Rayleigh numbers have to be operated close to the critical point (CP) of He [8; 11; 12; 14]. At this point, where the saturated vapor curve (SVC) representing the phase boundary between the gas and liquid state ends, the material properties of the working fluid, such as the specific heat at constant pressure, \(C_{p}\), the dynamic viscosity \(\mu\), or the thermal conductivity \(\lambda\), vary strongly. This is considered one possible source of the deviations from the Boussinesq limit, which are experimentally probed by a violation of (2). In other words, non-Boussinesq (NOB) effects are detected as \(T_{c}\neq T_{m}\). It is exactly this deviation which we want to explore in detail in the present work for cryogenic \({}^{4}\)He. Therefore, we define the non-Oberbeck-Boussinesq parameter \[\xi_{\rm NOB}(p_{m},T_{m},\chi_{k}):=T_{m}-T_{c}\,, \tag{3}\] where \(\chi_{k}\) is for now a short-hand notation for material properties which will be specified further below.

Figure 1 (panel a) summarizes the operating points within the \(p\)-\(T\) diagram for cryogenic experiments conducted in the apparatus of the group in Brno (Czech Republic) [17]. We indicate the mean temperature as well as the range of the applied outer temperature difference \(\Delta T=T_{\rm bot}-T_{\rm top}>0\) at the mean pressure \(p=p_{m}\). It is seen that a number of measurements are close to the phase boundary (solid black curve) and a few even in the vicinity of the critical point (magenta star). In panel (b), the measured values of \(\xi_{\rm NOB}\) are plotted in the phase diagram of the two control parameters, \(Ra\) and \(Pr\). It is important to note that \(Ra\) and \(Pr\) are control parameters characterizing OB convection; thus, ideally \(\xi_{\rm NOB}=0\) independent of \(Ra\) and \(Pr\). The experimentally observed values \(\xi_{\rm NOB}\neq 0\) unambiguously indicate the presence of NOB effects and must be captured by introducing additional control parameters. We define and discuss a suitable set below.

In this work, we will systematically investigate the non-Boussinesq effects in cryogenic helium experiments at high Rayleigh numbers, spanning a range of \(10^{7}\lesssim Ra\lesssim 10^{15}\), performed in Brno. We quantify the deviation of the center temperature \(T_{c}\) from \(T_{m}\) in the \(p\)-\(T\) diagram by means of nonlinear regression applying deep neural networks, i.e., we determine \(\xi_{\rm NOB}(p_{m},T_{m})\). This regression proceeds in three different levels of refinement, partly based on a perturbative expansion of the temperature dependence of essential material parameters and state variables \(p\), \(T\) as described in the next section.
Furthermore, we aim to identify which of the variations of the material parameters are the most important ones for the magnitude of the NOB parameter. The complex material dependencies, including discontinuities at the phase boundary and the singularities at the critical point, are tabulated in the software package HEPAK written by V. Arp et al. [18] and detailed by Horizon Technologies Inc [19]. They allow us to quantify the prefactors of the polynomial expansions of the material parameters and state variables at different orders. The starting point is the set of fully compressible equations of motion which was outlined by Gray and Giorgini [20]. Material parameter dependencies have also been systematically discussed for high-Rayleigh-number experiments in compressed SF\({}_{6}\) in refs. [21; 22; 23].

Our paper is organized as follows. Section II introduces the fully compressible equations of motion and discusses the resulting polynomial expansions for the material properties. In Sec. III, we explore the basic state and transport properties of \({}^{4}\)He as a function of pressure \(p\) and temperature \(T\) in connection with the \((p_{m},T_{m})\) operating points of all realized RBC experiments within the respective \(\Delta T\) region. Here, we also discuss the second mechanism for NOB convection, namely the compressibility effects, which are shown to be negligible in the present setup. In Sec. IV, we briefly overview the essential features of the experimental set-up for cryogenic RBC. Section V discusses the nonlinear regression results. In the last section, we give the conclusions and outlook. Technical details of the deep neural networks and an error analysis of the machine learning procedures are discussed in the appendices A and B.

Figure 1: Summary of the operating points \((p_{m},T_{m})\) of the cryogenic Rayleigh-Bénard experiments. (a) The mean temperatures \(T_{m}\) and the temperature range \(\Delta T=T_{\rm bot}-T_{\rm top}\) at a mean pressure \(p_{m}\) are provided in the \(p\)–\(T\) parameter plane. The ‘errorbars’ stemming from the \((p_{m}\),\(T_{m})\) points denote the ranges of \(\Delta T\) between the cold (blue) and hot (red) plate temperatures \(T_{t}\), \(T_{b}\), respectively. The solid line marks the saturated vapor curve (SVC) and the star symbol in the center of the figure indicates the critical point (CP) with \(T_{\rm cri}=5.195\) K, \(p_{\rm cri}=0.228\) MPa. The different colors of experimental (\(p_{m}\),\(T_{m}\)) points correspond to new data (green), and data published in Refs. [13] (black) and [15] (brown). (b) Color-coded non-Boussinesq parameter \(\xi_{\rm NOB}\) at the \(Ra,Pr\) control parameters for the experiments shown in panel (a), given in units of Kelvin.

## II Perturbative expansion of the equations of motion

Convective motions in a fluid layer are described by the set of three balance equations involving the continuity equation for the mass balance, the Navier-Stokes equations for the momentum balance, and the energy balance equation.
Following here the textbook by Batchelor [24] and the seminal work by Gray and Giorgini [20], they are given by \[\frac{D\rho}{Dt} =-\rho\frac{\partial u_{j}}{\partial x_{j}}, \tag{4}\] \[\rho\frac{Du_{i}}{Dt} =-\frac{\partial p}{\partial x_{i}}-\rho g_{i}\alpha T+\frac{\partial}{\partial x_{j}}(\mu\Gamma_{ij}), \tag{5}\] \[\rho C_{p}\frac{DT}{Dt}-\alpha T\frac{Dp}{Dt} =\frac{\partial}{\partial x_{j}}\left(\lambda\frac{\partial T}{\partial x_{j}}\right)+\mu\Phi, \tag{6}\] where \(D\bullet/Dt=\partial\bullet/\partial t+\mathbf{u}\cdot\nabla\bullet\) is the material derivative and \[\Gamma_{ij}=\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}-\frac{2}{3}\frac{\partial u_{k}}{\partial x_{k}}\delta_{ij} \tag{7}\] is the rate-of-strain tensor in the compressible case. Furthermore, \[\Phi=\frac{1}{2}\Gamma_{ij}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right), \tag{8}\] is the dissipation function and \(g_{i}=(0,0,-g)\) the vector that contains the acceleration due to gravity \(g\) in the vertical direction. The bulk viscosity is set to zero.

A simplified form of the convection equations (4)-(6) is obtained in the form of the OB approximation, once the dynamic viscosity \(\mu\), the thermal conductivity \(\lambda\), the isobaric thermal expansivity \(\alpha\), and the specific heat \(C_{p}\) at constant pressure are taken as constants. Furthermore, the mass density is taken to be constant, \(\rho=\rho_{0}\), such that the flow is basically incompressible, and heating due to pressure variations remains subdominant, which results in \[\frac{\partial u_{j}}{\partial x_{j}} =0, \tag{9}\] \[\rho_{0}\frac{Du_{i}}{Dt} =-\frac{\partial p}{\partial x_{i}}-\rho_{0}g_{i}\alpha T+\mu\frac{\partial^{2}u_{i}}{\partial x_{j}^{2}}, \tag{10}\] \[\rho_{0}C_{p}\frac{DT}{Dt} =\lambda\frac{\partial^{2}T}{\partial x_{j}^{2}}. \tag{11}\] This implies that the speed of sound is much larger than the typical convection velocity, the free-fall velocity \(U_{\rm f}=\sqrt{g\alpha\Delta TH}\) for small \(\alpha\Delta T\). In all expressions above we have used the Einstein summation convention.

Our objective is to evaluate the importance of individual terms of the compressible equations (4)-(6) beyond the OB limit systematically and to analyze their effects on the mean temperature profiles in the thermal convection experiments in cryogenic helium. We will therefore assume that all material parameters are functions of the temperature only and that their pressure dependence is much less significant for the present experimental conditions. They can then be approximated by Taylor expansions with respect to \(T\), which we will follow up to the quadratic expansion term. This results in the following expressions, \[\rho =\rho_{0}\left(1-\alpha_{0}\delta T+\beta_{0}\delta T^{2}\right), \tag{12}\] \[\mu =\mu_{0}\left(1+m_{10}\delta T+m_{20}\delta T^{2}\right), \tag{13}\] \[C_{p} =C_{p0}\left(1+c_{10}\delta T+c_{20}\delta T^{2}\right), \tag{14}\] \[\lambda =\lambda_{0}\left(1+l_{10}\delta T+l_{20}\delta T^{2}\right), \tag{15}\] \[\alpha =\alpha_{0}\left(1+a_{10}\delta T+a_{20}\delta T^{2}\right), \tag{16}\] where \(\delta T=T-T_{m}\). The first index of the coefficients denotes the order of the term in the Taylor series expansion, the index \(0\) refers to the mean temperature \(T_{m}\), and the minus sign in (12) follows a convention usual in fluid dynamics.
All constants in the expansions (12)-(16) are determined from quadratic interpolations of the respective material properties at the three temperature values \(T_{\rm top}\), \(T_{m}\), and \(T_{\rm bot}\) at a given pressure value \(p\) using HEPAK [18]; a small numerical sketch of this procedure is given below. The substitution of Eqs. (12)-(16) into the compressible fluid equations (4)-(6) and a number of subsequent transformations lead to a set of dimensionless convection equations which include the NOB effects up to second order with respect to temperature. They are given by \[-\varepsilon_{1}\frac{D\theta}{Dt}+2\varepsilon_{2}\theta\frac{D\theta}{Dt} =-\left(1-\varepsilon_{1}\theta+\varepsilon_{2}\theta^{2}\right)\frac{\partial u_{j}}{\partial x_{j}}, \tag{17}\] \[\left(1-\varepsilon_{1}\theta+\varepsilon_{2}\theta^{2}\right)\frac{Du_{i}}{Dt} =-\frac{\partial\left(p-p_{s}\right)}{\partial x_{i}}+\left[\left(\theta-\theta_{s}\right)-\frac{\varepsilon_{2}}{\varepsilon_{1}}\left(\theta^{2}-\theta_{s}^{2}\right)\right]k_{i}+\frac{1}{Re_{\rm f}}\left(1+\varepsilon_{3}\theta+\varepsilon_{4}\theta^{2}\right)\frac{\partial\Gamma_{ij}}{\partial x_{j}}+\frac{1}{Re_{\rm f}}\left(\varepsilon_{3}+2\varepsilon_{4}\theta\right)\Gamma_{ij}\frac{\partial\theta}{\partial x_{j}}, \tag{18}\] \[\left(1-\varepsilon_{1}\theta+\varepsilon_{2}\theta^{2}\right)\left(1+\varepsilon_{5}\theta+\varepsilon_{6}\theta^{2}\right)\frac{D\theta}{Dt} =\frac{\tilde{D}}{Re_{\rm f}}\left(1+\varepsilon_{3}\theta+\varepsilon_{4}\theta^{2}\right)\Phi+\frac{1}{Re_{\rm f}Pr}\left(1+\varepsilon_{7}\theta+\varepsilon_{8}\theta^{2}\right)\frac{\partial^{2}\theta}{\partial x_{j}^{2}}+\frac{1}{Re_{\rm f}Pr}\left(\varepsilon_{7}+2\varepsilon_{8}\theta\right)\left(\frac{\partial\theta}{\partial x_{j}}\right)^{2}+\tilde{D}\left(1+\varepsilon_{9}\theta+\varepsilon_{10}\theta^{2}\right)\left[\varepsilon_{1}\frac{D\left(p-p_{s}\right)}{Dt}-\left(1-\varepsilon_{1}\theta_{s}+\varepsilon_{2}\theta_{s}^{2}\right)u_{3}\right]\left(\theta+\tilde{T}_{m}\right). \tag{19}\] Here, \[\theta=\frac{\delta T}{\Delta T}=\frac{T-T_{m}}{T_{\rm bot}-T_{\rm top}}\quad\mbox{and}\quad\tilde{T}_{m}=\frac{T_{m}}{\Delta T}=\frac{T_{m}}{T_{\rm bot}-T_{\rm top}}\,, \tag{20}\] and \(\theta_{s}\) and \(p_{s}\) are the temperature and pressure equilibrium (static heat conduction) profiles, respectively. Furthermore, we define in the energy balance a dimensionless parameter \[\tilde{D}=\frac{g\alpha_{0}H}{C_{p0}}\,, \tag{21}\] which is denoted as the dissipation number [25]. This number relates the dry adiabatic lapse rate \(g/C_{p0}\) to the characteristic temperature scale per height, \(1/(\alpha_{0}H)\). The velocity was made non-dimensional by \(U_{\rm f}\). The Prandtl number reads \(Pr=\nu_{0}/\kappa_{0}\) with the temperature diffusivity \(\kappa_{0}=\lambda_{0}/(\rho_{0}C_{p0})\), the free-fall Reynolds number is \(Re_{\rm f}=U_{\rm f}H/\nu_{0}\), and the unit vector in the momentum balance points into the positive \(z\)-direction, \(k_{i}=(0,0,1)\). The free-fall Reynolds number follows as \(Re_{\rm f}=\sqrt{Ra/Pr}\), where \(Ra\) is the Rayleigh number \[Ra=\frac{\alpha_{0}}{\nu_{0}\kappa_{0}}g\Delta TH^{3}\,. \tag{22}\] Finally, in table 1, we list all parameters \(\varepsilon_{i}\) which are used in Eqs. (17)-(19) for the first and second order expansions. Setting all these expansion parameters \(\varepsilon_{i}\) and \(\tilde{D}\) to zero recovers the OB equations (9)-(11) [2; 3].
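As an illustration of the construction of table 1, the following minimal Python sketch fits a quadratic through a property at the three temperatures and assembles the corresponding \(\varepsilon_{i}\) together with the control parameters \(Ra\), \(Pr\), \(Re_{\rm f}\) and \(\tilde{D}\). All numerical property values are placeholders standing in for HEPAK output; it is a sketch of the procedure, not the authors' code.

```python
# Minimal sketch: coefficients of Eqs. (12)-(16) via quadratic interpolation
# through (T_top, T_m, T_bot) and the dimensionless parameters of table 1.
import numpy as np

def taylor_coeffs(T_top, T_m, T_bot, chi_top, chi_m, chi_bot):
    """Fit chi(T) = chi0*(1 + c1*dT + c2*dT**2), dT = T - T_m.
    Returns (chi0, c1, c2); for the density, Eq. (12), c1 = -alpha_0 and
    c2 = +beta_0 by the sign convention used in the text."""
    dT = np.array([T_top, T_m, T_bot]) - T_m
    a2, a1, a0 = np.polyfit(dT, [chi_top, chi_m, chi_bot], deg=2)
    return a0, a1 / a0, a2 / a0

T_top, T_m, T_bot = 4.9, 5.0, 5.1     # plate and mean temperatures [K] (assumed)
dT = T_bot - T_top

# Example: dynamic viscosity (placeholder values) -> eps_3, eps_4 of table 1.
mu0, m10, m20 = taylor_coeffs(T_top, T_m, T_bot, 1.28e-6, 1.30e-6, 1.33e-6)
eps3, eps4 = m10 * dT, m20 * dT**2

# Control parameters from local properties at (p_m, T_m) (placeholder values).
g, H = 9.81, 0.3
alpha0, nu0, kappa0, Cp0 = 0.5, 2e-8, 1e-8, 5e3
Ra = alpha0 * g * dT * H**3 / (nu0 * kappa0)   # Eq. (22)
Pr = nu0 / kappa0
Re_f = np.sqrt(Ra / Pr)                        # free-fall Reynolds number
D_tilde = g * alpha0 * H / Cp0                 # dissipation number, Eq. (21)
print(eps3, eps4, f"Ra={Ra:.2e}", f"Pr={Pr:.1f}", f"D~={D_tilde:.2e}")
```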
Expansion parameters with an odd-numbered index belong to the linear expansion, while those with an even-numbered index belong to the quadratic order. Notice that \(\tilde{D}\), similar to the OB control parameters \(Ra\) and \(Pr\), depends only on local values of the material properties at the reference temperature \(T_{m}\), and, like the Prandtl number \(Pr\), is independent of \(\Delta T\). In this sense it differs from the \(\varepsilon_{i}\), which are "non-local" and depend on the temperature derivatives at \(T_{m}\) as well as on \(\Delta T\). The regression algorithms in Sec. V will proceed in incremental steps, i.e., they consider only linear expansions at first and incorporate the second order subsequently.

## III State and transport properties of cryogenic \({}^{4}\)He

Figure 2 shows the accurate values of the mass density \(\rho\), the specific heat at constant pressure \(C_{p}\), the dynamic viscosity \(\mu\), and the thermal conductivity \(\lambda\) with respect to the \(p\)-\(T\) plane in a region of the gaseous helium phase, including the regions near the vapor-liquid saturation curve and the critical point. These surfaces are obtained from the HEPAK code [18], which is based on high-order interpolation of all available and reliable measurements. The plots give us first guidance in the selection of the appropriate functional form of the state and transport properties in the equations beyond the OB approximation. All displayed quantities, \(\rho\), \(C_{p}\), \(\lambda\), and \(\mu\), develop a discontinuity at the phase boundary in the pressure-temperature plane which corresponds to a first-order gas-liquid phase transition. Furthermore, \(C_{p}\) and \(\lambda\) develop a divergence at the critical point (CP), which is given by \(p_{\rm CP}=227\) kPa and \(T_{\rm CP}=5.2\) K. The precursors of this divergence are visible here. Panel (a) displays in fact the equation of state \(\rho(T,p)\). In the vicinity of this point, most of the RBC experiments in cryogenic helium have been performed, as indicated by the black dots in all panels of the figure. They indicate the mean pressure and temperature (and a view from the top would reproduce the points of Fig. 1). Notice that the density \(\rho\) and the specific heat \(C_{p}\) vary by several orders of magnitude over the domain displayed, while the dynamic viscosity \(\mu\) and the thermal conductivity \(\lambda\) vary by a factor smaller than 4 only.

Finally, in order to assess a possible contribution of compressibility to the NOB effects, we evaluate the Mach number \(M=U_{\rm f}/c=\sqrt{g\alpha\Delta TH}/c\), with the isobaric thermal expansivity \(\alpha\) and the speed of sound \(c\) obtained from HEPAK (not shown). The results show that for all experiments considered, the Mach numbers are in the range \(M\lesssim 10^{-2}\) and guarantee that a possible breaking of OB conditions due to compressibility can be neglected. Thus the NOB effects in RBC experiments with cryogenic helium stem solely from the temperature dependencies of the fluid properties.

## IV Experiments in cryogenic \({}^{4}\)He

Cryogenic helium \({}^{4}\)He has been used to reach extreme turbulence intensity in "tabletop" RBC experiments, on the one hand thanks to the peculiar material properties near the CP, which allow one to reach high Rayleigh numbers \(Ra\), and on the other hand due to technical advantages, as heat leaks and many other parasitic effects are naturally strongly suppressed under cryogenic conditions.
The first advantage goes side-by-side with the caveats of an inevitably varying Prandtl number \(Pr=\nu/\kappa\) [8; 11] as well as NOB effects stemming from (21) and the dependencies (12)-(16). Here we re-analyze RBC data obtained at the Brno cryogenic turbulence facility [17]. The Brno experiment comprises a cryostat with a cryogenic helium experimental cell of height \(H=0.3\) m and diameter \(d=0.3\) m (aspect ratio \(\Gamma=d/H=1\)), built with particular effort to minimize the influence of the cell structure and materials on the observed convection. The cell has been designed to withstand pressures of 3.5 bars to cover a range of Rayleigh numbers \(10^{7}\leq Ra\leq 10^{15}\). Here, we list the main features of the experimental cell only, including all crucial recent upgrades.

In an ideal RBC experiment, the top and bottom plates should maintain non-fluctuating constant temperatures \(T_{\rm top}\) and \(T_{\rm bot}\), and the cell sidewalls should be adiabatic. The top and bottom plates of our cell are made of 28 mm thick annealed OFHC copper of very high thermal conductivity of at least \(\lambda_{\rm Cu}=2\) kW m\({}^{-1}\) K\({}^{-1}\) at 5 K. Parasitic heat fluxes from the sidewalls into the working fluid are minimized by using very thin stainless steel sidewalls with thickness \(\delta=0.5\) mm and a thermal conductivity \(\lambda_{w}\). A special design of the cell corners is used, see Fig. 4 of [17]. One way to estimate the influence of the sidewall on the heat transport is via the wall parameter \(W=4\delta\lambda_{w}/(\lambda_{\rm He}D)\); for our cell \(0.22>W>0.15\), depending on the actual value of the thermal conductivity for each data point. By correction, we mean a subtraction of the heat conducted by the sidewalls from the heat that passes through the working fluid by convection [26]. We also took care to employ good thermal shielding and to minimize other external parasitic heat flows into the cell which could substantially influence the convection dynamics.

\begin{table} \begin{tabular}{l l l} \hline \hline Quantity & First order expansion & Second order expansion \\ \hline Mass density \(\rho\) & \(\varepsilon_{1}=\alpha_{0}\Delta T\) & \(\varepsilon_{2}=\beta_{0}\Delta T^{2}\) \\ Dynamic viscosity \(\mu\) & \(\varepsilon_{3}=m_{10}\Delta T\) & \(\varepsilon_{4}=m_{20}\Delta T^{2}\) \\ Specific heat \(C_{p}\) & \(\varepsilon_{5}=c_{10}\Delta T\) & \(\varepsilon_{6}=c_{20}\Delta T^{2}\) \\ Thermal conductivity \(\lambda\) & \(\varepsilon_{7}=l_{10}\Delta T\) & \(\varepsilon_{8}=l_{20}\Delta T^{2}\) \\ Isobaric expansion coefficient \(\alpha\) & \(\varepsilon_{9}=a_{10}\Delta T\) & \(\varepsilon_{10}=a_{20}\Delta T^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 1: List of all Taylor expansion parameters \(\varepsilon_{i}\) for \(i=1,...,10\).

A temperature correction which needs to be addressed in cryogenic experiments is that due to the adiabatic gradient, \(g/C_{p}\). It is given by \(\Delta T_{\rm ad}=g\alpha HT_{m}/C_{p}=\tilde{D}T_{m}\), see also Eq. (21), which has to be subtracted from the measured temperature difference \(\Delta T\) before evaluating \(Ra\) and comparing to the results of DNS based on Eqs. (4)-(6); otherwise the experimental data points would be very much off the expected \(Nu(Ra)\) dependencies. In typical large cryogenic helium RBC experiments, including the experiments in Brno, \(\Delta T_{\rm ad}\) is of order 1 mK. In the largest RBC cell at Oregon, it was about 3 mK [8]. Both corrections are illustrated in the sketch below.
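The following minimal sketch evaluates the two quantities just introduced, the wall parameter \(W\) and the adiabatic correction \(\Delta T_{\rm ad}=\tilde{D}T_{m}\). The helium and sidewall property values are rough assumptions in place of HEPAK data, chosen only to reproduce the order of magnitude quoted above.

```python
# Minimal sketch (assumed property values, not HEPAK output) of the sidewall
# parameter W and the adiabatic gradient correction Delta T_ad = D~ * T_m.
g, H, D = 9.81, 0.3, 0.3          # gravity [m/s^2], cell height/diameter [m]
delta, lam_w = 0.5e-3, 0.2        # sidewall thickness [m], conductivity [W/m/K]
lam_He, alpha0, Cp0 = 0.01, 0.5, 5e3   # helium properties at (p_m, T_m) (assumed)
T_m = 5.0                          # mean temperature [K]

W = 4 * delta * lam_w / (lam_He * D)   # wall parameter
dT_ad = g * alpha0 * H * T_m / Cp0     # adiabatic correction [K], = D~ * T_m
print(f"W = {W:.2f}, Delta T_ad = {dT_ad * 1e3:.2f} mK")   # order 1 mK
```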
In contrast, in turbulent RBC in air at room temperature the adiabatic gradient correction is not important, as \(g/C_{p}=0.01\) K/m; thus the scale height across which \(T\) drops by an order of magnitude is 1 km. Cryogenic helium does not absorb thermal radiation, which leaves the radiation corrections to the Nusselt number negligibly small. We point to a straightforward evaluation in ref. [28] and to the discussion of an extreme case in [29].

The top and bottom plates of the cell are equipped with four germanium (Ge) thermometers, calibrated at the Physikalisch-Technische Bundesanstalt Berlin (Germany) to the best currently available precision of \(\pm 2\) mK absolute accuracy over the entire temperature range of interest. These Ge thermometers are embedded in the middle and on the sides of both plates. We see no horizontal temperature gradients in either copper plate within the Ge sensors' accuracy of 2 mK. Following the last upgrade, both copper plates contain a pair of fast-response Lakeshore DT-670 silicon (Si) diode thermometers, allowing us to resolve the plate temperature fluctuations. On the one hand, this enables much better control of the temperature boundary conditions via Proportional-Integral-Derivative feedback loops [15] compared to previous experiments; on the other hand, it can be used to correlate the dynamics of the turbulent large-scale circulation in the bulk with temperature fluctuations affecting the boundary layers. Additional sensors are placed inside the cell, calibrated in situ by us against the primary four Ge thermometers, and can be adjusted to measure directly the absolute turbulent core temperature \(T_{c}\) or the fluctuations \(\delta T_{c}\).

Figure 2: Basic state quantities (a,b) and molecular transport properties (c,d) which characterize the complex material properties of cryogenic helium \({}^{4}\)He near its critical point (CP) of \(p_{\rm CP}=227\) kPa and \(T_{\rm CP}=5.2\) K as a function of pressure \(p\) and temperature \(T\). Panel (a) shows the density \(\rho\) which thus corresponds with the equation of state \(\rho(T,p)\). Panel (b) plots the specific heat \(C_{p}\), panel (c) the dynamical viscosity \(\mu\), and panel (d) the thermal conductivity \(\lambda\). The black bullets indicate the mean values \(p_{m}\) and \(T_{m}\) for all experiments shown in Fig 1. The data are obtained from the HEPAK code [18].

## V Nonlinear regression of non-Boussinesq parameters

### Experimental data base and regression procedure

We are interested in the dependence of the NOB parameter on state variables and material properties, i.e., most generally this results in a function \(\xi(p_{m},T_{m},\Delta T,\alpha(T,p),\lambda(T,p),\mu(T,p),C_{p}(T,p),\rho(T,p))\). The perturbative expansion of Sec. II has reduced this high-dimensional function to one of 13 input parameters. These are the mean pressure \(p_{m}\), the mean temperature \(T_{m}\), the temperature difference between the bottom and top boundaries \(\Delta T\), and the expansion parameters \(\varepsilon_{i}\) for \(i=1,...,10\). The latter determine the first and second temperature derivatives of the material properties and state variables, as summarized in Table 1. The nonlinear regression proceeds in three levels of increasing complexity.
We aim at reconstructing the following functions, \[\xi_{1} :=\xi_{\rm NOB}(p_{m},T_{m},\Delta T)\,, \tag{23}\] \[\xi_{2} :=\xi_{\rm NOB}(p_{m},T_{m},\Delta T,\varepsilon_{2k+1})\quad{\rm for}\quad k=0,...,4\,, \tag{24}\] \[\xi_{3} :=\xi_{\rm NOB}(p_{m},T_{m},\Delta T,\varepsilon_{k})\quad{\rm for}\quad k=1,...,10\,. \tag{25}\]

In the first approach \(\xi_{1}\), no expansion parameter \(\varepsilon_{i}\) is used for the training of the neural network. It is explored how the NOB parameter depends on the mean pressure and temperature as well as on the imposed temperature difference only. The result can serve as a baseline to forecast NOB effects on the temperature profile asymmetry \(\xi_{\rm NOB}(p_{m},T_{m},\Delta T)\) in future experiments. In the more detailed successive steps, we are interested in a finer resolution of the effects of individual fluid properties on the temperature asymmetry. In the second (third) approach \(\xi_{2}\) (\(\xi_{3}\)), the sets of all linear-order (linear- and quadratic-order) expansion coefficients \(\varepsilon_{2k+1}\) (\(\varepsilon_{k}\)) are taken together as input data for a slightly deeper neural network architecture, since the feature extraction proceeds in a higher-dimensional feature space. In this case, before calculating the non-Boussinesq parameter, it was necessary to perform preliminary calculations to determine the \(\varepsilon_{k}\) at each point of the \(p\)-\(T\) space within the given \(\Delta T\) by means of HEPAK. Values of the NOB parameters (23)-(25) are given in units of K throughout the paper.

### First reconstruction method without expansion parameters

In the first reconstruction method, there are three input parameters: the mean pressure \(p_{m}\) and the mean temperature \(T_{m}\), which define the operating point of a particular laboratory measurement, and the applied temperature difference between the bottom and top plates. The only output parameter is the difference between the mean temperature \(T_{m}\) and the temperature \(T_{c}\) in the center of the layer, \(\xi_{1}\), as already defined in the last subsection, see Eq. (23). For details on the network architecture, see table 2 in Appendix A.

Figure 3 shows the reconstruction of \(\xi_{1}(T,p)\) for 4 different outer temperature differences \(\Delta T\), which are indicated in the titles of panels (a), (b), (d), and (e). In panels (c) and (f), we display in addition the root mean square error (RMSE) contour plots for two cases, which arise when taking the \(i=1,...,N_{\rm rec}=100\) individual reconstructions. It is defined as \[{\rm RMSE}(p,T)=\sqrt{\frac{1}{N_{\rm rec}}\sum_{i=1}^{N_{\rm rec}}|\xi_{k}(i,T,p)-\overline{\xi}_{k}(T,p)|^{2}}\quad{\rm for}\quad k=1,2,3\,, \tag{26}\] where \(\overline{\xi}_{k}(T,p)\) is the mean reconstructed surface. In panels (a), (b), (d), and (e), it can be seen that the maximum value of the NOB parameter is always reached in the vicinity of the phase boundary and at the critical point. As the temperature difference \(\Delta T\) increases, the NOB parameter also increases. In panels (c) and (f), it is seen that the RMSE also increases for larger \(\Delta T\). One reason is that there are fewer corresponding experimental data. The largest errors occur near the critical point at the end of the phase boundary. The non-Oberbeck-Boussinesq parameter \(\xi_{1}\) is also shown in the \(Ra\)-\(Pr\) parameter space in Fig. 4 for three outer temperature differences \(\Delta T\), cf. Fig. 1(b) for an analogous display of the experimental values.
All branches for each \(\Delta T\) start at \(Pr=0.7\) for the smallest Rayleigh numbers. When the Rayleigh number increases by 8 orders of magnitude, the Prandtl number increases monotonically up to \(Pr\simeq 15\). The data points are color-coded by \(\xi_{1}\). The NOB parameter also grows along the curves up to its highest values at \(Pr\simeq 2\). With a further increase of the Prandtl number, the NOB parameter \(\xi_{1}\) remains, however, nearly unchanged.

Figure 4: Plot of \(\xi_{1}\) in the \(Ra\)–\(Pr\) parameter space for three different outer temperature differences \(\Delta T\). The data points are color-coded by \(\xi_{1}\) as given by the legend to the right. The unit for the color code is Kelvin.

Figure 3: Contour plots of the reconstruction of the non-Boussinesq parameter \(\xi_{1}(p,T)\) for different \(\Delta T\) are shown in panels (a), (b), (d), and (e). Panels (c) and (f) display the corresponding root mean square error (RMSE) for two of the four cases. The RMSE is given by Eq. (26). The unit for the color code is Kelvin.

### Second reconstruction method including linear-order \(\varepsilon\)-parameters

For the next level, we reconstruct the parameter field \(\xi_{2}\) with the linear-order expansion parameters \(\varepsilon_{i}\). Input parameters for the deep neural network are now the mean pressure \(p_{m}\), the mean temperature \(T_{m}\), and the outer temperature difference \(\Delta T\) (as in the last case), together with all \(\varepsilon_{2k+1}\) for \(k=0,...,4\). The output is now the NOB parameter \(\xi_{2}\) of Eq. (24). The neural network is detailed in table 3 of Appendix A. The results are shown in Fig. 5, in analogy with Fig. 3, for four different temperature differences. Also, we again add two root mean square error plots in panels (c, f) for \(\Delta T=0.1\) K and \(0.4\) K, respectively. It can be seen that the qualitative behavior is similar to the results of the first reconstruction method.

The difference between the reconstructions \(\xi_{1}\) and \(\xi_{2}\) is highlighted in Fig. 6, showing the deviation \(|\xi_{1}-\xi_{2}|\). Maximum deviations are up to \(\xi\simeq 0.005\) K, observed for \(\Delta T=0.2\) K near the critical point and for \(\Delta T=0.4\) K in the lower-\(T\) part near the saturation curve (maximum relative value of \(|\xi_{1}-\xi_{2}|/\xi_{1}\sim 40\%\)). The graphs show that the influence of the linear coefficients near the critical point grows as the temperature difference increases. The \(Ra\)-\(Pr\) plots related to the \(|\xi_{1}-\xi_{2}|\) differences are shown in Fig. 7 and display the largest deviation near the critical point and at the saturation line when \(Pr\gtrsim 2\), with \(Ra\) varying from \(10^{12}\) to \(10^{14}\).

### Third reconstruction method including quadratic-order \(\varepsilon\)-parameters

The last reconstruction method includes 13 input parameters to obtain \(\xi_{3}\). This comprises the linear and quadratic expansions with respect to the temperature, encoded by \(\varepsilon_{i}\) for \(i=1,\ldots,10\). Our analysis shows that the absolute difference \(|\xi_{2}-\xi_{3}|\) remains very small for all the \(p_{m}\), \(T_{m}\), \(\Delta T\) values considered. This can be seen in Fig. 8, which corresponds to four 'sections' through the \(p\)–\(T\) plane at \(T_{m}=4.6,4.91,5.2\) and \(5.5\) K, showing the \(p\)-dependencies of \(\xi_{1}\), \(\xi_{2}\) and \(\xi_{3}\). Finally, we refer to Appendix B, where we have summarized further error analysis for all three reconstruction methods.
This analysis illustrates how strongly the 100 individual reconstructions of \(\xi_{k}\) vary when selecting different subsets as training and test data, see Fig. 10.

Figure 5: Contour plots of the reconstruction of the non-Boussinesq parameter \(\xi_{2}(p,T)\) for different \(\Delta T\) are shown in panels (a), (b), (d), and (e). Panels (c) and (f) display the corresponding root mean square error (RMSE) for two of the four cases. The RMSE is given by Eq. (26). The unit for the color code is Kelvin.

### Discussion of the reconstruction methods

The results in Secs. V.2-V.4 display a general trend for the magnitude of the NOB effects in RBC in cryogenic helium gas, quantified by the response parameters \(\xi_{i}\), \(i=1,2,3\). All of them grow significantly as we increase the pressure \(p_{m}\) and decrease the temperature \(T_{m}\) towards the phase boundary at the SVC and towards the CP. This is seen throughout Figs. 3 and 5 for different \(\Delta T\) values, chosen here to cover the ranges of \(\Delta T\) taken in high-\(Ra\) turbulent RBC experiments.

Figure 8 shows in more detail the differences between the individual reconstructions obtained by the different neural network architectures. We can observe that while the differences between \(\xi_{2}\) and \(\xi_{3}\) are practically negligible, the reconstruction calculating \(\xi_{1}\), involving only the basic experimental parameters, namely \(p_{m}\), \(T_{m}\) and \(\Delta T\), significantly differs from \(\xi_{2}\) and \(\xi_{3}\), which in addition take into account the linear and quadratic terms in the expansion of the material properties of helium as a function of temperature, expressed by the dimensionless NOB control parameters \(\varepsilon_{i}\), \(i=1,...,10\), given in Tab. 1. In particular, \(\xi_{2}\) and \(\xi_{3}\) show a much more pronounced growth at the phase boundary near the SVC and at the CP as a function of pressure than \(\xi_{1}\). Further, the \(\xi_{2}\) and \(\xi_{3}\) curves are on average markedly more convex, while \(\xi_{1}\) often grows in a more concave fashion towards the SVC and/or CP, see e.g. the most pronounced case for \(\Delta T=0.2\) K and \(T_{m}=5.2\) K (very close to the critical temperature \(T_{\rm cri}=5.195\) K). Note that the occurrence of crossings of the concave regions of \(\xi_{1}\) with convex parts of \(\xi_{2}\), apparent in Fig. 8, explains the appearance of the tongue-shaped features (contours of \(|\xi_{1}-\xi_{2}|=0\)) seen in Fig. 6.

Figure 6: Contour plots of the absolute difference \(|\xi_{1}-\xi_{2}|\) between the 1st and 2nd reconstruction method in the \(p\)–\(T\) parameter plane for four different \(\Delta T\) values which correspond to those in panels (a), (b), (d), and (e) of Figs. 3 and 5. The unit for the color code is Kelvin.

Figure 7: The absolute difference \(|\xi_{1}-\xi_{2}|\) between the first and the second reconstruction method in the \(Ra\)–\(Pr\) parameter space for three different outer temperature differences \(\Delta T\) (cf. Fig. 6). The unit for the color code is Kelvin.

In addition to the three reconstructions (23)-(25) detailed in Secs. V.2-V.4, we also performed several four-parameter reconstructions, taking the individual \(\varepsilon_{i}\), \(i=1,...,10\), one by one in addition to \(p_{m}\), \(T_{m}\) and \(\Delta T\) within the appropriate neural network. Each of the results differed from the \(\xi_{1}\) surfaces shown in Fig. 3 by less than the experimental accuracy of 3 mK. Thus only the combination of the linear expansion parameters, resulting in \(\xi_{2}\) of Eq.
(24), can be considered significant. In Fig. 9, we finally plot the \(\xi_{\rm NOB}\) results obtained by the machine learning (\(y\)-axis) in comparison to the experimental data (\(x\)-axis) for \(\xi_{1}\) (a), \(\xi_{2}\) (b), and \(\xi_{3}\) (c). Each point thus corresponds to an experimental value and a value obtained from the neural network. In addition, a line \(y=x\) for an ideal fit to the data is shown. The presented spread of points shows a fairly good reconstruction of the non-Boussinesq parameters by the ML algorithm. Figure 8: Comparison of individual reconstructions of the NOB parameter \(\xi_{1}\) (red), \(\xi_{2}\) (blue) and \(\xi_{3}\) (green) plotted as functions of pressure \(p\) at four values of \(T_{m}=4.6\), 4.91, 5.2, and 5.5 K (see insets in panels) and four values of \(\Delta T=0.01\), 0.1, 0.2, and 0.4 K (cf. Figs. 3 and 5). The vertical red dashed lines in panels with \(T_{m}=4.6\) K and 4.91 K denote the \(p\) values at the SVC. ## VI Conclusion and Outlook In this work, we investigated the non-Oberbeck-Boussinesq behavior in high-Rayleigh-number laboratory experiments of turbulent Rayleigh-Benard convection in cryogenic helium \({}^{4}\)He. The NOB effects in this experimental setup are shown to be caused by the temperature dependence of the material properties at the molecular level, while the compressibility effects can be neglected with a Mach number of \(M\lesssim 10^{-2}\). The temperature \(T_{c}\) measured at the center of the RBC cell is found to deviate from the arithmetic mean of the temperatures of the copper plates at the top and bottom, \(T_{m}\). This is an indicator of an asymmetry of the statistical properties between the top and bottom in the cell, which unambiguously signifies breaking of the OB approximation. The deviations have been determined in a series of experiments which provide a sparse data set to reconstruct the function \(\xi_{\rm NOB}(p,T)=T_{m}-T_{c}\) by a nonlinear regression. The experiments are characterized by the operating point in the pressure-temperature plane, \((p_{m},T_{m})\), and the outer temperature difference, \(\Delta T\). We thus provide a smooth approximation (at different levels of accuracy) for the strength of the NOB effects with respect to two state variables. In detail, we performed reconstructions of the NOB parameter by deep neural networks in three different ways. The first approach is based on the operating point \(p_{m},T_{m}\) and \(\Delta T\). The second and third approaches incorporate the expansion coefficients up to the linear and quadratic orders of the Taylor expansion with respect to \(\delta T=T-T_{m}\) of the material properties and mass density, respectively. The comparison of the different methods can be summarized as follows: (i) The inclusion of the linear-order temperature expansion alters the reconstruction results by up to 40%. (ii) The inclusion of the second expansion order does not alter the magnitude of the NOB effects significantly. Our study provides a first systematic reconstruction of the NOB effects in experiments with cryogenic helium, and renders a set of maps for expected NOB effects in a wide area of the pressure-temperature plane for different values of \(\Delta T\) in future experiments. A systematic analysis of the impact of the expansions on the heat and momentum transfer could be a next step which would require numerical investigations of the OB configuration for these parameters. 
Figure 9: Correlation of the reconstructed values of the NOB parameters \(\xi_{1}\) (a), \(\xi_{2}\) (b) and \(\xi_{3}\) (c) of Eqs. (23)-(25) with the experimental values of \(\xi_{\rm NOB}\) of Eq. (3). The gray line is the ideal fit for visual guidance. The red and yellow curves at the bottom of each panel represent the standard deviation \(s\) and the sum of residuals \(|r|\) for the ensembles of \(N_{\rm rec}=100\) neural network runs to obtain each point, respectively; see Appendix B for details.

###### Acknowledgements.
The joint project is supported by grant no. 21-06012J of the Czech Science Foundation (GACR) for M.M. and by grant no. SCHU 1410/31-1 of the Deutsche Forschungsgemeinschaft (DFG) for G.Z. The work of G.Z. is co-funded by the European Union (ERC, MesoComp, 101052786). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. Finally, we thank Ladislav Skrbek, Olga Shishkina, and Valentina Valori for discussions.

## Appendix A Details of the deep neural networks for nonlinear regression

In the following, we provide some technical details of the machine learning methods that were used for the nonlinear regression analysis, see e.g. [27]. In order to calculate all results, we had to use network models with different input and output parameters. They are provided in the following three tables together with some additional information. In the 1st model we trained a neural network without any information about the perturbative expansion, as explained in the main text. The model consists of a total of 6 layers. These are (1) linear layers, which apply a linear transformation to the incoming data, followed by (2) a rectified linear unit (ReLU) function, which performs an element-wise nonlinear activation. The ReLU and the parametric ReLU (PReLU) are given by \[\text{ReLU}(x)=\max(0,x)\quad\text{and}\quad\text{PReLU}(x)=\max(0,x)+a\min(0,x)\,, \tag{10}\] for input data \(x\) with parameter \(a>0\). Furthermore, (3) the BatchNorm1d function of the PyTorch library is applied as a batch normalization to fix the expectation value E and the variance Var of the input in order to accelerate the training. This function is given by \[\text{BatchNorm1d}(x)=\frac{x-\text{E}[x]}{\sqrt{\text{Var}[x]+\epsilon}}\,, \tag{11}\] for input data \(x\) in the form of a mini-batch. Finally, (4) the sigmoid applies another element-wise nonlinear activation, which is given by \[\sigma(x)=\frac{1}{1+\exp(-x)}\,. \tag{12}\]

This first neural network, detailed in table 2, obtains \(\xi_{1}\) as the output resulting from the three input quantities, see Eq. (23). For our nonlinear regression task, there are however only 236 data points available to train the neural network. This is a small number for a full training of a deep neural network in the considered pressure-temperature interval. It can happen that among the randomly chosen input data for the training-testing procedure, no data point near the critical line/point is chosen at all. This in turn can result in an incorrect approximation of the non-Boussinesq parameter \(\xi_{\text{NOB}}\). In order to mitigate this effect of an insufficient amount of experimental data, training and testing were performed in a cycle of 100 runs at each of the three levels. Each run takes a randomly chosen subsample of the RBC data and uses the rest for testing.
The deviation of each calculation from the mean is shown in appendix B. In this way, we obtained more robust regression results despite the small number of RBC experiments.

\begin{table} \begin{tabular}{l c c} \hline \hline Layer & Output shape & Number of parameters \\ \hline Linear & [16, 400] & 1600 \\ ReLU & [16, 400] & 1 \\ BatchNorm-1d & [16, 400] & 800 \\ Linear & [16, 1] & 401 \\ BatchNorm-1d & [16, 1] & 2 \\ Sigmoid & [16, 1] & 0 \\ \hline Total & & 2804 \\ \hline \hline \end{tabular} \end{table} Table 2: Details of the deep neural network which is used for the first approach. The output shape consists of the batch size and the number of weights.

The second reconstruction method includes the parameters \(\varepsilon\) of the linear-order expansion. These parameters, however, cannot be obtained close to the phase boundary from HEPAK and were thus reconstructed first by a neural network. The corresponding architecture is specified in the right part of table 3. Input is again \(p_{m}\), \(T_{m}\), and \(\Delta T\). With these input parameters, we obtain 5 outputs, namely \(\varepsilon_{1}\), \(\varepsilon_{3}\), \(\varepsilon_{5}\), \(\varepsilon_{7}\), and \(\varepsilon_{9}\). Subsequently, we add them as further input parameters, i.e., we have a total of 8 inputs to calculate the contour plot of \(\xi_{2}\) in the \(p\)-\(T\) plane. The reconstruction is now more complicated in comparison to the first approach because all property gradients in the experimental data fluctuate strongly, particularly close to the phase boundary. Also, the model is more complex since it includes more parameters. The architecture of the neural network is basically the same for both substeps.

The third reconstruction method is similar to the second. Now, we use the linear and quadratic orders, reconstruct the 10 expansion coefficients first, and obtain \(\xi_{3}\) from a network with 13 inputs. The architecture is specified in table 4. It is basically similar to the second reconstruction approach.

## Appendix B Error analysis of the nonlinear regression

Fig. 10 shows the deviations of the \(N_{\text{rec}}=100\) individual reconstructions, which are indexed with \(i\), of the non-Boussinesq parameter \(\langle\xi_{k}(i)\rangle_{p,T}\) from the mean of all calculations for \(k=1,2\) and \(3\). We therefore first average the reconstructed field \(\xi_{k}(T,p)\) over the \(p\)-\(T\) plane, which is indicated by \(\langle\cdot\rangle_{p,T}\). The mean non-Boussinesq parameter is eventually obtained by \[\overline{\langle\xi_{k}\rangle}_{p,T}=\frac{1}{N_{\text{rec}}}\sum_{i=1}^{N_{\text{rec}}}\langle\xi_{k}(i)\rangle_{p,T}\quad\text{for}\quad k=1,2,3\,. \tag{11}\]

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Reconstruction of \(\xi_{3}\)} & \multicolumn{2}{c}{Evaluation of \(\varepsilon_{i}\)} \\ Layer & Output shape & Number of parameters & Output shape & Number of parameters \\ \hline Linear & [16, 400] & 5600 & [16, 500] & 2000 \\ PReLU & [16, 400] & 1 & [16, 500] & 1 \\ BatchNorm-1d & [16, 400] & 800 & [16, 500] & 1000 \\ Linear & [16, 1] & 401 & [16, 10] & 5010 \\ BatchNorm-1d & [16, 1] & 2 & [16, 10] & 20 \\ Sigmoid & [16, 1] & 0 & [16, 10] & 0 \\ \hline Total & & 6804 & & 8031 \\ \hline \hline \end{tabular} \end{table} Table 4: Details of the deep neural network which is used for the third approach. The output shape consists of the batch size and the number of weights.
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Reconstruction of \(\xi_{2}\)} & \multicolumn{2}{c}{Evaluation of \(\varepsilon_{2i+1}\)} \\ Layer & Output shape & Number of parameters & Output shape & Number of parameters \\ \hline Linear & [16, 400] & 3600 & [16, 500] & 2000 \\ PReLU & [16, 400] & 1 & [16, 500] & 1 \\ BatchNorm-1d & [16, 400] & 800 & [16, 500] & 1000 \\ Linear & [16, 1] & 401 & [16, 5] & 2505 \\ BatchNorm-1d & [16, 1] & 2 & [16, 5] & 10 \\ Sigmoid & [16, 1] & 0 & [16, 5] & 0 \\ \hline Total & & 4804 & & 5516 \\ \hline \hline \end{tabular} \end{table} Table 3: Details of the deep neural network which is used for the second approach. The output shape consists of the batch size and the number of weights for each network layer. Figure 10: Reconstructions of \(\xi_{i}(T,p)\). Deviation from the mean of the 100 individual reconstructions. Points are \(\langle\xi_{i}(i)\rangle_{p,T}\), the solid line is \(\overline{\langle\xi_{i}\rangle}_{p,T}\).
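To make appendices A and B concrete, the following minimal PyTorch sketch implements the first-approach network of table 2 (inputs \(p_{m}\), \(T_{m}\), \(\Delta T\); output \(\xi_{1}\)), the cycle of 100 random train/test splits, and the ensemble statistics of Eq. (26). The scaling of inputs and targets to \([0,1]\) (implied by the sigmoid output), the Adam optimizer, the split size, and the epoch count are assumptions, and placeholder data stand in for the 236 laboratory measurements.

```python
# Minimal sketch (assumed hyperparameters, placeholder data) of the table 2
# architecture and the N_rec = 100 reconstruction cycle with Eq. (26).
import torch
import torch.nn as nn

def make_model() -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(3, 400),    # 1600 parameters, cf. table 2
        nn.ReLU(),
        nn.BatchNorm1d(400),  # 800 parameters
        nn.Linear(400, 1),    # 401 parameters
        nn.BatchNorm1d(1),    # 2 parameters
        nn.Sigmoid(),         # output in (0, 1), rescaled to xi_1
    )

def train_once(x, y, epochs=200):
    model, loss_fn = make_model(), nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for xb, yb in zip(x.split(16), y.split(16)):  # batch size 16
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model.eval()

x = torch.rand(236, 3)      # scaled (p_m, T_m, DeltaT), placeholder data
y = torch.rand(236, 1)      # scaled xi_NOB, placeholder data
grid = torch.rand(2500, 3)  # scaled (p, T, DeltaT) evaluation grid

surfaces = []
for run in range(100):      # N_rec = 100 random train/test splits
    perm = torch.randperm(236)
    model = train_once(x[perm[:190]], y[perm[:190]])
    with torch.no_grad():
        surfaces.append(model(grid))
xi = torch.stack(surfaces)                        # (100, 2500, 1)
xi_mean = xi.mean(dim=0)                          # mean reconstructed surface
rmse = ((xi - xi_mean) ** 2).mean(dim=0).sqrt()   # pointwise RMSE, Eq. (26)
```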
2304.12892
Automated choice of the best renormalization scheme
High-precision predictions in BSM models require calculations at the loop level and thus a renormalization of (some of) the BSM parameters. Here many choices for the renormalization scheme (RS) are possible. A given RS can be well suited to yield ``stable'' and ``well behaved'' higher-order corrections in one part of the BSM parameter space, but can fail completely in other parts. The latter may not even be noticed numerically if an isolated parameter point is investigated. Here we review a new method for choosing a ``well behaved'' RS. We demonstrate the feasibility of our new method in the chargino/neutralino sector of the Minimal Supersymmetric Standard Model (MSSM), but stress the general applicability of our method to all types of BSM models.
S. Heinemeyer, F. von der Pahlen
2023-04-25T15:04:03Z
http://arxiv.org/abs/2304.12892v1
# Automated choice of the best renormalization scheme

###### Abstract:
High-precision predictions in BSM models require calculations at the loop level and thus a renormalization of (some of) the BSM parameters. Here many choices for the renormalization scheme (RS) are possible. A given RS can be well suited to yield "stable" and "well behaved" higher-order corrections in one part of the BSM parameter space, but can fail completely in other parts. The latter may not even be noticed numerically if an isolated parameter point is investigated. Here we review a new method for choosing a "well behaved" RS. We demonstrate the feasibility of our new method in the chargino/neutralino sector of the Minimal Supersymmetric Standard Model (MSSM), but stress the general applicability of our method to all types of BSM models.

## 1 Introduction

A reliable investigation of a model beyond the Standard Model (BSM) requires the inclusion of higher-order corrections to, e.g., the production cross sections of BSM particles at the HL-LHC. This in turn requires the renormalization of the BSM model. The renormalization of BSM models is much less explored than the renormalization of the SM. Examples of "full one-loop renormalizations" can be found for the Minimal Supersymmetric Standard Model (MSSM) [1, 2] and the Next-to-MSSM (NMSSM) [3]. These analyses showed that many different choices of renormalization schemes (RS) are possible. This can concern the choice of the set of to-be-renormalized parameters out of a larger set of BSM parameters, but can also concern the type of renormalization condition that is chosen for a specific parameter.

BSM models naturally possess several new BSM parameters. The number of new parameters can vary from \({\cal O}(1)\) to \({\cal O}(10)\), or even higher. Often multi-dimensional parameter scans are employed, or methods such as Markov-Chain Monte-Carlo (MCMC) analyses, to find the phenomenologically best-appealing parameters in the multi-dimensional BSM parameter space. The above mentioned BSM analyses also demonstrated that a given RS can be well suited to yield "stable" and "well behaved" higher-order corrections (more details will be given below) in one part of the BSM parameter space, but can fail completely in other parts. The latter may not even be noticed numerically if only isolated parameter points are investigated, which is natural in a scan or in MCMC analyses. Consequently, the exploration of BSM models requires a choice of a good RS _before_ the calculation is performed.

An RS "fails" if one of the counterterms (or a linear combination of counterterms) does not (or only marginally) depend on the parameter itself, but is rather determined via other parameters of the model. This failure can manifest itself in _(i)_ "unnaturally" large higher-order corrections, _(ii)_ large (numerical) differences between \(\overline{\rm DR}\) and OS masses, _(iii)_ (numerical) differences between \(\overline{\rm DR}\) and OS parameters. In this work we review a new method for avoiding such a situation, i.e. for choosing a "good" RS. This method is based on the properties of the transformation matrix that connects the various counterterms with the underlying parameters. This allows a point-by-point test of all "available" or "possible" RS, and the "best" one can be chosen to perform the calculation; a minimal sketch of such a test is given below. Our idea is designed to work in all cases of RS choices (in BSM models).
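The following Python sketch anticipates the determinant criterion made precise in the next section: for each candidate scheme one evaluates the transformation matrices connecting counterterms and parameters and keeps the scheme maximizing the smaller of the two determinant magnitudes. The matrices, the scheme labels, and any determinant normalization are hypothetical stand-ins; in practice they follow from the chosen renormalization conditions at each parameter point.

```python
# Minimal sketch (hypothetical inputs) of the point-by-point scheme selection.
import numpy as np

def pick_scheme(A_DR: dict, A_OS: dict) -> str:
    """Return the label of the scheme maximizing min(|det A^DRbar|, |det A^OS|)."""
    score = {l: min(abs(np.linalg.det(A_DR[l])), abs(np.linalg.det(A_OS[l])))
             for l in A_DR}
    return max(score, key=score.get)

# Hypothetical 3x3 transformation matrices for two candidate schemes.
rng = np.random.default_rng(0)
A_DR = {"CCN_1": rng.normal(size=(3, 3)), "CNN_112": rng.normal(size=(3, 3))}
A_OS = {"CCN_1": rng.normal(size=(3, 3)), "CNN_112": rng.normal(size=(3, 3))}
print("chosen scheme:", pick_scheme(A_DR, A_OS))
```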
The numerical examples will be performed within the MSSM, concretely in the sector of charginos and neutralinos, the supersymmetric (SUSY) partners of the SM gauge bosons and of the 2HDM-like Higgs sector. While this constitutes a very specific example, we would like to stress the general applicability of our method to all types of BSM models and types of RS choices.

## 2 Renormalization: theoretical considerations and concrete implementations

### The general idea

As discussed above, the idea of how to choose a stable and well behaved RS is generally applicable. However, here we will outline it focusing on a more concrete problem: in our theory we have \(m\) underlying Lagrangian parameters and \(n>m\) particles or particle masses that can be renormalized OS. Each choice of \(m\) particles renormalized OS defines an RS\({}_{l}\), of which we have \(N\) in total. How can one choose the "best" RS\({}_{L}\)? Our starting point will be the following: The masses of the BSM particles under investigation have not (yet) been measured. Then we start with \(\overline{\rm DR}\) parameters. The general idea for the automated choice of the RS\({}_{L}\) in the \(\overline{\rm DR}\) case can be outlined for two possible levels of refinement. The first one is called "semi-OS scheme", and the second one "full-OS scheme" (where in our numerical examples we will focus on the latter). The two cases are defined as follows.

**Semi-OS scheme:**
1. We start with \(m\) \(\overline{\rm DR}\) parameters, \(P_{i}^{\overline{\rm DR}}\), from the Lagrangian and \(N\) RS\({}_{l}\).
2. For each RS\({}_{l}\), i.e. each different choice of \(m\) particles renormalized OS, we evaluate the corresponding OS parameters \[P_{i,l}^{\rm os}=P_{i}^{\overline{\rm DR}}-\delta P_{i,l,\rm fin}^{\rm os} \tag{1}\] with the transformation matrix \({\bf A}_{l}^{\overline{\rm DR}}\) (more details will be given below).
3. It will be argued that a "bad" scheme RS\({}_{l}\) has a small or even vanishing \(|\det{\bf A}_{l}^{\overline{\rm DR}}|\).
4. Comparing the various \(|\det{\bf A}_{l}^{\overline{\rm DR}}|\) yields RS\({}_{L}\).
5. Inserting \(P_{i,L}^{\rm os}\) into the Lagrangian yields \(n\) particle masses, out of which \(m\) are by definition given as their OS values. The remaining OS masses have to be determined by calculating \(n-m\) finite shifts.
6. The counterterms for the \(P_{i,L}^{\rm os}\) are already known from Eq. (1) as \(\delta P_{i,L}^{\rm os}\) and can be inserted as counterterms in a loop calculation.

This procedure yields all ingredients for an OS scheme. However, the OS counterterms \(\delta P_{i,L}^{\rm os}\), and thus also the OS parameters themselves, \(P_{i,L}^{\rm os}\), are calculated in terms of \(\overline{\rm DR}\) parameters, i.e. one has \(\delta P_{i,L}^{\rm os}(P_{i}^{\overline{\rm DR}})\) and \(P_{i,L}^{\rm os}(P_{i}^{\overline{\rm DR}})\). This is unsatisfactory for a "true" OS scheme, i.e. one would like to have \(\delta P_{i,L}^{\rm OS}(P_{i,L}^{\rm OS})\). Furthermore, when an RS\({}_{l}\) "starts to turn bad" as a function of a \(\overline{\rm DR}\) parameter, large differences between the \(P_{i,l}^{\rm os}\) and \(P_{i}^{\overline{\rm DR}}\) occur, shedding doubt on the above outlined procedure. These problems can be circumvented by extending the above scheme to an evaluation of the counterterms in terms of OS parameters. The general idea starts as above, but deviates from step 4 on.

**Full-OS scheme:** The first two steps are as in the semi-OS scheme. We then continue with

3.
3. Inserting \(P_{i,l}^{\rm os}\) into the Lagrangian yields \(n\) particle masses out of which \(m\) are by definition given as their os\({}_{l}\) values. The remaining os\({}_{l}\) masses have to be determined calculating \(n-m\) finite shifts.
4. RS\({}_{l}\) is applied again on the os\({}_{l}\) Lagrangian.
5. This now yields OS counterterms in terms of os\({}_{l}\) parameters, \[\delta P_{i,l}^{\rm OS}(P_{i,l}^{\rm os})\tag{2}\] with the transformation matrix \({\bf A}_{l}^{\rm OS}\) (more details will be given below).
6. It will be argued that a "bad" scheme RS\({}_{l}\) has a small/vanishing \(|\det\mathbf{A}_{l}^{\overline{\text{DR}}}|\) and/or \(|\det\mathbf{A}_{l}^{\text{OS}}|\).
7. Comparing the various \(\min\left\{|\det\mathbf{A}_{l}^{\overline{\text{DR}}}|,|\det\mathbf{A}_{l}^{\text{OS}}|\right\}\) yields RS\({}_{L}\).
8. The counterterms for the \(P_{i,L}^{\text{OS}}\) are already known from Eq. (2) as \(\delta P_{i,L}^{\text{OS}}\) and can be inserted as counterterms in a loop calculation.

Steps 3-5 could be iterated until convergence is reached. We will not do this.

### 2.2 Application to the chargino/neutralino sector of the MSSM

The concrete implementation concerns the calculation of physics processes with (external) charginos and/or neutralinos, \(\tilde{\chi}_{c}^{\pm}(c=1,2)\) and \(\tilde{\chi}_{n}^{0}(n=1,2,3,4)\), at the loop level. This requires the choice of a (numerically well behaved) RS. The possible scheme choices are (\(n^{\prime\prime}>n^{\prime}>n\))

\[\text{CCN}_{n},\quad\text{CNN}_{cnn^{\prime}},\quad\text{NNN}_{nn^{\prime}n^{\prime\prime}},\qquad c=1,2;\;n,n^{\prime},n^{\prime\prime}=1,2,3,4. \tag{3}\]

Here CCN\({}_{n}\) denotes a scheme where the two charginos and the neutralino \(n\), \(\tilde{\chi}_{n}^{0}\), are renormalized OS. CNN\({}_{cnn^{\prime}}\) denotes a scheme where chargino \(c\), \(\tilde{\chi}_{c}^{\pm}\), as well as neutralinos \(n,n^{\prime}\), \(\tilde{\chi}_{n}^{0}\), \(\tilde{\chi}_{n^{\prime}}^{0}\), are renormalized OS. Finally NNN\({}_{nn^{\prime}n^{\prime\prime}}\) denotes a scheme with three neutralinos renormalized OS. For the sake of simplicity, in the following we neglect the NNN\({}_{nn^{\prime}n^{\prime\prime}}\) schemes.

To fix our notation we briefly describe the chargino/neutralino sector of the MSSM. The bilinear term in the Lagrangian is given by

\[\mathcal{L}_{\tilde{\chi}^{-}\tilde{\chi}^{0}}^{\text{bil.}} =\overline{\tilde{\chi}_{i}^{-}}\not{p}\,\omega_{-}\tilde{\chi}_{i}^{-}+\overline{\tilde{\chi}_{i}^{-}}\not{p}\,\omega_{+}\tilde{\chi}_{i}^{-}-\overline{\tilde{\chi}_{i}^{-}}\,[\mathbf{V}^{*}\mathbf{X}^{\top}\mathbf{U}^{\dagger}]_{ij}\,\omega_{-}\tilde{\chi}_{j}^{-}-\overline{\tilde{\chi}_{i}^{-}}\,[\mathbf{U}\mathbf{X}^{*}\mathbf{V}^{\top}]_{ij}\,\omega_{+}\tilde{\chi}_{j}^{-}\] \[\quad+\frac{1}{2}\left(\overline{\tilde{\chi}_{k}^{0}}\not{p}\,\omega_{-}\tilde{\chi}_{k}^{0}+\overline{\tilde{\chi}_{k}^{0}}\not{p}\,\omega_{+}\tilde{\chi}_{k}^{0}-\overline{\tilde{\chi}_{k}^{0}}\,[\mathbf{N}^{*}\mathbf{Y}\mathbf{N}^{\dagger}]_{kl}\,\omega_{-}\tilde{\chi}_{l}^{0}-\overline{\tilde{\chi}_{k}^{0}}\,[\mathbf{N}\mathbf{Y}^{*}\mathbf{N}^{\top}]_{kl}\,\omega_{+}\tilde{\chi}_{l}^{0}\right), \tag{4}\]

already expressed in terms of the chargino and neutralino mass eigenstates \(\tilde{\chi}_{i}^{-}\) and \(\tilde{\chi}_{k}^{0}\), respectively, with \(i,j=1,2\) and \(k,l=1,2,3,4\).
The mass eigenstates can be determined via unitary transformations where the corresponding matrices diagonalize the chargino and neutralino mass matrix, \(\mathbf{X}\) and \(\mathbf{Y}\), respectively. In the chargino case, two \(2\times 2\) matrices \(\mathbf{U}\) and \(\mathbf{V}\) are necessary for the diagonalization of the chargino mass matrix \(\mathbf{X}\),

\[\mathbf{M}_{\tilde{\chi}^{-}}=\mathbf{V}^{*}\,\mathbf{X}^{\top}\,\mathbf{U}^{\dagger}=\begin{pmatrix}m_{\tilde{\chi}_{1}^{\pm}}&0\\ 0&m_{\tilde{\chi}_{2}^{\pm}}\end{pmatrix}\quad\text{with}\quad\mathbf{X}=\begin{pmatrix}M_{2}&\sqrt{2}\sin\beta\,M_{W}\\ \sqrt{2}\cos\beta\,M_{W}&\mu\end{pmatrix}\, \tag{5}\]

where \(\mathbf{M}_{\tilde{\chi}^{-}}\) is the diagonal mass matrix with the chargino masses \(m_{\tilde{\chi}_{1}^{\pm}},m_{\tilde{\chi}_{2}^{\pm}}\) as entries, which are determined as the (real and positive) singular values of \(\mathbf{X}\). The singular value decomposition of \(\mathbf{X}\) also yields results for \(\mathbf{U}\) and \(\mathbf{V}\). In the neutralino case, as the neutralino mass matrix \(\mathbf{Y}\) is symmetric, one \(4\times 4\) matrix is sufficient for the diagonalization

\[\mathbf{M}_{\tilde{\chi}^{0}}=\mathbf{N}^{*}\,\mathbf{Y}\,\mathbf{N}^{\dagger}=\mathbf{diag}(m_{\tilde{\chi}_{1}^{0}},m_{\tilde{\chi}_{2}^{0}},m_{\tilde{\chi}_{3}^{0}},m_{\tilde{\chi}_{4}^{0}}) \tag{6}\]

with

\[{\bf Y}=\left(\begin{array}{cccc}M_{1}&0&-M_{Z}\,s_{\rm w}\cos\beta&M_{Z}\,s_{\rm w}\sin\beta\\ 0&M_{2}&M_{Z}\,c_{\rm w}\cos\beta&-M_{Z}\,c_{\rm w}\sin\beta\\ -M_{Z}\,s_{\rm w}\cos\beta&M_{Z}\,c_{\rm w}\cos\beta&0&-\mu\\ M_{Z}\,s_{\rm w}\sin\beta&-M_{Z}\,c_{\rm w}\sin\beta&-\mu&0\end{array}\right). \tag{7}\]

\(M_{Z}\) and \(M_{W}\) are the masses of the \(Z\) and \(W\) boson, \(c_{\rm w}=M_{W}/M_{Z}\) and \(s_{\rm w}=\sqrt{1-c_{\rm w}^{2}}\). The unitary \(4\times 4\) matrix \({\bf N}\) and the physical neutralino (tree-level) masses \(m_{\tilde{\chi}_{k}^{0}}\) (\(k=1,2,3,4\)) result from a numerical Takagi factorization of \({\bf Y}\).
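To make these numerical diagonalizations concrete, the following minimal sketch (an illustration, not the reference implementation) obtains \(\mathbf{U}\), \(\mathbf{V}\) from a singular value decomposition of \(\mathbf{X}\), and \(\mathbf{N}\) from an eigendecomposition-based Takagi factorization that assumes real input parameters, i.e. a real symmetric \(\mathbf{Y}\); for complex parameters a genuine complex Takagi routine would be required, and the mass-ordering convention may need to be adapted.

```python
import numpy as np

def diag_chargino(X):
    """Return U, V, masses with V* X^T U^dagger = diag(m1, m2), cf. Eq. (5)."""
    A, s, Bh = np.linalg.svd(X)   # X = A diag(s) Bh; s are the (>= 0) masses
    U, V = A.T, Bh                # then V.conj() @ X.T @ U.conj().T = diag(s)
    return U, V, s                # note: numpy sorts s in descending order

def takagi_real(Y):
    """Return N, masses with N* Y N^dagger = diag(|w_k|), cf. Eq. (6);
    valid for real symmetric Y only."""
    w, O = np.linalg.eigh(Y)
    # absorb negative eigenvalues into phases: p_k^2 w_k = |w_k|, p_k in {1, i}
    p = np.where(w >= 0, 1.0 + 0.0j, 1.0j)
    N = np.diag(p) @ O.T
    return N, np.abs(w)
```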
Concerning the renormalization of this sector, the following replacements of the parameters and the fields are performed according to the multiplicative renormalization procedure, which is formally identical for the two set-ups:

\[M_{1} \to M_{1}+\delta M_{1}\,\quad M_{2}\ \to\ M_{2}+\delta M_{2}\,\quad\mu\ \to\ \mu+\delta\mu\, \tag{8}\]
\[\omega_{-/+}\,\tilde{\chi}_{i}^{\pm} \to \left[\mathds{1}+\tfrac{1}{2}\delta{\bf Z}_{\tilde{\chi}^{\pm}}^{L/R}\right]_{ij}\omega_{-/+}\,\tilde{\chi}_{j}^{\pm}\qquad(i,j=1,2)\, \tag{9}\]
\[\omega_{-/+}\,\tilde{\chi}_{k}^{0} \to \left[\mathds{1}+\tfrac{1}{2}\delta{\bf Z}_{\tilde{\chi}^{0}}^{(*)}\right]_{kl}\omega_{-/+}\,\tilde{\chi}_{l}^{0}\qquad(k,l=1,2,3,4). \tag{10}\]

It should be noted that the parameter counterterms are complex counterterms, each of which needs two renormalization conditions to be fixed. The transformation matrices are not renormalized, so that, with the notation that a matrix is replaced by its renormalized matrix plus a counterterm matrix,

\[{\bf X}\to{\bf X}+\delta{\bf X}\,\quad{\bf Y}\to{\bf Y}+\delta{\bf Y} \tag{11}\]

with

\[\delta{\bf X}=\left(\begin{array}{cc}\delta M_{2}&\sqrt{2}\,\delta(M_{W}\sin\beta)\\ \sqrt{2}\,\delta(M_{W}\cos\beta)&\delta\mu\end{array}\right)\, \tag{12}\]
\[\delta{\bf Y}=\left(\begin{array}{cccc}\delta M_{1}&0&-\delta(M_{Z}s_{\rm w}\cos\beta)&\delta(M_{Z}s_{\rm w}\sin\beta)\\ 0&\delta M_{2}&\delta(M_{Z}c_{\rm w}\cos\beta)&-\delta(M_{Z}c_{\rm w}\sin\beta)\\ -\delta(M_{Z}s_{\rm w}\cos\beta)&\delta(M_{Z}c_{\rm w}\cos\beta)&0&-\delta\mu\\ \delta(M_{Z}s_{\rm w}\sin\beta)&-\delta(M_{Z}c_{\rm w}\sin\beta)&-\delta\mu&0\end{array}\right)\, \tag{13}\]

the replacements of the matrices \({\bf M}_{\tilde{\chi}^{-}}\) and \({\bf M}_{\tilde{\chi}^{0}}\) can be expressed as

\[{\bf M}_{\tilde{\chi}^{-}}\to{\bf M}_{\tilde{\chi}^{-}}+\delta{\bf M}_{\tilde{\chi}^{-}}={\bf M}_{\tilde{\chi}^{-}}+{\bf V}^{\ast}\delta{\bf X}^{\top}{\bf U}^{\dagger} \tag{14}\]
\[{\bf M}_{\tilde{\chi}^{0}}\to{\bf M}_{\tilde{\chi}^{0}}+\delta{\bf M}_{\tilde{\chi}^{0}}={\bf M}_{\tilde{\chi}^{0}}+{\bf N}^{\ast}\delta{\bf Y}{\bf N}^{\dagger}. \tag{15}\]

More details on the renormalization can be found in Ref. [4].

### 2.3 Concrete renormalization in the semi-OS scheme

We start with \(\overline{\rm DR}\) mass matrices for charginos and neutralinos, collectively denoted as \({\bf X}^{\overline{\rm DR}}(P_{i}^{\overline{\rm DR}})\), depending on the three input parameters

\[\{P_{i}^{\overline{\rm DR}}\}=\{M_{1}^{\overline{\rm DR}},M_{2}^{\overline{\rm DR}},\mu^{\overline{\rm DR}}\}. \tag{16}\]

The mass matrices can be diagonalized,

\[{\bf X}^{\overline{\rm DR}}\to{\bf M}^{\overline{\rm DR}}:=({\bf N}^{\overline{\rm DR}})^{\dagger}{\bf X}^{\overline{\rm DR}}{\bf N}^{\overline{\rm DR}}\, \tag{17}\]

containing on the diagonal the two chargino and four neutralino masses, \(m_{j}\). The \({\bf X}^{\overline{\rm DR}}\) can be renormalized,

\[{\bf X}^{\overline{\rm DR}} \to{\bf X}^{\overline{\rm DR}}+\delta{\bf X}^{\overline{\rm DR}}(\delta P_{i}^{\overline{\rm DR}}) \tag{18}\]
\[{\bf M}^{\overline{\rm DR}} \to{\bf M}^{\overline{\rm DR}}+\delta{\bf M}^{\overline{\rm DR}}(\delta P_{i}^{\overline{\rm DR}})={\bf M}^{\overline{\rm DR}}+({\bf N}^{\overline{\rm DR}})^{\dagger}\delta{\bf X}^{\overline{\rm DR}}(\delta P_{i}^{\overline{\rm DR}}){\bf N}^{\overline{\rm DR}}. \tag{19}\]

So far, the \(\delta P_{i}^{\overline{\rm DR}}\) are unknown. The self-energies of the charginos and neutralinos can be written down as \(\Sigma_{j}(P_{i}^{\overline{\rm DR}},{\bf X}^{\overline{\rm DR}})\). Now the RS is chosen: CCN\({}_{n}\) or CNN\({}_{cnn^{\prime}}\). For each of these \(N=28\) schemes we perform the following. The scheme is denoted as RS\({}_{l}\) (\(l=1\ldots 28\)). Three renormalized self-energies are chosen to be zero,

\[\hat{\Sigma}_{k,l}(P_{i}^{\overline{\rm DR}},{\bf X}^{\overline{\rm DR}})=0\ (k=1,2,3)\, \tag{20}\]

corresponding to three os masses, \(m_{k}^{\rm os}\).
The three renormalized self-energies yield three conditions on \(\delta{\bf M}_{k}^{\overline{\rm DR}}\),

\[\delta{\bf M}_{k,l}^{\overline{\rm DR}} =f_{k,l}^{\overline{\rm DR}}(m_{k^{\prime},l}^{\overline{\rm DR}},\Sigma_{k^{\prime\prime},l})+F_{k,l}^{\overline{\rm DR}}(\delta\tan\beta,\delta M_{Z}^{2},\ldots) \tag{21}\]
\[\downarrow{\bf A}_{l}^{\overline{\rm DR}} \tag{22}\]
\[\delta P_{i,l}^{\rm os} =g_{i,l}^{\overline{\rm DR}}(m_{k^{\prime},l}^{\overline{\rm DR}},\Sigma_{k^{\prime\prime},l})+G_{i,l}^{\overline{\rm DR}}(\delta\tan\beta,\delta M_{Z}^{2},\ldots)\, \tag{23}\]

yielding the os values

\[P_{i}^{\overline{\rm DR}}\to P_{i}^{\overline{\rm DR}}-\delta P_{i,l}^{\rm os}\equiv\ P_{i,l}^{\rm os}. \tag{24}\]

It is worth noticing that on the r.h.s. of Eq. (21) \(f_{k,l}\) is linear in \(\delta P_{i,l}^{\rm os}\), while \(F_{k,l}\) only depends on the counterterms of the remaining model parameters. These relations define \({\bf A}_{l}^{\overline{\rm DR}}\), the transformation matrix from the set of mass counterterms to parameter counterterms,

\[\delta P_{i,l}^{\rm os}=({\bf A}_{l}^{\overline{\rm DR}})_{ik}^{-1}\left(\delta{\bf M}_{k,l}^{\overline{\rm DR}}-F_{k,l}(\delta\tan\beta,\delta M_{Z}^{2},\ldots)\right). \tag{25}\]

The os masses \(m_{k,l}^{\rm os}\) are derived from

\[{\bf X}_{l}^{\rm os}(P_{i,l}^{\rm os})\to{\bf M}_{l}^{\rm os}:=({\bf N}_{l}^{\rm os})^{\dagger}{\bf X}_{l}^{\rm os}(P_{i,l}^{\rm os}){\bf N}_{l}^{\rm os}. \tag{26}\]

The three masses that are not obtained as os masses so far can be evaluated by adding finite shifts to them, see Ref. [4].

As discussed above, an RS "fails" if one of the counterterms (or a linear combination of counterterms) does not (or only marginally) depend on the parameter itself, but is rather determined via other parameters of the model. This is exactly given in our ansatz if the matrix \({\bf A}_{l}^{\overline{\rm DR}}\) does not provide a numerically "well behaved" transition

\[\delta{\bf M}_{k,l}^{\overline{\rm DR}}\stackrel{{{\bf A}_{l}^{\overline{\rm DR}}}}{{\rightarrow}}\delta P_{i,l}^{\rm os}\, \tag{27}\]

see Eq. (22), suppressing terms involving other counterterms (\(\delta\tan\beta\), \(\delta M_{Z}^{2}\), ...). Following the argument of the "well behaved" transition, \(\mathrm{RS}_{l}\) fails if \(\mathbf{A}_{l}^{\overline{\mathrm{DR}}}\) becomes (approximately) singular, or equivalently if the normalized determinant is small,

\[\mathbf{D}_{l}^{\overline{\mathrm{DR}}}:=\frac{|\det\mathbf{A}_{l}^{\overline{\mathrm{DR}}}|}{||\mathbf{A}_{l}^{\overline{\mathrm{DR}}}||}\ll 1. \tag{28}\]

Conversely, the "best" scheme \(\mathrm{RS}_{L}\) can be chosen via the condition of the maximum normalized determinant,

\[\mathrm{RS}_{L}^{\mathrm{os}}\quad\Leftrightarrow\quad\mathbf{D}_{L}^{\overline{\mathrm{DR}}}=\max_{l}\left\{\mathbf{D}_{l}^{\overline{\mathrm{DR}}}\right\}. \tag{29}\]
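To illustrate the selection criterion of Eqs. (28) and (29) in code, consider the following minimal sketch; the \(3\times 3\) transformation matrices are assumed to be precomputed, model-specific inputs, and the Frobenius norm is used as an example choice for \(||\mathbf{A}||\). The full-OS refinement of the next subsection only changes the figure of merit to \(\min\{\mathbf{D}_{l}^{\overline{\rm DR}},\mathbf{D}_{l}^{\rm os}\}\).

```python
import numpy as np

def normalized_det(A):
    # D_l := |det A_l| / ||A_l||, Eq. (28); Frobenius norm as an example choice
    return abs(np.linalg.det(A)) / np.linalg.norm(A)

def choose_best_rs(A_by_scheme):
    """Pick the scheme RS_L with the maximum normalized determinant, Eq. (29).

    A_by_scheme: dict mapping a scheme label (e.g. 'CCN_1', 'CNN_213') to its
    precomputed transformation matrix at the given parameter point.
    """
    D = {label: normalized_det(A) for label, A in A_by_scheme.items()}
    best = max(D, key=D.get)
    return best, D[best]
```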
### 2.4 Concrete implementation in the full-OS renormalization

For each \(\mathrm{RS}_{l}\) as evaluated in Sect. 2.3 we now have os mass matrices for charginos and neutralinos, collectively denoted as \(\mathbf{X}^{\mathrm{os}}(P_{i,l}^{\mathrm{os}})\) following Eq. (26). We also have os parameters \(P_{i,l}^{\mathrm{os}}(P_{i}^{\overline{\mathrm{DR}}})\) following Eq. (24) and \(\delta P_{i,l}^{\mathrm{os}}(P_{i}^{\overline{\mathrm{DR}}})\) following Eq. (23). This is unsatisfactory for a "true" OS scheme, i.e. one would like to have \(\delta P_{i,l}^{\mathrm{OS}}(P_{i,l}^{\mathrm{OS}})\). Furthermore, when an \(\mathrm{RS}_{l}\) "starts to turn bad" as a function of a \(\overline{\mathrm{DR}}\) parameter, large differences between the \(P_{i,l}^{\mathrm{os}}\) and \(P_{i}^{\overline{\mathrm{DR}}}\) occur, shedding doubt on the above outlined procedure. These problems can be circumvented by extending the above scheme to an evaluation of the counterterms in terms of OS parameters.

We start with the os parameters obtained in Sect. 2.3, \(P_{i,l}^{\mathrm{os}}\). The mass matrices depend on these three input parameters. Now the renormalization process in \(\mathrm{RS}_{l}\) is applied again, starting from the above os values. Following the same steps as in Sect. 2.3 defines the matrix \(\mathbf{A}_{l}^{\mathrm{os}}\). As in the case of the semi-OS scheme, a bad \(\mathrm{RS}_{l}\) is indicated in our ansatz if the matrix \(\mathbf{A}_{l}^{\mathrm{os}}\) does not provide a numerically "well behaved" transition

\[\delta\mathbf{M}_{k,l}^{\mathrm{os}}\stackrel{{\mathbf{A}_{l}^{\mathrm{os}}}}{{\rightarrow}}\delta P_{i,l}^{\mathrm{OS}}\, \tag{30}\]

again suppressing terms involving other counterterms (\(\delta\tan\beta\), \(\delta M_{Z}^{2}\), ...). Following the argument of the "well behaved" transition, \(\mathrm{RS}_{l}\) fails if \(\mathbf{A}_{l}^{\overline{\mathrm{DR}}}\) or \(\mathbf{A}_{l}^{\mathrm{os}}\) become (approximately) singular, or if one of the normalized determinants is small,

\[\mathbf{D}_{l}^{\overline{\mathrm{DR}}}:=\frac{|\det\mathbf{A}_{l}^{\overline{\mathrm{DR}}}|}{||\mathbf{A}_{l}^{\overline{\mathrm{DR}}}||}\ll 1\quad\text{or}\quad\mathbf{D}_{l}^{\mathrm{os}}:=\frac{|\det\mathbf{A}_{l}^{\mathrm{os}}|}{||\mathbf{A}_{l}^{\mathrm{os}}||}\ll 1\, \tag{31}\]

equivalent to \(\mathbf{D}_{l}^{\mathrm{OS}}:=\min\left\{\mathbf{D}_{l}^{\overline{\mathrm{DR}}},\mathbf{D}_{l}^{\mathrm{os}}\right\}\ll 1\). Conversely, the "best" scheme \(\mathrm{RS}_{L}\) can be chosen via the condition of the maximum normalized determinant,

\[\mathrm{RS}_{L}^{\mathrm{OS}}\quad\Leftrightarrow\quad\mathbf{D}_{L}^{\mathrm{OS}}=\max_{l}\left\{\mathbf{D}_{l}^{\mathrm{OS}}\right\}. \tag{32}\]

Now all ingredients for physics calculations are at hand. _(i)_ The physical parameters \(P_{i,L}^{\mathrm{OS}}\) are given via the OS analogue of Eq. (24). _(ii)_ The counterterms for the \(P_{i,L}^{\mathrm{OS}}\) are known from the OS analogue of Eq. (23) as \(\delta P_{i,L}^{\mathrm{OS}}\) and can be inserted as counterterms in a loop calculation. _(iii)_ Inserting \(P_{i,L}^{\mathrm{OS}}\) into the Lagrangian yields six particle masses out of which three are by definition given as their OS values. The remaining OS masses have to be determined calculating three finite shifts, see Ref. [4].

## 3 Numerical example

As a numerical example of the application of our procedure we show in Fig. 1 the results for the decay width \(\Gamma(\tilde{\chi}_{2}^{+}\to\tilde{\chi}_{1}^{0}W^{+})\) as a function of \(\mu\) for \(M_{1}=200\:\mathrm{GeV}\), \(M_{2}=500\:\mathrm{GeV}\) and \(\tan\beta=10\). The results were obtained using the FeynArts/FormCalc/LoopTools set-up [5, 6, 7] with the MSSM model file as defined in Ref. [2]. The upper plot shows the normalized determinants \(\mathbf{D}_{l}^{\overline{\mathrm{DR}}}\) (dotted) and \(\mathbf{D}_{l}^{\mathrm{os}}\) (dashed), see Eq. (31), in four colors for the four "best" RS. The results of the "selected best RS" are overlaid with a gray band.
The horizontal colored bar indicates this best RS for the corresponding value of \(\mu\), following the same color coding as the curves: \(\mathrm{CNN}_{223}\) for \(\mu\lesssim 210\:\mathrm{GeV}\), \(\mathrm{CNN}_{212}\) for \(215\:\mathrm{GeV}\lesssim\mu\lesssim 240\:\mathrm{GeV}\), \(\mathrm{CNN}_{213}\) for \(245\:\mathrm{GeV}\lesssim\mu\lesssim 505\:\mathrm{GeV}\), and \(\mathrm{CNN}_{113}\) for \(510\:\mathrm{GeV}\lesssim\mu\). In this example the selected best scheme has determinants larger than \(\sim 0.5\), indicating that the counterterms can be determined reliably.

The middle left plot shows the tree-level results for the same four selected RS as colored dashed lines, and the results of the "selected best RS" are again overlaid with a gray band. One can observe that where a scheme is chosen, the tree-level width behaves well and smoothly. It reaches zero at \(\mu\sim 330\:\mathrm{GeV}\) because the involved tree-level coupling has an (accidental) zero crossing. On the other hand, outside the selected interval the tree-level result behaves highly irregularly, induced by the shifts in the mass matrices to obtain OS masses. The middle right plot shows the "loop plus real photon emission" results with the same color coding as in the middle left plot. As for the tree-level result, one sees that where a scheme is chosen the loop corrections behave smoothly and their overall size stays at the level of \(\sim 10\%\) or less compared to the tree-level result. As above, outside the chosen interval the loop corrections take irregular values, which sometimes even diverge, owing to a vanishing determinant.

The lower left plot, using again the same color coding, shows the sum of tree-level and higher-order corrections, i.e. of the two previous plots. The same pattern of numerical behavior can be observed. The chosen scheme yields a reliable higher-order corrected result, whereas other schemes result in highly irregular and clearly unreliable results. This is summarized in the lower right plot, where we show the selected tree-level result as a dashed line, the loop result as a dotted line, and the full result as a solid line. The overall behavior is completely well behaved and smooth. A remarkable feature can be observed at \(\mu\sim 500\:\mathrm{GeV}\). Here the selected tree-level result has a kink, because of a change in the shift in the OS values of the involved chargino/neutralino masses, caused by switching from \(\mathrm{CNN}_{213}\) to \(\mathrm{CNN}_{113}\). However, the loop corrections contain a corresponding kink, leading to a completely smooth full one-loop result.

This example demonstrates the power of the new algorithm used to select _beforehand_ the best RS out of many. It also demonstrates that without such a scheme choice completely unreliable results can be obtained.

## Acknowledgements

S.H. thanks the organizers of L&L 2022 for the invitation and the (as always!) inspiring atmosphere. The work of S.H. has received financial support from the grant PID2019-110058GB-C21 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe" (MEINCOP Spain under contract PID2019-110058GB-C21), and in part by the grant IFT Centro de Excelencia Severo Ochoa CEX2020-001007-S funded by MCIN/AEI/10.13039/501100011033.
2304.04028
A descent subgradient method using Mifflin line search for nonsmooth nonconvex optimization
We propose a descent subgradient algorithm for minimizing a real function, assumed to be locally Lipschitz, but not necessarily smooth or convex. To find an effective descent direction, the Goldstein subdifferential is approximated through an iterative process. The method enjoys a new two-point variant of Mifflin line search in which the subgradients are arbitrary. Thus, the line search procedure is easy to implement. Moreover, in comparison to bundle methods, the quadratic subproblems have a simple structure, and to handle nonconvexity the proposed method requires no algorithmic modification. We study the global convergence of the method and prove that any accumulation point of the generated sequence is Clarke stationary, assuming that the objective $f$ is weakly upper semismooth. We illustrate the efficiency and effectiveness of the proposed algorithm on a collection of academic and semi-academic test problems.
Morteza Maleknia, Majid Soleimani-damaneh
2023-04-08T14:43:08Z
http://arxiv.org/abs/2304.04028v1
# A descent subgradient method using Mifflin's line search for nonsmooth nonconvex optimization

###### Abstract

We propose a descent subgradient algorithm for minimizing a function \(f:\mathbb{R}^{n}\to\mathbb{R}\), assumed to be locally Lipschitz, but not necessarily smooth or convex. To find an effective descent direction, the Goldstein \(\varepsilon\)-subdifferential is approximated through an iterative process. The method enjoys a new two-point variant of Mifflin's line search in which the subgradients are arbitrary. Thus, the line search procedure is easy to implement. Moreover, in comparison to bundle methods, the quadratic subproblems have a simple structure, and to handle nonconvexity the proposed method requires no algorithmic modification. We study the global convergence of the method and prove that any accumulation point of the generated sequence is Clarke stationary, assuming that the objective \(f\) is weakly upper semismooth. We illustrate the efficiency and effectiveness of the proposed algorithm on a collection of academic and semi-academic test problems.

## 1 Introduction

We consider the unconstrained minimization problem

\[\min\ f(\mathbf{x})\quad\text{s.t.}\quad\mathbf{x}\in\mathbb{R}^{n}, \tag{1}\]

where the objective \(f:\mathbb{R}^{n}\to\mathbb{R}\) is assumed to be locally Lipschitz, but not necessarily smooth or convex.

### Literature review

Bundle methods, originally developed by Lemarechal [6, 7, 8] and Wolfe [9], are among the most common tools for solving problem (1). A well-developed theoretical base and a nice practical performance make these methods highly popular in nonsmooth optimization. Bundle methods store a number of previously computed trial points along with the corresponding subgradients in a bundle of information. Using the elements of this bundle, a model function for the objective function is constructed. In a standard manner, by minimizing the model function, one can obtain a search direction. Next, a line search procedure finds the next trial point, and the bundle of information is updated accordingly. The aggregation strategy proposed by Kiwiel [10] was an important contribution to the field in resolving some difficulties with the amount of required storage. One can point to the proximal bundle method [11, 12] as one of the most efficient variants of the bundle methods. These methods keep the model function local enough by means of a proximity parameter. Variable metric bundle methods [13, 14, 15, 16] employ quasi-Newton techniques to augment the model function with an approximation of the Hessian matrix. Moreover, some recent variants of the bundle methods that deal with approximate subgradients can be found in [17, 18, 19, 20]. For more recent developments in bundle methods and their applications one can refer to [21, 22, 23, 24, 25, 26]. One drawback of bundle methods is that their generalization from the convex to the nonconvex case requires serious algorithmic modifications, which leads to a much less satisfactory numerical performance.

In 2002, Burke et al. [27] initiated a giant stride towards approximating the subdifferential set by sampling gradients. The results of that work led to an implementable algorithm, namely Gradient Sampling (GS) [28]. This method approximates the \(\varepsilon\)-steepest descent direction to obtain a search direction during each iteration. Then, it employs a standard backtracking Armijo line search to find a suitable step size. A work of Kiwiel [29] improved the convergence results of the original GS method. An extension of the method for solving constrained problems was presented in [30]. A specific variant of the GS approach for solving min-max problems appeared in [31].
Although the original GS approach is robust, it requires \(m>n\) gradient evaluations during each iteration, which makes the method computationally expensive. To tackle this difficulty, some variants of the GS method with the aim of reducing the number of gradient evaluations were developed in [32, 33, 34]. Moreover, some special types of the GS approach which inexactly solve the corresponding quadratic subproblems can be found in [35, 36].

As another class of methods that can deal with problem (1), one can point to the subgradient methods originally proposed by Shor [37]. The initial subgradient method has a very simple structure, as any direction opposite to an arbitrary subgradient can be employed as a search direction, not to mention making use of an off-line sequence of step sizes. However, the approach suffers from several limitations, including a poor speed of convergence, lack of descent, deficiency of a practical stopping criterion based on first-order optimality conditions, and limited convergence results for nonconvex objectives. To boost the convergence speed, a subgradient method with space dilation was suggested in [38]. Based on the space dilation operator, Shor developed another variant of subgradient methods, namely the \(r\)-algorithm [37, 39]. One can consider these modified subgradient methods as variable metric methods which do not satisfy the secant equation. Besides a limited theoretical foundation, these approaches do not have a stopping criterion based on a necessary optimality condition. Moreover, the amount of storage required for the corresponding operators poses some difficulties with medium and large-scale problems.

Owing to some features of bundle methods, Bagirov et al. [40, 41] proposed a descent subgradient algorithm for solving problem (1). Their approach is interesting as it enjoys a practical stopping criterion and, unlike bundle methods, it requires no algorithmic modifications to handle nonconvexity. However, the user has to supply those subgradients which approximately satisfy the conditions in Lebourg's mean value theorem [42], namely quasi-secants. In fact, Bagirov et al.'s method does not work with arbitrary subgradients. Another descent subgradient algorithm in which the subgradients are not arbitrary can be found in [43].

### The proposed method

In this study, we propose a descent subgradient algorithm for solving problem (1). By Rademacher's theorem [44], we know that the locally Lipschitz function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is differentiable almost everywhere on \(\mathbb{R}^{n}\). Moreover, in many practical situations, locally Lipschitz functions are continuously differentiable (smooth) almost everywhere on \(\mathbb{R}^{n}\). Indeed, while minimizing a locally Lipschitz function over \(\mathbb{R}^{n}\) on a machine which uses IEEE double or single precision arithmetic, a nonsmooth point is never encountered except in trivial or pathological cases. In this regard, the Clarke subdifferential merely suggests the traditional steepest descent direction as a search direction, which is not an effective descent direction in nonsmooth optimization [45, 46]. To avoid this issue, we employ the Goldstein \(\varepsilon\)-subdifferential [47], which stabilizes our choice of the search direction. More precisely, our main idea is to develop an iterative procedure to approximate the Goldstein \(\varepsilon\)-subdifferential, which leads to an estimation of the \(\varepsilon\)-steepest descent direction.
The heart of the proposed method is a new two-point variant of Mifflin's line search whose finite convergence is guaranteed under the assumption that the objective \(f\) is weakly upper semismooth. Thanks to the proposed line search, our algorithm works with arbitrary subgradients, which is not the case in [41] and [43]. As opposed to bundle methods, the proposed method requires no algorithmic or parametric modifications to handle nonconvexity. In addition, the structure of the quadratic subproblems is simpler than in bundle-type methods. In contrast with the original GS method, our approximation of the Goldstein \(\varepsilon\)-subdifferential is improved sequentially, and hence the proposed approach needs fewer subgradient evaluations than the original GS method. To control the size of the quadratic subproblems, the user can optionally employ an adaptive subgradient selection strategy to discard almost redundant subgradients. We study the global convergence of the method and prove that any accumulation point of the generated sequence is Clarke stationary for the objective \(f\). By means of numerical experiments, we show the efficiency of the method in practice. To this end, first we consider a set of academic nonsmooth convex and nonconvex test problems to provide some comparative results. Next, we apply our method to a nonsmooth model arising in data clustering. In our third experiment, we consider the problem of Chebyshev approximation by polynomials. Finally, we turn to the problem of minimizing eigenvalue products.

### Outline

In Section 2, we provide some required preliminaries. Section 3 describes the proposed approach for finding a descent direction. Approximate Clarke stationary points are computed in Section 4, and a Clarke stationary point for the objective function is obtained in Section 5. Numerical results are reported in Section 6, and Section 7 concludes the paper.

## 2 Preliminaries

Throughout this paper, we use the following notation. The usual inner product in the Euclidean space \(\mathbb{R}^{n}\) is denoted by \(\mathbf{x}^{T}\mathbf{y}\), which induces the Euclidean norm \(\|\mathbf{x}\|=(\mathbf{x}^{T}\mathbf{x})^{1/2}\). An open ball with center \(\mathbf{x}\in\mathbb{R}^{n}\) and radius \(\varepsilon\geq 0\) is denoted by \(\mathcal{B}(\mathbf{x},\varepsilon)\), that is,

\[\mathcal{B}(\mathbf{x},\varepsilon):=\{\mathbf{y}\in\mathbb{R}^{n}\ :\ \|\mathbf{y}-\mathbf{x}\|<\varepsilon\}.\]

Moreover, \(\mathbb{N}\) is the set of natural numbers, \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\), and \(\mathbb{R}_{+}:=(0,\infty)\).

Suppose \(f:\mathbb{R}^{n}\to\mathbb{R}\) is a locally Lipschitz function. Then, by Rademacher's theorem [44], \(f\) is differentiable almost everywhere on \(\mathbb{R}^{n}\). Let

\[\Omega_{f}:=\{\mathbf{x}\in\mathbb{R}^{n}\ :\ f\text{ is not differentiable at }\mathbf{x}\}.\]

Then, the Clarke subdifferential of \(f\) at a point \(\mathbf{x}\in\mathbb{R}^{n}\) is defined as [42]

\[\partial f(\mathbf{x}):=\texttt{conv}\{\mathbf{\xi}\in\mathbb{R}^{n}\ :\ \exists\,\{\mathbf{x}_{i}\}\subset\mathbb{R}^{n}\setminus\Omega_{f}\ \text{ s.t. }\ \mathbf{x}_{i}\to\mathbf{x}\text{ and }\nabla f(\mathbf{x}_{i})\to\mathbf{\xi}\},\]

where \(\texttt{conv}\) denotes the convex hull operator.
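For instance, for the absolute value function \(f(x)=|x|\) on \(\mathbb{R}\), every sequence of differentiable points converging to the origin has gradients \(\pm 1\); hence \(\partial f(0)=\texttt{conv}\{-1,1\}=[-1,1]\), whereas \(\partial f(x)=\{\mathrm{sign}(x)\}\) for \(x\neq 0\).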
Furthermore, for any \(\varepsilon\geq 0\), the (Goldstein) \(\varepsilon\)-subdifferential of \(f\) at a point \(\mathbf{x}\in\mathbb{R}^{n}\) is the set [48]

\[\partial_{\varepsilon}f(\mathbf{x}):=\texttt{cl\,conv}\{\partial f(\mathbf{y})\ :\ \mathbf{y}\in\mathcal{B}(\mathbf{x},\varepsilon)\},\]

in which \(\texttt{cl\,conv}\) is the closure of the convex hull. If \(\varepsilon=0\), we have \(\partial f(\mathbf{x})=\partial_{0}f(\mathbf{x})\), for all \(\mathbf{x}\in\mathbb{R}^{n}\). In addition, for any \(\varepsilon\geq 0\) and \(\mathbf{x}\in\mathbb{R}^{n}\), the set \(\partial_{\varepsilon}f(\mathbf{x})\) is a nonempty, convex and compact subset of \(\mathbb{R}^{n}\). If \(f\) is differentiable at \(\mathbf{x}\in\mathbb{R}^{n}\), then \(\nabla f(\mathbf{x})\in\partial f(\mathbf{x})\). Furthermore, if \(f\) is smooth at \(\mathbf{x}\in\mathbb{R}^{n}\), we have \(\{\nabla f(\mathbf{x})\}=\partial f(\mathbf{x})\). Also, the set-valued map \(\partial_{\varepsilon}f:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) is locally bounded and upper semicontinuous [42]. It is recalled that for a point \(\mathbf{x}\in\mathbb{R}^{n}\) to be a local minimizer of the locally Lipschitz function \(f\), it is necessary that \(\mathbf{0}\in\partial f(\mathbf{x})\). Such a point is called a _Clarke stationary_ point. In the proposed method, the following concept of stationarity plays a crucial role.

**Definition 2.1**.: Let \(\mathbf{x}\in\mathbb{R}^{n}\), \(\varepsilon>0\), and \(\delta>0\). Assume \(\mathcal{G}_{\varepsilon}(\mathbf{x})\subset\partial_{\varepsilon}f(\mathbf{x})\) is a nonempty inner approximation of \(\partial_{\varepsilon}f(\mathbf{x})\). Then the point \(\mathbf{x}\in\mathbb{R}^{n}\) is called a \((\delta,\mathcal{G}_{\varepsilon}(\mathbf{x}))\)-stationary point if

\[\min\{\|\mathbf{g}\|\ :\ \mathbf{g}\in\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x})\}\leq\delta.\]

## 3 In quest of a descent direction

Our main idea to obtain a descent direction for the locally Lipschitz function \(f:\mathbb{R}^{n}\to\mathbb{R}\) at a point \(\mathbf{x}\in\mathbb{R}^{n}\) is to approximate the \(\varepsilon\)-steepest descent direction. In this respect, we concisely review the notions of steepest descent and \(\varepsilon\)-steepest descent directions. For the locally Lipschitz objective \(f\), the steepest descent direction at a point \(\mathbf{x}\in\mathbb{R}^{n}\) is obtained from the following minimization problem [28, 49]:

\[\min\{\|\mathbf{\xi}\|\ :\ \mathbf{\xi}\in\partial f(\mathbf{x})\}. \tag{2}\]

Let \(\mathbf{\xi}^{*}\neq\mathbf{0}\) be the optimal solution of problem (2). Then, the direction \(\mathbf{\tilde{d}}:=-\mathbf{\xi}^{*}/\|\mathbf{\xi}^{*}\|\) is called the _(normalized) steepest descent direction_. In many iterative algorithms for solving problem (1), we often land on a continuously differentiable point which is close to the nonsmooth region \(\Omega_{f}\). In this situation, \(\partial f(\mathbf{x})\) does not contain any information about the nearby nonsmooth region; in other words, \(\partial f(\mathbf{x})=\{\nabla f(\mathbf{x})\}\). Thus, the steepest descent direction coincides with the direction \(-\nabla f(\mathbf{x})/\|\nabla f(\mathbf{x})\|\), which is not an effective descent direction for the nonsmooth function \(f\). In contrast, in the same situation, the \(\varepsilon\)-subdifferential \(\partial_{\varepsilon}f(\mathbf{x})\) can capture some local information of the nearby nonsmooth region and provide an effective descent direction.
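To see this effect on a concrete example, consider again \(f(x)=|x|\) at a point \(0<x<\varepsilon\): there \(\partial f(x)=\{1\}\), so the steepest descent direction is blind to the kink at the origin, whereas \(\mathcal{B}(x,\varepsilon)\) contains the origin and hence \(\partial_{\varepsilon}f(x)=[-1,1]\), whose minimum-norm element \(0\) correctly signals that \(x\) lies within \(\varepsilon\) of the minimizer.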
If \(\mathbf{\xi}^{*}\neq\mathbf{0}\) solves the following minimization problem:

\[\min\{\|\mathbf{\xi}\|\ :\ \mathbf{\xi}\in\partial_{\varepsilon}f(\mathbf{x})\}, \tag{3}\]

we call the direction \(\mathbf{\tilde{d}}:=-\mathbf{\xi}^{*}/\|\mathbf{\xi}^{*}\|\) the _(normalized) \(\varepsilon\)-steepest descent direction_, which is similar to those introduced in [28, 37]. As observed, to solve problem (3), we need to know the entire subdifferential on \(\mathcal{B}(\mathbf{x},\varepsilon)\), which is impractical in many real-life situations. In this regard, we develop an iterative procedure to efficiently approximate \(\partial_{\varepsilon}f(\mathbf{x})\).

For a given point \(\mathbf{x}\in\mathbb{R}^{n}\), and given scalars \(m\in\mathbb{N}\) and \(\varepsilon>0\), let

\[\mathcal{G}_{\varepsilon}(\mathbf{x}):=\{\mathbf{\xi}_{1},\mathbf{\xi}_{2},\ldots,\mathbf{\xi}_{m}\}\subset\partial_{\varepsilon}f(\mathbf{x})\]

be a collection of subgradients. Then, we consider

\[\texttt{conv}\,\mathcal{G}_{\varepsilon}(\mathbf{x})\subset\partial_{\varepsilon}f(\mathbf{x})\]

as an inner approximation of \(\partial_{\varepsilon}f(\mathbf{x})\), and solve the following minimization problem:

\[\min\{\|\mathbf{g}\|\ :\ \mathbf{g}\in\texttt{conv}\,\mathcal{G}_{\varepsilon}(\mathbf{x})\}, \tag{4}\]

which is a practical approximation of problem (3). If \(\mathbf{g}^{*}\neq\mathbf{0}\) is the optimal solution of problem (4), the direction \(\mathbf{d}:=-\mathbf{g}^{*}/\|\mathbf{g}^{*}\|\) is an approximation of \(\mathbf{\tilde{d}}\). In case \(\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x})\) is a good approximation of \(\partial_{\varepsilon}f(\mathbf{x})\), one can use the direction \(\mathbf{d}\) to take a descent step, i.e., there exists a step length \(t>0\) satisfying the following sufficient decrease condition:

\[f(\mathbf{x}+t\mathbf{d})-f(\mathbf{x})\leq-\beta t\|\mathbf{g}^{*}\|\quad\text{and}\quad t\geq\bar{t}, \tag{5}\]

where \(\beta\in(0,1)\) is a sufficient decrease parameter, and \(\bar{t}>0\) is a lower bound for the step length \(t\). Otherwise, the working set \(\mathcal{G}_{\varepsilon}(\mathbf{x})\) should be improved by appending a new element of \(\partial_{\varepsilon}f(\mathbf{x})\), namely \(\mathbf{\xi}_{m+1}\). The new subgradient \(\mathbf{\xi}_{m+1}\in\partial_{\varepsilon}f(\mathbf{x})\) must be chosen such that

\[\mathbf{\xi}_{m+1}\notin\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x}). \tag{6}\]

In this way, if we update \(\mathcal{G}_{\varepsilon}(\mathbf{x})\) by

\[\mathcal{G}_{\varepsilon}^{+}(\mathbf{x}):=\mathcal{G}_{\varepsilon}(\mathbf{x})\cup\{\mathbf{\xi}_{m+1}\},\]

we have \(\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x})\subsetneq\texttt{conv}\mathcal{G}_{\varepsilon}^{+}(\mathbf{x})\). In other words, our approximation of \(\partial_{\varepsilon}f(\mathbf{x})\) is improved significantly. The following lemma provides a useful criterion to find the new subgradient \(\mathbf{\xi}_{m+1}\in\partial_{\varepsilon}f(\mathbf{x})\) which satisfies condition (6).

**Lemma 3.1**.: _Let \(\mathbf{g}^{*}\neq\mathbf{0}\) be the optimal solution of problem (4), and \(\mathbf{d}=-\mathbf{g}^{*}/\|\mathbf{g}^{*}\|\). For a \(\beta\in(0,1)\) and \(\mathbf{\xi}_{m+1}\in\partial_{\varepsilon}f(\mathbf{x})\), assume_

\[\mathbf{\xi}_{m+1}^{T}\mathbf{d}\geq-\beta\|\mathbf{g}^{*}\|.
\tag{7}\]

_Then \(\mathbf{\xi}_{m+1}\notin\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x})\)._

Proof.: Since \(\mathbf{g}^{*}\) solves problem (4), we have [50]

\[-\mathbf{g}^{*}\in\mathcal{N}_{\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x})}(\mathbf{g}^{*}),\]

in which \(\mathcal{N}_{\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x})}(\mathbf{g}^{*})\) denotes the normal cone of the set \(\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x})\) at \(\mathbf{g}^{*}\). This means

\[\mathbf{g}^{T}\mathbf{g}^{*}\geq\|\mathbf{g}^{*}\|^{2},\quad\text{for all }\mathbf{g}\in\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x}). \tag{8}\]

Therefore, if we compute \(\mathbf{\xi}_{m+1}\in\partial_{\varepsilon}f(\mathbf{x})\) such that \(\mathbf{\xi}_{m+1}^{T}\mathbf{d}\geq-\beta\|\mathbf{g}^{*}\|\), inequality (8) implies \(\mathbf{\xi}_{m+1}\notin\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x})\).

Based on the preceding discussion, in Algorithm 1, we present a new two-point variant of Mifflin's line search [51, 52] which either obtains a step length \(t>0\) satisfying sufficient decrease condition (5) or provides some \(\mathbf{\xi}_{m+1}\in\partial_{\varepsilon}f(\mathbf{x})\) satisfying criterion (7). There are three conditional blocks and two step lengths, \(t_{i}\) and \(\bar{t}_{i}\), in Algorithm 1. We employ the step length \(t_{i}\) to find an element of \(\partial_{\varepsilon}f(\mathbf{x})\) satisfying (7), which is controlled by the third conditional block. Since we should keep the computed subgradients in \(\partial_{\varepsilon}f(\mathbf{x})\), the trial step length \(t_{i}\) varies within the interval \((0,\varepsilon)\). The length of this interval is efficiently reduced by the first conditional block. Also, by using the trial step length \(\bar{t}_{i}\), we look for a suitable step length in the interval \((0,1]\) satisfying the sufficient decrease condition (5), which is done in the second conditional block. Consequently, the indicator \(\mathbf{I}=1\) suggests a descent step using the resulting point \(\mathbf{s}\in\mathbb{R}^{n}\), i.e., the current point \(\mathbf{x}\) is updated by \(\mathbf{x}^{+}:=\mathbf{s}\). On the other hand, the indicator \(\mathbf{I}=0\) reveals that one can use the obtained subgradient \(\mathbf{s}\in\partial_{\varepsilon}f(\mathbf{x})\) to improve \(\mathcal{G}_{\varepsilon}(\mathbf{x})\), i.e., we set \(\mathbf{\xi}_{m+1}:=\mathbf{s}\) and \(\mathcal{G}_{\varepsilon}(\mathbf{x})\) is updated by \(\mathcal{G}_{\varepsilon}^{+}(\mathbf{x}):=\mathcal{G}_{\varepsilon}(\mathbf{x})\cup\{\mathbf{\xi}_{m+1}\}\).

```
Inputs: Radius \(\varepsilon\in(0,1)\), current point \(\mathbf{x}\in\mathbb{R}^{n}\), and search direction \(\mathbf{d}=-\mathbf{g}^{*}/\|\mathbf{g}^{*}\|\) with \(\mathbf{g}^{*}\neq\mathbf{0}\).
Parameters: \(\beta_{1},\beta_{2}\in(0,1)\) with \(\beta_{1}<\beta_{2}\), reduction factor \(\zeta\in(0,0.5)\), lower bound \(\bar{t}\in(0,\varepsilon)\), and positive integer \(p\in\mathbb{N}\).
Outputs: A point \(\mathbf{s}\in\mathbb{R}^{n}\) and the indicator \(\mathbf{I}\in\{0,1\}\).
Function: \(\{\mathbf{s},\mathbf{I}\}=\texttt{T-PLS}\left(\varepsilon,\,\mathbf{x},\,\mathbf{d}\right)\)
1:  Initialization: Choose \(t_{0}\in(\bar{t},\varepsilon)\) and set \(\bar{t}_{0}:=1\). Compute \(\mathbf{\xi}_{0}\in\partial f(\mathbf{x}+t_{0}\mathbf{d})\), and set \(t_{0}^{l}:=0\), \(t_{0}^{u}:=\varepsilon\), \(i:=0\) ;
2:  while true do
3:    if \(f(\mathbf{x}+t_{i}\mathbf{d})-f(\mathbf{x})\leq-\beta_{1}\,t_{i}\,\|\mathbf{g}^{*}\|\), then
4:      Set \(t_{i+1}^{l}:=t_{i},\;\;t_{i+1}^{u}:=t_{i}^{u}\) ;
5:    else
6:      Set \(t_{i+1}^{l}:=t_{i}^{l},\;\;t_{i+1}^{u}:=t_{i}\) ;
7:    endif
8:    if \(f(\mathbf{x}+\bar{t}_{i}\mathbf{d})-f(\mathbf{x})\leq-\beta_{1}\,\bar{t}_{i}\,\|\mathbf{g}^{*}\|\) and \(\bar{t}_{i}\geq\bar{t}\), then
9:      Set \(\mathbf{I}:=1\) and \(\mathbf{s}:=\mathbf{x}+\bar{t}_{i}\mathbf{d}\) ;
10:     return \(\{\mathbf{s},\mathbf{I}\}\) and Stop ;
11:   endif
12:   if \(\mathbf{\xi}_{i}^{T}\mathbf{d}\geq-\beta_{2}\|\mathbf{g}^{*}\|\), then
13:     Set \(\mathbf{I}:=0\) and \(\mathbf{s}:=\mathbf{\xi}_{i}\) ;
14:     return \(\{\mathbf{s},\mathbf{I}\}\) and Stop ;
15:   endif
16:   Choose \(t_{i+1}\in\left[t_{i+1}^{l}+\zeta(t_{i+1}^{u}-t_{i+1}^{l}),\,t_{i+1}^{u}-\zeta(t_{i+1}^{u}-t_{i+1}^{l})\right]\) ;
17:   Set \(\bar{t}_{i+1}:=\exp(\frac{\log t_{0}}{p})^{i+1}\) ;
18:   Compute \(\mathbf{\xi}_{i+1}\in\partial f(\mathbf{x}+t_{i+1}\mathbf{d})\) ;
19:   Set \(i:=i+1\) ;
20: endwhile
End Function
```
**Algorithm 1** A Two-Point Line Search (T-PLS)
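As an illustration of how the two building blocks introduced so far fit together, the following minimal Python sketch transcribes the quadratic subproblem (4), solved over simplex weights with SciPy's SLSQP routine as a stand-in QP solver, and the line search of Algorithm 1. It is meant as illustration only, not as the reference implementation (which was written in Matlab, cf. Section 6). The callables `f` and `subgrad` are hypothetical handles returning \(f(\mathbf{x})\) and an arbitrary element of \(\partial f(\mathbf{x})\); the midpoint rule for \(t_{i+1}\) is one valid choice, since any \(\zeta<1/2\) admits it.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_element(G):
    """Solve problem (4): min ||sum_j lam_j xi_j|| over the unit simplex.
    G is an (m, n) array whose rows are the subgradients in G_eps(x)."""
    m = G.shape[0]
    Q = G @ G.T                                   # Gram matrix of subgradients
    res = minimize(lambda lam: lam @ Q @ lam, np.full(m, 1.0 / m),
                   jac=lambda lam: 2.0 * Q @ lam,
                   bounds=[(0.0, 1.0)] * m,
                   constraints=({'type': 'eq',
                                 'fun': lambda lam: lam.sum() - 1.0},),
                   method='SLSQP')
    return res.x @ G                              # g* in conv G_eps(x)

def t_pls(f, subgrad, x, d, g_norm, eps, beta1=1e-6, beta2=0.1, p=25):
    """Sketch of Algorithm 1: returns (s, 1) for a descent point,
    or (s, 0) where s is a new subgradient satisfying criterion (7)."""
    t_min = eps / 2.0                 # lower bound t-bar in (0, eps)
    t0 = (t_min + eps) / 2.0          # t_0 in (t-bar, eps)
    t, tbar = t0, 1.0                 # trial step lengths t_i and bar{t}_i
    tl, tu = 0.0, eps
    fx = f(x)
    xi = subgrad(x + t * d)
    i = 0
    while True:
        if f(x + t * d) - fx <= -beta1 * t * g_norm:   # first block
            tl = t
        else:
            tu = t
        if f(x + tbar * d) - fx <= -beta1 * tbar * g_norm and tbar >= t_min:
            return x + tbar * d, 1                     # second block: descent
        if xi @ d >= -beta2 * g_norm:                  # third block: (7) holds
            return xi, 0
        t = 0.5 * (tl + tu)           # midpoint is zeta-safeguarded, zeta < 1/2
        i += 1
        tbar = t0 ** (i / p)          # bar{t}_i = exp(log t_0 / p)^i
        xi = subgrad(x + t * d)
```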
In the following, we show that Algorithm 1 terminates after a finite number of iterations. To this end, we start with the following lemma.

**Lemma 3.2**.: _Suppose that Algorithm 1 does not terminate. Then_

1. _For any_ \(i\geq 0\)_, we have_ \(t_{i}\in\{t_{i+1}^{l},t_{i+1}^{u}\}\)_. Moreover, for all_ \(i\geq 1\)_,_ \[0<t_{i+1}^{u}-t_{i+1}^{l}\leq(1-\zeta)(t_{i}^{u}-t_{i}^{l}), \tag{9}\] \[0\leq t_{i}^{l}\leq t_{i+1}^{l}<t_{i+1}^{u}\leq t_{i}^{u}\leq\varepsilon. \tag{10}\]
2. _There exists_ \(t^{*}\in[0,\varepsilon]\) _such that_ \(t^{u}_{i}\downarrow t^{*}\)_,_ \(t^{l}_{i}\uparrow t^{*}\)_, and_ \(t_{i}\to t^{*}\) _as_ \(i\to\infty\)_. In addition,_ \[t^{*}\in\mathcal{T}:=\{t\ :\ f(\boldsymbol{x}+t\boldsymbol{d})-f(\boldsymbol{x})\leq-\beta_{1}\,t\,\|\boldsymbol{g}^{*}\|\}.\]
3. _Let_ \(\mathcal{I}:=\{i\ :\ t^{u}_{i+1}=t_{i}\}\)_. Then_ \(\mathcal{I}\) _is infinite._

Proof.: (i) This part follows immediately from the construction of the algorithm.

(ii) We conclude from (9) that the sequence \(\{t^{u}_{i}-t^{l}_{i}\}_{i}\) is bounded from below and decreasing. Thus, it converges. Assume \(\{t^{u}_{i}-t^{l}_{i}\}\to A\) as \(i\to\infty\). Letting \(i\) tend to infinity in inequality (9), we deduce \(0\leq A\leq(1-\zeta)A\). Now, \(\zeta\in(0,0.5)\) gives \(A=0\). Since \(\{t^{u}_{i}-t^{l}_{i}\}\to 0\) as \(i\to\infty\), inequality (10) implies the existence of \(t^{*}\in[0,\varepsilon]\) such that \(t^{l}_{i}\uparrow t^{*}\), \(t^{u}_{i}\downarrow t^{*}\) as \(i\to\infty\). Furthermore, the fact that \(t_{i}\in\{t^{l}_{i+1},t^{u}_{i+1}\}\) for all \(i\geq 0\) yields \(t_{i}\to t^{*}\) as \(i\to\infty\). To prove \(t^{*}\in\mathcal{T}\), we note that \(t^{l}_{i}\in\mathcal{T}\) for all \(i\geq 0\). In other words,

\[f(\boldsymbol{x}+t^{l}_{i}\boldsymbol{d})-f(\boldsymbol{x})\leq-\beta_{1}\,t^{l}_{i}\,\|\boldsymbol{g}^{*}\|,\qquad\text{for all }i\geq 0.\]

Therefore, continuity of \(f\) along with the fact that \(t^{l}_{i}\uparrow t^{*}\) as \(i\to\infty\) implies

\[f(\boldsymbol{x}+t^{*}\boldsymbol{d})-f(\boldsymbol{x})\leq-\beta_{1}\,t^{*}\,\|\boldsymbol{g}^{*}\|,\]

which means \(t^{*}\in\mathcal{T}\).

(iii) First, we prove \(\mathcal{I}\neq\emptyset\). By contradiction, suppose \(\mathcal{I}=\emptyset\), which means

\[f(\boldsymbol{x}+t_{i}\boldsymbol{d})-f(\boldsymbol{x})\leq-\beta_{1}\,t_{i}\,\|\boldsymbol{g}^{*}\|,\qquad\text{for all }i\geq 0.\]

In particular, for \(i=0\), we have

\[f(\boldsymbol{x}+t_{0}\boldsymbol{d})-f(\boldsymbol{x})\leq-\beta_{1}\,t_{0}\,\|\boldsymbol{g}^{*}\|.
\tag{11}\]

On the other hand, at iteration \(i=p-1\), one has \(\bar{t}_{i+1}=\exp(\frac{\log t_{0}}{p})^{p}=t_{0}>\bar{t}\). This fact along with (11) implies that Algorithm 1 terminates at iteration \(i=p\) with indicator \(\boldsymbol{I}=1\), which violates the assumption. Thus, \(\mathcal{I}\neq\emptyset\).

Next, we prove that \(\mathcal{I}\) is infinite. By contradiction, assume \(\mathcal{I}\) is finite. Then, as \(\mathcal{I}\neq\emptyset\) and \(t^{u}_{i}\downarrow t^{*}\) as \(i\to\infty\), there exists \(\bar{i}\in\mathbb{N}\) such that

\[t^{u}_{i}=t^{*},\qquad\text{for all }i\geq\bar{i}\quad\text{and}\quad t^{u}_{i}>t^{*},\qquad\text{for all }i<\bar{i}.\]

Thus, \(t^{*}=t^{u}_{\bar{i}}=t_{\bar{i}-1}\), and hence

\[f(\boldsymbol{x}+t_{\bar{i}-1}\boldsymbol{d})-f(\boldsymbol{x})>-\beta_{1}\,t_{\bar{i}-1}\,\|\boldsymbol{g}^{*}\|,\]

yielding \(t^{*}\notin\mathcal{T}\), which violates the fact that \(t^{*}\in\mathcal{T}\).

Now, we are prepared to state the main result for Algorithm 1. Before that, we need the following semismoothness assumption on the objective function \(f\), which is commonly used in nonsmooth optimization [51, 53].

**Assumption 1**.: _For any \(\mathbf{z},\mathbf{d}\in\mathbb{R}^{n}\) and sequences \(\{\mathbf{\xi}_{i}\}_{i}\subset\mathbb{R}^{n}\) and \(\{h_{i}\}_{i}\subset\mathbb{R}_{+}\) satisfying \(h_{i}\downarrow 0\) as \(i\to\infty\) and \(\mathbf{\xi}_{i}\in\partial f(\mathbf{z}+h_{i}\mathbf{d})\), one has_

\[\limsup_{i\to\infty}\mathbf{\xi}_{i}^{T}\mathbf{d}\geq\liminf_{i\to\infty}\frac{f(\mathbf{z}+h_{i}\mathbf{d})-f(\mathbf{z})}{h_{i}}.\]

A locally Lipschitz function \(f\) which satisfies the above assumption is called _weakly upper semismooth_ [53]. The class of weakly upper semismooth functions is quite broad. For example, convex, concave, and max- and min-type functions are weakly upper semismooth (for more details, see [54] and [53]).

**Theorem 3.3**.: _Suppose that Assumption 1 holds. Then Algorithm 1 terminates after finitely many iterations._

Proof.: By indirect proof, suppose that Algorithm 1 does not terminate. Let \(\mathcal{I}\) be as defined in part (iii) of Lemma 3.2. Then \(\mathcal{I}\) is infinite and

\[f(\mathbf{x}+t_{i}\mathbf{d})-f(\mathbf{x})>-\beta_{1}t_{i}\|\mathbf{g}^{*}\|,\quad\text{for all }i\in\mathcal{I}. \tag{12}\]

Moreover, in virtue of part (ii) of Lemma 3.2, we have \(t_{i}\to t^{*}\) as \(i\to\infty\) with \(t^{*}\in\mathcal{T}\), i.e.,

\[f(\mathbf{x}+t^{*}\mathbf{d})-f(\mathbf{x})\leq-\beta_{1}t^{*}\|\mathbf{g}^{*}\|. \tag{13}\]

Combining (12) and (13), one can write

\[f(\mathbf{x}+t_{i}\mathbf{d})-f(\mathbf{x}+t^{*}\mathbf{d})>-\beta_{1}\|\mathbf{g}^{*}\|(t_{i}-t^{*}),\quad\text{for all }i\in\mathcal{I}. \tag{14}\]

Let \(h_{i}:=t_{i}-t^{*}>0\), for all \(i\in\mathcal{I}\), and \(\mathbf{z}:=\mathbf{x}+t^{*}\mathbf{d}\). Then (14) is represented as

\[-\beta_{1}\|\mathbf{g}^{*}\|<\frac{f(\mathbf{z}+h_{i}\mathbf{d})-f(\mathbf{z})}{h_{i}},\quad\text{for all }i\in\mathcal{I}. \tag{15}\]

Inequality (15) along with the semismoothness hypothesis of Assumption 1 yields

\[-\beta_{1}\|\mathbf{g}^{*}\|\leq\liminf_{i\xrightarrow{\mathcal{I}}\infty}\frac{f(\mathbf{z}+h_{i}\mathbf{d})-f(\mathbf{z})}{h_{i}}\leq\limsup_{i\xrightarrow{\mathcal{I}}\infty}\mathbf{\xi}_{i}^{T}\mathbf{d}.
\tag{16}\]

On the other hand, as the algorithm does not terminate by the third conditional block, it must be the case that

\[\mathbf{\xi}_{i}^{T}\mathbf{d}<-\beta_{2}\|\mathbf{g}^{*}\|,\quad\text{for all }i\in\mathcal{I}.\]

Therefore

\[\limsup_{i\xrightarrow{\mathcal{I}}\infty}\mathbf{\xi}_{i}^{T}\mathbf{d}\leq-\beta_{2}\|\mathbf{g}^{*}\|<-\beta_{1}\|\mathbf{g}^{*}\|,\]

which contradicts (16).

## 4 Computation of a \((\delta,\mathcal{G}_{\varepsilon}(\mathbf{x}))\)-stationary point

In this section, for a given \(\delta>0\) and \(\varepsilon>0\), we employ the proposed line search procedure of Algorithm 1 to develop an iterative process for finding a \((\delta,\mathcal{G}_{\varepsilon}(\mathbf{x}))\)-stationary point. Such a process is presented in Algorithm 2.

```
Inputs: Starting point \(\mathbf{x}_{0}\in\mathbb{R}^{n}\), radius \(\varepsilon\in(0,1)\), stationarity tolerance \(\delta>0\).
Output: A \((\delta,\mathcal{G}_{\varepsilon}(\mathbf{x}))\)-stationary point \(\mathbf{x}\in\mathbb{R}^{n}\).
Function: \(\mathbf{x}\) = DG-SP\((\mathbf{x}_{0},\varepsilon,\delta)\)
1:  Initialization: Compute \(\mathbf{\xi}_{0}\in\partial f(\mathbf{x}_{0})\), set \(\mathcal{G}_{\varepsilon}(\mathbf{x}_{0}):=\{\mathbf{\xi}_{0}\}\) and \(k:=0\) ;
2:  while true do
3:    Set \(\mathbf{g}_{k}^{*}:=\arg\min\{\|\mathbf{g}\|:\;\mathbf{g}\in\texttt{conv}\mathcal{G}_{\varepsilon}(\mathbf{x}_{k})\}\) ;
4:    if \(\|\mathbf{g}_{k}^{*}\|\leq\delta\), then
5:      return \(\mathbf{x}_{k}\) as a \((\delta,\mathcal{G}_{\varepsilon}(\mathbf{x}_{k}))\)-stationary point and Stop ;
6:    endif
7:    Compute the search direction \(\mathbf{d}_{k}:=-\mathbf{g}_{k}^{*}/\|\mathbf{g}_{k}^{*}\|\) ;
8:    Set \(\{\mathbf{s}_{k},\mathbf{I}_{k}\}:=\texttt{T-PLS}\left(\varepsilon,\,\mathbf{x}_{k},\,\mathbf{d}_{k}\right)\) ;
9:    if \(\mathbf{I}_{k}=1\), then
10:     Set \(\mathbf{x}_{k+1}:=\mathbf{s}_{k}\) ;
11:     Compute \(\mathbf{\xi}_{k+1}\in\partial f(\mathbf{x}_{k+1})\) ;
12:     Set \(\mathcal{G}_{\varepsilon}(\mathbf{x}_{k+1}):=\{\mathbf{\xi}_{k+1}\}\) ;
13:   endif
14:   if \(\mathbf{I}_{k}=0\), then
15:     Set \(\mathbf{x}_{k+1}:=\mathbf{x}_{k}\) and \(\mathbf{\xi}_{k+1}:=\mathbf{s}_{k}\) ;
16:     Set \(\mathcal{G}_{\varepsilon}(\mathbf{x}_{k+1}):=\mathcal{G}_{\varepsilon}(\mathbf{x}_{k})\cup\{\mathbf{\xi}_{k+1}\}\) ;
17:   endif
18:   Set \(k:=k+1\) ;
19: endwhile
End Function
```
**Algorithm 2** Computation of a \((\delta,\mathcal{G}_{\varepsilon}(\mathbf{x}))\)-stationary point

Regarding Algorithm 2, let

\[\mathcal{A}:=\{k\in\mathbb{N}_{0}\;:\;\mathbf{I}_{k}=1\}. \tag{17}\]

In the rest of this section, we aim to show that Algorithm 2 terminates after a finite number of iterations. In the following lemma, \(lev_{\alpha}(f):=\{\mathbf{x}\in\mathbb{R}^{n}\;:\;f(\mathbf{x})\leq\alpha\}\) is the \(\alpha\)-sublevel set of the function \(f\). Since \(f\) is locally Lipschitz, \(lev_{\alpha}(f)\) is closed, for each \(\alpha\in\mathbb{R}\). Moreover, at iteration \(k\) of Algorithm 2, it is assumed that the line search procedure of Algorithm 1 terminates at the \(i_{k}\)-th iteration.

**Lemma 4.1**.: _Suppose that Assumption 1 holds and \(lev_{f(\mathbf{x}_{0})}(f)\) is bounded. If Algorithm 2 does not terminate, i.e., \(k\to\infty\), then \(\mathcal{A}\) is finite._

Proof.: Since \(lev_{f(\mathbf{x}_{0})}(f)\) is bounded and closed, we conclude

\[f^{*}:=\min\{f(\mathbf{x})\;:\;\mathbf{x}\in\mathbb{R}^{n}\}>-\infty. \tag{18}\]

By indirect proof, assume \(\mathcal{A}\) is infinite. As Algorithm 2 does not terminate, one has

\[\|\mathbf{g}_{k}^{*}\|>\delta,\quad\text{for all }k.
\tag{19}\]

Furthermore, for any \(k\in\mathcal{A}\), we have \(\mathbf{I}_{k}=1\) and hence

\[f(\mathbf{x}_{k+1})-f(\mathbf{x}_{k})=f(\mathbf{s}_{k})-f(\mathbf{x}_{k})\leq-\beta_{1}\bar{t}_{i_{k}}\|\mathbf{g}_{k}^{*}\|,\quad\text{for all }k\in\mathcal{A}. \tag{20}\]

By construction of Algorithm 1, we have \(\bar{t}_{i_{k}}\geq\bar{t}>0\). Thus, in view of (19) and (20), one can write

\[f(\mathbf{x}_{k+1})-f(\mathbf{x}_{k})\leq-\beta_{1}\bar{t}\delta,\quad\text{for all }k\in\mathcal{A}. \tag{21}\]

Moreover, for any \(k\in\mathbb{N}_{0}\setminus\mathcal{A}\), we have \(\mathbf{I}_{k}=0\) and thus

\[f(\mathbf{x}_{k+1})=f(\mathbf{x}_{k}),\quad\text{for all }k\in\mathbb{N}_{0}\setminus\mathcal{A}. \tag{22}\]

Using (21) and (22) inductively, for each \(k\in\mathbb{N}_{0}\), we get

\[f(\mathbf{x}_{k+1})\leq f(\mathbf{x}_{0})-\sum_{\begin{subarray}{c}j\in\mathcal{A}\\ j\leq k+1\end{subarray}}\beta_{1}\bar{t}\delta. \tag{23}\]

Since \(\mathcal{A}\) is infinite, \(\sum_{\begin{subarray}{c}j\in\mathcal{A}\\ j\leq k+1\end{subarray}}\beta_{1}\bar{t}\delta\to\infty\) as \(k\to\infty\). Therefore, (23) implies \(f(\mathbf{x}_{k})\to-\infty\) as \(k\to\infty\), which contradicts (18).

Our principal result about Algorithm 2 is stated in the next theorem.

**Theorem 4.2**.: _Suppose that Assumption 1 holds and \(lev_{f(\mathbf{x}_{0})}(f)\) is bounded. Then Algorithm 2 terminates in a finite number of iterations._

Proof.: By indirect proof, assume that Algorithm 2 does not terminate, i.e., \(k\to\infty\). Therefore

\[\|\mathbf{g}_{k}^{*}\|>\delta,\quad\text{for all }k. \tag{24}\]

Let \(\mathcal{A}\) be as defined in (17). By Lemma 4.1, \(\mathcal{A}\) is finite, and we denote the largest index in \(\mathcal{A}\) by \(\bar{k}\) (in case \(\mathcal{A}=\emptyset\), we set \(\bar{k}:=0\)). Let \(\bar{\mathbf{x}}:=\mathbf{x}_{\bar{k}+1}\). Then, for any \(k>\bar{k}\), we have \(\mathbf{I}_{k}=0\), and hence \(\mathbf{x}_{k+1}=\bar{\mathbf{x}}\), for all \(k>\bar{k}\). Moreover,

\[\mathcal{G}_{\varepsilon}(\mathbf{x}_{k+1})=\mathcal{G}_{\varepsilon}(\mathbf{x}_{k})\cup\{\mathbf{\xi}_{k+1}\},\quad\text{for all }k>\bar{k}, \tag{25}\]

in which \(\mathbf{\xi}_{k+1}\) satisfies

\[\mathbf{\xi}_{k+1}^{T}\mathbf{d}_{k}\geq-\beta_{2}\|\mathbf{g}_{k}^{*}\|,\]

or equivalently (note \(\mathbf{d}_{k}=-\mathbf{g}_{k}^{*}/\|\mathbf{g}_{k}^{*}\|\))

\[\mathbf{\xi}_{k+1}^{T}\mathbf{g}_{k}^{*}\leq\beta_{2}\|\mathbf{g}_{k}^{*}\|^{2}. \tag{26}\]

We also note that \(\mathbf{\xi}_{k+1}\in\partial_{\varepsilon}f(\bar{\mathbf{x}})\) and \(\mathbf{g}_{k}^{*}\in\mathsf{conv}\mathcal{G}_{\varepsilon}(\bar{\mathbf{x}})\subset\partial_{\varepsilon}f(\bar{\mathbf{x}})\), for all \(k>\bar{k}\). Compactness of \(\partial_{\varepsilon}f(\bar{\mathbf{x}})\) yields

\[C_{1}:=\sup\{\|\mathbf{\xi}\|\ :\ \mathbf{\xi}\in\partial_{\varepsilon}f(\bar{\mathbf{x}})\}<\infty.\]

Set \(C_{2}:=\max\{C_{1},\delta\}\). Thus

\[\|\mathbf{\xi}_{k+1}-\mathbf{g}_{k}^{*}\|\leq 2C_{2},\quad\text{for all }k>\bar{k}. \tag{27}\]

Next, for any \(t\in(0,1)\) and \(k>\bar{k}\), we have

\[\|\mathbf{g}_{k+1}^{*}\|^{2} \leq\|t\mathbf{\xi}_{k+1}+(1-t)\mathbf{g}_{k}^{*}\|^{2}=t^{2}\|\mathbf{\xi}_{k+1}-\mathbf{g}_{k}^{*}\|^{2}+2t(\mathbf{g}_{k}^{*})^{T}(\mathbf{\xi}_{k+1}-\mathbf{g}_{k}^{*})+\|\mathbf{g}_{k}^{*}\|^{2}.
\tag{28}\]

In view of (26) and (27), one can continue (28) as

\[\|\mathbf{g}_{k+1}^{*}\|^{2} \leq 4t^{2}C_{2}^{2}+2t\beta_{2}\|\mathbf{g}_{k}^{*}\|^{2}-2t\|\mathbf{g}_{k}^{*}\|^{2}+\|\mathbf{g}_{k}^{*}\|^{2}=4t^{2}C_{2}^{2}+\left(1-2t(1-\beta_{2})\right)\|\mathbf{g}_{k}^{*}\|^{2}=:\psi(t), \tag{29}\]

for all \(t\in(0,1)\). One can observe that \(t^{*}:=(1-\beta_{2})\|\mathbf{g}_{k}^{*}\|^{2}/4C_{2}^{2}\in(0,1)\) minimizes \(\psi(t)\) and

\[\psi(t^{*})=\left(1-\frac{(1-\beta_{2})^{2}\|\mathbf{g}_{k}^{*}\|^{2}}{4C_{2}^{2}}\right)\|\mathbf{g}_{k}^{*}\|^{2}.\]

Using (24), the above equality implies

\[\psi(t^{*})\leq\left(1-\frac{(1-\beta_{2})^{2}\delta^{2}}{4C_{2}^{2}}\right)\|\mathbf{g}_{k}^{*}\|^{2}. \tag{30}\]

Since \(\delta\leq C_{2}\) and \(\beta_{2}\in(0,1)\), we conclude \(\sigma:=1-\frac{(1-\beta_{2})^{2}\delta^{2}}{4C_{2}^{2}}\in(0,1)\). Now, (28) and (30) imply

\[0\leq\|\mathbf{g}_{k+1}^{*}\|^{2}\leq\psi(t^{*})\leq\sigma\|\mathbf{g}_{k}^{*}\|^{2},\quad\text{for all }k>\bar{k}, \tag{31}\]

which means that the sequence \(\{\|\mathbf{g}_{k}^{*}\|^{2}\}_{k>\bar{k}}\) is decreasing and bounded from below, and hence it converges. Assume \(\{\|\mathbf{g}_{k}^{*}\|^{2}\}\to A\) as \(k\to\infty\). Letting \(k\) approach infinity in inequality (31), we obtain \(0\leq A\leq\sigma A\). Since \(\sigma\in(0,1)\), we conclude \(A=0\). Therefore \(\{\|\mathbf{g}_{k}^{*}\|^{2}\}\to 0\) as \(k\to\infty\), which contradicts (24).

**Remark 1**.: Regarding Algorithm 2, if the number of consecutive iterations with \(\mathbf{I}_{k}=0\) is large, the size of \(\mathcal{G}_{\varepsilon}(\mathbf{x}_{k})\) increases as \(k\) grows (see Line 16 of Algorithm 2). This issue may pose some difficulty with the size of the subproblem which is solved in Line 3 of the algorithm. In such situations, the user can optionally employ an adaptive reset strategy to efficiently control the size of the subproblems. Such a strategy has been proposed in Appendix A.

## 5 Computation of a Clarke stationary point

For the given sequences \(\{\delta_{\nu}\}\downarrow 0\) and \(\{\varepsilon_{\nu}\}\downarrow 0\), the main aim of this section is to obtain a Clarke stationary point through a sequence of \((\delta_{\nu},\mathcal{G}_{\varepsilon_{\nu}}(\boldsymbol{x}_{\nu+1}))\)-stationary points. Algorithm 3, which has a simple structure, generates such a sequence.

```
Inputs: Starting point \(\boldsymbol{x}_{0}\in\mathbb{R}^{n}\), positive sequences \(\{\delta_{\nu}\}\downarrow 0\) and \(\{\varepsilon_{\nu}\}\downarrow 0\), and optimality tolerance \(\eta>0\).
Output: A point \(\boldsymbol{x}\in\mathbb{R}^{n}\) as an approximation of a Clarke stationary point.
1:  Initialization: Set \(\nu:=0\) ;
2:  while true do
3:    Set \(\boldsymbol{x}_{\nu+1}:=\texttt{DG-SP}\left(\boldsymbol{x}_{\nu},\varepsilon_{\nu},\delta_{\nu}\right)\) ;
4:    if \(\delta_{\nu}\leq\eta\) and \(\varepsilon_{\nu}\leq\eta\), then
5:      return \(\boldsymbol{x}_{\nu}\) as an approximation of a Clarke stationary point and Stop ;
6:    endif
7:    Set \(\nu:=\nu+1\) ;
8:  endwhile
```
**Algorithm 3** Computation of a Clarke stationary point

In order to study the asymptotic behavior of Algorithm 3, we assume \(\eta=0\). Thus, the algorithm generates the infinite sequence \(\{\boldsymbol{x}_{\nu}\}_{\nu}\). In the following theorem, we prove that any accumulation point of the sequence \(\{\boldsymbol{x}_{\nu}\}_{\nu}\) is Clarke stationary for the objective \(f\).

**Theorem 5.1**.: _Suppose that Assumption 1 holds and \(lev_{f(\boldsymbol{x}_{0})}(f)\) is bounded.
If \(\eta=0\) in Algorithm 3, then any accumulation point of the sequence \(\{\boldsymbol{x}_{\nu}\}_{\nu}\) generated by this algorithm is Clarke stationary for \(f\)._ Proof.: For any \(\nu\geq 0\), Algorithm 3 generates the \((\delta_{\nu},\mathcal{G}_{\varepsilon_{\nu}}(\boldsymbol{x}_{\nu+1}))\)-stationary point \(\boldsymbol{x}_{\nu+1}\), i.e., \[\min\{\|\boldsymbol{g}\|\;:\;\boldsymbol{g}\in\texttt{conv}\mathcal{G}_{\varepsilon_{\nu}}(\boldsymbol{x}_{\nu+1})\}\leq\delta_{\nu},\quad\text{for all }\nu\geq 0. \tag{32}\] Since \(\boldsymbol{x}_{\nu}\in lev_{f(\boldsymbol{x}_{0})}(f)\), for all \(\nu\geq 0\), boundedness of \(lev_{f(\boldsymbol{x}_{0})}(f)\) implies that the sequence \(\{\boldsymbol{x}_{\nu}\}_{\nu}\) has at least one accumulation point, say \(\boldsymbol{x}^{*}\). Thus, there exists \(\mathcal{V}\subset\mathbb{N}_{0}\) such that \(\boldsymbol{x}_{\nu}\xrightarrow{\mathcal{V}}\boldsymbol{x}^{*}\). Therefore, in view of (32), we have \[\min\{\|\boldsymbol{g}\|\;:\;\boldsymbol{g}\in\texttt{conv}\mathcal{G}_{\varepsilon_{\nu}}(\boldsymbol{x}_{\nu+1})\}\leq\delta_{\nu},\quad\text{for all }\nu\in\mathcal{V}. \tag{33}\] Let \(\omega>0\) be arbitrary. Since \(\delta_{\nu}\downarrow 0\) as \(\nu\to\infty\), there exists \(\bar{\nu}\in\mathcal{V}\) sufficiently large such that \(\delta_{\nu}<\omega\), for all \(\nu\geq\bar{\nu}\). Then, it follows from (33) that \[\|\boldsymbol{g}_{\nu}^{*}\|:=\min\{\|\boldsymbol{g}\|\;:\;\boldsymbol{g}\in\texttt{conv}\mathcal{G}_{\varepsilon_{\nu}}(\boldsymbol{x}_{\nu+1})\}<\omega,\quad\text{for all }\nu\geq\bar{\nu},\;\nu\in\mathcal{V}. \tag{34}\] Therefore, the sequence \(\{\|\boldsymbol{g}_{\nu}^{*}\|\}_{\nu}\) is bounded, and without loss of generality, one may assume \(\boldsymbol{g}_{\nu}^{*}\to\boldsymbol{g}^{*}\) as \(\nu\xrightarrow{\mathcal{V}}\infty\). Now, the fact that \[\boldsymbol{g}_{\nu}^{*}\in\texttt{conv}\mathcal{G}_{\varepsilon_{\nu}}(\boldsymbol{x}_{\nu+1})\subset\partial_{\varepsilon_{\nu}}f(\boldsymbol{x}_{\nu+1})\] along with the upper semicontinuity of the map \((\varepsilon,\boldsymbol{x})\mapsto\partial_{\varepsilon}f(\boldsymbol{x})\) implies \(\mathbf{g}^{*}\in\partial f(\mathbf{x}^{*})\). Consequently \[\min\{\|\mathbf{g}\|\ :\ \mathbf{g}\in\partial f(\mathbf{x}^{*})\}\leq\omega.\] Since \(\omega>0\) was arbitrary, we conclude \(\mathbf{0}\in\partial f(\mathbf{x}^{*})\).

## 6 Numerical experiments

In this section, we apply our method to a set of academic and semi-academic test problems and report the most important results. The proposed method is called **Subopt**. First, we consider a set of academic test problems and compare the efficiency of the proposed method with some well-known nonsmooth solvers. Next, several semi-academic problems are considered to show the applicability of the method in various contexts. The following experiments were implemented in Matlab (R2017b) on a machine with an Intel Core i5 CPU at 2.5 GHz and 6 GB of RAM. Our choices for the parameters are as follows. In the line search procedure of Algorithm 1, we set \(\beta_{1}:=10^{-6}\), \(\beta_{2}:=0.1\), \(\bar{t}:=\varepsilon/2\), and the parameter \(p\) is set to \(25\). To initialize the step length \(t_{i}\), we set \(t_{0}:=(\bar{t}+\varepsilon)/2\).
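To make these defaults concrete, the following minimal sketch collects them in code; it is our own illustration (the names are ours, not from a released implementation):

```
from dataclasses import dataclass

@dataclass
class LineSearchParams:
    """Defaults used for the line search of Algorithm 1 in Section 6."""
    beta1: float = 1e-6  # sufficient-decrease parameter, beta1 in (0, 1)
    beta2: float = 0.1   # parameter of the condition xi^T d >= -beta2 ||g*||
    p: int = 25          # remaining line-search parameter of Algorithm 1

def initial_steps(eps: float):
    """t_bar := eps / 2 (lower step bound) and t_0 := (t_bar + eps) / 2."""
    t_bar = eps / 2.0
    t0 = (t_bar + eps) / 2.0
    return t_bar, t0
```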
In line 16 of Algorithm 1, since \(\zeta\in(0,0.5)\), one may choose \(t_{i+1}\) as \[t_{i+1}:=\frac{t_{i+1}^{u}+t_{i+1}^{l}}{2}\in\left[t_{i+1}^{l}+\zeta(t_{i+1}^{u}-t_{i+1}^{l}),\,t_{i+1}^{u}-\zeta(t_{i+1}^{u}-t_{i+1}^{l})\right].\] Regarding Algorithm 3, the sequences \(\{\delta_{\nu}\}\downarrow 0\) and \(\{\varepsilon_{\nu}\}\downarrow 0\) were defined by \(\delta_{\nu+1}:=0.5\delta_{\nu}\) and \(\varepsilon_{\nu+1}:=0.5\varepsilon_{\nu}\) with \(\delta_{0}:=1\), \(\varepsilon_{0}:=0.1\).

### Academic problems and alternative solvers

Table 1 provides a collection of nonsmooth convex and nonconvex test problems. The first five of these problems are convex and the rest are nonconvex. In this table, \(f^{*}\) denotes a known (local) optimal value. Note that all of these academic test problems can be formulated with any number of variables.

\begin{table} \begin{tabular}{c l c c c} \hline Problem & Name & Convex? & \(f^{*}\) & Ref. \\ \hline 1 & MAXL & Yes & 0 & [55] \\ 2 & L1HILB & Yes & 0 & [55] \\ 3 & MAXQ & Yes & 0 & [13] \\ 4 & MXHILB & Yes & 0 & [13] \\ 5 & Chained CB3 II & Yes & \(2(n-1)\) & [13] \\ 6 & Active faces & No & 0 & [56] \\ 7 & Brown function 2 & No & 0 & [13] \\ 8 & Chained Mifflin 2 & No & varies & [13] \\ 9 & Chained crescent I & No & 0 & [13] \\ 10 & Chained crescent II & No & 0 & [13] \\ \hline \end{tabular} \end{table} Table 1: List of test problems

Table 2 describes the set of considered nonsmooth solvers. In this table, **GS** stands for the well-known original gradient sampling method, which is capable of minimizing both convex and nonconvex objectives [28]. **BTNC** is a variant of the well-known and efficient proximal Bundle-Trust (BT) method, which can solve both convex and nonconvex minimization problems [12]. **SubG** is the classical subgradient method [37]. Although the convergence of this method was proved only for convex functions, there is empirical evidence that it can minimize some types of nonconvex nonsmooth functions [48]. Thus, we also applied this method to the considered set of nonconvex test problems, using some heuristic approaches to choose the off-line sequence of step lengths.

\begin{table} \begin{tabular}{l l c} Name & Method & Ref. \\ \hline **Subopt** & Descent Subgradient & The current work \\ **GS** & Gradient Sampling & [28] \\ **BTNC** & Proximal Bundle & [12] \\ **SubG** & Classical Subgradient & [48] \\ \hline \end{tabular} \end{table} Table 2: A list of nonsmooth solvers

We used the Matlab code of the **GS** algorithm, which is freely available [28]; the other solvers were implemented by the authors of this paper. In this experiment, each problem was run using a single starting point randomly generated from \(\mathcal{B}(\mathbf{x}_{0},(\|\mathbf{x}_{0}\|+1)/n)\), where \(\mathbf{x}_{0}\) was suggested in the literature. Moreover, we stopped an algorithm once the relative error \[E_{k}:=\frac{|f(\mathbf{x}_{k})-f^{*}|}{|f^{*}|+1}\] dropped below the prespecified tolerance \(5\times 10^{-4}\). We also limited the number of iterations to \(10^{4}\).

Figure 1 shows the performance profiles [57] based on the number of function and subgradient evaluations and the elapsed CPU time, for \(n=50\) and \(100\). Figure 1: Top: performance profiles based on the function and subgradient evaluations (Left) and elapsed CPU time (Right), for \(n=50\). Bottom: the same for \(n=100\). As seen from Figure 1, **Subopt**, **BTNC** and **GS** successfully reached the desired accuracy in all problems, while **SubG** was successful in \(60\%\) of the problems with \(n=50\) and \(100\). In terms of function and subgradient evaluations, **BTNC** is superior to the other solvers. Moreover, by quite a large margin, **GS** used more evaluations than the other solvers; this is due to the fact that this solver performs best when the size of the sample is set to \(2n\). Furthermore, in the majority of problems, **Subopt** consumed less CPU time than **BTNC**. This can be attributed to the fact that the structure of the quadratic subproblems in **Subopt** is simpler than that of **BTNC**. For \(n=100\), although **SubG** is the least robust solver, due to its simple structure it consumed less CPU time than the other solvers on the problems it successfully solved.

### Data clustering

Assume \(\mathscr{A}\subset\mathbb{R}^{n}\) is a finite set of data points, i.e., \[\mathscr{A}=\{\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{m}\},\quad\text{where}\quad\mathbf{a}_{i}\in\mathbb{R}^{n},\ i=1,\ldots,m.\] For a given \(\kappa\in\mathbb{N}\), our aim is to partition the set \(\mathscr{A}\) into \(\kappa\) subsets \(\mathscr{A}_{j},j=1,\ldots,\kappa\) such that

1. \(\mathscr{A}_{j}\neq\emptyset,\quad j=1,\ldots,\kappa\).
2. \(\mathscr{A}_{j}\cap\mathscr{A}_{j^{\prime}}=\emptyset,\quad j,j^{\prime}=1,\ldots,\kappa\), \(j\neq j^{\prime}\).
3. \(\mathscr{A}=\cup_{j=1}^{\kappa}\mathscr{A}_{j}\).

Such a problem is called a _hard clustering problem_ [48]. Each cluster \(\mathscr{A}_{j}\) is characterized by its center \(\mathbf{x}_{j}\), \(j=1,\ldots,\kappa\). Then, a data point \(\mathbf{a}\in\mathscr{A}\) belongs to the cluster \(\mathscr{A}_{\bar{j}}\) if \[\|\mathbf{a}-\mathbf{x}_{\bar{j}}\|=\min_{j=1,\ldots,\kappa}\|\mathbf{a}-\mathbf{x}_{j}\|,\] in which \(\|\cdot\|\) denotes the squared Euclidean distance [48]. The problem of finding the center points \(\mathbf{x}_{j},j=1,\ldots,\kappa\), can be formulated as the following unconstrained nonsmooth optimization problem [48] \[\begin{split}&\min\ f_{\kappa}(\mathbf{X})\\ &\text{s.t.}\ \mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{\kappa}]\in\mathbb{R}^{n\times\kappa},\end{split} \tag{35}\] where \[f_{\kappa}(\mathbf{X})=\frac{1}{m}\sum_{i=1}^{m}\min_{j=1,\ldots,\kappa}\|\mathbf{a}_{i}-\mathbf{x}_{j}\|.\] Note that, for any \(\kappa>1\), this problem is nonsmooth and nonconvex. To create an instance of problem (35), let \(\mathscr{A}\subset\mathbb{R}^{2}\) be our data set containing ten thousand data points, i.e., \(n=2\) and \(m=10{,}000\). The top-left plot of Figure 2 depicts the data set \(\mathscr{A}\). To solve this problem for different values of \(\kappa\), we applied **Subopt** using a randomly generated starting point and optimality tolerance \(\eta=10^{-8}\). The obtained results are illustrated in Figure 2, for \(\kappa=2,5,10,15\) and \(20\). Table 3 reports the computational cost of the minimization of \(f_{\kappa}(\mathbf{X})\) for the considered values of \(\kappa\). In this table, #Fun and #Sub denote the number of function and subgradient evaluations, respectively. Moreover, \(f_{\kappa}(\mathbf{X}_{\text{end}})\) is the value of the objective function of problem (35) at the last iteration. Figure 2: Illustration of the data set \(\mathscr{A}\) (top-left) and clusters \(\mathscr{A}_{j}\), for \(\kappa=2\) (top-middle), \(\kappa=5\) (top-right), \(\kappa=10\) (bottom-left), \(\kappa=15\) (bottom-middle), and \(\kappa=20\) (bottom-right).
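To make the clustering objective concrete, the following minimal sketch (our own illustration, not the authors' code) evaluates \(f_{\kappa}(\mathbf{X})\) and one subgradient, obtained by differentiating, for each data point, a term attaining the inner minimum:

```
import numpy as np

def clustering_obj_and_subgrad(X, A):
    """f_kappa(X) of Eq. (35) with squared Euclidean distances, plus one
    element of its subdifferential.
    X: (n, kappa) matrix of centers; A: (m, n) array of data points."""
    m, _ = A.shape
    # squared distance from every point to every center: shape (m, kappa)
    d2 = ((A[:, :, None] - X[None, :, :]) ** 2).sum(axis=1)
    nearest = d2.argmin(axis=1)                 # active center per point
    f = d2[np.arange(m), nearest].sum() / m
    # subgradient: differentiate only the active term of each min
    G = np.zeros_like(X)
    for i, j in enumerate(nearest):
        G[:, j] += 2.0 * (X[:, j] - A[i]) / m
    return f, G
```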
### Chebyshev approximation by polynomials

Let \(\mathcal{C}[a,b]\) be the space of real continuous functions on the closed interval \([a,b]\). One can equip this space with the infinity norm, i.e., for each \(f\in\mathcal{C}[a,b]\), define \[\|f\|_{\infty}:=\max\{|f(x)|\ :\ x\in[a,b]\}.\] Moreover, assume \(\mathcal{P}_{n}\) is the space of polynomials of degree at most \(n\). For a given \(f\in\mathcal{C}[a,b]\) and \(n\in\mathbb{N}_{0}\), our aim is to find \(p^{*}\in\mathcal{P}_{n}\) which is the Chebyshev approximation (best uniform approximation) of \(f\) in \(\mathcal{P}_{n}\), i.e., \[\min_{p\in\mathcal{P}_{n}}\|p-f\|_{\infty}=\|p^{*}-f\|_{\infty}.\] Let \(\boldsymbol{c}=(c_{n},\ldots,c_{1},c_{0})\in\mathbb{R}^{n+1}\) be the coefficients of a given polynomial \(p\in\mathcal{P}_{n}\). Then, the above problem can be represented as \[\min_{\boldsymbol{c}\in\mathbb{R}^{n+1}}\ h(\boldsymbol{c}), \tag{36}\] where \[h(\boldsymbol{c}):=\max_{x\in[a,b]}|c_{n}x^{n}+\ldots+c_{1}x+c_{0}-f(x)|.\] Note that problem (36) is a nonsmooth convex minimization problem. To evaluate \(h\) at a given \(\boldsymbol{c}\in\mathbb{R}^{n+1}\), we have to solve a one-dimensional maximization problem. For this purpose, we create a one-dimensional grid of the interval \([a,b]\) containing \(2{,}000\) grid points. Next, we evaluate \(|c_{n}x^{n}+\ldots+c_{1}x+c_{0}-f(x)|\) at the grid points and find the maximum value. The grid point at which the maximum occurs is used to initialize a local maximization method, which is based on the golden section and hyperbolic interpolation methods. Now consider \(f(x):=\sin(2x)\) on the interval \([-\pi,\pi]\). We applied **Subopt** to find the Chebyshev approximation of \(f\) in \(\mathcal{P}_{0}\), \(\mathcal{P}_{1}\), \(\mathcal{P}_{2}\), and \(\mathcal{P}_{3}\) using a randomly generated starting point and optimality tolerance \(\eta=10^{-8}\). The obtained results are reported in Table 4. Based on these results, the Chebyshev approximation of \(f\) in \(\mathcal{P}_{0}\), \(\mathcal{P}_{1}\), and \(\mathcal{P}_{2}\) is the constant polynomial \(p_{1}^{*}(x)\equiv 0\), while the Chebyshev approximation of \(f\) in \(\mathcal{P}_{3}\) is \(p_{2}^{*}(x)=-0.0472x^{3}+0.1923x\). The following _alternation_ theorem helps us confirm the results obtained by **Subopt**. **Theorem 6.1** ([1]).: \(p^{*}\in\mathcal{P}_{n}\) _is the best uniform approximation to \(f\in C[a,b]\) if and only if the error function \(e(x):=p^{*}(x)-f(x)\) consecutively takes the extremum value \(\|p^{*}-f\|_{\infty}\) on \([a,b]\) with alternating signs at least \((n+2)\) times._ Figure 3 shows the error functions \(e_{1}(x):=p_{1}^{*}(x)-\sin(2x)\) and \(e_{2}(x):=p_{2}^{*}(x)-\sin(2x)\) on the interval \([-\pi,\pi]\). As can be seen, \(e_{1}(x)\) consecutively takes the extremum value \(\|p_{1}^{*}-f\|_{\infty}=1\) with alternating signs at four points. Also, \(e_{2}(x)\) consecutively takes the extremum value \(\|p_{2}^{*}-f\|_{\infty}=0.8723\) with alternating signs at six points, satisfying the requirements of Theorem 6.1. Figure 3: Error functions \(e_{1}(x)\) and \(e_{2}(x)\) on the interval \([-\pi,\pi]\).
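For reproducibility, a minimal sketch of this grid-based evaluation of \(h\) is given below; it is our own illustration, and the local refinement by golden section and hyperbolic interpolation is omitted:

```
import numpy as np

def chebyshev_error(c, f, a=-np.pi, b=np.pi, grid=2000):
    """Approximate h(c) = max_{x in [a,b]} |p(x) - f(x)| on a grid,
    where p has coefficients c = (c_n, ..., c_1, c_0).
    Returns the max error and the grid point attaining it."""
    x = np.linspace(a, b, grid)
    e = np.abs(np.polyval(c, x) - f(x))
    i = e.argmax()
    return e[i], x[i]

# E.g., evaluating the reported p2*(x) = -0.0472 x^3 + 0.1923 x
# against f(x) = sin(2x) on [-pi, pi]:
h, x_star = chebyshev_error(np.array([-0.0472, 0.0, 0.1923, 0.0]),
                            lambda x: np.sin(2.0 * x))
```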
### Minimization of eigenvalue products

For a positive semidefinite matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\), we consider the problem \[\begin{split}&\min_{\mathbf{X}\in\mathbb{R}^{N\times N}}\;\prod_{j=1}^{s}\lambda_{j}(\mathbf{A}\circ\mathbf{X})\\ &\text{s.t.}\;\;\mathbf{X}\succeq 0,\;\;\mathbf{X}_{i,i}=1,\quad i=1,\ldots,N,\end{split} \tag{37}\] where \(\mathbf{A}\circ\mathbf{X}\) is the componentwise product of matrices \(\mathbf{A}\) and \(\mathbf{X}\), and \(\lambda_{j}(\mathbf{A}\circ\mathbf{X})\) denotes the \(j\)-th largest eigenvalue of \(\mathbf{A}\circ\mathbf{X}\). Problem (37) was first presented in [28] and solved by the original gradient sampling method. This problem is nonconvex, and its objective function is differentiable at \(\mathbf{X}\) when \(\mathbf{X}\) is positive definite and \(\lambda_{s}(\mathbf{A}\circ\mathbf{X})>\lambda_{s+1}(\mathbf{A}\circ\mathbf{X})\). Indeed, it has been shown in [58] that the objective function of the problem is partly smooth. Alternatively, one can consider the following equivalent form of problem (37): \[\begin{split}&\min_{\mathbf{x}\in\mathbb{R}^{n}}\;\prod_{j=1}^{s}\lambda_{j}(\mathbf{A}\circ Mat(\mathbf{x}))\\ &\text{s.t.}\;\;Mat(\mathbf{x})\succeq 0,\end{split} \tag{38}\] in which \(n:=N(N-1)/2\) and \[Mat(\mathbf{x}):=\left[\begin{array}{ccccc}1&x_{1}&x_{2}&\ldots&x_{N-1}\\ x_{1}&1&x_{N}&\ldots&x_{2N-3}\\ x_{2}&x_{N}&1&\ldots&x_{3N-6}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ x_{N-1}&x_{2N-3}&x_{3N-6}&\ldots&1\end{array}\right]\in\mathbb{R}^{N\times N}.\] To handle the constraint of problem (38), following [28], we employ an exact penalty function, which leads to the following unconstrained minimization problem: \[\min_{\mathbf{x}\in\mathbb{R}^{n}}\ \prod_{j=1}^{s}\lambda_{j}(\mathbf{A}\circ Mat(\mathbf{x}))-\mu\min\{0,\lambda_{N}(Mat(\mathbf{x}))\}. \tag{39}\] In order to generate various instances of this problem, as suggested in [28], the matrices \(\mathbf{A}\) are considered as the leading \(N\times N\) submatrices of a \(63\times 63\) covariance matrix arising in an environmental application, which is freely available. Table 5 shows the results obtained by **Subopt** when applied to different instances of the problem using \(\mu=100\), a randomly generated starting point, and optimality tolerance \(\eta=10^{-5}\). In this table, \(f_{\text{end}}\) is the value of the objective function of problem (39) at the last iteration. Comparing the results with those obtained by the original GS method, one can see that **Subopt** has provided better approximations of optimal values for different instances of the problem using optimality tolerance \(\eta=10^{-5}\) (see Table 2 in [28]).

\begin{table} \begin{tabular}{c c c c c c c} \hline \(n\) & \(N\) & \(s\) & \#Fun & \#Sub & \(f_{\text{end}}\) & Time(s) \\ \hline 1 & 2 & 1 & 248 & 130 & 0.4566 & 1.15 \\ 6 & 4 & 2 & 313 & 164 & 0.1580 & 1.37 \\ 15 & 6 & 3 & 3386 & 1690 & 0.0726 & 12.01 \\ 28 & 8 & 4 & 3055 & 1524 & 0.0327 & 10.25 \\ 45 & 10 & 5 & 6652 & 3326 & 0.0265 & 26.47 \\ 66 & 12 & 6 & 3146 & 1576 & 0.0106 & 16.40 \\ 91 & 14 & 7 & 3179 & 1547 & 0.0049 & 21.53 \\ 120 & 16 & 8 & 4053 & 2029 & 0.0035 & 44.71 \\ \hline \end{tabular} \end{table} Table 5: Results for minimization of eigenvalue products for various instances with \(\eta=10^{-5}\).

## 7 Conclusion

We have developed a descent subgradient method for solving an unconstrained nonsmooth nonconvex optimization problem. To find an efficient descent direction, we proposed an iterative procedure to provide an inner approximation of the Goldstein \(\varepsilon\)-subdifferential. The least norm element of this approximation was considered as a search direction, and a new variant of Mifflin's line search was suggested for finding the next trial point. The finite convergence of the presented line search was proved under a semismoothness assumption on the objective function. We studied the global convergence of the method to a Clarke stationary point. Our numerical tests confirmed that the proposed subgradient algorithm is a strong competitor to GS and bundle-type methods. Moreover, several semi-academic problems demonstrated the applicability of the method in a variety of contexts.

## Disclosure statement

The authors report there are no competing interests to declare.

## Funding

This project was fully supported by the Iran National Science Foundation (INSF) under contract no. 99025023.
2306.12033
End-to-End Augmentation Hyperparameter Tuning for Self-Supervised Anomaly Detection
Self-supervised learning (SSL) has emerged as a promising paradigm that presents self-generated supervisory signals to real-world problems, bypassing the extensive manual labeling burden. SSL is especially attractive for unsupervised tasks such as anomaly detection, where labeled anomalies are often nonexistent and costly to obtain. While self-supervised anomaly detection (SSAD) has seen a recent surge of interest, the literature has failed to treat data augmentation as a hyperparameter. Meanwhile, recent works have reported that the choice of augmentation has significant impact on detection performance. In this paper, we introduce ST-SSAD (Self-Tuning Self-Supervised Anomaly Detection), the first systematic approach to SSAD in regards to rigorously tuning augmentation. To this end, our work presents two key contributions. The first is a new unsupervised validation loss that quantifies the alignment between the augmented training data and the (unlabeled) test data. In principle we adopt transduction, quantifying the extent to which augmentation mimics the true anomaly-generating mechanism, in contrast to augmenting data with arbitrary pseudo anomalies without regard to test data. Second, we present new differentiable augmentation functions, allowing data augmentation hyperparameter(s) to be tuned end-to-end via our proposed validation loss. Experiments on two testbeds with semantic class anomalies and subtle industrial defects show that systematically tuning augmentation offers significant performance gains over current practices.
Jaemin Yoo, Lingxiao Zhao, Leman Akoglu
2023-06-21T05:48:51Z
http://arxiv.org/abs/2306.12033v1
# End-to-End Augmentation Hyperparameter Tuning for Self-Supervised Anomaly Detection

###### Abstract

Self-supervised learning (SSL) has emerged as a promising paradigm that presents self-generated supervisory signals to real-world problems, bypassing the extensive manual labeling burden. SSL is especially attractive for unsupervised tasks such as anomaly detection, where labeled anomalies are often nonexistent and costly to obtain. While self-supervised anomaly detection (SSAD) has seen a recent surge of interest, the literature has failed to treat data augmentation as a hyperparameter. Meanwhile, recent works have reported that the choice of augmentation has significant impact on detection performance. In this paper, we introduce ST-SSAD (Self-Tuning Self-Supervised Anomaly Detection), the _first systematic approach to SSAD in regards to rigorously tuning augmentation_. To this end, our work presents two key contributions. The first is a new unsupervised validation loss that quantifies the alignment between the augmented training data and the (unlabeled) test data. In principle we adopt transduction, quantifying the extent to which augmentation mimics the true anomaly-generating mechanism, in contrast to augmenting data with arbitrary pseudo anomalies _without_ regard to test data. Second, we present new differentiable augmentation functions, allowing data augmentation hyperparameter(s) to be tuned end-to-end via our proposed validation loss. Experiments on two testbeds with semantic class anomalies and subtle industrial defects show that systematically tuning augmentation offers significant performance gains over current practices.

## 1 Introduction

Anomaly detection (AD) finds many applications in security, finance, manufacturing, and surveillance, to name a few. Thanks to its popularity, the literature abounds with numerous detection techniques [1], while deep neural network-based AD models have attracted the most attention recently [2]. Especially for adversarial or dynamically-changing settings in which the anomalies are to be identified, it is important to design _unsupervised_ techniques. While supervised detection can be employed for label-rich settings, unsupervised detection becomes critical to remain alert to emerging phenomena or the so-called "unknown unknowns". Lack of ground-truth labels, however, makes it notoriously difficult to tune modern deep AD models with various hyperparameters [3]. Recently, self-supervised learning (SSL) has emerged as a promising paradigm that offers supervisory signals to real-world problems while avoiding the extensive cost of manual labeling, leading to great success in advancing NLP [4; 5] as well as computer vision tasks [6; 7]. SSL has become particularly attractive for _unsupervised_ tasks such as AD, where labeled data is either nonexistent, costly to obtain, or nontrivial to simulate in the face of unknown anomalies. Thus, the literature has seen a recent surge of SSL-based AD (SSAD) techniques [8; 9; 10; 11; 12; 13]. The typical approach to SSAD involves incorporating self-generated _pseudo_ anomalies into training, and then learning to separate those from the inliers. The pseudo anomalies are most often synthesized artificially by transforming inliers through a data augmentation function, such as masking, blurring, etc. (there is also work that "outlier-exposes" the inliers to real samples from external data repos [9]).
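Schematically, this standard recipe reduces to synthesizing pseudo anomalies from inliers and fitting a binary classifier; the sketch below is our own generic illustration (the helper names are hypothetical), not any specific method from the works cited above:

```
import numpy as np

def make_pseudo_anomalies(inliers, augment, a):
    """Synthesize pseudo anomalies by transforming inliers with an
    augmentation function (e.g., masking, blurring) with argument(s) a."""
    return np.stack([augment(x, a) for x in inliers])

def ssad_training_set(inliers, augment, a):
    """Assemble the self-labeled training set of a typical SSAD pipeline."""
    pseudo = make_pseudo_anomalies(inliers, augment, a)
    X = np.concatenate([inliers, pseudo])
    y = np.concatenate([np.zeros(len(inliers)),  # label 0: inlier
                        np.ones(len(pseudo))])   # label 1: pseudo anomaly
    return X, y  # a binary classifier trained on (X, y) scores anomalies
```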
In this paper, we address a fundamental challenge with SSAD, to which recent works seem to have turned a blind eye: recognizing and tuning _augmentation as a hyperparameter_. As shown recently by Yoo et al. [14], the choice of the augmentation function, as well as its associated argument(s) such as the masking amount, blurring level, etc., have tremendous impact on detection performance. This may come across as no surprise, or trivial to state, since the supervised learning community has long (and appropriately) integrated "data augmentation hyperparameters" into model selection [15, 16]. Meanwhile, there exists no such attempt in the SSAD literature (!).1 Although model selection without ground-truth labels is admittedly a much harder problem, turning a blind eye to the challenge may mislead by overstating the (unreasonable) effectiveness of SSL for unsupervised AD. Footnote 1: Note that typical train-validation split does not apply to SSAD with augmented data (inliers plus augmented pseudo anomalies), since different augmentations lead to different validation data, making them incomparable. Our work introduces the _first systematic approach to SSAD in regards to rigorously tuning augmentation_. The key idea is to capitalize on _transductive_ learning [17], where we leverage the _unlabeled_ test data during self-supervised AD model tuning. Intuitively, SSL-based AD would work well as long as the augmentation-generated pseudo anomalies resemble the true anomalies (put differently, when the augmentation function well mimics the true anomaly-generating mechanism). To capture this insight, **(1)** we first design a novel, _unsupervised validation loss_ for SSAD toward quantifying the alignment between the augmented training data and test data. Then, we propose to tune augmentation through our differentiable validation loss _end-to-end_ (see Fig. 1). This necessitates the augmentation function to be differentiable as well. To this end, **(2)** we propose new _differentiable formulations for popular augmentations_ such as CutOut (local) [18] and Rotation (global) [8] as proof of concept. We argue that the use of (unlabeled) test data, containing the anomalies to be identified, _during model tuning_ transductively2 is indeed exceedingly important for "success". It is fundamentally different from existing SSAD approaches that "imagine" how the actual anomalies would look like or otherwise haphazardly choose augmentation that corresponds to some arbitrary notion of anomalies, which may not well align or agree with what is to be detected. Surely, one can incorporate expert/prior knowledge of anomalies in choosing augmentation, but in the absence thereof (recall unknown-unknowns), SSAD would likely fail as the recent study by Yoo et al. [14] documents. Footnote 2: Vapnik [17] advocated transductive learning over inductive learning, stating that one should not solve a more general/harder (intermediate) problem, but rather solve the specific problem at hand directly. We argue that transduction is especially relevant for operationalizing SSL for AD. Our extensive experiments on 41 anomaly detection tasks including both local and global anomalies show that ST-SSAD significantly outperforms both unsupervised and self-supervised baselines which rely on manual hyperparameter search without labels. 
Our qualitative analysis visually supports that ST-SSAD is capable of learning different augmentation hyperparameters for different anomaly types, even when they share the same normal data, by leveraging the anomalies in unlabeled test data. While in this paper we focus on image anomaly detection, our ST-SSAD framework is generally applicable to other input data modalities, provided such augmentation functions can be learned or designed.

## 2 Preliminaries

**Notation.** Let \(\mathcal{D}_{\mathrm{trn}}\) denote a set of training normal (i.e. inlier) data, and \(\mathcal{D}_{\mathrm{test}}\) be a set of test data containing both normal and anomalous samples. Let \(\mathbf{x}\in\mathbb{R}^{d}\) denote a data sample in \(\mathcal{D}_{\mathrm{trn}}\cup\mathcal{D}_{\mathrm{test}}\), where \(d\) represents its dimension. Let \(\phi_{\mathrm{aug}}:\mathbb{R}^{d}\times\mathcal{A}\to\mathbb{R}^{d}\) denote a data augmentation function \(\phi_{\mathrm{aug}}(\mathbf{x};\mathbf{a})\) conditioned on hyperparameters \(\mathbf{a}\in\mathcal{A}\), where \(\mathcal{A}\) denotes the possible values of the hyperparameters. For example, if \(\phi_{\mathrm{aug}}\) is the rotation of an image, \(\mathcal{A}=[0,360)\) is the domain of possible rotation angles. Let \(f_{\theta}:\mathbb{R}^{d}\to\mathbb{R}^{h}\) be a detector parameterized by \(\theta\), and \(s:\mathbb{R}^{h}\to\mathbb{R}^{+}\) be an anomaly score function. Specifically, \(f_{\theta}\) returns a low-dimensional embedding \(\mathbf{z}\in\mathbb{R}^{h}\) for each sample \(\mathbf{x}\), which is then fed into \(s\) to compute the anomaly score of \(\mathbf{x}\). We assume that \(f_{\theta}\) is trained in a self-supervised fashion by creating a set \(\mathcal{D}_{\mathrm{aug}}=\{\phi_{\mathrm{aug}}(\mathbf{x};\mathbf{a})\mid\mathbf{x}\in\mathcal{D}_{\mathrm{trn}}\}\) of pseudo anomalies using \(\phi_{\mathrm{aug}}\).

**Problem definition.** Given \(\mathcal{D}_{\mathrm{trn}}\) and \(\mathcal{D}_{\mathrm{test}}\), how can we find \(\mathbf{a}^{*}\) (along with the model parameters \(\theta\)) that maximizes the accuracy of the detector \(f_{\theta}\) with score function \(s\)? There is no trivial solution to the problem, since labeled anomalies are not given at training time, but the problem is crucial for the success of SSAD in real-world tasks where labels are hard to obtain or even nonexistent. To the best of our knowledge, this problem has not been studied in the literature, and our work is the first to propose a systematic solution to it.

## 3 Proposed Framework for End-to-End Augmentation Tuning

We propose ST-SSAD (Self-Tuning Self-Supervised Anomaly Detection), a framework for augmentation hyperparameter tuning in SSAD.
Given test data \(\mathcal{D}_{\mathrm{test}}\) which contains unlabeled anomalies, ST-SSAD automatically finds the best augmentation hyperparameter \(\mathbf{a}^{*}\) that maximizes the semantic alignment between the augmentation function and the underlying anomaly-generating mechanism hidden in \(\mathcal{D}_{\mathrm{test}}\). The search process is performed in an end-to-end fashion thanks to two core novel engines of ST-SSAD: (1) an _unsupervised validation_ loss \(\mathcal{L}_{\mathrm{val}}\) and (2) a _differentiable augmentation_ function \(\phi_{\mathrm{aug}}\), which we describe in detail in Sec. 3.1 and 3.2, respectively. Fig. 1 shows an overall structure of ST-SSAD, which updates the parameters \(\theta\) of the detector \(f_{\theta}\) and the augmentation hyperparameters a through alternating stages for training and validation. Let \(\mathcal{Z}_{\mathrm{trn}}=\{f_{\theta}(\mathbf{x})\mid\mathbf{x}\in\mathcal{D }_{\mathrm{trn}}\}\) be the embeddings of \(\mathcal{D}_{\mathrm{trn}}\), \(\mathcal{Z}_{\mathrm{aug}}=\{f_{\theta}(\phi_{\mathrm{aug}}(\mathbf{x}; \mathbf{a}))\mid\mathbf{x}\in\mathcal{D}_{\mathrm{trn}}\}\) be the embeddings of augmented data, and \(\mathcal{Z}_{\mathrm{test}}=\{f_{\theta}(\mathbf{x})\mid\mathbf{x}\in\mathcal{ D}_{\mathrm{test}}\}\) be the embeddings of \(\mathcal{D}_{\mathrm{test}}\). In the training stage, ST-SSAD updates \(\theta\) to minimize the training loss based on the pretext task of SSAD as determined by augmentation function \(\phi_{\mathrm{aug}}\) with given \(\mathbf{a}\). In the validation stage, ST-SSAD updates \(\mathbf{a}\) to reduce the unsupervised validation loss based on the embeddings generated by the updated \(f_{\theta}\). The framework halts when \(\mathbf{a}\) reaches a local optimum, typically after a few iterations. The detailed process of ST-SSAD is shown in Algo. 1. Line 3 denotes the training stage, and Lines 4 to 8 represent the validation stage. \(\theta\) is updated in Line 9 after the validation stage because of the second-order optimization (Sec. 3.3). Due to its gradient-based solution to a bilevel optimization problem, Algo. 1 is executed for multiple random initializations of \(\mathbf{a}\) in Line 1 (Sec. 3.3). ### Unsupervised Validation Loss The unsupervised validation loss \(\mathcal{L}_{\mathrm{val}}\) is one of the core components of ST-SSAD, which guides the direction of hyperparameter optimization. The main goal of \(\mathcal{L}_{\mathrm{val}}\) is to quantify the agreement between the augmentation \(\phi_{\mathrm{aug}}\) and the anomaly-generating mechanism yielding \(\mathcal{D}_{\mathrm{test}}\) without labels. Our idea is to measure the distance between \(\mathcal{D}_{\mathrm{trn}}\cup\mathcal{D}_{\mathrm{aug}}\) and \(\mathcal{D}_{\mathrm{test}}\) in the embedding space, based on the intuition that the two sets will become similar, the more \(\mathcal{D}_{\mathrm{aug}}\) resembles the true anomalies in \(\mathcal{D}_{\mathrm{test}}\). Fig. 2 illustrates the intuition: we aim to find \(\phi_{\mathrm{aug}}\) that creates \(\mathcal{Z}_{\mathrm{aug}}\) similar to the set \(\mathcal{Z}_{\mathrm{test}}^{(a)}\) of true anomalies (in red), while matching \(\mathcal{Z}_{\mathrm{trn}}\) with the set \(\mathcal{Z}_{\mathrm{test}}^{(n)}\) of normal data in \(\mathcal{D}_{\mathrm{test}}\) (in green). By using the embeddings, we can effectively avoid the high dimensionality of raw data and focus on their semantic representation. 
Based on the idea, we present the basic form of our validation loss as follows: \[\mathcal{L}_{\mathrm{val}}^{(b)}(\mathcal{Z}_{\mathrm{trn}},\mathcal{Z}_{ \mathrm{aug}},\mathcal{Z}_{\mathrm{test}})=\mathrm{dist}(\mathcal{Z}_{\mathrm{ trn}}\cup\mathcal{Z}_{\mathrm{aug}},\mathcal{Z}_{\mathrm{test}}), \tag{1}\] where \(\mathrm{dist}(\cdot,\cdot)\) is a distance function between sets of vectors. Effectiveness of \(\mathcal{L}_{\mathrm{val}}^{(b)}\) is determined by how the distance is defined, which we carefully design to address two notable challenges: Figure 2: Illustration of \(\mathcal{L}_{\mathrm{val}}\). * [leftmargin=*] * _Scale invariance:_ The scale of distances between embeddings can arbitrarily change as \(\mathbf{a}\) is updated, which makes the value of \(\mathcal{L}_{\mathrm{val}}\) inconsistent. Thus, \(\mathcal{L}_{\mathrm{val}}\) should be robust to the scale of distances as long as the (relative) distribution of embeddings is preserved. * _Ratio invariance:_ Let \(\gamma=|\mathcal{D}_{\mathrm{aug}}|/|\mathcal{D}_{\mathrm{trn}}|\) denote the ratio of augmented data, which means the number of times we apply \(\phi_{\mathrm{aug}}\) to \(\mathcal{D}_{\mathrm{trn}}\). Since the exact anomaly ratio in \(\mathcal{D}_{\mathrm{test}}\) is unknown, \(\mathcal{L}_{\mathrm{val}}\) should be robust to the value of \(\gamma\) which we manually set prior to training. Total distance normalizationFor scale invariance, we propose total distance normalization to unify the total pairwise squared distance (TPSD) [19] of embeddings. Let \(\mathbf{Z}\) be an embedding matrix that concatenates all embedding vectors in \(\mathcal{Z}_{\mathrm{trn}}\), \(\mathcal{Z}_{\mathrm{aug}}\), and \(\mathcal{Z}_{\mathrm{test}}\). Then, TPSD is defined as \(\mathrm{TPSD}(\mathbf{Z})=\sum_{ij}\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2}\), where \(\mathbf{z}_{i}\) denotes the \(i\)-th row of \(\mathbf{Z}\). Although the naive computation of TPSD is slow, we can transform any \(\mathbf{Z}\) to have the unit TPSD in linear time [19] via \[\mathbf{z}_{i}^{\prime}=\frac{\sqrt{N}}{\|\mathbf{Z}^{c}\|_{\mathrm{F}}} \mathbf{z}_{i}^{c}\quad\text{where}\quad\mathbf{z}_{i}^{c}=\mathbf{z}-\frac{ 1}{N}\sum_{i=1}^{N}\mathbf{z}_{i}\, \tag{2}\] where \(\|\cdot\|_{\mathrm{F}}\) is the Frobenius norm of a matrix, and \(N\) is the number of rows in \(\mathbf{Z}\). By using \(\mathbf{Z}^{\prime}\) instead of \(\mathbf{Z}\) for computing the distances, we can focus on the relative distances between embeddings while maintaining the overall variance. It is noteworthy that the vector normalization, i.e., \(\mathbf{z}_{i}\leftarrow\mathbf{z}_{i}/\|\mathbf{z}_{i}\|_{2}\ \forall i\), does not solve the problem since the scale of distances can still be arbitrary even with the upper bound of individual distances forced by the unit vectors. Another advantage that total distance normalization offers is that it steers away from the trivial solution of the distance minimization problem, which is to set all embeddings to the zero vector. Mean distance lossFor ratio invariance, we use the asymmetric mean distance as the \(\mathrm{dist}\) function to separate \(\mathrm{dist}(\mathcal{Z}_{\mathrm{trn}}\cup\mathcal{Z}_{\mathrm{aug}}, \mathcal{Z}_{\mathrm{test}})\) into \((\mathrm{dist}(\mathcal{Z}_{\mathrm{trn}},\mathcal{Z}_{\mathrm{test}})+ \mathrm{dist}(\mathcal{Z}_{\mathrm{aug}},\mathcal{Z}_{\mathrm{test}}))/2\) as follows. 
\[\mathcal{L}_{\mathrm{val}}(\mathcal{Z}_{\mathrm{trn}},\mathcal{Z}_{\mathrm{aug}},\mathcal{Z}_{\mathrm{test}})=\frac{1}{2}\sum_{\mathbf{z}^{\prime}\in\mathcal{Z}_{\mathrm{test}}^{\prime}}\left(\|\mathbf{z}^{\prime}-\mathrm{mean}(\mathcal{Z}_{\mathrm{trn}}^{\prime})\|_{2}+\|\mathbf{z}^{\prime}-\mathrm{mean}(\mathcal{Z}_{\mathrm{aug}}^{\prime})\|_{2}\right)\, \tag{3}\] where \(\mathcal{Z}_{\mathrm{trn}}^{\prime}\), \(\mathcal{Z}_{\mathrm{aug}}^{\prime}\), and \(\mathcal{Z}_{\mathrm{test}}^{\prime}\) are the embeddings after the total distance normalization, and \(\mathrm{mean}(\cdot)\) is the (elementwise) mean of a set of vectors. The mean operation allows \(\mathcal{L}_{\mathrm{val}}\) to be invariant to the individual (or internal) distributions of \(\mathcal{Z}_{\mathrm{trn}}\) and \(\mathcal{Z}_{\mathrm{aug}}\), including their sizes, while focusing on their global relative positions with respect to the test embeddings in \(\mathcal{Z}_{\mathrm{test}}\). This is another desired property for \(\mathcal{L}_{\mathrm{val}}\), since we want to avoid minimizing \(\mathcal{L}_{\mathrm{val}}\) only by decreasing the variance of \(\mathcal{Z}_{\mathrm{aug}}\). More detailed discussion on the properties of \(\mathcal{L}_{\mathrm{val}}\) under various scenarios is given in Appendix A.

### Differentiable Augmentation

The second driving engine is a differentiable augmentation function that enables ST-SSAD to conduct end-to-end optimization. There are two main approaches to making augmentation differentiable. The first is to train a neural network that mimics the augmentation function, mapping input samples to augmented counterparts, which can be done offline regardless of the detector network \(f_{\theta}\). However, such a neural network is required to have a large capacity to be able to learn the augmentation function accurately, especially for high dimensional target samples, which demands considerable training cost. The second approach is to directly formulate an augmentation function in a differentiable way. Some functions are inherently differentiable if implemented correctly, while others require differentiable surrogate functions that provide a similar functionality. As proof of concept, we take this approach to introduce two differentiable augmentation functions; one representative of local and another representative of global augmentation. Specifically, we propose the novel CutDiff (§3.2.1) as a differentiable variant of CutOut [18], which is originally designed for localized anomalies. We also utilize a differentiable formulation of Rotation (§3.2.2), which is a popular augmentation that transforms the input globally and has been widely used for semantic anomaly detection [8, 10, 12].

#### 3.2.1 CutDiff for Local Augmentation

Local augmentation functions such as CutOut [18], CutPaste [11], and patch-wise cloning [20] mimic subtle local anomalies such as textural defects by modifying a partial region in an image. CutOut removes a small patch from an image and fills it in with a black patch, while CutPaste copies a small patch and pastes it into a different random location of the same image. However, none of these functions is differentiable, and thus they cannot be directly used for our end-to-end ST-SSAD framework. We propose CutDiff in Algo. 2, which creates a smooth round patch and extracts it from the given image in a differentiable manner.
The main idea is to model the shape of the patch as a function of hyperparameters \(\mathbf{a}\in\mathbb{R}^{3}\) by computing the scaled distance between each image pixel and the randomly selected center of the patch. The three elements in \(\mathbf{a}\) represent the patch shape, including its width, height, and orientation. Let \(\mathbf{R}\) and \(\mathbf{S}\) respectively denote the rotation and scale matrices for a patch, as \(\mathbf{R}=\begin{bmatrix}\cos(g)&-\sin(g)\\ \sin(g)&\cos(g)\end{bmatrix}\) and \(\mathbf{S}=\begin{bmatrix}s/r&0\\ 0&sr\end{bmatrix}\), where \(g\), \(s\), and \(r\) represent the rotated angle, size, and ratio, respectively. These three elements can be associated with those in \(\mathbf{a}\) through \(\mathbf{L}=\mathbf{R}\mathbf{S}\), where \(\mathbf{L}\) is a lower triangular matrix formed from \(\mathbf{a}\), as given in Line 2 of Algo. 2. By directly learning \(\mathbf{L}\), we in effect tune the rotation and scale of the patch. We provide a visualization of CutDiff compared with CutOut and CutPaste, as well as implementation details, in Appendix B.

#### 3.2.2 Rotation for Global Augmentation

Geometric augmentation functions such as rotation, translation, and flipping have been widely used for image anomaly detection [8, 10]. Unlike local augmentation functions such as CutOut, many geometric transformations are differentiable, as they are represented by matrix-vector operations. In this work, we use the differentiable image rotation function proposed by Jaderberg et al. [21], which consists of two main steps. The first step is the creation of a rotation matrix, which is the same as the \(\mathbf{R}\) matrix except that a third column of zeros is appended. The second step is to create a sampling function that selects a proper pixel position from the given image for each pixel position of the target image, based on the computed rotation matrix and the affine grid. The resulting operation is differentiable, since it is a mapping from the original pixels to the output through a parameterized sampling.

### Implementation Details

**Second-order optimization.** ST-SSAD updates augmentation hyperparameters \(\mathbf{a}\) and the parameters \(\theta\) of the detection network \(f_{\theta}\) through alternating stages at each training iteration. That is, we expect the following to hold: \[\mathcal{L}_{\mathrm{val}}(\mathbf{a}^{(t+1)},\theta^{\prime})\ <\ \mathcal{L}_{\mathrm{val}}(\mathbf{a}^{(t)},\theta)\, \tag{4}\] where \(t\) is the current number of epochs, and \(\theta^{\prime}\) denotes the updated parameters of the detector \(f_{\theta}\) derived by using \(\mathbf{a}^{(t)}\) to generate pseudo anomalies for its training. However, the first-order optimization of \(\mathbf{a}\) cannot take into account that the parameters \(\theta^{\prime}\) and \(\theta\) are different between both sides of Eq. (4), as it treats \(\theta^{\prime}\) as a constant. As a solution, ST-SSAD considers \(\theta^{\prime}\) as a function of \(\mathbf{a}^{(t)}\) and conducts second-order optimization as follows: \[\mathbf{a}^{(t+1)}=\mathbf{a}^{(t)}-\beta\nabla_{\mathbf{a}^{(t)}}\mathcal{L}_{\mathrm{val}}(\mathbf{a}^{(t)},\theta-\alpha\nabla_{\theta}\mathcal{L}_{\mathrm{trn}}(\theta,\mathbf{a}^{(t)})). \tag{5}\] In this way, the optimization process can accurately track the change in \(\theta\) caused by the update of \(\mathbf{a}\), resulting in a stable minimization of \(\mathcal{L}_{\mathrm{val}}\). Note that Eq.
(5) is the same as in Line 8 of Algorithm 1, except we assume that \(\mathcal{L}_{\mathrm{val}}\) takes \((\mathbf{a},\theta)\) as its inputs in Eq.s (4) and (5).

**Initialization.** The result of ST-SSAD is affected by the initialization of augmentation hyperparameters \(\mathbf{a}\), since it conducts gradient-based updates toward local optima. A natural way to address initialization is to pick a few random starting points and select the best one. However, it is difficult to fairly select the best from multiple initialization choices, since our \(\mathcal{L}_{\mathrm{val}}\) is designed to locally improve the current \(\mathbf{a}\), rather than to compare different models; e.g., it is possible that a less-aligned model can produce lower \(\mathcal{L}_{\mathrm{val}}\) if the augmented data are distributed more sparsely in the embedding space. As a solution, we propose a simple yet effective measure to enable the comparison between models from different initialization points. Let \(s\) denote the anomaly score function as presented in the problem definition in Sec. 2. Then, we define the score variance of the test data as \[\mathcal{S}(\theta)=\frac{\sum_{s\in\mathcal{C}}(s-\mathrm{mean}(\mathcal{C}))^{2}}{|\mathcal{D}_{\mathrm{test}}|-1}\quad\text{ where }\quad\mathcal{C}=\{s(f_{\theta}(\mathbf{x}))\mid\mathbf{x}\in\mathcal{D}_{\mathrm{test}}\}. \tag{6}\] We use \(\mathcal{S}\) (the larger, the better) to select the best initialization point after training completes. The idea is that the variance of test anomaly scores is likely to be large under a good augmentation, as it generally reflects a better separability between inliers and anomalies in the test data, and it offers a fair evaluation since ST-SSAD does not observe the score function \(s\) at all during optimization.

## 4 Experiments

**Datasets.** We evaluate ST-SSAD on 41 different anomaly detection tasks, which include 23 subtle (local) anomalies in MVTec AD [22] and 18 semantic (gross) anomalies in SVHN [23]. MVTec AD is an image dataset of industrial objects, where the anomalies are local defects such as scratches. We use four types of objects in our experiments: Cable, Carpet, Grid, and Tile, each of which contains five to eight anomaly types. SVHN is a digits image dataset from house numbers in Google Street View. We use digits 2 and 6 as normal classes and treat the remaining digits as anomalies, generating 18 different anomaly detection tasks for all pairs of digits (2 vs. others and 6 vs. others).

**Model settings.** We use a detector network \(f_{\theta}\) based on ResNet-18 [24] that was used in previous work for unsupervised anomaly detection [11]. We use the binary cross entropy as the training loss \(\mathcal{L}_{\mathrm{trn}}\) for classifying between normal and augmented data, applying an MLP head to the embeddings to produce prediction outputs. The anomaly score \(s(\mathbf{x})\) of each sample \(\mathbf{x}\) is defined as the negative log likelihood of a Gaussian density estimator learned on the embeddings of training data, as in previous works [11; 25]. For ST-SSAD, we use four uniformly sampled initializations for each augmentation function: \(\{0.0001,0.001,0.01,0.1\}\) for CutDiff patch size, and \(\{45^{\circ},135^{\circ},225^{\circ},315^{\circ}\}\) for Rotation angle. We set both the initial patch angle and ratio to zero. We employ CutDiff and Rotation for defect and semantic anomaly detection tasks, respectively.
The sum of training and validation losses is used as the stopping criterion for the updates to hyperparameters \(\mathbf{a}\).

**Evaluation metrics.** The accuracy of each model is measured by the area under the ROC curve (AUC) on the anomaly scores computed for \(\mathcal{D}_{\mathrm{test}}\). We run all experiments 5 times and report the average and standard deviation. For statistical comparison between different models on all tasks and random seeds, we also run the paired Wilcoxon signed-rank test [26]. The one-sided test with \(p\)-values smaller than \(0.05\) indicates that our ST-SSAD is statistically better than the other method.

**Baselines.** To the best of our knowledge, there are no direct competitors on augmentation hyperparameter tuning for self-supervised anomaly detection. Thus, we compare ST-SSAD with various types of baselines: _SSL without hyperparameter tuning_--(1) random dynamic selection (RD) that selects \(\mathbf{a}\) randomly at each training epoch, and (2) random static selection (RS) that selects \(\mathbf{a}\) once before the training begins. _Unsupervised learning_--(3) autoencoder (AE) [8] and (4) DeepSVDD [27]. _Variants of our ST-SSAD with naive choices_--(5) using maximum mean discrepancy (MMD) [28] as \(\mathcal{L}_{\mathrm{val}}\), (6) MMD without the total distance normalization, and (7) using first-order optimization. RD and RS are used with one of CutOut, CutPaste, CutDiff, or Rotation, which we denote CO, CP, CD, and RO for brevity. We also denote baselines (5)-(7) as MMD1, MMD2, and FO, respectively. Additional details on experiments are given in Appendix C.

### Demonstrative Examples

We first present experimental results on demonstrative datasets, where we create anomalies using CutDiff with different augmentation hyperparameters. Specifically, given images of the Carpet object in MVTec AD, we create 25 types of anomalies with the patch size in \(\{0.01,0.02,0.04,0.08,0.16\}\) and the aspect ratio in \(\{0.25,0.5,1.0,2.0,4.0\}\), where the angle is fixed to \(0\). Our goal is to demonstrate that ST-SSAD is able to learn different \(\mathbf{a}\) for different anomaly types in these controlled settings. Fig. 3 shows the results of learning. In Fig. 3(a), ST-SSAD learns different values of \(\mathbf{a}\) depending on the properties of anomalies, demonstrating the ability of ST-SSAD to adapt to varying anomalies. Nevertheless, there exists a slight difference between the learned \(\mathbf{a}\) and the true values in some cases, as embedding distributions can be matched as in Fig. 3(b) even with such a difference. This difference is typically larger for patch ratio than for patch size, suggesting that patch size impacts the embeddings more than the ratio does. Fig. 3(c) depicts the training process of ST-SSAD for five anomaly types with different patch sizes, where the patch ratio is \(1.0\). We visualize the average and standard deviation from five runs with different random seeds. ST-SSAD accurately adapts to different patch sizes even from the same initialization point, updating \(\mathbf{a}\) through iterations to minimize the validation loss.

### Testbed Evaluation

Next, we perform quantitative evaluation of ST-SSAD on both industrial-defect anomalies and semantic class anomalies, covering 41 different anomaly detection tasks. Table 1 provides the results on 23 tasks for industrial-defect anomalies.
ST-SSAD achieves the best AUC in 9 different tasks, and it outperforms 7 baselines with most \(p\)-values smaller than \(0.01\), representing strong statistical significance. Table 2 shows the results on 18 detection tasks for semantic class anomalies, where ST-SSAD significantly outperforms all baselines with all \(p\)-values smaller than \(0.0001\). The ablation studies between ST-SSAD and its three variants in Tables 1 and 2 show the effectiveness of our ideas that compose ST-SSAD: total distance normalization, mean distance loss, and second-order optimization. Especially, the two MMD-based models are significantly worse than our ST-SSAD, showing that MMD is not suitable for augmentation tuning even though it is widely used as a set distance measure, due to the challenges we aim to address with \(\mathcal{L}_{\mathrm{val}}\). The difference between ST-SSAD and the first-order baseline is smaller, meaning that the first-order optimization can still be used for ST-SSAD when the computational efficiency needs to be prioritized. ### Qualitative Analysis We also perform qualitative analysis as shown in Fig.s 5 and 6, visualizing the augmentation functions learned by ST-SSAD for different types of anomalies. Fig. 5 illustrates three types of anomalies in the Cable object and one type of anomaly in the Carpet object. These four anomaly types all have their own sizes and aspect ratios of defected regions, which are accurately learned by ST-SSAD. Note that the three types of Cable anomalies share the training data \(\mathcal{D}_{\mathrm{trn}}\); ST-SSAD captures their difference only from the unlabeled test data. The locations of patches created by CutDiff are chosen randomly at each run, since the locations of local defects are different for each anomalous image. Fig. 6 illustrates images in the SVHN dataset and the embedding distributions after the training of ST-SSAD is completed. Fig.s 5(a) and 5(c) show that ST-SSAD learns \(180^{\circ}\) as the angle of Rotation, since the anomalies in both tasks can be resembled by the \(180^{\circ}\)-rotated normal images. After the training is done, the embedding distributions between \(\mathcal{Z}_{\mathrm{trn}}\cup\mathcal{Z}_{\mathrm{aug}}\) and \(\mathcal{Z}_{\mathrm{test}}\) are matched as shown in Fig.s 5(b) and 5(d), achieving high average AUC of \(0.944\) and \(0.887\), respectively (see Table 2). Figure 3: Experimental results on demonstrative examples, where we create anomalies using CutDiff with different hyperparameters. (a) ST-SSAD learns \(\mathbf{a}\) differently for each anomaly type, following the true values shown in the \(x\)- and \(y\)-axes, until (b) the embedding distributions are matched between \(\mathcal{Z}_{\mathrm{trn}}\cup\mathcal{Z}_{\mathrm{aug}}\) and \(\mathcal{Z}_{\mathrm{test}}\), showing a learning trajectory like (c). AUC is \(1.00\) in all 25 tasks. ### Discussion As Table 1 shows, ST-SSAD cannot always improve detection across tasks. In some tasks like Rough anomalies in Tile, a simple baseline like random CutOut shows higher AUC than other models. This is because some anomaly types are hard to mimic with CutDiff due to the inherent mismatch of the augmentation function. Fig. 4 shows two example anomaly types, Tile-Oil and Carpet-Thread, where ST-SSAD cannot improve over the baselines. Local defects of Oil are brighter than the background, whereas CutDiff rather darkens the chosen patch. 
Anomalies of the Thread type contain long thin threads, which are also hard to represent with CutDiff regardless of the hyperparameter values. As a general framework, the performance of ST-SSAD is affected by the detector model and augmentation function used. As the first systematic study for unsupervised augmentation tuning, we propose two differentiable augmentation functions, for local and semantic anomalies respectively, and show the success of ST-SSAD on two types of testbeds as proof of concept. We leave it as future work to design a broader family of differentiable augmentations which can deal with more diverse types of anomalies and also apply to data modalities other than images.

\begin{table} \begin{tabular}{l|l|c c c c c c c c c c c c} \hline \hline \multicolumn{2}{c|}{} & \multicolumn{9}{c}{Main Result} & \multicolumn{3}{c}{Ablation Study} \\ \hline \hline Object & Anomaly & AE & D-SVDD & RS-CO & RD-CO & RS-CP & RD-CP & RS-CD & RD-CD & **ST-SSAD** & **MMD1** & **MMD2** & FO \\ \hline Cable & Bent wire & 0.515 & 0.432 & 0.556 & 0.560 & 0.703 & **0.765** & 0.527 & 0.580 & 0.490 & 0.581 & 0.643 & 0.579 \\ Cable & Cable swap & 0.639 & 0.295 & 0.483 & 0.625 & 0.618 & 0.683 & 0.574 & **0.696** & 0.532 & 0.510 & 0.562 & 0.545 \\ Cable & Combined & 0.584 & 0.587 & 0.879 & 0.857 & 0.880 & **0.894** & 0.901 & 0.879 & 0.925 & 0.939 & 0.962 & 0.882 \\ Cable & Cut inner insulation & 0.758 & 0.991 & 0.630 & 0.737 & 0.766 & **0.833** & 0.623 & 0.732 & 0.667 & 0.633 & 0.649 & 0.689 \\ Cable & Cut outer insulation & **0.899** & 0.343 & 0.695 & 0.815 & 0.787 & 0.871 & 0.703 & 0.790 & 0.516 & 0.428 & 0.461 & 0.527 \\ Cable & Missing cable & 0.920 & 0.466 & 0.953 & 0.961 & 0.755 & 0.801 & 0.935 & 0.945 & **0.998** & 0.855 & 0.772 & 0.999 \\ Cable & Missing wire & 0.433 & 0.494 & 0.781 & 0.655 & 0.501 & 0.546 & 0.708 & 0.620 & **0.863** & 0.547 & 0.477 & 0.699 \\ Cable & Poke insulation & 0.287 & 0.471 & 0.469 & 0.527 & 0.645 & **0.672** & 0.489 & 0.503 & 0.630 & 0.692 & 0.816 & 0.676 \\ \hline Carpet & Color & 0.578 & 0.716 & 0.669 & 0.508 & 0.412 & 0.827 & 0.643 & 0.639 & **0.938** & 0.761 & 0.741 & 0.918 \\ Carpet & Cut & 0.198 & 0.758 & 0.439 & 0.608 & 0.403 & 0.411 & 0.490 & 0.767 & **0.790** & 0.353 & 0.401 & 0.595 \\ Carpet & Hole & 0.626 & 0.676 & 0.379 & 0.613 & 0.404 & 0.389 & 0.470 & **0.765** & 0.590 & 0.438 & 0.229 & 0.630 \\ Carpet & Metal contamination & 0.065 & **0.739** & 0.198 & 0.304 & 0.240 & 0.167 & 0.255 & 0.447 & 0.076 & 0.392 & 0.134 & 0.392 \\ Carpet & Thread & 0.394 & **0.742** & 0.494 & 0.585 & 0.469 & 0.517 & 0.508 & 0.679 & 0.483 & 0.492 & 0.541 & 0.642 \\ \hline Grid & Bent & **0.849** & 0.168 & 0.456 & 0.322 & 0.421 & 0.433 & 0.337 & 0.354 & 0.771 & 0.780 & 0.650 & 0.602 \\ Grid & Broken & 0.086 & 0.183 & 0.397 & 0.312 & 0.487 & 0.502 & 0.340 & 0.392 & **0.869** & 0.845 & 0.887 & 0.884 \\ Grid & Glue & 0.704 & 0.143 & 0.634 & 0.568 & 0.674 & 0.732 & 0.681 & 0.578 & **0.906** & 0.966 & 0.974 & 0.721 \\ Grid & Metal contamination & 0.851 & 0.229 & 0.421 & 0.380 & 0.499 & 0.514 & 0.425 & 0.613 & **0.858** & 0.861 & 0.665 & 0.732 \\ Grid & Thread & 0.583 & 0.209 & 0.612 & 0.494 & 0.500 & 0.549 & 0.654 & 0.611 & **0.973** & 0.962 & 0.969 & 0.964 \\ \hline Tile & Crack & 0.770 & 0.728 & 0.872 & 0.993 & 0.743 & 0.636 & 0.837 & **0.999** & 0.749 & 0.740 & 0.820 & 0.595 \\ Tile & Glue strip & 0.697 & 0.509 & 0.693 & **0.836** & 0.665 & 0.700 & 0.675 & 0.831 & 0.767 & 0.855 & 0.649 & 0.561 \\ Tile & Gray stroke & 0.637 & 0.785 & 0.845 & 0.642 & 0.583 & 0.657 & 0.856 & 0.802 & **0.974** & 0.663 & 0.706 & 0.973 \\ Tile & Oil & 0.414 & 0.690 & 0.708 & 0.745 & 0.464 & 0.576 & 0.863 & **0.837** & 0.554 & 0.548 & 0.614 & 0.555 \\ Tile & Rough & 0.724 & 0.387 & 0.606 & **0.725** & 0.631 & 0.661 & 0.568 & 0.657 & 0.690 & 0.700 & 0.549 & 0.605 \\ \hline \multicolumn{2}{l|}{\(p\)-value} & & & & & & & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Test AUC on 23 different tasks for subtle anomaly detection. Each number is the average from five runs, and the best in each row is in bold. ST-SSAD outperforms most baselines, which is supported by the \(p\)-values in the last row derived from the Wilcoxon signed rank test.

\begin{table} \begin{tabular}{l|l|c c c c c c c c} \hline \hline \multicolumn{2}{c|}{} & \multicolumn{5}{c}{Main Result} & \multicolumn{3}{c}{Ablation Study} \\ \hline \hline Object & Anomaly & AE & D-SVDD & RS-RO & RD-RO & **ST-SSAD** & **MMD1** & **MMD2** & FO \\ \hline Digit 2 & Digit 0 & 0.602 & 0.472 & 0.672 & 0.734 & **0.816** & 0.519 & 0.518 & 0.506 \\ Digit 2 & Digit 1 & 0.544 & 0.499 & 0.601 & 0.690 & **0.743** & 0.499 & 0.501 & 0.498 \\ Digit 2 & Digit 3 & 0.604 & 0.503 & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Test AUC on 18 different tasks for semantic anomaly detection.

## 5 Related Work

**Self-supervised learning (SSL)** has seen a surge of attention for pre-training foundation models [29], like LLMs that can generate remarkably human-like text [30]. Self-supervised representation learning
The key difference is that the former sets aside a _labeled validation_ set to measure generalization, whereas we address the arguably more challenging setting for _fully unsupervised_ anomaly detection without any labels. ## 6 Conclusion Our work presented ST-SSAD, the first framework for self-tuning self-supervised anomaly detection, which automatically tunes the augmentation hyperparameters in an end-to-end fashion. To this end, we addressed two key challenges: unsupervised validation and differentiable augmentation. We proposed a smooth validation loss that quantifies the agreement between augmented and test data in a traductive fashion. We introduced two differentiable formulations for both local and global augmentation, while ST-SSAD can flexibly accommodate any other differentiable augmentation. Experiments on two large testbeds validated the superiority of ST-SSAD over existing practices. Future work will design differentiable formulations for other augmentation families and then also incorporate the discrete selection of augmentation as part of self-tuning for SSAD. Figure 5: Illustrations of four anomaly types for the Cable and Carpet objects and the corresponding augmentations learned by ST-SSAD. Different hyperparameters of CuthDiff are learned to resemble the true anomalies, including both the size and the aspect ratio of a patch. Figure 6: Illustrations of learned augmentations on the SVHN dataset and the corresponding distributions of embeddings. The three rows in (a, c) represent normal images, augmented images, and anomalies, respectively. ST-SSAD successfully learns the rotation of \(180^{\circ}\) for both tasks, achieving a match also visually between the distributions of \(\mathcal{Z}_{\mathrm{trn}}\cup\mathcal{Z}_{\mathrm{aug}}\) and \(\mathcal{Z}_{\mathrm{test}}\).
2310.06978
$L^{p}-$estimates for uncentered spherical averages and lacunary maximal functions
The primary goal of this paper is to introduce bilinear analogues of uncentered spherical averages, Nikodym averages associated with spheres and the associated bilinear maximal functions. We obtain $L^p$-estimates for uncentered bilinear maximal functions for dimensions $d\geq2$. Moreover, we also discuss the one-dimensional case. In the process of developing these results, we also establish new and interesting results in the linear case. In particular, we will prove $L^p$-improving properties for single scale averaging operators and $L^p$-estimates for lacunary maximal functions in this context.
Ankit Bhojak, Surjeet Singh Choudhary, Saurabh Shrivastava, Kalachand Shuin
2023-10-10T19:57:35Z
http://arxiv.org/abs/2310.06978v2
# \(L^{p}-\)estimates for uncentered spherical averages and lacunary maximal functions ###### Abstract. We prove \(L^{p}-\)estimates for lacunary maximal functions associated with uncentered spherical averages and Nikodym averages associated with spheres. In the process, we require to prove \(L^{p}-\)improving properties for single scale averaging operators in this context. In the second half of the paper, we define bilinear analogues of uncentered spherical maximal function and the Nikodym maximal function and study their \(L^{p}-\)boundedness properties. Interestingly, in the bilinear setting, one-dimensional case is also meaningful. We obtain \(L^{p}-\)estimates for lacunary uncentered bilinear maximal functions in all dimensions. 2010 Mathematics Subject Classification: Primary 42B25; Secondary 42B15; ###### Contents * 1 Introduction * 2 Uncentered spherical averages and maximal functions * 3 Nikodym maximal functions associated with sphere * 4 Bilinear maximal functions * 5 Proof of Theorem 2.4: Lacunary uncentered spherical maximal operator \(N_{lac}^{T}\) * 6 Proof of Theorem 2.7: \(L^{p}-\)improving properties of \(A^{T}\) * 7 Proof of Theorems 3.1 and 3.2: Nikodym maximal operators * 8 Proof of Theorem 4.1: Lacunary bilinear maximal operator \(\mathcal{N}_{lac}^{T}\) in dimension \(d\geq 2\) * 9 Proof of Theorem 4.2: Lacunary bilinear maximal operator \(\mathcal{N}_{lac}^{T}\) in dimension \(d=1\) * 10 Necessary conditions ## 1. Introduction This article is motivated by recent advances in the field of spherical averages and the corresponding maximal functions in linear and bilinear settings. The recent works by Chang, Dosidis and Kim [1] on uncentered spherical averages and by Jeong and Lee [1] for bilinear spherical maximal averages play key roles in this paper. In the former, the authors introduced two generalizations of spherical averages and corresponding maximal functions by considering uncentered spherical averages and Nikodym type maximal averages. They proved \(L^{p}-\)estimates for these operators. In this paper, we complement these results by proving off-diagonal \(L^{p}-\)estimates for single scale averages and \(L^{p}-\)estimates for the corresponding lacunary maximal averages. Further, we extend the notion of uncentered spherical averages and Nikodym type maximal averages to bilinear setting in the spirit of bilinear spherical maximal functions studied in Jeong and Lee [1]. Since the introductory material on each topic requires new notation, we dedicate separate sections to describe the results on each topic mentioned above along with the necessary preliminary background material. ### Organization of the paper The rest of the paper is organized as follows. * In Section 2 we discuss some known results for uncentered spherical maximal functions. Later, we describe our results on \(L^{p}-\)improving properties and lacunary maximal functions in this direction. These results are proved in Sections 5-6. * Section 3 is devoted to discuss the Nikodym maximal function associated with spheres. In this section we state our result on the lacunary Nikodym maximal function and bilinear Nikodym maximal function and prove them in Section 7. * The bilinear analogues of the results discussed in Sections 2 and 3 are described in Section 4. Their proofs are discussed in Sections 8-9. * Finally, in Section 10 we discuss some examples to obtain necessary conditions on various parameters. ## 2. 
Uncentered spherical averages and maximal functions Let \(f\in\mathcal{S}(\mathbb{R}^{d}),d\geq 2\). Given \(u\in\mathbb{R}^{d},\) the uncentered spherical average of \(f\) is defined by \[A_{t}^{u}f(x)=\int_{\mathbb{S}^{d-1}}f(x+t(u+y))\ d\sigma(y),\ t>0,\] where \(d\sigma\) is the normalized surface measure on the sphere \(\mathbb{S}^{d-1}\). For \(t=1,\) we use the notation \(A_{1}^{u}=A^{u}.\) Note that if \(u=0,A_{t}^{0}f\) is the standard spherical average. The spherical averages are well-studies in the literature in various contexts. For example, it is well-known that the spherical average \(A_{t}^{0}f\) appears as a solution to the wave equation. Littman [14] investigated \(L^{p}\to L^{q}-\)estimates of the operator \(A^{0}\) for \(d\geq 2\). He proved the following \(L^{p}-\)improving estimates for spherical averages: \[\|A^{0}f\|_{L^{q}(\mathbb{R}^{d})}\lesssim\|f\|_{L^{p}(\mathbb{R}^{d})} \tag{2.1}\] for \((1/p,1/q)\) belonging to the triangle with vertices \(\{(0,0),(1,1),(\frac{d}{d+1},\frac{1}{d+1})\}\). Here, the notation \(A\lesssim B\) means that there exists a constant \(C>0\) such that \(A\leq CB\). We also refer the reader to [10] and [11] for \(L^{p}-\)improving properties of spherical averages. The corresponding \(L^{p}-\)improving estimates for \(A_{t}^{0}\) follows by scaling. Recently, Chang, Dosidis and Kim [1] introduced maximal functions associated with uncentered spherical averages and proved interesting results. Given a compact set \(T\subset\mathbb{R}^{d},\) they considered the maximal function \[A^{T}f(x):=\sup_{u\in T}|A^{u}f(x)|\] and the full maximal function associated with uncentered spherical averages, defined by \[S^{T}f(x):=\sup_{u\in T}\sup_{t>0}|A_{t}^{u}f(x)|.\] Observe that if \(T=\{0\},\) the maximal operator \(S^{T}\) corresponds to the classical spherical maximal function, denoted by \(M_{full}\). The spherical maximal operator \(M_{full}\) has been studied extensively. Stein [13] and Bourgain [15] proved sharp \(L^{p}-\)estimates for the spherical maximal function for \(d\geq 3\) and \(d=2\) respectively. The \(L^{p}-\)estimates for \(M_{full}\) holds in the range \(p>\frac{d}{d-1},\) which is known to be optimal. If we restrict the supremum in the definition of \(M_{full}\) to lacunary sequences, the resulting maximal function is commonly referred to as the lacunary spherical maximal function and is denoted by \(M_{lac}\). The range of \(p\) for which \(M_{lac}\) satisfies \(L^{p}-\)estimates extends to \(p>1.\) We refer the reader to [11] for recent developments on operators \(M_{lac}\) and \(M_{full}.\) Chang, Dosidis and Kim [1] showed that in order to get non-trivial \(L^{p}-\)estimates for operators \(A^{T}\) and \(S^{T}\), we require suitable assumptions on'size' of the underlying set \(T\). For example, if we consider \(T=\mathbb{S}^{d-1}\), then the operator \(A^{\mathbb{S}^{d-1}}\) is not bounded in \(L^{p}(\mathbb{R}^{n})\) for any \(p<\infty\). This assertion follows from the existence of Nikodym sets for spheres, see[1] and references therein for precise details. Let us recall the notion of Nikodym sets in the context of lines and spheres. **Definition 2.1**.: [1, 1]__ 1. Nikodym sets for lines: A Nikodym set for lines is a set \(A\subset\mathbb{R}^{d}\) of Lebesgue measure zero such that for every \(x\in\mathbb{R}^{d}\), there is a line \(\tau\) which passes through \(x\) and \(A\cap\tau\) contains a unit line segment. 2. 
Similarly, a Nikodym set for spheres (resp., unit spheres) is a set \(T\subset\mathbb{R}^{d}\) of Lebesgue measure zero such that for every \(y\) in a set of positive Lebesgue measure, there exists a sphere (resp., unit sphere) \(S\) containing \(y\) such that \(A\cap S\) has positive \((d-1)-\)dimensional Hausdorff measure. Nikodym sets are closely related with Kakeya sets and Besicovitch sets. The interested reader is referred to Mattila [1] for more details about these sets. Note that the Nikodym sets for spheres in \(\mathbb{R}^{d}\) are small with respect to the \(d-\) dimensional Lebesgue measure. However, their Hausdorff dimension must be \(d\), see [1] for details. In [1], Chang, Dosidis and Kim studied \(L^{p}-\)boundedness of maximal functions \(A^{T}\) and \(S^{T}\) under suitable conditions on the size of \(T\) in terms of Minkowski content. A compact set \(T\subset\mathbb{R}^{d}\) is said to have finite \(s\)-dimensional upper Minkowski content if for all \(\delta\in(0,\frac{1}{2})\), \[N(T,\delta)\lesssim\delta^{-s},\] where \(N(T,\delta)\) denotes the minimal number of balls of radius \(\delta\) needed to cover \(T.\) They proved the following result. **Theorem 2.2** ([1],Theorem 1.7).: _Let \(d\geq 2\) and \(0\leq s<d-1\). Let \(T\subset\mathbb{R}^{d}\) be a compact set with finite \(s\)-dimensional upper Minkowski content. Then \(A^{T}\) is bounded from \(L^{p}(\mathbb{R}^{d})\) to itself in each of the following cases._ 1. _when_ \(d=2\) _and_ \(p>1+s\)_,_ 2. _when_ \(d=3\) _and_ \[p>1+\min\Big{(}\frac{s}{2},\frac{1}{3-s},\frac{5-2s}{9-4s}\Big{)},\] 3. _when_ \(d\geq 4\) _and_ \[p>1+\min\Big{(}\frac{s}{d-1},\frac{1}{d-s},\frac{d-s}{3(d-s)-2}\Big{)}.\] The following result is known for the operator \(S^{T}\). **Theorem 2.3** ([1], Theorem 1.10).: _Let \(T\) be a compact subset of \(\mathbb{R}^{d}\) with upper Minkowski content \(0<s<d-1\),_ 1. _when_ \(d=2\) _and_ \(0<s<1\)_,_ \(S^{T}\) _maps_ \(L^{p}(\mathbb{R}^{d})\) _to_ \(L^{p}(\mathbb{R}^{d})\) _for_ \(p>2+\min\{1,\max(s,\frac{4s-2}{2-s})\}\)_,_ 2. _when_ \(d\geq 3\) _and_ \(0<s<d-1\)_,_ \(S^{T}\) _maps_ \(L^{p}(\mathbb{R}^{d})\) _to_ \(L^{p}(\mathbb{R}^{d})\) _for_ \(p>1+[d-1-s+\max(0,\min(1,(2s-d+3)/4))]^{-1}\)_._ In the same paper, the authors have obtained certain necessary conditions on \(p\) for the \(L^{p}-\)boundedness of \(S^{T}\) to hold. These necessary conditions are sharp for a few cases. ### Results on lacunary maximal function \(N_{lac}^{T}\) Motivated by the discussion above, we consider the lacunary analogue of maximal averages and show that the range of \(p\) in Theorem 2.3 could be improved significantly for the lacunary uncentered spherical maximal function. This is defined as follows \[N_{lac}^{T}f(x):=\sup_{k\in\mathbb{Z}}\Big{|}A_{2^{k}}^{T}f(x)\Big{|}=\sup_{u \in T,k\in\mathbb{Z}}\Big{|}\int_{\mathbb{S}^{d-1}}f(x+2^{k}(u+y))d\sigma(y) \Big{|}.\] Once again, observe that if \(T=\{0\}\), the operator \(N_{lac}^{0}\) coincides with the classical lacunary spherical maximal function \(M_{lac}.\) We would like to refer the reader to Calderon [10], Coifman-Weiss [11], Lacey [1] for \(L^{p}-\)estimates of \(N_{lac}^{0}=M_{lac}\) for \(1<p\leq\infty\). Also, see Seeger, Tao and Wright [11] for weak-type \(L\log\log L-\)estimate for the operator \(M_{lac}.\) We have the following result for the lacunary maximal function \(N_{lac}^{T}.\) **Theorem 2.4**.: _Let \(d\geq 2\) and \(0\leq s<d-1\). Suppose that \(T\subset\mathbb{R}^{d}\) is a compact set with finite \(s-\)dimensional upper Minkowski content. 
Then \(N_{lac}^{T}\) is bounded on \(L^{p}(\mathbb{R}^{d})\) for each of the following cases._ 1. \(d=2\) _and_ \(p>1+s,\)__ 2. \(d=3\) _and_ \(p>1+\min\left(\frac{s}{2},\frac{1}{3-s},\frac{5-2s}{9-4s}\right),\)__ 3. \(d\geq 4\) _and_ \(p>1+\min\left(\frac{s}{d-1},\frac{1}{d-s},\frac{d-s}{3(d-s)-2}\right).\)__ 4. _Moreover,_ \(N_{lac}^{T}\) _is of restricted weak-type_ \((p,p)\) _at the endpoint_ \(p=1+\min\left(\frac{s}{d-1},\frac{1}{d-s}\right)\)_._ **Remark 2.5**.: _Observe that the range of \(p\) in Theorems 2.2 and 2.4 are the same._ **Remark 2.6**.: _Theorem 2.4 implies the pointwise a.e. convergence of the averaging operator \(A_{2^{k}}^{T}f\), see Section \(7\) in [10] for the necessary details._ ### Results on \(L^{p}-\)improving properties of \(A^{t}\) We extend the \(L^{p}-\)estimates of \(A^{T}\) to the off-diagonal range. These are referred to as \(L^{p}-\)improving estimates for the operator \(A^{T}\). This completes the picture of \(L^{p}\to L^{q}-\)estimates for the operator \(A^{T}\) with \(p\leq q.\) We would like to point out that \(L^{p}-\)improving estimates for averaging operators play a crucial role in proving sparse domination of the corresponding maximal operators. For example, see [1] for this connection in the context of classical spherical maximal functions. We also refer to [14, 15, 16] for sparse domination of bilinear spherical maximal functions. However, we do not pursue this direction of sparse domination of maximal operators in this paper. We need to introduce some notation in order to state the \(L^{p}-\)improving results. Let \(\Delta\) denote the closed triangle with vertices \(A,H\) and \(E,\) where \(A=(0,0),\)\(E=\left(\frac{d-s}{d-s+1},\frac{1}{d-s+1}\right)\) and * \(H=\left(\frac{1}{1+s},\frac{1}{1+s}\right)\), if \(d=2,\) * \(H=\left(\left(1+\min\{\frac{s}{2},\frac{1}{3-s},\frac{5-2s}{9-4s}\}\right)^{-1 },\left(1+\min\{\frac{s}{2},\frac{1}{3-s},\frac{5-2s}{9-4s}\}\right)^{-1} \right),\) if \(d=3,\) * \(H=\left(\left(1+\min\{\frac{s}{d-1},\frac{1}{d-s},\frac{d-s}{3(d-s)-2}\}\right) ^{-1},\left(1+\min\{\frac{s}{d-1},\frac{1}{d-s},\frac{d-s}{3(d-s)-2}\}\right) ^{-1}\right),\) if \(d\geq 4.\) Observe that, if \(d\geq 4\) and \(d-2<s<d-1,\) then \(\min\{\frac{s}{d-1},\frac{1}{d-s},\frac{d-s}{3(d-s)-2}\}=\frac{d-s}{3(d-s)-2}.\) Similarly, if \(d=3\) and \(3/2<s<2,\) then \(\min\{\frac{s}{2},\frac{1}{3-s},\frac{5-2s}{9-4s}\}=\frac{5-2s}{9-4s}.\) With the notation as above, we have the following \(L^{p}-\)improving properties of \(A^{T}.\) **Theorem 2.7**.: _Let \(T\subset\mathbb{R}^{d}\) be a compact set with finite \(s\)-dimensional upper Minkowski content for \(0\leq s<d-1\). Then \(A^{T}\) is bounded from \(L^{p}(\mathbb{R}^{d})\) into \(L^{q}(\mathbb{R}^{d})\) for each of the following cases._ 1. _when_ \(d=2\) _and_ \((\frac{1}{p},\frac{1}{q})\in\Delta\setminus\{E,H\}\)_,_ 2. _when_ \(d=3\)_,_ \(s\leq 3/2\) _and_ \((\frac{1}{p},\frac{1}{q})\in\Delta\setminus\{E,H\}\)_, and for_ \(3/2<s<2\) _and_ \((\frac{1}{p},\frac{1}{q})\in\Delta\setminus[E,H]\)_,_ _._ 3. _when_ \(d\geq 4\)_,_ \(s\leq d-2\) _and_ \((\frac{1}{p},\frac{1}{q})\in\Delta\setminus\{E,H\}\)_, and for_ \(d-2<s<d-1\) _and_ \((\frac{1}{p},\frac{1}{q})\in\Delta\setminus[E,H]\)_._ _Moreover, \(A^{T}\) is of restricted weak-type \((p,q)\), i.e., it is bounded from the Lorentz space \(L^{p,1}(\mathbb{R}^{d})\) into \(L^{q,\infty}(\mathbb{R}^{d}),\) for each of the following end-point values of \(p\) and \(q\)._ 1. _when_ \(d=2\) _and_ \((\frac{1}{p},\frac{1}{q})=E,H\)_,_ 2. 
_when_ \(d=3\)_,_ \(s\leq 3/2\) _and_ \((\frac{1}{p},\frac{1}{q})=E,H\)_, and_ \(3/2<s<2\) _and_ \((\frac{1}{p},\frac{1}{q})=E\)_,_ 3. _when_ \(d\geq 4\)_,_ \(s\leq d-2\) _and_ \((\frac{1}{p},\frac{1}{q})=E,H\)_, and_ \(d-2<s<d-1\) _and_ \((\frac{1}{p},\frac{1}{q})=E\)_._ The following remarks point out the sharpness of some indices in Theorem 2.7. _Remark 2.8_.: For \(0<s\leq 1\), the restricted weak-type estimate at the point \(H\) is sharp in the sense that \(A^{T}\) does not map \(L^{p,r}(\mathbb{R}^{d})\) into \(L^{q,\infty}(\mathbb{R}^{d})\) boundedly for any \(r>1\) and \((\frac{1}{p},\frac{1}{q})=H\). We will provide an example in Section 10 to support this assertion. _Remark 2.9_.: The point \(E\) in Theorem 2.7 is sharp. This can be verified by taking \(T=\{0\}^{d-\lceil s\rceil}\times C_{s}\), where \(C_{s}\subset\mathbb{R}^{\lceil s\rceil}\) is a self similar \(s\)-dimensional set. By taking \(f=\chi_{B(0,\delta)}\) for a small \(\delta>0\), we can show that the operator \(A^{T}\) is unbounded from \(L^{p}(\mathbb{R}^{d})\) to \(L^{q}(\mathbb{R}^{d})\) if \[\frac{d}{(d-1)p}-\frac{1-s}{(d-1)q}>1.\] This shows the sharpness of the point \(E\). ## 3. Nikodym maximal functions associated with sphere The study of Kakeya and Nikodym maximal functions is a classical topic in harmonic analysis and geometric measure theory. These objects have been greatly studied to understand some of the most interesting phenomena in harmonic analysis, which includes the Fourier restriction conjecture and the Bochner-Riesz problem. We refer the reader to [1, 10, 11] for more details on these maximal functions. Recently, in [1], the authors introduced the Nikodym maximal function associated with spheres and studied its \(L^{p}-\)boundedness properties. In this article, we are concerned with the lacunary analogues of these maximal functions in linear and bilinear setting. ### Linear Nikodym maximal function \(N^{\delta}\): Let \(0<\delta<1/2\), the Nikodym maximal function \(N^{\delta}\) associated with sphere \(\mathbb{S}^{d-1}\) is defined by \[N^{\delta}f(x):=\sup_{u\in\mathbb{S}^{d-1}}\frac{1}{|S^{\delta}(0)|}\Big{|} \int_{S^{\delta}(0)}f(x+u+y)\ dy\Big{|},\] where \(S^{\delta}(0)\) denotes the \(\delta\)- neighborhood of the unit sphere \(\mathbb{S}^{d-1}\). The \(L^{p}-\)estimates for the operator \(N^{\delta}\) are studied in [1]. Consider the corresponding lacunary maximal function defined by \[N^{\delta}_{lac}f(x):=\sup_{u\in\mathbb{S}^{d-1}}\sup_{k\in\mathbb{Z}}\frac{1 }{|S^{\delta}(0)|}\Big{|}\int_{S^{\delta}(0)}f(x+2^{k}(u+y))\ dy\Big{|}.\] We have the following bounds for the operator \(N^{\delta}_{lac}\). **Theorem 3.1**.: _Let \(0<\delta<1/2\) and \(\epsilon>0\). Then the operator \(N^{\delta}_{lac}\) satisfies the following estimates_ 1. _For_ \(d=2\) _we have_ \[\|N^{\delta}_{lac}\|_{p}\lesssim\begin{cases}\delta^{1-\frac{2}{p}-\epsilon}\| f\|_{p}&\text{if }1<p\leq 2,\\ \delta^{-\epsilon}\|f\|_{p}&\text{if }2\leq p<\infty.\end{cases}\] _._ 2. _For_ \(d=3\) _we have_ \[\|N_{lac}^{\delta}\|_{p}\lesssim\begin{cases}\delta^{\frac{3}{2}-\frac{k}{2p}- \epsilon}\|f\|_{p}&\text{if }1<p\leq\frac{3}{2},\\ \delta^{\frac{1}{2}-\frac{1}{p}-\epsilon}\|f\|_{p}&\text{if }\frac{3}{2}<p\leq 2,\\ \delta^{-\epsilon}\|f\|_{p}&\text{if }2\leq p<\infty.\end{cases}\] 3. 
_Finally, for_ \(d\geq 4\) _we have_ \[\|N_{lac}^{\delta}\|_{p}\lesssim\begin{cases}\delta^{1-\frac{2}{p}-\epsilon}\| f\|_{p}&\text{if }1<p\leq\frac{4}{3},\\ \delta^{1-\frac{2}{p}-\epsilon}\|f\|_{p}&\text{if }\frac{4}{3}<p\leq 2,\\ \delta^{-\epsilon}\|f\|_{p}&\text{if }2\leq p<\infty.\end{cases}\] ### Bilinear Nikodym maximal function \(\mathcal{N}^{\delta}\): Finally, we consider the bilinear analogue of the Nikodym maximal function \(N^{\delta}\) defined by \[\mathcal{N}^{\delta}(f,g)(x):=\sup_{u,v\in\mathbb{S}^{d-1}}\frac{1}{|S^{ \delta}(0)|}\left|\int_{S^{\delta}(0)}f(x+u+y)g(x+v+z)\;d(y,z)\right|,\] where \(0<\delta<\frac{1}{2}\) and as earlier \(S^{\delta}(0)=\{(y,z)\in\mathbb{R}^{2d}:\;1-\delta<|(y,z)|<1+\delta\}\). We prove the following \(L^{p}-\)estimates for the operator \(\mathcal{N}^{\delta}\). **Theorem 3.2**.: _Let \(1\leq p_{1},p_{2},p\leq\infty\) be such that \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p}.\) Then_ 1. \(\|\mathcal{N}^{\delta}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p}}\lesssim 1\) _if_ \((p_{1},p_{2})\neq(1,1).\)__ 2. \(\|\mathcal{N}^{\delta}\|_{L^{1}\times L^{1}\to L^{\frac{1}{2}}}\lesssim\delta ^{-1}.\)__ ## 4. Bilinear maximal functions ### Bilinear uncentered spherical maximal functions Let \(T\subset\mathbb{R}^{d},d\geq 1\), be a compact set with finite \(s\)-dimensional upper Minkowski content, where \(0\leq s\leq d\). Observe that here we allow \(d=1\) as well. Define the uncentered bilinear spherical average of dyadic scale \(2^{k}\) associated with \(T\) by \[\mathcal{N}^{T}_{2^{k}}(f,g)(x):=\sup_{u,v\in T}\left|\int_{\mathbb{S}^{2d-1} }f(x+2^{k}(u+y))g(x+2^{k}(v+z))\;d\sigma_{2d-1}(y,z)\right|.\] Note that this can be rewritten in terms of Fourier transform as follows \[\mathcal{N}^{T}_{2^{k}}(f,g)(x)=\sup_{u,v\in T}\Big{|}\int_{\mathbb{R}^{2d}} \widehat{\sigma}_{2d-1}(2^{k}\xi,2^{k}\eta)e^{2\pi\imath x\cdot(\xi+\eta)} \widehat{f}(\xi)\widehat{g}(\eta)e^{2\pi\imath x\cdot(\xi+\eta)}\;d\xi d\eta \Big{|},\] where \(\widehat{\sigma}_{2d-1}\) denotes the Fourier transform of surface measure of the sphere \(\mathbb{S}^{2d-1}\). It is well-known that the pointwise a.e. convergence \[\lim_{k\to-\infty}\mathcal{N}^{T}_{2^{k}}(f,g)(x)=\widehat{\sigma}_{2d-1}(0) f(x)g(x)\text{ and }\lim_{k\to+\infty}\mathcal{N}^{T}_{2^{k}}(f,g)(x)=0\] can be deduced by proving appropriate \(L^{p}-\)estimates for the corresponding lacunary maximal function defined by \[\mathcal{N}^{T}_{lac}(f,g)(x):=\sup_{k\in\mathbb{Z}}\;\mathcal{N}^{T}_{2^{k}} (f,g)(x).\] Observe that if \(T=\{\vec{0}\}\), the maximal function \(\mathcal{N}^{0}_{lac}\) reduces to the classical bilinear lacunary spherical maximal function \[\mathcal{S}_{lac}(f,g)(x):=\sup_{k\in\mathbb{Z}}\left|\int_{\mathbb{S}^{2d-1} }f(x+2^{k}y)g(x+2^{k}z)\;d\sigma_{2d-1}(y,z)\right|.\] The operator \(\mathcal{S}_{lac}\) is well studied. For \(d=1\), Christ and Zhou [2] proved \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\to L^{p}(\mathbb{R})-\)boundedness of this operator for almost complete range of exponents \(p_{1},p_{2}\) and \(p\). We also for refer to [1] for \(L^{p}-\) boundedness of the continous analogue of \(\mathcal{S}_{lac}\) and [1] for its sharp endpoints bounds in dimension one. Recently, Borges and Foster [1] and Cho, Lee and Shuin [11] have proved \(L^{p_{1}}(\mathbb{R}^{d})\times L^{p_{2}}(\mathbb{R}^{d})\to L^{p}(\mathbb{R}^{d})-\)boundedness of \(\mathcal{S}_{lac}\) for \(d\geq 2\), except for border line cases. Also see [1] for bilinear maximal functions defined on degenerate surfaces. 
We extend these results to the context of the lacunary maximal function \(\mathcal{N}_{lac}^{T}\). Given a set of point \(\{X_{1},X_{2},\ldots,X_{k}\}\) in \(\mathbb{R}^{2}\), let \(\Omega(\{X_{1},X_{2},\ldots,X_{k}\})\) denote the open convex hull of all the points \(X_{1},X_{2},\ldots,X_{k}\). We define the points \(O=(0,0)\), \(A=(1,0)\), and \(B=(0,1)\). The points \(P=P(d,s),\ Q=Q(d,s)\) and \(R=R(d,s)\) are defined as follows, \[P=\left(\max\left\{\frac{3d-2s}{3d-2s+2},\frac{3d-2}{3d-2+2s}\right\},\max \left\{\frac{3d-2s}{3d-2s+2},\frac{3d-2}{3d-2+2s}\right\}\right),\] \[Q=\left(1,\frac{d-s-1}{d-s}\right),\ \ \ \ \text{and}\ \ \ \ R=\left(\frac{d-s-1}{d-s},1\right).\] We have the following \(L^{p}-\)estimates for the operator \(\mathcal{N}_{lac}^{T}\) in dimension \(d\geq 2\). **Theorem 4.1**.: _Let \(d\geq 2\), \(0<s\leq d\) and \(0<p_{1},p_{2},p_{3}\leq\infty\) with \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{3}}\). Then we have the following,_ 1. _The operator_ \(\mathcal{N}_{lac}^{T}\) _is bounded from_ \(L^{p_{1}}(\mathbb{R}^{d})\times L^{p_{2}}(\mathbb{R}^{d})\) _to_ \(L^{p_{3}}(\mathbb{R}^{d})\) _if_ 1. \(d=2\) _and_ \((\frac{1}{p_{1}},\frac{1}{p_{2}})\in\Omega(\{O,A,P,B\})\)_,_ 2. \(d\geq 3\)_,_ * \(0\leq s\leq d-2\) _and_ \((\frac{1}{p_{1}},\frac{1}{p_{2}})\in\Omega(\{O,A,Q,P,R,B\})\)_._ * \(d-2\leq s\leq d\) _and_ \((\frac{1}{p_{1}},\frac{1}{p_{2}})\in\Omega(\{O,A,P,B\})\)_._ 2. _For_ \(d\geq 3\) _and_ \(d-2\leq s\leq d\)_, the operator_ \(\mathcal{N}_{lac}^{T}\) _is bounded from_ \(L^{p_{1}}(\mathbb{R}^{d})\times L^{p_{2}}(\mathbb{R}^{d})\) _to_ \(L^{p_{3},\infty}(\mathbb{R}^{d})\) _if_ \((\frac{1}{p_{1}},\frac{1}{p_{2}})\) _lies on the line segments_ \(AQ\) _and_ \(BR\) _excluding the points_ \(Q,R\)_._ In dimension \(d=1\), we have the following result. **Theorem 4.2**.: _Let \(T\) be a compact set with finite \(s-\)dimensional upper Minkowski content with \(0\leq s\leq 1\). The operator \(\mathcal{N}_{lac}^{T}\) is bounded from \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) to \(L^{p_{3}}(\mathbb{R})\) for all indices \(0<p_{1},p_{2},p_{3}\leq\infty\) with \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{3}}\) in each of the following cases._ 1. \(0\leq s\leq 1\) _and_ \((\frac{1}{p_{1}},\frac{1}{p_{2}})\in\Omega\left(\{O,A,B\}\right)\)_._ 2. \(0\leq s<\frac{1}{2}\) _and_ \((\frac{1}{p_{1}},\frac{1}{p_{2}})\in\Omega\left(\{O,A,D,B\}\right),\) _where_ \(D=\left(\frac{1}{1+2s},\frac{1}{1+2s}\right).\)__ Figure 1. The figure denotes the region of boundedness of \(\mathcal{N}_{lac}^{T}\) when \((\frac{1}{p_{1}},\frac{1}{p_{2}})\) belongs to the region \(\Omega(\{O,A,P,B\})\) for \(d-2\leq s\leq d\) and the region \(\Omega(\{O,A,Q,P,R,B\})\) for \(0\leq s\leq d-2\). ## 5. Proof of Theorem 2.4: Lacunary uncentered spherical maximal operator \(N_{lac}^{T}\) To prove Theorem 2.4, we employ a multiscale decomposition of the operator. Let \(\phi\in\mathcal{S}(\mathbb{R}^{d})\) be a function such that \(\widehat{\phi}\) is supported in \(B(0,2)\) and \(\widehat{\phi}(\xi)=1\) for \(\xi\in B(0,1)\). We define \(\widehat{\phi}_{t}(\xi)=\widehat{\phi}(t\xi)\) and \(\widehat{\psi}_{t}(\xi)=\widehat{\phi}(t\xi)-\widehat{\phi}(2t\xi)\). Then, we have the identity \[\widehat{\phi}(\xi)+\sum_{j=1}^{\infty}\widehat{\psi}_{2^{-j}}(\xi)=1,\;\xi \neq 0. 
\tag{5.1}\] Using this identity, the lacunary maximal function \(N_{lac}^{T}\) can be dominated by \[N_{lac}^{T}f(x) \leq \sup_{k\in\mathbb{Z}}|A_{2^{k},0}^{T}f(x)|+\sum_{j=1}^{\infty}\sup _{k\in\mathbb{Z}}|A_{2^{k},j}^{T}f(x)|, \tag{5.2}\] where \[A_{2^{k},j}^{T}f(x)=\sup_{u\in T}|(f*\psi_{2^{k-j}}*\sigma_{2^{k }})(x+2^{k}u)|,\;j\geq 1, \tag{5.4}\] \[A_{2^{k},0}^{T}f(x)=\sup_{u\in T}|(f*\sigma_{2^{k}}*\sigma_{2^{k }})(x+2^{k}u)|. \tag{5.3}\] We need to prove suitable estimates on the intermediary lacunary operators \[M_{j}^{T}f(x)=\sup_{k\in\mathbb{Z}}|A_{2^{k},j}^{T}f(x)|,\;j\geq 0.\] First, we observe that \(M_{0}^{T}f\) can be controlled by the classical Hardy-Littlewood maximal function \(M_{HL}f\). **Proposition 5.1**.: _Let \(f\in L^{1}_{loc}(\mathbb{R}^{d})\). Then_ \[M_{0}^{T}f(x)\lesssim M_{HL}f(x),\;\;a.e\;x\in\mathbb{R}^{d}.\] Proof.: Consider \[|A_{2^{k},0}^{T}f(x)| = \sup_{u\in T}\left|\int_{\mathbb{S}^{d-1}}\phi_{2^{k}}*f(x+2^{k}( u+y))\ d\sigma(y)\right|\] \[\lesssim \sup_{u\in T,|y|=1}|\phi_{2^{k}}*f(x+2^{k}(u+y))|.\] Figure 2. The figure denotes the region \(\Omega\left(\{O,A,D,B\}\right)\). Using the fact that \(\phi\in\mathcal{S}(\mathbb{R}^{d})\) and \(T\) is compact, we get that \[\left|\phi_{2^{k}}*f(x+2^{k}(u+y))\right| = 2^{-kd}\left|\int_{\mathbb{R}^{d}}f(z)\phi(2^{-k}(x-z)+u+y)\ dz\right|\] \[\lesssim \left|\int_{\mathbb{R}^{d}}\frac{2^{-kd}f(z)}{\left(1+|2^{-k}(x-z )+u+y|\right)^{N}}dz\right|\] \[\lesssim M_{HL}f(x).\] To deal with maximal operators \(M_{j}^{T}\), we need a counting argument that bounds the \(L^{p}-\)norm of the maximal average \(A_{1,j}^{T}\) by that of certain linear operators with appropriate growth in \(j\). **Lemma 5.2** (Lemma 4.5, [15]).: _Let \(T\) be a compact subset of \(\mathbb{R}^{d}\) and \(\{T_{j}\}\) be the collection of centers of balls of radius \(2^{-j}\) covering \(T\). Then, for \(p\geq 1\), we have_ \[\|\sup_{u\in T}|f*\psi_{2^{-j}}*\sigma(\cdot+u)|\|_{L^{p}(\mathbb{R}^{d})}^{p} \lesssim\sharp T_{j}\|f*\psi_{2^{-j}}*\sigma\|_{L^{p}(\mathbb{R}^{d})}^{p}.\] We will require the following \(L^{p}-\)estimates for the maximal averages \(A_{1,j}^{T}\) from [15]. **Lemma 5.3** ([15]).: _Let \(j\in\mathbb{N}\). The following bounds hold true:_ \[\|A_{1,j}^{T}\|_{L^{1}(\mathbb{R}^{d})\to L^{1}(\mathbb{R}^{d})} \lesssim\min\{2^{j*},2^{j}\}. \tag{5.6}\] \[\|A_{1,j}^{T}\|_{L^{\frac{3}{2}}(\mathbb{R}^{d})\to L^{\frac{3}{2}}( \mathbb{R}^{d})} \lesssim j^{\frac{1}{3}2^{\frac{j}{6}}},\ \ d\geq 3,\] (5.7) \[\|A_{1,j}^{T}\|_{L^{\frac{3}{2}}(\mathbb{R}^{d})\to L^{\frac{4}{3}}( \mathbb{R}^{d})} \lesssim j^{\frac{1}{4}2^{\frac{j}{4}}},\ \ d\geq 4. \tag{5.5}\] The \(L^{1}-\)estimate above, is a consequence of Lemma 5.2 and the kernel estimate \(|\psi_{2^{-j}}*d\sigma(x)|\lesssim\frac{2^{j}}{(1+2j!|x|-1!)^{N}}\). The \(L^{p}-\)estimates at the points \(p=\frac{3}{2},\frac{4}{3}\) were obtained in [15] by proving almost sharp estimates for the Nikodym maximal function \(N^{\delta}\) defined in Section 3.1. Next, we illustrate a bootstrap argument which allows us to extend the range of \(p\) for which \(L^{p}-\)boundedness of the multiscale operator \(M_{j}^{T}\) holds, provided we have an initial \(L^{p}-\)estimate for the single scale operator \(A_{1,j}^{T}\). 
**Lemma 5.4**.: _Let \(1\leq p_{1}<p_{2}\) be such that_ \[\|A_{1,j}^{T}\|_{L^{p_{1}}\to L^{p_{1}}} \leq C_{1},\] \[\|M_{j}^{T}\|_{L^{p_{2}}\to L^{p_{2}}} \leq C_{2}.\] _Then, we have_ \[\|M_{j}^{T}\|_{L^{p}\to L^{p}}\lesssim C_{1}^{\frac{p_{1}}{2}}C_{2}^{1-\frac{p _{1}}{2}},\ \text{for}\ p=\frac{2p_{2}}{2+p_{2}-p_{1}}.\] Proof.: The proof involves a vector-value argument. Consider the following vector-valued operator \(\vec{\mathbf{A}}\) acting on a sequence of measurable functions \(f=\{f_{k}\}_{k\in\mathbb{Z}}\). \[\vec{\mathbf{A}}(\{f_{k}\}_{k\in\mathbb{Z}})(x)=\{A_{2^{k},j}^{T}(f_{k})(x)\}_{k \in\mathbb{Z}}.\] First, observe that the operator \(A_{1,j}^{T}\) readily extends to a vector-valued setting, namely we get that \[\|\vec{\mathbf{A}}\|_{L^{p_{1}}(\ell_{p_{1}})\to L^{p_{1}}(\ell_{p_{1}})}\leq C_{1}. \tag{5.8}\] Next, using the \(L^{p_{2}}-\)boundedness of \(M_{j}^{T}\), we get that \[\|\boldsymbol{\vec{A}}f\|_{L^{p_{2}}(\ell_{\infty})} = \|\sup_{k}A_{2^{k},j}^{T}(f_{k})\|_{p_{2}}\] \[\leq \|M_{j}^{T}(\sup_{m}|f_{m}|)\|_{p_{2}}\] \[\leq C_{2}\|f\|_{L^{p_{2}}(\ell_{\infty})}.\] Therefore, we have that \[\|\boldsymbol{\vec{A}}\|_{L^{p_{2}}(\ell_{\infty})\to L^{p_{2}}(\ell_{ \infty})}\leq C_{2}. \tag{5.9}\] Interpolate between (5.8) and (5.9) to deduce that \[\|\boldsymbol{\vec{A}}\|_{L^{p}(\ell_{2})\to L^{p}(\ell_{2})}\lesssim C_{1}^{ \frac{p_{1}}{2}}C_{2}^{1-\frac{p_{1}}{2}}. \tag{5.10}\] This estimate can be used to get that \[\|M_{j}^{T}f\|_{p} \leq \left\|\left(\sum_{k\in\mathbb{Z}}|A_{2^{k},j}^{T}f|^{2}\right) ^{\frac{1}{2}}\right\|_{p}\] \[\lesssim C_{1}^{\frac{p_{1}}{2}}C_{2}^{1-\frac{p_{1}}{2}}\left\|\left( \sum_{k\in\mathbb{Z}}|\psi_{2^{k-j}}*f|^{2}\right)^{\frac{1}{2}}\right\|_{p}\] \[\lesssim C_{1}^{\frac{p_{1}}{2}}C_{2}^{1-\frac{p_{1}}{2}}\|f\|_{p},\] where in the last step, we have used the Littlewood-Paley inequality (see for example, [10]). We now state the main estimates for the operators \(M_{j}^{T}\) needed to prove Theorem 2.4. **Lemma 5.5**.: _Let \(j\in\mathbb{N}\) and \(0\leq s<d-1\). We have the following estimates,_ 1. _For_ \(d\geq 2\)_,_ (5.11) \[\|M_{j}^{T}f\|_{2}\lesssim 2^{-\frac{(d-1-s)j}{2}}\|f\|_{2}.\] 2. _For_ \(d\geq 2,\;p_{0}=1+\frac{s}{2(d-1)}\)_,_ (5.12) \[\|M_{j}^{T}\|_{L^{p_{0}}(\mathbb{R}^{d})\to L^{p_{0}}(\mathbb{R}^{d})} \lesssim 2^{j\frac{s(d-1)}{2(d-1)+s}}.\] 3. _For_ \(\frac{3}{2}<p\leq 2\)_,_ (5.13) \[\|M_{j}^{T}\|_{L^{p}(\mathbb{R}^{3})\to L^{p}(\mathbb{R}^{3})} \lesssim j^{\frac{1}{p}-\frac{1}{2}}2^{-j\left(\frac{2-4s}{2}-\frac{7-3s}{p} \right)}.\] 4. _For_ \(d\geq 4,\;\frac{4}{3}<p\leq 2\)_,_ (5.14) \[\|M_{j}^{T}\|_{L^{p}(\mathbb{R}^{d})\to L^{p}(\mathbb{R}^{d})} \lesssim j^{\frac{1}{p}-\frac{1}{2}}2^{-j\left(\frac{2(d-2-s)s}{2}-\frac{2d-1- 2s}{p}\right)}.\] Proof.: **Proof of (1):** By an application of Lemma 5.2, we get that \[\|A_{1,j}^{T}f\|_{2} = \|\sup_{u\in T}|f*\psi_{2^{-j}}*\sigma(\cdot+u)|\|_{2}\] \[\lesssim 2^{\frac{js}{2}}\|f*\psi_{2^{-j}}*\sigma\|_{2}.\] We use Plancherel's identity and standard scaling argument to get that \[\|A_{2^{k},j}^{T}f\|_{2}\lesssim 2^{-\frac{(d-1-s)j}{2}}\|f\|_{2}. \tag{5.15}\] Let \(\tilde{\psi}\in\mathcal{S}(\mathbb{R}^{d})\) be such that \(\widehat{\tilde{\psi}}(\xi)=1\) on the support of \(\hat{\psi}\). Since, \(\widehat{\tilde{\psi}}_{2^{k}}\widehat{\psi}_{2^{k}}=\widehat{\psi}_{2^{k}}\) for all \(k\), we can use the orthogonality of Fourier transform in the following way to get the desired result. 
\[\|M_{j}^{T}f\|_{2}^{2} \leq \left\|\left(\sum_{k\in\mathbb{Z}}|A_{2^{k},j}^{T}f|^{2}\right)^{ \frac{1}{2}}\right\|_{2}^{2}=\left\|\left(\sum_{k\in\mathbb{Z}}|A_{2^{k},j}^{T }(\tilde{\psi}_{2^{k-j}}*f)|^{2}\right)^{\frac{1}{2}}\right\|_{2}^{2}\] \[\lesssim 2^{-(d-1-s)j}\sum_{k}\|\tilde{\psi}_{2^{k-j}}*f\|_{2}^{2}\] \[\lesssim 2^{-(d-1-s)j}\|f\|_{2}^{2}.\] **Proof of (2):** By an application of Lemma 5.4 along with the estimates (5.5) and 5.11, we have that \[\|M_{j}^{T}\|_{L^{\frac{d}{3}}\to L^{\frac{d}{3}}}\lesssim 2^{-j\frac{(d-1-s)} {4}}.\] By a recursive application of Lemma 5.4 along with (5.5) and the above estimate, we obtain the estimate 5.12. **Proof of (3):** The inequality 5.13 follows from repeated application of Lemma 5.4 along with the estimates (5.6) and 5.11. **Proof of (4):** The proof is similar to that of inequality 5.13 with the exception of using estimate 5.7 instead of 5.6. To obtain a restricted weak type inequality, we will employ an interpolation trick of Bourgain for \(M_{j}^{T}\). We state the lemma for convenience and a proof can be found in [10] (Lemma 2.6). **Lemma 5.6** ([10]).: _Let \(\epsilon_{1},\epsilon_{2}>0\). Suppose that \(\{T_{j}\}\) is a sequence of linear (or sublinear) operators such that for some \(1\leq p_{1},p_{2}<\infty\), and \(1\leq q_{1},q_{2}<\infty\),_ \[\|T_{j}(f)\|_{L^{q_{1}}}\leq M_{1}2^{\epsilon_{1}j}\|f\|_{L^{p_{1}}},\ \|T_{j}(f)\|_{L^{q_{2}}}\leq M_{2}2^{-\epsilon_{2}j}\|f\|_{L^{p_{2}}}.\] _Then \(T=\sum_{j}T_{j}\) is bounded from \(L^{p,1}\) to \(L^{q,\infty}\), i.e._ \[\|T(f)\|_{L^{q,\infty}}\lesssim M_{1}^{\theta}M_{2}^{1-\theta}\|f\|_{L^{p,1}},\] _where \(\theta=\frac{\epsilon_{2}}{\epsilon_{1}+\epsilon_{2}}\), \(\frac{1}{q}=\frac{\theta}{q_{1}}+\frac{1-\theta}{q_{2}}\) and \(\frac{1}{p}=\frac{\theta}{p_{1}}+\frac{1-\theta}{p_{2}}\)._ Since we have all the ingredients, we now conclude the proof of Theorem 2.4. Proof of Theorem 2.4.: First, we prove the restricted weak type inequality at the endpoint \(p_{0}=1+\frac{s}{d-1}\). The restricted weak type inequality \(N_{lac}^{T}:L^{1+\frac{s}{d-1},1}\to L^{1+\frac{s}{d-1},\infty}\) follows from the estimates 5.11, (5.12) along with an application of Lemma 5.6 for the operators \(M_{j}^{T}\). The proof of restricted weak type estimate for the endpoint \(p=1+\frac{1}{d-s}\) is simpler. Indeed, it follows by applying Lemma 5.6 to the endpoint estimates 5.11 and \(\|M_{j}^{T}\|_{L^{1}\to L^{1}}\lesssim 2^{j}\). The \(L^{p}-\)estimates for \(N_{lac}^{T}\) for the range \(p>1+\min\{\frac{s}{d-1},\frac{1}{d-s}\}\) follows from the interpolation of respective restricted weak type inequalities and the trivial \(L^{\infty}-\)estimate. Finally, we obtain the better \(L^{p}-\)bounds for the case \(p>1+\frac{5-2s}{9-4s}\) and \(p>1+\frac{d-s}{3(d-s)-2}\) in dimensions \(d=3\) and \(d\geq 4\) respectively by resorting to inequalities 5.13 and 5.14 for \(M_{j}^{T}\) and summing in \(j\). ## 6. Proof of Theorem 2.7: \(L^{p}-\)improving properties of \(A^{t}\) In view of the real interpolation theory, we need to establish restricted weak-type estimates for the operator \(A^{T}\) at the endpoints described in Theorem 2.7. This is obtained by decomposing the operator \(A^{T}\) at dyadic scales and a discretization of the set \(T\) adapted to each dyadic scale. We will prove suitable \(L^{p}-\)estimates on each piece of \(A^{T}\) and use the interpolation theorem due to Bourgain to get desired restricted weak-type estimates. Let \(\phi\) and \(\psi\) be as in 5.1. 
This gives us the following decomposition, \[A_{1}^{T}f(x)\leq\sum_{j=0}^{\infty}A_{1,j}^{T}f(x).\] Next, consider the covering of \(T\) by balls of radius \(\delta=2^{-j}.\) Invoking the covering argument frm Lemma 5.2, we get that \[\|A_{1,j}^{T}\|_{L^{p}}^{p}\lesssim N(T,2^{-j})\|f*\psi_{2^{-j}}*\sigma\|_{L^ {p}}^{p},\ p\geq 1.\] Recall that we have the bound \(N(T,2^{-j})\leq 2^{js}\). So by using the Fourier transform estimate \[\|A_{1,j}^{T}\|_{L^{2}\to L^{2}}\lesssim 2^{-j\left(\frac{d-s-1}{2}\right)}. \tag{6.1}\] Moreover, using the \(L^{p}-\)improving estimates of \(A_{1}^{u}\)[11] (for a fixed \(u\in T\)) and applying the covering argument we get \[\|A_{1,j}^{T}\|_{L^{\frac{d+1}{2}}\to L^{d+1}}\lesssim 2^{j\left(\frac{s}{ 2+1}\right)}. \tag{6.2}\] Now using the estimates (6.1), (5.5), and (6.2), and applying Bourgain's interpolation trick [Lemma 2.6, [Lee03]] we get \[\|A_{1}^{T}f\|_{L^{q,\infty}}\lesssim\|f\|_{L^{p,1}},\] where \((1/p,1/q)=H\) and \(E\). This completes the proof. ## 7. Proof of Theorems 3.1 and 3.2: Nikodym maximal operators ### Proof of Theorem 3.1: Theorem 3.1 follows by using similar arguments as in the case Theorem 2.4. We give a brief sketch of the proof. Let us first, recall the \(L^{p}-\)estimates for the single scale operator \(N_{1}^{\delta}\) from [Theorem 1.3, [CDK22]]. \[\|N_{1}^{\delta}\|_{L^{1}(\mathbb{R}^{d})\to L^{1}(\mathbb{R}^{d})} \lesssim\delta^{-1} \tag{7.2}\] \[\|N_{1}^{\delta}\|_{L^{\frac{3}{2}}(\mathbb{R}^{d})\to L^{\frac{ 3}{2}}(\mathbb{R}^{d})} \lesssim\delta^{-\frac{1}{6}}(-\log\delta)^{\frac{1}{3}}\] (7.3) \[\|N_{1}^{\delta}\|_{L^{\frac{4}{6}}(\mathbb{R}^{d})\to L^{\frac{ 4}{3}}(\mathbb{R}^{d})} \lesssim\delta^{-\frac{1}{4}}(-\log\delta)^{\frac{1}{4}}. \tag{7.1}\] Denote \[N_{k}^{\delta}f(x)=\sup_{u\in\mathbb{S}^{d-1}}\frac{1}{|S^{\delta}(0)|}\Big{|} \int_{S^{\delta}(0)}f(x+2^{k}(u+y))dy\Big{|}.\] As in the proof of Theorem 3.1, using the identity (5.1), we get the following decomposition of \(N_{k}^{\delta}f.\) \[N_{k}^{\delta}f(x)=N_{k}^{\delta}(\phi_{2^{k}}*f)(x)+\sum_{j=1}^{\infty}N_{k}^ {\delta}(\psi_{2^{k-j}}*f)(x).\] Consequently, we get that \[N_{lac}^{\delta}f(x) \leq \sup_{k\in\mathbb{Z}}|N_{k}^{\delta}(\phi_{2^{k}}*f)(x)|+\sum_{j= 1}^{\infty}\sup_{k\in\mathbb{Z}}|N_{k}^{\delta}(\psi_{2^{k-j}}*f)(x)|. \tag{7.4}\] Let \(M_{j}^{\delta}\) denote the intermediary maximal operator \[M_{j}^{\delta}f(x)=\sup_{k\in\mathbb{Z}}|N_{k}^{\delta}(\psi_{2^{k-j}}*f)(x)|.\] Observe that the arguments used in Proposition 5.1 apply here as well and we get the corresponding estimate for \(M_{0}^{\delta}f,\) namely, \[M_{0}^{\delta}f(x)\lesssim M_{HL}f(x),\ \text{a.e}\ x\in\mathbb{R}^{d}.\] Next, we have the following \(L^{2}-\)estimate for the operators \(M_{j}^{\delta}\). **Lemma 7.1**.: _For \(d\geq 2\) and \(j\in\mathbb{N}\), the following holds_ \[\|M_{j}^{\delta}f\|_{2}\lesssim\delta^{-\epsilon}2^{-j\epsilon}\|f \|_{2}.\] _Consequently, we get that_ \[\|N_{lac}^{\delta}\|_{2}\lesssim\delta^{-\epsilon}\|f\|_{2}.\] Proof.: We claim that for any \(0<\epsilon<1\) the following estimate holds, \[\frac{|\widehat{\chi_{S^{\delta}(0)}}(\xi)|}{|S^{\delta}(0)|} \lesssim\frac{\delta^{-\epsilon}}{\left(1+|\xi|\right)^{\frac{d-1}{2}+\epsilon}}. 
\tag{7.5}\] Indeed, to prove the above inequality it is enough to show \[|\widehat{\chi_{S^{\delta}(0)}}(\xi)|\lesssim\min\left\{\frac{1}{ \left(1+|\xi|\right)^{\frac{d+1}{2}}},\frac{\delta}{\left(1+|\xi|\right)^{ \frac{d-1}{2}}}\right\}.\] These estimates can be obtained by using the following identity and mean value theorem, \[\widehat{\chi_{S^{\delta}(0)}}(\xi) = \frac{(1+\delta)^{\frac{\delta}{2}}J_{\frac{\delta}{2}}((1+ \delta)|\xi|)}{|\xi|^{\frac{\delta}{2}}}-\frac{(1-\delta)^{\frac{\delta}{2}}J _{\frac{\delta}{2}}((1-\delta)|\xi|)}{|\xi|^{\frac{\delta}{2}}}.\] From the covering argument of Lemma 5.2 and (7.5), we obtain that \[\|N_{1}^{\delta}(\psi_{2^{-j}}*f)\|_{2} = \left\|\sup_{u\in\mathbb{S}^{d-1}}\left|f*\psi_{2^{-j}}*\frac{ \chi_{S^{\delta}(0)}}{|S^{\delta}(0)|}(\cdot+u)\right|\right\|_{2} \tag{7.6}\] \[\lesssim 2^{j\frac{d-1}{2}}\left\|f*\psi_{2^{-j}}*\frac{\chi_{S^{\delta} (0)}}{|S^{\delta}(0)|}(\cdot)\right\|_{2}\] \[\lesssim 2^{j\frac{d-1}{2}}\frac{\delta^{-\epsilon}}{2^{j\left(\frac{d-1 }{2}+\epsilon\right)}}\|f\|_{2}\] \[= \delta^{-\epsilon}2^{-j\epsilon}\|f\|_{2}.\] By an scaling argument and the estimate (7.6), we have that \[\|M_{j}^{\delta}f\|_{2}^{2} \leq \left\|\left(\sum_{k\in\mathbb{Z}}|N_{k}^{\delta}(\psi_{2^{k-j}}* f)|^{2}\right)^{\frac{1}{2}}\right\|_{2}^{2}=\left\|\left(\sum_{k\in\mathbb{Z}}|N_{k}^{ \delta}(\psi_{2^{k-j}}*\widetilde{\psi}_{2^{k-j}}*f)|^{2}\right)^{\frac{1}{2} }\right\|_{2}^{2}\] \[\lesssim \delta^{-2\epsilon}2^{-2j\epsilon}\sum_{k}\|\widetilde{\psi}_{2 ^{k-j}}*f\|_{2}^{2}\lesssim\delta^{-2\epsilon}2^{-2j\epsilon}\|f\|_{2}^{2}.\] Here, in the last step we have used the Littlewood-Paley inequality. By a bootstrapping argument similar to that in the proof of Theorem 2.4 with estimates (7.1), (7.2), (7.3) and Lemma 7.1, we obtain the required estimates for \(N_{lac}^{\delta}\). A recursive application of the arguments similar to that of Lemma 5.4 for \(M_{j}^{\delta}\) along with Lemma 7.1 and the estimates (7.2), (7.3) gives us that \[\|M_{j}^{\delta}\|_{L^{p}\to L^{p}} \lesssim 2^{-j\epsilon\left(4-\frac{\epsilon}{p}\right)}\delta^{- \epsilon}\delta^{\frac{1}{p}-\frac{1}{2}}, \text{for}\;d\geq 3,\;\frac{3}{2}<p\leq 2, \tag{7.8}\] \[\|M_{j}^{\delta}\|_{L^{p}\to L^{p}} \lesssim 2^{-j\epsilon\left(3-\frac{\epsilon}{p}\right)}\delta^{- \epsilon}\delta^{\frac{1}{p}-\frac{1}{2}}, \text{for}\;d\geq 4,\;\frac{4}{3}<p\leq 2. \tag{7.7}\] ### Proof of Theorem 3.2: The following result will be useful in computing the \(L^{\frac{1}{2}}-\)norm of the operator using appropriate \(L^{1}-\)bounds. **Lemma 7.2** ([16]).: _Let \(T\) be a bilinear operator satisfying the following._ 1. _There exists_ \(N>0\) _such that for functions_ \(f_{i},f_{j}\) _supported in the unit cubes_ \(Q_{l_{i}}\) _and_ \(Q_{l_{j}}\) _with their lower left corners at_ \(l_{i}\) _and_ \(l_{j}\) _respectively and_ \(\|l_{i}-l_{j}\|_{\infty}>N,\) _we have_ \(T(f_{i},f_{j})=0.\)__ 2. _There exists_ \(R>0\) _such that_ \(T(f,g)\) _is supported on_ \((\operatorname{supp}(f)+B(0,R))\bigcup(\operatorname{supp}(g)+B(0,R)).\)__ 3. _The operator_ \(T\) _satisfies the_ \(L^{p}-\)_estimate_ \[\|T(f,g)\|_{p_{3}}\leq C\|f\|_{p_{1}}\|g\|_{p_{2}},\] _for all functions_ \(f,g\) _supported in a fixed cube and for some exponents_ \(p_{1},p_{2}\) _and_ \(p_{3}\) _with_ \(p_{1},p_{2}\geq 1,\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p}>1\) _and_ \(p_{3}\geq p.\)__ _Then, for all \(r\in[p,p_{3}],\) we have_ \[\|T\|_{L^{p_{1}}\times L^{p_{2}}\to L^{r}}\lesssim C.\] Now we proceed with the proof of Theorem 3.2. 
First, we prove the boundedness of \(\mathcal{N}^{\delta}\) in the Banach range. We observe the trivial \(L^{\infty}\) estimate \[\|\mathcal{N}^{\delta}\|_{L^{\infty}\times L^{\infty}\to L^{\infty}}\leq 1. \tag{7.9}\] The Holder's inequality yields \[\mathcal{N}^{\delta}(f,g)(x)\leq N_{p}f(x)N_{p^{\prime}}g(x),\ \ \text{for}\ \frac{1}{p}+\frac{1}{p^{\prime}}=1,\ p\geq 1,\] where \(N_{p}f(x)=\sup\limits_{u\in\mathbb{S}^{d-1}}\left(\frac{1}{\delta}\int_{S^{ \delta}(0)}|f(x+u+y)|^{p}\ d(y,z)\right)^{\frac{1}{p}}.\) We use a change of variable to polar co-ordinates and the well-known slicing technique, see [10], to prove the following estimates for \(N_{p}f.\) Consider \[\|N_{p}f\|_{p}^{p} \leq\int_{\mathbb{R}^{d}}\sup\limits_{u\in\mathbb{S}^{d-1}}\frac {1}{\delta}\int_{t=1-\delta}^{1+\delta}\int_{\mathbb{S}^{2d-1}}|f(x+u+ty)|^{p} \ d\sigma_{2d}(y,z)t^{2d-1}dtdx\] \[\lesssim\int_{\mathbb{R}^{d}}\sup\limits_{u\in\mathbb{S}^{d-1}} \frac{1}{\delta}\int_{t=1-\delta}^{1+\delta}\int_{B_{d}(0,1)}|f(x+u+ty)|^{p} \ (1-|y|^{2})^{\frac{d-2}{2}}dydtdx\] \[\lesssim\int_{\mathbb{R}^{d}}\sup\limits_{u\in\mathbb{S}^{d-1}} \frac{1}{\delta}\int_{t=1-\delta}^{1+\delta}\int_{B_{d}(u,t)}|f(x+y)|^{p}\ dyt^{-d}dtdx\] \[\lesssim\int_{\mathbb{R}^{d}}\frac{1}{\delta}\int_{t=1-\delta}^{1 +\delta}\int_{B_{d}(0,10)}|f(x+y)|^{p}\ dydtdx\] \[\leq\frac{1}{\delta}\int_{t=1-\delta}^{1+\delta}\int_{B_{d}(0,10) }\|f\|_{p}^{p}\ dydt\] \[\lesssim\|f\|_{p}^{p}\] The estimate above implies that \(\|\mathcal{N}^{\delta}(f,g)\|_{1}\lesssim\|f\|_{p}\|g\|_{p^{\prime}}.\) The desired result for other exponents in the Banach triangle follows by interpolation. Next, we prove the \(L^{1}\times L^{1}\to L^{\frac{1}{2}}\) bound of \(\mathcal{N}^{\delta}\). We can see that \[\mathcal{N}^{\delta}(f,g)(x) \leq\frac{1}{\delta}\left|\int_{B_{2d}(0,10)}f(x+y)g(x+z)\ d(y,z)\right|\] \[\leq\frac{1}{\delta}\int_{B_{d}(0,10)}|f(x+y)|\ dy\int_{B_{d}(0,1 0)}|g(x+z)|\ dz\] \[\leq\frac{1}{\delta}\|f\|_{1}\int_{B_{d}(0,10)}|g(x+z)|\ dz.\] From the above estimates, we obtain \[\|\mathcal{N}^{\delta}(f,g)\|_{1}\lesssim\frac{1}{\delta}\|f\|_{1}\|g\|_{1}.\] The required \(L^{1}(\mathbb{R}^{d})\times L^{1}(\mathbb{R}^{d})\to L^{\frac{1}{2}}(\mathbb{R} ^{d})-\)estimate follows from an application of Lemma 7.2. Proof of Theorem 4.1: Lacunary bilinear maximal operator \(\mathcal{N}^{T}_{lac}\) in dimension \(d\geq 2\) In order to prove this theorem, we decompose the operator \(\mathcal{N}^{T}_{lac}\) into infinitely many pieces. 
Indeed, using the identity (5.1) we get for any given \(k\in\mathbb{Z}\), \[\mathcal{N}^{T}_{2^{k}}(f,g)(x)\leq\mathcal{A}^{\phi,\phi}_{2^{k}}(f,g)(x)+ \mathcal{A}^{\phi,\infty}_{2^{k}}(f,g)(x)+\mathcal{A}^{\infty,\phi}_{2^{k}}( f,g)(x)+\sum_{i,j\geq 1}\mathcal{A}^{i,j}_{2^{k}}(f,g)(x),\] where \[\mathcal{A}^{\phi,\phi}_{2^{k}}(f,g)(x)=\mathcal{N}^{T}_{2^{k}}( f\ast\phi_{2^{k}},g\ast\phi_{2^{k}})(x), \quad\mathcal{A}^{\phi,\infty}_{2^{k}}(f,g)(x)=\mathcal{N}^{T}_{2^{k}}(f\ast \phi_{2^{k}},g)(x),\] \[\mathcal{A}^{\infty,\phi}_{2^{k}}(f,g)(x)=\mathcal{N}^{T}_{2^{k} }(f,g\ast\phi_{2^{k}})(x),\text{ and }\quad\mathcal{A}^{i,j}_{2^{k}}(f,g)(x)=\mathcal{N}^{T}_{2^{k}}(f\ast\psi_{2^{ k-i}},g\ast\psi_{2^{k-j}})(x).\] Observe that \[\mathcal{N}^{T}_{2^{k}}(f\ast\phi_{2^{k}},g)(x)\] \[\leq \sup_{u\in T_{1},|y|\leq 1}|\phi_{2^{k}}\ast f(x+2^{k}(u+y))|\sup_{v \in T_{2}}\int_{\mathbb{S}^{2d-1}}|g(x+2^{k}(v+z))|d\sigma(y,z)\] \[= \sup_{u\in T_{1},|y|\leq 1}|\phi_{2^{k}}\ast f(x+2^{k}(u+y))|\sup_{ v\in T_{2}}\int_{B^{d}(0,1)}|g(x+2^{k}(v+z))|(1-|z|^{2})^{\frac{d-2}{2}}\int_{ \mathbb{S}^{d-1}}d\sigma(y)dz.\] Now, using Proposition 5.1 and boundedness of Hardy-Littlewood maximal function \(M_{HL}\) we get \(L^{1}(\mathbb{R}^{d})\times L^{1}(\mathbb{R}^{d})\to L^{\frac{1}{2},\infty}( \mathbb{R}^{d})-\)estimate of the maximal function \(\sup_{k\in\mathbb{Z}}\mathcal{N}^{T}_{2^{k}}(f\ast\phi_{2^{k}},g)\). The similar estimate holds for \(\sup_{k\in\mathbb{Z}}\mathcal{A}^{\phi,\phi}_{2^{k}}(f,g)\) and \(\sup_{k\in\mathbb{Z}}\mathcal{A}^{\infty,\phi}_{2^{k}}(f,g)\). Therefore, it remains to study boundedness of \(\mathcal{A}^{i,j}_{2^{k}}(f,g)\). We prove the following \(L^{p}-\)estimates for the operator \(\mathcal{A}^{i,j}_{1}\). **Lemma 8.1**.: _Let \(d\geq 1\) and \(0<s\leq d\). Then, for \(i,j\geq 0,\) we have_ 1. \(\|\mathcal{A}^{i,j}_{1}\|_{L^{2}\times L^{2}\to L^{1}}\lesssim 2^{-(i+j)( \frac{3d-2-2s}{4})},\)__ 2. \(\|\mathcal{A}^{i,j}_{1}\|_{L^{1}\times L^{1}\to L^{\frac{1}{2}}}\lesssim\min\{2 ^{(i+j)},2^{(i+j)s}\}.\)__ _Consequently, for \(1\leq p\leq 2,\) we get that,_ \[\|\mathcal{A}^{i,j}_{1}\|_{L^{p}\times L^{p}\to L^{p/2}}\lesssim 2^{-(i+j)\max\left\{ \frac{(3d-2}{2}-\frac{3d-2s+2s}{2p}),\frac{(3d-2s}{2}-\frac{3d+2-2s}{2p}) \right\}}. \tag{8.1}\] Proof.: Observe that the operator \(\mathcal{A}^{i,j}_{1}\) is local in nature, i.e. if \(f\) and \(g\) are supported in the unit cubes, then the support of \(\mathcal{A}^{i,j}_{1}(f,g)\) lies in a fixed neighbourhood of the unit ball. Thus, in view of Lemma 7.2, it suffices to prove \(L^{2}(\mathbb{R}^{d})\times L^{2}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d})-\)estimates for the operator \(\mathcal{A}^{i,j}_{1}\) acting on functions supported in a fixed cube. In this direction, the following estimate for bilinear averages corresponding to fixed parameters \(u\in T_{1},v\in T_{2}\) is known, see [Proposition 4.1, [GGPP23]]. \[\|\mathcal{A}^{i,j}_{u,v}(f,g)\|_{2}\lesssim(2^{i}+2^{j})^{-\frac{2d-1}{2}}2^{ \min\{i,j\}\frac{d}{2}}\|f\|_{2}\|g\|_{2}. \tag{8.2}\] Next, we use the covering argument to push the estimate above for the operator \(\mathcal{A}^{i,j}_{1}\). Let \(C_{k}\) denote the set of centers of balls of radius \(2^{-i}\) covering \(T_{k},\ k=1,2.\) Set \(\Psi_{2^{-j}}(x)=2^{jn}(1+2^{j}|x|)^{-(n+1)}\) and observe that \(|\tilde{\psi}_{2^{-j}}(x)|\leq\Psi_{2^{-j}}(x)\). 
Consider \[\|\mathcal{A}_{1}^{i,j}(f,g)\|_{2}^{2}\] \[=\int\sup_{u,v}|(f\otimes g)*(\psi_{2^{-i}}\otimes\psi_{2^{-j}})* \sigma_{2d-1}*(\tilde{\psi}_{2^{-i}}\otimes\tilde{\psi}_{2^{-j}})(x+u,x+v)|^{2} \;dx\] \[\leq\int\sup_{u,v}\left|\int\int|(f\otimes g)*(\psi_{2^{-i}} \otimes\psi_{2^{-j}})*\sigma_{2d-1}(x+u-y,x+v-z)|\Psi_{2^{-i}}\otimes\Psi_{2^{ -j}}(y,z)\;dydz\right|^{2}\;dx\] \[\leq\sum_{u\in C_{1},v\in C_{2}}\int\int\int|(f\otimes g)*(\psi_{ 2^{-i}}\otimes\psi_{2^{-j}})*\sigma_{2d-1}|^{2}(x+u-y,x+v-z)\Psi_{2^{-i}} \otimes\Psi_{2^{-j}}(y,z)\;dydzdx\] \[\lesssim\sum_{u\in C_{1},v\in C_{2}}\|(f\otimes g)*(\psi_{2^{-i} }\otimes\psi_{2^{-j}})*\sigma_{2d-1}(\cdot+u,\cdot+v)\|_{2}^{2}\] \[\lesssim\sum_{u\in C_{1},v\in C_{2}}(2^{i}+2^{j})^{-(2d-1)}2^{ \min\{i,j\}d}\|f\|_{2}^{2}\|g\|_{2}^{2}\] \[\lesssim(2^{i}+2^{j})^{-(2d-1)+s}2^{\min\{i,j\}d}\|f\|_{2}^{2}\|g \|_{2}^{2}.\] Note that here we have used the estimate (8.2) in the second to the last step. The estimate above along with the Lemma 7.2 yields the desired estimate ((i)) in Lemma 8.1. Next, estimate ((ii)) follows by using essentially the similar idea as earlier. Invoking [Proposition 4.2, [17]] we have \[\|\mathcal{A}_{u,v}^{i,j}(f,g)\|_{1}\lesssim\|f\|_{1}\|g\|_{1}.\] Observe that using the slicing argument, we get that \[\mathcal{A}_{1}^{i,j}(f,g)(x) =\sup_{(u,v)\in T}\bigg{|}\int_{0}^{1}r^{d-1}(1-r^{2})^{\frac{d- 2}{2}}\int_{\mathbb{S}^{d-1}}f*\psi_{2^{-i}}(x+u+ry)\;d\sigma(y)\] \[\qquad\qquad\qquad\int_{\mathbb{S}^{d-1}}g*\psi_{2^{-j}}(x+v+ \sqrt{1-r^{2}}z)\;d\sigma(z)\;dr\bigg{|}\] \[\leq\int_{0}^{1}r^{d-1}(1-r^{2})^{\frac{d-2}{2}}\sup_{u\in T_{1} }\left|\int_{\mathbb{S}^{d-1}}f*\psi_{2^{-i}}(x+u+ry)\;d\sigma(y)\right|\] \[\qquad\qquad\qquad\sup_{v\in T_{2}}\left|\int_{\mathbb{S}^{d-1}} g*\psi_{2^{-j}}(x+v+\sqrt{1-r^{2}}z)\;d\sigma(z)\right|\;dr.\] It is straightforward to verify that the kernel estimate \(|\psi_{2^{-i}}*d\sigma_{r}(x+u)|\lesssim\frac{2^{i}}{r^{d-1}(1+|x|)^{N}}\) holds with implicit constant independent of \(u\). This implies that \[\|\mathcal{A}_{1}^{i,j}(f,g)\|_{1}\lesssim 2^{i+j}\int_{0}^{1}\frac{dr}{\sqrt{1-r ^{2}}}\|f\|_{1}\|g\|_{1}\lesssim 2^{i+j}\|f\|_{1}\|g\|_{1}.\] As earlier, the estimate above with Lemma 7.2 yields the desired estimate ((ii)). Now, we will prove Theorem 4.1. **Proof of boundedness in the region \(\Omega(\{O,A,P,B\})\):** By interpolation, it suffices to prove weak-type estimates at the points \(A,B\) and strong type bounds on the line segment \(OP\) (excluding \(P\)). The weak-type bounds at the points \(A\) and \(B\) follow by the following pointwise inequality \[\mathcal{N}_{lac}^{T}(f,g)(x)\lesssim\min\{\|f\|_{\infty}M_{HL}g(x),\|g\|_{ \infty}M_{HL}f(x)\}.\] Indeed, for \(d\geq 2\), the estimate above follows using a slicing argument. We have, \[\mathcal{N}_{lac}^{T}(f,g)(x)\] \[=2\sup_{k\in\mathbb{Z}}\sup_{(u,v)\in T}\Big{|}\int_{B^{d}(0,1)}f (x+2^{k}(u+y))(1-|y|^{2})^{\frac{d-2}{2}}\int_{\mathbb{S}^{d-1}}g(x+2^{k}(v+ \sqrt{1-|y|^{2}}z))d\sigma(z)dy\Big{|}\] \[\lesssim M_{HL}f(x)\|g\|_{L^{\infty}}.\] The other estimate follows similarly. Next, we need to prove \(L^{p}-\)boundedness at points on the line segment \(OP\) (excluding \(P\)). Fix \(p>1+\min\{\frac{2s}{3d-2},\frac{2}{3d-2s}\}\). 
Using standard dilation argument, we have that \(\mathcal{A}_{2^{k}}^{i,j}(f,g)(x)=\mathcal{A}_{1}^{i,j}(D_{2^{k}}f,D_{2^{k}}g)( 2^{-k}x)\), where \(D_{2^{k}}f(x)=f(2^{k}x).\) Therefore, \[\|\mathcal{A}_{2^{k}}^{i,j}\|_{L^{p}\times L^{p}\to L^{\frac{p}{2}}}=\| \mathcal{A}_{1}^{i,j}\|_{L^{p}\times L^{p}\to L^{\frac{p}{2}}}=:C(i,j,p), \tag{8.3}\] where the operator norm satisfies the following bound (see Lemma 8.1), \[C(i,j,p)\leq 2^{-(i+j)\max\{\big{(}\frac{3d-2}{2}-\frac{3d-2i+2s}{2p}\big{)},(\frac{3d-2s}{2}-\frac{3d+2-2s}{2p})\}}.\] Since \(\sup\limits_{|k|\leq L}\mathcal{N}_{2^{k}}^{T}(f,g)(x)\uparrow\mathcal{N}_{ lac}^{T}(f,g)(x)\) as \(L\to\infty\) in view of the monotone convergence theorem, it is enough to prove that \[\left\|\sup\limits_{\|k\|\leq L}\mathcal{N}_{2^{k}}^{T}(f,g)\right\|_{L^{ \frac{p}{2}}}\leq\alpha_{L}(p)\|f\|_{L^{p}}\|g\|_{L^{p}}, \tag{8.4}\] where the constant \(\alpha_{L}(p)\) is independent of \(L\). We consider the following vector-valued operator to complete the proof, also see [10] for a similar argument. Now, invoking the estimate (8.4) we get \[\left\|\mathfrak{M}_{L}^{i,j}\big{(}\{f_{k}\}_{|k|\leq L},\{g_{k} \}_{|k|\leq L}\big{)}\right\|_{L^{\frac{p}{2}}(\ell_{\infty})} = \left\|\sup\limits_{|k|\leq L}\mathcal{A}_{2^{k}}^{i,j}(f_{k},g_{ k})\right\|_{L^{\frac{p}{2}}}\] \[\lesssim \left\|\sup\limits_{|k|\leq L}\mathcal{A}_{2^{k}}\Big{(}M_{HL}( \sup\limits_{|k|\leq L}|f_{k}|),M_{HL}(\sup\limits_{|k|\leq L}|g_{k}|)\Big{)} \right\|_{L^{\frac{p}{2}}}\] \[\leq \alpha_{L}(p)\left\|M_{HL}(\sup\limits_{|k|\leq L}|f_{k}|)\right\| _{L^{p}}\left\|M_{HL}(\sup\limits_{|k|\leq L}|g_{k}|)\right\|_{L^{p}}\] \[\lesssim \alpha_{L}(p)\left\|\{f_{k}\}_{|k|\leq L}\|_{L^{p}(\ell_{\infty}) }\|\{g_{k}\}_{|k|\leq L}\|_{L^{p}(\ell_{\infty})}.\] On the other hand, using (8.3) we get \[\left\|\mathfrak{M}_{L}^{i,j}\big{(}\{f_{k}\}_{|k|\leq L},\{g_{k} \}_{|k|\leq L}\big{)}\right\|_{L^{\frac{p}{2}}(\ell_{\frac{p}{2}})} = \left\|\Big{\{}\mathcal{A}_{2^{k}}^{i,j}(f_{k},g_{k}):|k|\leq L \Big{\}}\right\|_{L^{\frac{p}{2}}(\ell_{\frac{p}{2}})}\] \[\leq C(i,j,p)\Big{(}\sum\limits_{|k|\leq L}\|f_{k}\|_{L^{p}}^{\frac{p }{2}}\|g_{k}\|_{L^{p}}^{\frac{p}{2}}\Big{)}^{\frac{p}{2}}\] \[\leq C(i,j,p)\left\|\{f_{k}\}_{|k|\leq L}\right\|_{L^{p}(\ell_{p})}\| \{g_{k}\}_{|k|\leq L}\big{\|}_{L^{p}(\ell_{p})}\] \[\leq C(i,j,p)\left\|\{f_{k}\}_{|k|\leq L}\right\|_{L^{p}(\ell_{1})} \|\{g_{k}\}_{|k|\leq L}\big{\|}_{L^{p}(\ell_{1})}\,.\] Interpolate between estimates (8.5) and (8.6) to get that \[\left\|\mathfrak{M}_{L}^{i,j}\Big{(}\{f_{k}\}_{|k|\leq L}\times\{g_{k}\}_{|k| \leq L}\Big{)}\right\|_{L^{\frac{p}{2}}(\ell_{p})}\leq(\alpha_{L}(p)C(i,j,p))^{ \frac{1}{2}}\left\|\{f_{k}\}_{|k|\leq L}\right\|_{L^{p}(\ell_{2})}\left\|\{g_ {k}\}_{|k|\leq L}\right\|_{L^{p}(\ell_{2})}.\] Applying Littlewood-Paley theory, we get \[\left\|\sup\limits_{|k|\leq L}\mathcal{A}_{2^{k}}^{i,j}(f,g)\right\|_{L^{\frac{ p}{2}}}\leq\alpha_{L}(p)^{1/2}C(i,j,p)^{1/2}\|f\|_{L^{p}}\|g\|_{L^{p}}.\] Therefore, when \(p\geq 2\), we obtain \[\alpha_{L}(p) \leq \alpha_{L}(p)^{1/2}\sum\limits_{i,j\geq 1}C(i,j,p)^{1/2}+1\] \[\implies\alpha_{L}(p) \lesssim \Big{(}\sum\limits_{i,j\geq 1}C(i,j,p)^{1/2}\Big{)}^{2}+1<\infty.\] When \(1+\min\{\frac{2s}{3d-2},\frac{2}{3d-2s}\}<p<2\), we have \[\alpha_{L}(p)\lesssim\Big{(}\sum_{i,j\geq 1}C(i,j,p)^{\frac{p}{2}}\Big{)}^{\frac{ 2}{p}}+1<\infty.\] **Proof of boundedness in the region \(\Omega(\{O,A,Q,P,R,B\})\):** It remains to prove the boundedness in the triangles \(\Omega(\{A,Q,P\})\) and \(\Omega(\{P,R,B\})\). 
We prove the boundedness in the region \(\Omega(\{A,Q,P\})\); the other case follows by symmetry. Moreover, by interpolation, it is enough to prove weak type bounds on the line segment \(AQ\), excluding the point \(Q\). Let \(A_{l}=\{y\in\mathbb{R}^{d}:2^{-l}<\sqrt{1-|y|^{2}}\leq 2^{-l+1}\}\). By a slicing argument, we can write the integral as \[\begin{split}&\left|\int_{\mathbb{S}^{2d-1}}f(x+2^{k}(u+y))g(x+2^{k}(v+z))\;d\sigma_{2d-1}(y,z)\right|\\ &=2\left|\int_{B(0,1)}f(x+2^{k}(u+y))(1-|y|^{2})^{\frac{d-2}{2}}\int_{\mathbb{S}^{d-1}}g(x+2^{k}(v+\sqrt{1-|y|^{2}}z))\;d\sigma(z)dy\right|\\ &=2\left|\sum_{l=1}^{\infty}\int_{A_{l}}f(x+2^{k}(u+y))(1-|y|^{2})^{\frac{d-2}{2}}\int_{\mathbb{S}^{d-1}}g(x+2^{k}(v+\sqrt{1-|y|^{2}}z))\;d\sigma(z)dy\right|\\ &\lesssim\sum_{l=1}^{\infty}2^{-l(d-2)}\left(\sup_{2^{-l}<r\leq 2^{-l+1}}\left|\int_{\mathbb{S}^{d-1}}g(x+2^{k}(v+rz))\;d\sigma(z)\right|\right)\int_{A_{l}}|f(x+2^{k}(u+y))|\;dy.\end{split}\] Therefore, we get that \[\mathcal{N}^{T}_{lac}(f,g)(x)\lesssim\sum_{l=1}^{\infty}2^{-l(d-2)}N_{T,l}(g)(x)M_{l}(f)(x),\] where \[N_{T,l}(g)(x)=\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}\left|\int_{\mathbb{S}^{d-1}}g(x+2^{k}(v+rz))\;d\sigma(z)\right|=\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}\left|g*\sigma_{2^{k}r}(x+2^{k}v)\right|\] and \[M_{l}(f)(x)=\sup_{k\in\mathbb{Z}}\sup_{u\in T_{1}}\int_{A_{l}}|f(x+2^{k}(u+y))|\;dy.\] We can see that \(M_{l}\) maps \(L^{1}(\mathbb{R}^{d})\) to \(L^{1,\infty}(\mathbb{R}^{d})\) for all \(1\leq l<\infty\) with a constant independent of \(l\). Now, we provide \(L^{p}\)-estimates for the intermediary operators \(N_{T,l}\). **Lemma 8.2**.: _Let \(d\geq 3\) and \(0\leq s<d-2\). Suppose that \(T\subset\mathbb{R}^{d}\) is a compact set with finite \(s\)-dimensional upper Minkowski content. Then_ \[\|N_{T,l}(f)\|_{p}\lesssim 2^{l(\frac{2d-s}{p}-d+s)}\|f\|_{p}\;\;\;\mbox{for}\;\;\;p>1+\frac{1}{d-s-1}.\] Proof.: We fix \(2^{-l}<r\leq 2^{-l+1}\). Then we have the following decomposition of the measure \(\sigma_{2^{k}r}\) by the identity (5.1), \[\sigma_{2^{k}r}=\phi_{2^{k+l}r}*\sigma_{2^{k}r}+\sum_{j=1}^{l}\psi_{2^{k+l-j}r}*\sigma_{2^{k}r}+\sum_{j=1}^{\infty}\psi_{2^{k-j}r}*\sigma_{2^{k}r}.\] Therefore, we have \[\begin{split}N_{T,l}(f)(x)&\leq\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\phi_{2^{k+l}r}*\sigma_{2^{k}r}(x+2^{k}v)|\\ &\quad+\sum_{j=1}^{l}\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k+l-j}r}*\sigma_{2^{k}r}(x+2^{k}v)|\\ &\quad+\sum_{j=1}^{\infty}\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k-j}r}*\sigma_{2^{k}r}(x+2^{k}v)|\\ &=:I+II+III.\end{split}\] For the first term, we have the following kernel estimate \[|\phi_{2^{k+l}r}*\sigma_{2^{k}r}(x+2^{k}v)|\lesssim 2^{-kd}\left(1+\frac{|x+2^{k}v|}{2^{k}}\right)^{-N}\lesssim 2^{-kd}(1+2^{-k}|x|)^{-N}.\] This implies that \(I\lesssim M_{HL}f(x)\) and hence, we have \(L^{p}\)-boundedness of the term \(I\) for all \(1<p\leq\infty\). The kernel estimate of the second term, \[|\psi_{2^{k+l-j}r}*\sigma_{2^{k}r}(x+2^{k}v)|\lesssim 2^{(j-k)d}\left(1+\frac{|x+2^{k}v|}{2^{k}}\right)^{-N}\lesssim 2^{(j-k)d}(1+2^{-k}|x|)^{-N},\] gives that \[\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k+l-j}r}*\sigma_{2^{k}r}(x+2^{k}v)|\lesssim 2^{jd}M_{HL}f(x). \tag{8.7}\]
By a covering argument similar to Lemma 5.2, we can get that \[\begin{split}\left\|\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k+l-j}r}*\sigma_{2^{k}r}(\cdot+2^{k}v)|\right\|_{2}&\lesssim 2^{\frac{js}{2}}\left\|\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k+l-j}r}*\sigma_{2^{k}r}(\cdot)|\right\|_{2}\\ &\lesssim 2^{\frac{js-(j-l)(d-2)}{2}}\|f\|_{2},\end{split}\] where the last inequality can be obtained from the method used for the \(L^{2}\)-boundedness of the spherical maximal function (\(d\geq 3\)) [10]. From the Littlewood-Paley inequality, we can get that \[\begin{split}&\left\|\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k+l-j}r}*\sigma_{2^{k}r}(\cdot+2^{k}v)|\right\|_{2}\\ &\lesssim\left(\sum_{k\in\mathbb{Z}}\left\|\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k+l-j}r}*\sigma_{2^{k}r}(\cdot+2^{k}v)|\right\|_{2}^{2}\right)^{\frac{1}{2}}\\ &\lesssim 2^{\frac{js-(j-l)(d-2)}{2}}\|f\|_{2}.\end{split}\tag{8.8}\] Interpolating between the \(L^{1}\)-estimate (8.7) and the \(L^{2}\)-estimate (8.8), we get that \[\left\|\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k+l-j}r}*\sigma_{2^{k}r}(\cdot+2^{k}v)|\right\|_{p}\lesssim 2^{j(\frac{3d-2-s}{p}-2d+2+s)}2^{l(d-2-\frac{d-2}{p})}\|f\|_{p}.\] Now summing in \(j\), we get that \[\|II\|_{p}\lesssim 2^{l(\frac{2d-s}{p}-d+s)}\|f\|_{p}.\] For the third term, we have the kernel estimate \[|\psi_{2^{k-j}r}*\sigma_{2^{k}r}(x+2^{k}v)|\lesssim\frac{2^{j}}{2^{kd}r^{d}}\left(1+\frac{|x+2^{k}v|}{2^{k}r}\right)^{-N}\lesssim\frac{2^{j}}{r^{d}}\chi_{B(0,2)}(x)+\frac{2^{j}}{(1+|x|)^{N}}\chi_{B(0,2)^{c}}(x).\] Thus, we have \[\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k-j}r}*\sigma_{2^{k}r}(x+2^{k}v)|\lesssim 2^{j}2^{ld}M_{HL}f(x). \tag{8.9}\] Next, we prove the \(L^{2}\)-estimate of the third term. By the covering argument, we have that \[\begin{split}&\left\|\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k-j}r}*\sigma_{2^{k}r}(\cdot+2^{k}v)|\right\|_{2}\\ &\lesssim\left(\sum_{k\in\mathbb{Z}}\left\|\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k-j}r}*\sigma_{2^{k}r}(\cdot+2^{k}v)|\right\|_{2}^{2}\right)^{\frac{1}{2}}\\ &\lesssim\left(\sum_{k\in\mathbb{Z}}2^{(j+l)s}\left\|\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k-j}r}*\sigma_{2^{k}r}(\cdot)|\right\|_{2}^{2}\right)^{\frac{1}{2}}\\ &\lesssim 2^{\frac{ls}{2}}2^{-j\frac{d-2-s}{2}}\|f\|_{2}.\end{split}\tag{8.10}\] Interpolating between the \(L^{1}\)-estimate (8.9) and the \(L^{2}\)-estimate (8.10), we get that for \(1<p\leq 2\) \[\left\|\sup_{k\in\mathbb{Z}}\sup_{v\in T_{2}}\sup_{2^{-l}<r\leq 2^{-l+1}}|f*\psi_{2^{k-j}r}*\sigma_{2^{k}r}(\cdot+2^{k}v)|\right\|_{p}\lesssim 2^{-j(d-s-1-\frac{d-s}{p})}2^{l(\frac{2d-s}{p}-d+s)}\|f\|_{p}.\] Then, summing in \(j\) concludes the proof of Lemma 8.2. We now complete the proof of the boundedness of \(\mathcal{N}^{T}_{lac}\) at points on the line segment \(AQ\), excluding the point \(Q\). By Lemma 8.2 and the weak type \((1,1)\) boundedness of \(M_{l}\), we get that \[\|\mathcal{N}^{T}_{lac}(f,g)\|_{\frac{p}{p+1},\infty}\lesssim\sum_{l=1}^{\infty}2^{-l(d-2)}2^{l(\frac{2d-s}{p}-d+s)}\|f\|_{1}\|g\|_{p}\lesssim\|f\|_{1}\|g\|_{p},\] where the series in \(l\) is summable for \(p>1+\frac{1}{d-s-1}\), since in that range the exponent \(\frac{2d-s}{p}-2d+2+s\) is negative. This completes the proof of Theorem 4.1.
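As a quick arithmetic check of the interpolation step for the third term (a verification added here for the reader; it is not part of the original argument), write \(\theta=2-\frac{2}{p}\), so that \(\frac{1}{p}=\frac{1-\theta}{1}+\frac{\theta}{2}\). Combining the bounds \(2^{j}2^{ld}\) from (8.9) and \(2^{\frac{ls}{2}}2^{-j\frac{d-2-s}{2}}\) from (8.10) then produces the exponents \[(1-\theta)-\theta\,\frac{d-2-s}{2}=\frac{d-s}{p}-(d-s-1)\quad\text{and}\quad(1-\theta)d+\theta\,\frac{s}{2}=\frac{2d-s}{p}-d+s\] for \(j\) and \(l\) respectively, which agree with the stated estimate. In particular, the \(j\)-sum converges exactly when \(p>1+\frac{1}{d-s-1}\), the range appearing in Lemma 8.2.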
## 9. Proof of Theorem 4.2: Lacunary bilinear maximal operator \(\mathcal{N}^{T}_{lac}\) in dimension \(d=1\)

We consider a new (sub)linear operator \[\mathbf{A}^{T}_{lac}f:=\sup_{u\in T,k\in\mathbb{Z}}\Big|\int_{\mathbb{S}^{1}}f(x+2^{k}(u+y))\ d\sigma(y,z)\Big|.\] **Lemma 9.1**.: _Let \(T\) be a compact subset of \(\mathbb{R}\). Then \(\mathbf{A}^{T}_{lac}\) is bounded in \(L^{p}(\mathbb{R})\) for \(1<p\leq\infty\), and \(\mathbf{A}^{T}_{lac}\) maps \(L^{1}(\mathbb{R})\) to \(L^{1,\infty}(\mathbb{R})\)._ Proof.: Note that, up to symmetric contributions from \(t\in[-1,0]\) which are handled identically, the operator above is dominated by \[\sup_{u\in T,k\in\mathbb{Z}}\int_{0}^{1}|f(x+2^{k}(u+t))|\ \frac{dt}{\sqrt{1-t^{2}}}.\] Moreover, we can write \[\int_{0}^{1}|f(x+2^{k}(u+t))|\ \frac{dt}{\sqrt{1-t^{2}}}\lesssim\sum_{j=1}^{\infty}2^{\frac{j}{2}}\int_{1-2^{-(j-1)}}^{1-2^{-j}}|f(x+2^{k}(u+t))|\ dt=\sum_{j=1}^{\infty}\frac{2^{-\frac{j}{2}}}{|I_{j}|}\int_{I_{j}}|f(x+2^{k}(u+t))|\ dt,\] where \(I_{j}=[1-2^{-(j-1)},1-2^{-j}]\), so that \(|I_{j}|=2^{-j}\). From the estimates for the shifted dyadic Hardy-Littlewood maximal function [16], we get weak type (1,1) estimates for \(\mathbf{A}^{T}_{lac}\). Now interpolating with the trivial \(L^{\infty}(\mathbb{R})\)-estimate, we get the desired lemma. Proof of Theorem 4.2.: **Proof of boundedness in the region \(\Omega(\{O,A,B\})\):** As in the proof of Theorem 4.1, it suffices to prove weak-type estimates at the points \(A\) and \(B\). Observe that \[\mathcal{N}^{T}_{lac}(f,g)(x)\leq\|f\|_{\infty}\mathbf{A}^{T}_{lac}g(x),\qquad\mathcal{N}^{T}_{lac}(f,g)(x)\leq\|g\|_{\infty}\mathbf{A}^{T}_{lac}f(x).\] Therefore, using Lemma 9.1 we get that for \(0\leq s\leq 1\), \(\mathcal{N}^{T}_{lac}\) maps \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) to \(L^{p_{3},\infty}(\mathbb{R})\) for \((\frac{1}{p_{1}},\frac{1}{p_{2}})\in\{A,B\}\). **Proof of boundedness in the region \(\Omega(\{O,A,D,B\})\):** It remains to prove weak type estimates for points on the diagonal \(OD\), excluding the point \(D\), for \(0\leq s<\frac{1}{2}\). Using the identity (5.1) we get, for any given \(k\in\mathbb{Z}\), \[\mathcal{N}^{T}_{2^{k}}(f,g)(x)\leq\mathcal{A}^{\phi,\phi}_{2^{k}}(f,g)(x)+\mathcal{A}^{\phi,\infty}_{2^{k}}(f,g)(x)+\mathcal{A}^{\infty,\phi}_{2^{k}}(f,g)(x)+\sum_{i,j\geq 1}\mathcal{A}^{i,j}_{2^{k}}(f,g)(x).\] Observe that \[\sup_{k\in\mathbb{Z}}\mathcal{A}^{\phi,\phi}_{2^{k}}(f,g)(x)\lesssim M_{HL}f(x)M_{HL}g(x),\quad\sup_{k\in\mathbb{Z}}\mathcal{A}^{\phi,\infty}_{2^{k}}(f,g)(x)\lesssim M_{HL}f(x)\mathbf{A}^{T}_{lac}g(x)\] \[\text{and}\quad\sup_{k\in\mathbb{Z}}\mathcal{A}^{\infty,\phi}_{2^{k}}(f,g)(x)\lesssim\mathbf{A}^{T}_{lac}f(x)M_{HL}g(x).\] Therefore, we only need to prove the desired estimates for \(\mathcal{A}^{i,j}_{2^{k}}\). The arguments used to deal with the higher dimensional case can be used to prove that \[\|\mathcal{A}^{i,j}_{1}(f,g)\|_{L^{1/2}}\leq C2^{(i+j)s}\|f\|_{L^{1}}\|g\|_{L^{1}}, \tag{9.1}\] where \[\mathcal{A}^{i,j}_{1}(f,g)(x)=\sup_{u,v\in T}\Big|\int_{\mathbb{S}^{1}}f*\psi_{2^{-i}}(x+u+y)g*\psi_{2^{-j}}(x+v+z)\ d\sigma(y,z)\Big|.\] Note that \(|\widehat{\sigma_{\mathbb{S}^{1}}}(\xi,\eta)|\lesssim(1+|(\xi,\eta)|)^{-1/2}\). Therefore, from Lemma 8.1 we get that \[\|\mathcal{A}^{i,j}_{1}(f,g)\|_{L^{1}}\lesssim 2^{-(i+j)/4}2^{(i+j)\frac{s}{2}}\|f\|_{L^{2}}\|g\|_{L^{2}}. \tag{9.2}\] Now summing over \(i,j\) we get \(L^{2}(\mathbb{R})\times L^{2}(\mathbb{R})\) to \(L^{1}(\mathbb{R})\) boundedness when \(s<\frac{1}{2}\).
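For concreteness, the summability just used can be checked directly (a routine verification added here; it is not in the original text): the factors in (9.2) give \[\sum_{i,j\geq 1}2^{-(i+j)(\frac{1}{4}-\frac{s}{2})}=\Big(\sum_{i\geq 1}2^{-i(\frac{1}{4}-\frac{s}{2})}\Big)^{2}=\Big(\frac{2^{-(\frac{1}{4}-\frac{s}{2})}}{1-2^{-(\frac{1}{4}-\frac{s}{2})}}\Big)^{2},\] which is finite precisely when \(\frac{1}{4}-\frac{s}{2}>0\), that is, when \(s<\frac{1}{2}\).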
Further, interpolating the estimates (9.1) and (9.2) we get \[\|\mathcal{A}^{i,j}_{1}(f,g)\|_{L^{p/2}}\lesssim 2^{\frac{(i+j)s}{p}}2^{-\frac{(i+j)}{2}(1-\frac{1}{p})}\|f\|_{L^{p}}\|g\|_{L^{p}}.\] Finally, applying the vector-valued argument as in Section 8 we get \(L^{p}(\mathbb{R})\times L^{p}(\mathbb{R})\to L^{\frac{p}{2}}(\mathbb{R})\) boundedness of \(\mathcal{N}^{T}_{lac}\) for \(\frac{1}{p}<\frac{1}{1+2s}\), when \(s<1/2\). ## 10. Necessary conditions In this section, we construct an example to show the sharpness of the restricted weak type inequality \(A^{T}_{1}:L^{1+\frac{s}{d-1},1}\to L^{1+\frac{s}{d-1},\infty}\). The example is a modification of the arguments given in [10], where they showed the sharpness in the particular case when \(T\) is of Minkowski dimension \(\frac{1}{2}\) in dimension \(d=2\). To obtain the counterexample, we will need two self-similar sets of appropriate Minkowski content whose sum is an interval. We say \(C\) is a Cantor set of ratio \(\lambda\) if \(C\) is self-similar with respect to the similitudes \(S_{1}(x)=\lambda x\) and \(S_{2}(x)=\lambda x+(1-\lambda)\), i.e., \(C=S_{1}(C)\cup S_{2}(C)\). It is well-known that the Minkowski dimension of \(C\) is \(\frac{\log 2}{\log\frac{1}{\lambda}}\), see Section 2.2 of Chapter 7 in [10]. The following lemma justifies the existence of two Cantor sets whose sum is an interval. **Lemma 10.1**.: _Let \(C_{1}\) and \(C_{2}\) be two Cantor sets with ratios \(\lambda_{1}\) and \(\lambda_{2}\) respectively such that \(\frac{\lambda_{1}}{1-2\lambda_{1}}\cdot\frac{\lambda_{2}}{1-2\lambda_{2}}\geq 1\). Then \(C_{1}+C_{2}\) is a closed interval. In fact, we have \(C_{1}+C_{2}=[0,2]\)._ We refer the interested reader to [Theorem 1.1, [12]] for more details. **Proposition 10.2**.: _For \(r>1\), the operator \(A^{T}_{1}\) does not map \(L^{1+\frac{s}{d-1},r}\) to \(L^{1+\frac{s}{d-1},\infty}\)._ Proof.: Let \(C_{1},C_{2}\subset[0,1]\) be Cantor sets of ratios \(2^{-\frac{1}{s}}\) and \(2^{-\frac{1}{1-s}}\) respectively. Then from Lemma 10.1 we can see that \([-1,1]=C_{2}-C_{1}=\{c_{2}-c_{1}:c_{1}\in C_{1},c_{2}\in C_{2}\}\). For a small \(a>0\), define \[f(x)=\sum_{i=1}^{N}4^{(d-1)i}\chi_{\{ce_{1}:c\in C_{2}\}+B(0,a4^{-i})}(x).\] For \(r>1\), we have the following bound on the Lorentz space norm of \(f\): \[\|f\|_{L^{1+\frac{s}{d-1},r}}\lesssim N^{\frac{1}{r}}.\] To see this, consider the sets \(F_{j}=\left(4^{(d-1)j}\sum_{k=0}^{j}4^{-(d-1)k},4^{(d-1)(j+1)}\sum_{k=0}^{j}4^{-(d-1)k}\right]\) for \(1\leq j\leq N-1\). If \(t\in F_{j}\), we have \[\{x\in\mathbb{R}^{d}:\ |f(x)|>t\}=\bigcup_{c\in C_{2}}\big(ce_{1}+B(0,a4^{-(j+1)})\big).\] Denote \(d_{f}(t)=|\{x\in\mathbb{R}^{d}:\ |f(x)|>t\}|\) and observe that, for \(t\in F_{j}\), \[t\,d_{f}(t)^{\frac{d-1}{d-1+s}}=t\left[(a4^{-(j+1)})^{d-(1-s)}\right]^{\frac{d-1}{d-1+s}}\lesssim 1.\] Therefore, we have \[\begin{split}\|f\|_{L^{1+\frac{s}{d-1},r}}&=\left(\frac{d-1+s}{d-1}\right)^{\frac{1}{r}}\left(\int_{0}^{4^{d-1}}[d_{f}(t)^{\frac{d-1}{d-1+s}}t]^{r}\frac{dt}{t}+\sum_{j=0}^{N-1}\int_{F_{j}}[d_{f}(t)^{\frac{d-1}{d-1+s}}t]^{r}\frac{dt}{t}\right)^{\frac{1}{r}}\\ &\lesssim\left(\int_{0}^{4^{d-1}}4^{-(d-1)}t^{r-1}\ dt+\sum_{j=0}^{N-1}\int_{F_{j}}\frac{dt}{t}\right)^{\frac{1}{r}}\\ &\lesssim\left(4^{(r-1)(d-1)}+N\right)^{\frac{1}{r}}\lesssim N^{\frac{1}{r}}.\end{split}\] Let \(T=\{ce_{1}:\ c\in C_{1}\}\).
Now for \(x\in[-1,1]e_{1}+\mathbb{S}^{d-1}\), there exist \(u\in T\) and \(y\in\mathbb{S}^{d-1}\) such that \[x+u+y=z\in\{ce_{1}:c\in C_{2}\}\ \text{ and }\ |(x+u+\mathbb{S}^{d-1})\cap B(z,a4^{-i})|\simeq 4^{-i(d-1)}.\] Thus, since each of the \(N\) scales contributes \(4^{(d-1)i}\cdot 4^{-i(d-1)}\simeq 1\) to the spherical average, we can see that \(A_{1}^{T}f(x)\gtrsim N\). Now, suppose \(A_{1}^{T}\) maps \(L^{1+\frac{s}{d-1},r}\) to \(L^{1+\frac{s}{d-1},\infty}\), then \[1\lesssim|\{x:A_{1}^{T}f(x)\gtrsim N\}|\lesssim\left(\frac{\|A_{1}^{T}\|_{L^{1+\frac{s}{d-1},r}\to L^{1+\frac{s}{d-1},\infty}}\|f\|_{L^{1+\frac{s}{d-1},r}}}{N}\right)^{1+\frac{s}{d-1}}\lesssim\frac{\|A_{1}^{T}\|^{1+\frac{s}{d-1}}_{L^{1+\frac{s}{d-1},r}\to L^{1+\frac{s}{d-1},\infty}}}{N^{\left(1-\frac{1}{r}\right)\left(1+\frac{s}{d-1}\right)}}.\] The desired result follows by taking the limit \(N\to\infty\). ## Acknowledgement Ankit Bhojak and Saurabh Shrivastava acknowledge the financial support from the Science and Engineering Research Board, Department of Science and Technology, Govt. of India, under the scheme Core Research Grant, file no. CRG/2021/000230. Surjeet Singh Choudhary is supported by CSIR (NET), file no. 09/1020(0182)/2019-EMR-I, for his Ph.D. fellowship. Kalachand Shuin is supported by NRF grant no. 2022R1A4A1018904 and a BK21 postdoctoral fellowship. The authors acknowledge the support and hospitality provided by the International Centre for Theoretical Sciences, Bangalore (ICTS) for participating in the program - Modern trends in Harmonic Analysis (code: ICTS/Mtha2023/06).
2308.13859
Long-distance high-fidelity continuous-variable quantum key distribution with non-Gaussian operations: An exact closed form solution
In this paper, we derive a closed form expression for the output state of a CV-QKD protocol in the presence of zero-photon catalysis (ZPC) and quantum scissor (QS). Then, based on this closed form solution, we use a direct search algorithm to find the appropriate values of input state and QS parameters, which considerably enhance the range and the fidelity of a CV-QKD protocol. In the special case of a pure loss channel, the largest range of the protocol is only 6.5% less than the fundamental limit of repeaterless quantum communication. In addition, examination of the protocol for different values of excess noise reveals that there is a trade-off between range and fidelity, and a high value of fidelity can be obtained at the cost of a slight reduction in protocol range.
Khatereh Jafari, Mojtaba Golshani, Alireza Bahrampour
2023-08-26T12:35:20Z
http://arxiv.org/abs/2308.13859v2
# Long-distance high-fidelity continuous-variable quantum key distribution with non-Gaussian operations: An exact closed form solution ###### Abstract In this paper, we derive a closed form expression for the output state of a CV-QKD protocol in the presence of zero-photon catalysis (ZPC) and quantum scissor (QS). Then, based on this closed form solution, we use a direct search algorithm to find the appropriate values of input state and QS parameters, which considerably enhance the range and the fidelity of a CV-QKD protocol. In the special case of a pure loss channel, the largest range of the protocol is only 6.5% less than the fundamental limit of repeaterless quantum communication. In addition, examination of the protocol for different values of excess noise reveals that there is a trade-off between range and fidelity, and a high value of fidelity can be obtained at the cost of a slight reduction in protocol range. Continuous-Variable Quantum Key Distribution, Secure Key Rate, Zero Photon Catalysis, Quantum Scissor, Long-range, High-Fidelity \({}^{*}\)[email protected] ## I Introduction Unlike classical cryptography, whose security is due to the high complexity of a mathematical problem, quantum key distribution (QKD) is a method that shares an unconditionally secure key between two users (traditionally called Alice and Bob) based on fundamental laws of quantum physics [1; 2; 3]. In continuous variable QKD (CV-QKD), the information is encoded on continuous characteristics of the light, e.g., its quadratures [4; 5; 6; 7; 8]. Compared to its discrete counterpart [9; 10; 11], CV-QKD has some advantages, namely compatibility with conventional technology, low cost and high rate [12; 13]. Nevertheless, this protocol has drawbacks such as short range and sensitivity to excess noise [14]. In order to dispel these problems, different setups including quantum scissor (QS), photon addition, photon subtraction, and photon catalysis have been proposed [15; 16; 17; 18]. QS is a non-deterministic operation that reduces the dimension of the Hilbert space to two, and provides an arbitrary linear combination of vacuum and single photon states [19]. Based on the transmittance coefficients of the scissor's beam splitters, QS can act as a heralded noiseless linear amplifier for small amplitude input signals, and consequently can boost the performance of CV-QKD protocols at long distances [20; 21; 22]. Photon catalysis is a feasible approach to improve the performance of CV-QKD protocols [16; 17; 23]. Zero photon catalysis (ZPC) is a special type of quantum catalysis that reduces the weight of higher order Fock states and can be treated as a heralded noiseless attenuator [23]. This Gaussian operation, by reducing the modulation variance of the input signal, is able to increase the secure transmission distance of CV-QKD protocols. Recently, we proposed a CV-QKD protocol equipped with both ZPC and QS, and demonstrated that this scheme markedly enhances the fidelity of the teleportation even at long distances [24]. However, in that work, due to the time-consuming numerical calculations, the optimal values of the transmittance coefficients of the QS's beam splitters, needed to achieve the maximum secure range, were not obtained. This paper seeks to maximize the secure transmission distance of that protocol. In order to realize this, some parts of the previous protocol have been changed. In Ref. [24], we used a modified QS that truncates the Hilbert space to third-order Fock states [25].
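To make the amplifier picture above concrete, the following is an illustrative, textbook-style computation of the action of a standard QS on a weak coherent state; it is added here for orientation and is not taken from this paper, and the gain convention \(g=\sqrt{(1-\tau)/\tau}\) for a scissor beam splitter of transmittance \(\tau\) is one common choice. Upon successful heralding, the QS truncates an input \(|\alpha\rangle=e^{-|\alpha|^{2}/2}\sum_{n}\frac{\alpha^{n}}{\sqrt{n!}}|n\rangle\) to the span of the vacuum and single photon states and rescales the one-photon amplitude, \[|\alpha\rangle\;\longmapsto\;\frac{1}{\sqrt{1+g^{2}|\alpha|^{2}}}\big(|0\rangle+g\alpha|1\rangle\big),\] so for \(|g\alpha|\ll 1\) the heralded output approximates the truncation of the amplified state \(|g\alpha\rangle\); this is the sense in which the QS acts as a heralded noiseless linear amplifier for small amplitude input signals.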
However, in this research the standard QS is adopted. This modification considerably simplifies both the experimental setup (because of the simpler sources and detectors of the standard QS) and the computational complexity (optimization of one beam splitter instead of two). Moreover, here we assume Alice uses Gaussian modulation, in contrast to the previous work, in which discrete (quadrature phase-shift keying (QPSK)) modulation was adopted [26]. In the entanglement-assisted protocol, this means Alice uses a two-mode squeezed vacuum (TMSV) state and performs heterodyne measurement on her mode [27]. In this case, an exact closed-form expression can be obtained for the output state of the protocol, which significantly reduces the required computational time of the optimization process. Moreover, as we will see in the next section, performing the ZPC operation on one mode of a TMSV state (in contrast to a QPSK state) is equivalent to reducing the modulation variance of that state. Therefore, it is possible to further simplify the protocol by eliminating the ZPC operation and, as an alternative, optimizing the modulation variance of the TMSV state. In this paper, we derive an exact expression for the output state of a protocol in the presence of a standard QS at the Bob side. Then, based on this result, the optimal modulation variance and transmission coefficient of the QS are numerically calculated.
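For orientation on the benchmark mentioned in the abstract, the fundamental limit of repeaterless quantum communication over a pure loss channel of transmittance \(\eta\) is the PLOB bound \(-\log_{2}(1-\eta)\) secret bits per channel use. The following minimal sketch tabulates this limit; it is our illustration rather than code from the paper, and the assumed fiber attenuation of 0.2 dB/km as well as all function names are our own choices.

```python
import math

ATT_DB_PER_KM = 0.2  # assumed standard telecom fiber attenuation

def transmittance(distance_km: float) -> float:
    """Transmittance of a pure loss fiber channel of the given length."""
    return 10 ** (-ATT_DB_PER_KM * distance_km / 10)

def plob_bound(distance_km: float) -> float:
    """PLOB repeaterless bound -log2(1 - eta), in bits per channel use."""
    eta = transmittance(distance_km)
    return -math.log2(1 - eta)

# Tabulate the benchmark at a few representative distances.
for d in (50, 100, 200, 300):
    print(f"{d:4d} km: eta = {transmittance(d):.3e}, "
          f"bound = {plob_bound(d):.3e} bit/use")
```

At long distances the bound decays like \(\eta/\ln 2\approx 1.44\,\eta\), which is the scaling against which the protocol's largest range is compared.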
2305.04301
On the perceived relevance of critical internal quality attributes when evolving software features
Several refactorings performed while evolving software features aim to improve internal quality attributes like cohesion and complexity. Indeed, internal attributes can become critical if their measurements assume anomalous values. Yet, current knowledge is scarce on how developers perceive the relevance of critical internal attributes while evolving features. This qualitative study investigates the developers' perception of the relevance of critical internal attributes when evolving features. We target six class-level critical attributes: low cohesion, high complexity, high coupling, large hierarchy depth, large hierarchy breadth, and large size. We performed two industrial case studies based on online focus group sessions. Developers discussed how much (and why) critical attributes are relevant when adding or enhancing features. We assessed the relevance of critical attributes individually and relatively, the reasons behind the relevance of each critical attribute, and the interrelations of critical attributes. Low cohesion and high complexity were perceived as very relevant because they often make evolving features hard while tracking failures and adding features. The other critical attributes were perceived as less relevant when reusing code or adopting design patterns. An example of perceived interrelation is high complexity leading to high coupling.
Eduardo Fernandes, Marcos Kalinowski
2023-05-07T14:59:23Z
http://arxiv.org/abs/2305.04301v1
# On the perceived relevance of critical internal quality attributes when evolving software features ###### Abstract Several refactorings performed while evolving software features aim to improve internal quality attributes like cohesion and complexity. Indeed, internal attributes can become critical if their measurements assume anomalous values. Yet, current knowledge is scarce on how developers perceive the relevance of critical internal attributes while evolving features. This qualitative study investigates the developers' perception of the relevance of critical internal attributes when evolving features. We target six class-level critical attributes: low cohesion, high complexity, high coupling, large hierarchy depth, large hierarchy breadth, and large size. We performed two industrial case studies based on online focus group sessions. Developers discussed how much (and why) critical attributes are relevant when adding or enhancing features. We assessed the relevance of critical attributes individually and relatively, the reasons behind the relevance of each critical attribute, and the interrelations of critical attributes. Low cohesion and high complexity were perceived as very relevant because they often make evolving features hard while tracking failures and adding features. The other critical attributes were perceived as less relevant when reusing code or adopting design patterns. An example of perceived interrelation is high complexity leading to high coupling. internal quality attribute, refactoring, software feature, software evolution, industry case study + Footnote †: Funded by CNPq (grant 312827/2020-2) and FAPERJ (grant 200.773/2019). ## I Introduction Several techniques have been proposed to monitor symptoms of source code degradation [1, 2, 3, 4]. Many of these techniques rely on measuring the code structure, aiming at spotting degraded code structures and design [1]. Using internal quality attributes is a major technique. Each internal attribute captures a particular aspect of internal software quality [5]. Two examples of internal attributes are cohesion [6] and complexity [7]. While cohesion captures the interrelation degree of attributes and methods within a class [6, 2], complexity captures the cognitive difficulty of understanding code elements [7, 2]. Assessing metric values may assist in managing critical internal attributes, e.g., low cohesion and high complexity. A critical attribute is an internal attribute whose metrics used for capturing it assume anomalous values in comparison to the reference value [5, 8]. This paper targets metrics that become critical as their values increase, e.g., Lack of Cohesion (LCOM2) [6] and Cyclomatic Complexity (CC) [7], whose high values suggest classes with non-cohesive features or a high complexity [6, 7]. Critical attributes are symptoms that developers should consider managing - either mitigating or fully addressing - for the sake of software evolution [5]. Refactorings are largely advertised as effective means to help manage degradation symptoms [9, 10], including critical attributes. However, until recently, there was little empirical evidence on how refactorings affect internal attributes. A recent large quantitative study [5] addressed this literature gap, targeting five internal attributes: cohesion, complexity, coupling, inheritance, and size. That study suggests that the effects of refactoring on internal attributes, whether improving them, worsening them, or keeping them unaffected, are diverse.
Popular refactoring types like Extract Method and Move Method [9, 10] improve one or another attribute while worsening others. Developers should carefully apply these refactorings to avoid the unexpected worsening of internal attributes, thereby potentially harming software evolution. Due to the quantitative nature of the previous study mentioned above [5], an important question remained unanswered: _how much (and why) are critical attributes perceived as relevant by developers while evolving features?_ Answering this question is essential to assist refactoring throughout software evolution. This knowledge could help in managing critical attributes, while preventing the worsening of potentially relevant attributes. This paper presents an industrial case study, based on case study guidelines [11], that addresses the aforementioned gap. We investigate the relevance degree reported by developers for critical attributes derived from the five internal attributes assessed by the previous quantitative study [5]: cohesion, complexity, coupling, inheritance, and size. We elicit reasons why developers find each critical attribute relevant (or irrelevant) while evolving features. We also reveal some interrelations of critical attributes that may be relevant while evolving features. We recruited two development teams of the ExACTa PUC-Rio industry-academia Research and Development (R&D) collaboration initiative working on projects with Petrobras. Each team engaged in one focus group session [12]. Developers discussed the relevance of six critical attributes: low class cohesion, high class complexity, high class coupling, large class hierarchy depth, large class hierarchy breadth, and large class size. We refer to _relevance_ as the need for either mitigating or fully addressing critical attributes while evolving features. Our study results suggest the following. First, low class cohesion and high class complexity are perceived as relevant while evolving features. This result stands out because popular refactorings [9, 10], such as Extract Method and Move Method, often worsen cohesion [5]. Second, high class coupling, large class hierarchy depth, large class hierarchy breadth, and large size are not necessarily perceived as relevant by developers while evolving features. This result is curious because refactorings rarely worsen inheritance, and only a few refactoring types worsen coupling and size (e.g., Pull Up Method [5]). Third, we found relationships between certain critical attributes, e.g., high class complexity may lead to high class coupling. These results could support the design of refactoring tools that, for enhancing code structures, optimize certain critical attributes to the detriment of others based on their perceived relevance for developers. **Data availability statement:** We made our study artifacts available online [13]. ## II Background ### _Critical attributes_ Anomalous metric values can help monitor internal quality attributes [14, 15, 5]. Each internal attribute concerns an internal property of the system. Table I presents the five internal attributes we investigate, as defined in a recent work [5]. These definitions are limited to the scope of our work. E.g., complexity [7] could target granularities other than the class, but we are concerned with class complexity. Table I also samples a set of six metrics assessed in the recent work mentioned above [5], which are discussed throughout this paper.
All metrics are grouped (second column) according to the internal attribute they aim at capturing (first column). All six metrics become critical if their values increase after performing a change. We relied on our experience with software development in industry and on insights extracted from previous studies [14, 6] to take this decision. In particular, we fixed the interpretation adopted by the authors of that recent work [5] on when DIT becomes critical. While they considered that DIT becomes critical as its value decreases, we consider the opposite, as suggested by the paper that formalized DIT [6]. Changes affect internal attributes in three ways [5]. They can _improve an internal attribute_ if the metrics used for capturing it increase or decrease towards becoming non-critical metrics. They can _keep an internal attribute unaffected_ by neither increasing nor decreasing the metrics used for capturing it. They can _worsen an internal attribute_ if the metrics capturing it increase or decrease towards becoming critical metrics. ### _Refactorings_ Refactoring is the application of changes to improve internal software quality [18, 19]. Developers may purely intend to enhance code structures while refactoring, or they may refactor as a means to achieve other intents, e.g. adding or enhancing features [9, 10]. Each refactoring has a type targeting the enhancement of a particular code structure. Popular refactoring types [9, 10] include Extract Method, i.e. extracting a new method from an existing one, and Move Method, i.e. moving an existing method from one class to another class. The refactoring types are implicitly associated with one or more internal attributes. This is because each refactoring type should modify the code structure and its design in such a way that it changes one or more metric values. Each internal attribute is associated with a particular subset of refactoring types [5]. For instance, the class cohesion may improve through Move Attribute and Move Method if the refactored code element (attribute or method) was at least partially the root cause of the low cohesion. Throughout this paper, we will take some of those associations between internal attributes and refactoring types to discuss how refactorings could be used as means for managing critical attributes while evolving features. ## III Study characterization ### _Problem statement_ A recent study [5] quantitatively investigated the refactoring effect on five internal quality attributes. The results regarding the refactoring effect on each internal attribute were quite diverse. On the one hand, refactorings quite often enhance code structure and design, thereby helping in managing critical attributes, regardless of the applied refactoring type [5]. On the other hand, in the case of floss refactorings - which often occur with feature additions and enhancements - 35-55% of refactorings keep the critical attributes unaffected [5]. More critically, also in the case of floss refactorings, 9-35% of refactorings worsen these attributes. The design of that study [5] prevented its authors from investigating the reasons behind such a high rate of refactorings that either keep internal attributes unaffected or worsen them. We hypothesize that developers only remove those degradation symptoms that matter for evolving features. Thus, developers tend to postpone or discard the removal of other, less relevant, degradation symptoms. Unfortunately, empirical evidence that supports this assumption is scarce.
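To make the class-level metrics from Section II concrete, the sketch below computes LCOM2, following the Chidamber-Kemerer formulation cited earlier, and a rough cyclomatic complexity count for Python sources. It is an illustration written for this text, not the measurement tooling used in the study; the function names are ours, and the list of branch nodes used for CC is a simplification.

```python
import ast
from itertools import combinations

def lcom2(method_attrs: dict[str, set[str]]) -> int:
    """LCOM2 (Chidamber-Kemerer): number of method pairs sharing no
    attribute minus pairs sharing at least one; floored at zero."""
    p = q = 0
    for a1, a2 in combinations(method_attrs.values(), 2):
        if a1 & a2:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

def cyclomatic_complexity(source: str) -> int:
    """Rough CC: one plus the number of branching nodes in the AST."""
    branches = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return 1 + sum(isinstance(n, branches) for n in ast.walk(ast.parse(source)))

# Toy class: methods m1 and m2 share attribute 'x'; m3 is isolated.
attrs = {"m1": {"x"}, "m2": {"x"}, "m3": {"y"}}
print(lcom2(attrs))  # 2 non-sharing pairs - 1 sharing pair = 1
print(cyclomatic_complexity("def f(x):\n    if x > 0:\n        return x\n    return -x\n"))  # 1 branch + 1 = 2
```

A class whose methods share no attributes maximizes LCOM2, which is exactly the low-cohesion symptom the participants discuss in the following sections.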
Previous work [20, 21] investigated the developer's perception of the relevance of design smells for evolving features, but we could not find similar studies in the context of critical attributes. This paper is a first attempt to characterize how developers perceive critical attributes as relevant (or irrelevant) when evolving features. We target critical attributes associated with the five internal attributes assessed in the previous work mentioned above [5]: cohesion, complexity, coupling, inheritance, and size. The critical attributes analyzed are all the class-level ones: low class cohesion, high class complexity, high class coupling, large class hierarchy depth, large class hierarchy breadth, and large class size. We discarded critical attributes at other system levels (e.g., methods), which were therefore not considered in this study. We argue that developers should constantly monitor and enhance the internal quality at the class level [18, 2] and that this focus would therefore provide a valuable contribution. ### _Research objectives_ We structured our main research goal based on the Goal Question Metric goal definition template [22] as follows: _analyze_ the perception of software developers on the relevance of critical attributes while evolving features; _for the purpose of_ understanding; _with respect to_ the individual and relative perceived relevance of the critical attributes, reasons why each critical attribute becomes relevant, and interrelations between critical attributes; _from the point of view of_ developers engaged in software evolution tasks; _in the context of_ two development teams evolving two systems implemented in Java within a Brazilian industry-academia R&D collaboration initiative. We designed our industry case studies based on focus group sessions. Case studies enable observing phenomena in their natural context [11]. Focus groups, on the other hand, allow extracting experiences from the participants, promoting discussions and knowledge sharing among participants [12]. Thus, we considered that case studies with focus groups for data collection would be suitable for contextualized discussions of developers on evolving features. ### _Context_ We conducted our case study within the ExACTa PUC-Rio ([https://exacta.inf.puc-rio.br](https://exacta.inf.puc-rio.br)) industry-academia R&D collaboration initiative, in the context of projects with Petrobras, a large company operating in the oil and gas industry. ExACTa currently has more than 70 full-time employees and has delivered and evolves software-based solutions for several companies. Our case study aims at investigating the relevance of critical attributes based on metrics computed for Java systems in a recent work [5]. As a result, we opted for selecting only teams using the Java programming language. We ended up selecting two projects as cases for our study: ship performance evaluation (Case A) and smart freight (Case B). See Section IV-B for details. Due to the subjective nature of the qualitative data analyzed and discussed throughout this work (e.g., the developer's perception of their own source code), we kept the developers associated with the development of each system anonymous. Therewith, we expect to protect participants from any personal judgment on their perception.
Concerning the implementation, both cases followed the Lean R&D approach [23] and use Git ([https://git-scm.com/](https://git-scm.com/)) for performing code review via pull requests, and Azure DevOps ([https://azure.microsoft.com/pt-br/services/devops/](https://azure.microsoft.com/pt-br/services/devops/)) for managing software development tasks. Both systems rely on the Spring MVC Framework ([https://spring.io/](https://spring.io/)) for implementing Web systems using the Model-View-Controller (MVC) architectural pattern. ## IV Case study design ### _Research questions_ **RQ1:**_What is the relevance degree of each critical attribute for evolving features from the developer's perception?_ RQ1 targets two aspects of the relevance of critical attributes. First, the relevance degree of each critical attribute in isolation. In this case, we aim at understanding how important it is for developers to either mitigate or fully address a critical attribute to favor software evolution. Second, the relative relevance of the six critical attributes altogether: low class cohesion, high class complexity, high class coupling, large class hierarchy depth, large class hierarchy breadth, and large class size. Thus, we expect to understand which critical attributes have the highest priority when it comes to evolving features. **RQ2:**_What are the reasons behind considering a critical attribute as relevant for evolving features?_ RQ2 explores the circumstances that make certain critical attributes actually relevant for evolving features. We ask participants of each focus group session to argue why each critical attribute deserves special attention while either adding or enhancing features. This knowledge is useful for supporting decision-making in development teams with little time for delivering features. Indeed, there might be hundreds of stakeholders' demands for developers to worry about while performing software evolution [24]. Thus, they may find certain critical attributes worth managing to the detriment of other critical attributes. ### _Case and subject selection_ **Case A:** This case consists of implementing a software solution for shipping logistics management purposes at Petrobras. Such management occurs through the integration of multiple systems, which allow practitioners to identify and cope with situations in which a chartered ship is not performing as expected or is not available for delivering a service. The proposed system heavily depends on computing business rules, e.g. with respect to payments and available fuel computation. Four employees of Petrobras often participate with feedback and advice during the software development process. They support the R&D team in Scrum planning and review cycles whenever possible. **Case B:** This case consists of implementing a software solution for handling freight calculation. It is responsible for assisting practitioners at Petrobras in predicting freight prices to transport materials via road transport. The system relies on a large data set, including information on truck size and distances among cities. The proposed system also estimates fair prices to transport materials between refineries and so forth. Thus, this system is also heavily grounded in business rule computation, data processing, and model-based prediction. Table II provides an overview of the participant background for each case, which we collected from an online Characterization Form sent to participants minutes before the start of each focus group session.
Regarding the highest education degree, while Case A has a PhD, both cases counted on the participation of one Master and one Specialist in knowledge areas related to Computer Science. Although participants of Case A have 4.33 years of experience on average against 8.33 years for Case B, participants in both cases have participated in the development of 5 systems on average. The Characterization Form asked participants to report on their familiarity with software metrics (Q4). According to the data of Table II, two participants of Case A said they have heard of metrics but are not really sure about what metrics mean (Q4.b), while one participant reported having a general understanding but not using metrics in their projects (Q4.c). Case B participants claimed to be slightly more familiar with metrics: while one participant reported having a general understanding but not using metrics (Q4.c), two participants said they have a good understanding and use metrics sometimes (Q4.d). We also asked participants about how much they are concerned with improving the quality of source code in their systems (Q5). While participants of Case A simply agreed with this statement, participants of Case B proved to be significantly more concerned about this matter. Judging by the lower familiarity of participants in Case A with metrics, this result is reasonable. Finally, the Characterization Form asked about the familiarity of participants with the concept of internal attributes. As the data in Table II suggest, participants of Case A oscillated a lot in terms of expertise with internal attributes: one of them only heard about it (Q6.b), another one reported having a general understanding but not analyzing internal attributes in their projects (Q6.c), and the last one reported having a strong understanding and analyzing internal attributes frequently (Q6.e). One could say these participants know at least a little about internal attributes because of the metrics they are familiar with. Conversely, all participants of Case B reported having a good understanding and analyzing internal attributes occasionally (Q6.d). Participants of Case A are less familiar with internal quality and its management than participants of Case B, but all participants are considerably experienced in software development. ### _Data collection procedures_ Figure 1 depicts the procedures adopted for collecting data throughout the case study. We organized these procedures in three major phases, which we describe below. **Phase 1:**_Preparation for the focus group session_ - Besides defining what cases would be assessed and recruiting participants to engage in discussions, we collected the background characterization information. This phase has the three procedures below. **Procedure 1.1:**_Select software projects_ - We contacted a development project manager at the R&D initiative asking for software projects whose systems are implemented using mainly Java. We made this decision because a recent work [5] focused on Java and to assure that participants are minimally familiar with object-oriented languages, which permeate all this work (e.g., the refactoring types and metrics explored here are mostly applicable to these languages). We ended up selecting two projects (Case A and Case B). **Procedure 1.2:**_Recruit developers_ - For each project, we asked for permission to invite developers to participate in our study.
Due to the intensively collaborative nature of the R&D projects, both cases share developers, who contribute to the development of multiple systems. We opted for selecting two independent sets of participants, one per system. No participant engaged in two focus group sessions, even though they may have contributed to the development of both systems. The project manager played an essential role in identifying which developers were most active in the development of each system. We recruited three participants per case. Figure 1: Data Collection Procedures **Procedure 1.3:**_Characterize participants_ - We carefully designed and revised our background Characterization Form aimed at collecting basic information on the participants' expertise. Our major goal was profiling each case so we could better interpret our study results. We opted for a short and simple form in order to prevent participants from being tired or discouraged from participating in discussions right after filling the form. As shown in Table II, we collected data on the participant education (Q1), experience with software development in industry (Q2 and Q3), familiarity with two key concepts of this work, i.e. software metrics (Q4) and internal attributes (Q6), and the concern of developers in improving code quality (Q5). Before discussing **Phase 2** and **Phase 3**, we explain the online environment used for promoting discussions on critical attributes. Figure 2 depicts the virtual template that we carefully designed using the MURAL online tool1. The MURAL team kindly granted us a free workspace at the MURAL for Education program. Each session started with one empty version of our designed virtual discussion template to be shared by all participants of the same session. This template has seven well-defined sections. Sections A to F aimed at driving the discussion regarding each of the six critical attributes. Section G aimed at driving the discussion on the relative relevance of all six critical attributes. Footnote 1: [https://www.mural.co/](https://www.mural.co/) Figure 3 depicts only Section A, designed for discussing low class cohesion. Section A1 contains a short description of the critical attribute based on the literature [5, 18, 2]. Section A2 provides the participants with examples of metrics aimed at capturing the respective critical attribute. Section A3 was designed for developers to add notes on why the critical attribute is relevant for evolving features. Section A4 is similar but focused on why critical attributes may be irrelevant for evolving features. Finally, Section A5 is designed for capturing the relevance degree of the critical attribute based on a five-point scale: very irrelevant, irrelevant, neutral, relevant, and very relevant. **Phase 2:**_Discussion per critical attribute_ - This is the first phase associated with the focus group session itself. We collected all data regarding the developer's perception of critical attributes as relevant (or irrelevant) for evolving features. However, in this phase, each critical attribute is discussed in isolation. We defined the three procedures below. **Procedure 2.1:**_Introduce attribute and metrics_ - The discussion on each critical attribute starts by providing the participants with a short definition of the critical attribute based on the literature [5, 18, 2]. After that, we provided them with a few examples of metrics designed for capturing the respective internal attribute.
We sampled metrics arbitrarily based on our notion of metrics that developers could understand more easily. For instance, for exemplifying Lack of Cohesion we opted for LCOM2 [6] rather than LCOM3 [16] because the latter implies explaining concepts like disjoint components in a graph. **Procedure 2.2:**_Discuss attribute (ir-)relevance_ - We asked the participants to elicit reasons why each critical attribute is relevant (or irrelevant) for evolving features. Each reason should be documented as a note in the appropriate section: Section A3 for relevant and Section A4 for irrelevant. From time to time, we reminded participants that evolving features includes both adding new features and enhancing existing features of the system. In addition, we constantly encouraged participants to share knowledge and experiences surrounding each critical attribute, especially when discussions lost intensity. All participants were asked to contribute circumstances where mitigating or fully addressing a critical attribute is important for facilitating software evolution. **Procedure 2.3:**_Discuss relevance degree_ - After discussing why a critical attribute is relevant (or irrelevant) for evolving features, the facilitator of the focus group session promoted a recap. Each note on the (ir-)relevance of the critical attribute was read out loud. Whenever the facilitator felt that a note was poorly written, he asked the participants to provide further considerations on the note. At the end of this procedure, we asked each participant to assign one vote to the relevance degree of the critical attribute using a five-point Likert scale. Figure 3: Section Dedicated to Discussing Low Class Cohesion Figure 2: Template of Focus Group Session Defined at MURAL **Phase 3:**_Discussion for all critical attributes_ - After discussing each critical attribute in isolation, the focus group session ended with a discussion about the relative relevance of critical attributes. The procedure of this phase follows. **Procedure 3.1:**_Discuss relative relevance of attributes_ - We asked participants to rank those critical attributes that matter the most while evolving features. Each participant received five votes. These votes were meant to be distributed across the six critical attributes. We arbitrarily chose to assign five votes per participant in order to prevent them from assigning one vote to each critical attribute, thereby making it hard to conclude anything on the relative relevance. Each focus group session was conducted online via a Zoom Meeting. We kept video and audio records of both sessions to support the analysis of data provided by the participants. We often accessed the video and audio records to understand what developers meant with each note. The focus group sessions for Case A and Case B each lasted approximately two and a half hours. ### _Data analysis procedures_ We asked participants of each focus group session to provide us with reasons why each critical attribute is relevant (or irrelevant) for evolving features. We collected these reasons through notes posted by the participants in the session's virtual mural. Aimed at analyzing these reasons, we first transcribed all notes exactly as they were written by the participants into a spreadsheet. Based on our impressions as facilitators of the focus group sessions, and after watching the video and audio records, we rewrote the notes aimed at fixing typos, filling communication gaps (e.g.
omitted words), and making the target critical attributes of each note explicit. Finally, we translated the notes from Brazilian Portuguese to English. We applied thematic synthesis [25] on the qualitative data for Case A and Case B separately. First, we separated the notes regarding the relevance of all critical attributes from those regarding irrelevance. Second, for each set of notes (relevant and irrelevant), we grouped them according to their core theme, i.e. fine-grained themes discussed throughout each note. Third, we grouped these core themes into macro-themes, i.e. themes that are more comprehensive than the core themes. Fourth, we separated those macro-themes into two categories: the ones regarding _Code Structure and Design_ and the ones regarding _System Functionality_. ## V Relevance of critical attributes for evolving features (RQ1) ### _Relevance of critical attributes per case_ Figure 4 depicts how many participants voted for a certain degree of relevance with respect to each critical attribute under investigation. We grouped the results by case: Case A data on the left and Case B on the right. Regarding Case A, three critical attributes are ultimately perceived as relevant by the developers while evolving features: low class cohesion, high class complexity, and large class size. These are the only critical attributes for which no participant reported perceptions as either neutral or (very) irrelevant. Data of Table II suggests that participants of Case A are less familiar with metrics and internal attributes. Thus, one could speculate that low class cohesion, high class complexity, and large class size are intuitive degradation symptoms - and potentially harmful to evolving features. With respect to Case B, our study results changed considerably. First, only low class cohesion and high class complexity are reportedly relevant for evolving features. Curiously, in contrast to Case A, large class size was ultimately considered irrelevant while performing software evolution. None of the participants assigned the very irrelevant degree, but they also did not assign any relevant degree. This may be due to Case B participants being the most familiar with metrics and internal attributes (Table II). Maybe they are experienced enough to acknowledge that certain large classes are acceptable depending on factors such as the system domain and the inherent difficulty of a business rule. ### _Relevance of critical attributes for both cases_ Figure 5 depicts the overall developer's perception on how relevant each critical attribute is, regardless of the case. Two critical attributes are ultimately relevant for developers: low class cohesion and high class complexity. The other four critical attributes are not necessarily relevant while evolving features: high class coupling, large class hierarchy depth, large class hierarchy breadth, and large size. Curiously, the aggregated data shows that developers are not exactly sure whether large class hierarchy depth is relevant. The high rate of neutral votes suggests that this critical attribute is not even an issue when debating and performing software evolution. Figure 4: Relevance of Critical Attributes per Case Figure 5: Relevance of Critical Attributes for Both Cases ## VI Reasons behind the (ir-)relevance of critical attributes (RQ2) ### _Overall results for Case A_ Table III lists the notes provided by participants on why each critical attribute (first column) is either relevant or irrelevant for evolving features.
The second column distinguishes notes about the attribute relevance or irrelevance. Large class size was the most discussed critical attribute in terms of number of notes, which is not surprising because it is a top-three most relevant attribute according to Figure 4. This is interesting because the usefulness of size metrics for assessing the internal software quality is quite debatable [26, 27, 2, 16]. On the one hand, participants reported, for instance, that "large class size makes it hard to maintain source code" and "large class size increases error proneness of the source code," both topics discussed by previous work [26, 27]. On the other hand, participants mentioned that "large class size is irrelevant when developers deal with urgency in program delivery." The second most discussed critical attribute is low class cohesion, which is also a top-three most relevant attribute according to Figure 4. This is an attribute whose applicability in measuring internal quality has been shown in different development scenarios [28, 6]. Participants said that "high class cohesion facilitates source code reuse," something that previous work has assessed [28]. Participants also said that "low class cohesion makes it hard to find errors," which is a recurring argument throughout the responses of both cases (Case A and Case B). On the other hand, participants discussed that "low class cohesion is irrelevant when the time to delivering code is short," which has been discussed with respect to large class size as well. Curiously, in the particular case of low class cohesion, participants seem to have more arguments on the relevance of this critical attribute for evolving features. The remaining critical attributes were also significantly discussed. Large class hierarchy depth had many notes regarding its relevance for evolving features, while high class complexity, high class coupling, and large class hierarchy breadth were also discussed from both perspectives. Curiously, high class complexity had the highest number of notes on its relevance while evolving features; this result is interesting because high class complexity is the last of the top-three most relevant attributes according to Figure 4. Regarding the two critical attributes associated with inheritance, participants tended to discuss irrelevance by means of the practical usefulness of class hierarchies in a system. Examples of quotes that illustrate this issue are "large depth is irrelevant when it allows reusing code located at the highest hierarchical levels" and "large breadth is irrelevant when reusing properties used by all entities of the Entity Relationship Diagram." Quotes like these justify, at least in part, why the majority of participants assigned a neutral or irrelevant degree for the relevance of both attributes (cf. Figure 4). During each focus group session, we constantly stimulated participants to report as many aspects of either relevance or irrelevance per critical attribute as possible. Judging by the considerable number of arguments in favor of and against the relevance of all attributes, we concluded that our effort in promoting a healthy and productive discussion among participants paid off. ### _Thematic synthesis results for Case A_ Figure 6 depicts our results for the thematic synthesis procedures applied to notes on why critical attributes are _relevant_ for evolving features. The root node of the tree corresponds to the major theme, i.e. the relevance of all critical attributes altogether.
The first intermediate level includes the two categories mentioned above (Code Structure and Design and System Functionality). The second intermediate level includes the macro-themes. We assigned in brackets the critical attributes associated with each micro-theme or macro-theme whenever the node corresponds to a leaf of the tree - i.e., the node has no variants. We have found seven macro-themes, four of them associated with code structure and design. The leaves correspond to the nine micro-themes.

Regarding Code Structure and Design, participants mentioned that critical attributes are relevant for: _Comprehension_, i.e., the ability to read and understand the code elements; _Critical Attribute_, i.e., analyzing and reasoning about other critical attributes; _Design Smell_, i.e., assessing the occurrence of Fowler-like design smells [18]; and _Organization_, i.e., the way code elements are organized within the source code structure. The micro-themes have intuitive names, but note that _Overall Organization_ includes aspects associated neither with _Hierarchy_ nor with _Reuse_. Regarding System Functionality, participants said that critical attributes are relevant for: _Change Propagation_, i.e., critical attributes may spot cases in which certain changes are unexpectedly or undesirably propagated throughout the system; _Failure_, i.e., the occurrence of bugs, faults, or failures; and _Performance_, i.e., aspects of the system performance such as the speed to respond to requests.

Figure 6: Themes on Why Attributes are Relevant for Case A

Complementarily, Figure 7 is a tree of themes on why critical attributes are _irrelevant_ while evolving features (cf. the tree root). The first intermediate level corresponds to the categories Code Structure and Design and System Functionality. The second intermediate level corresponds to the seven macro-themes identified, four of them associated with system functionality. The leaves are the five micro-themes identified. We assigned in brackets the critical attributes associated with each micro-theme or macro-theme whenever the node corresponds to a leaf of the tree - i.e., the node has no variants.

With respect to Code Structure and Design, participants mentioned that critical attributes, in general, are irrelevant for: _Critical Attribute_, i.e., analyzing and reasoning about other critical attributes; _Organization_, i.e., the way code elements are organized within the source code structure; and _Programming Language_, i.e., aspects derived from the syntax, structure, and features provided by the programming language used during software evolution. The micro-themes have quite intuitive names, but we highlight that _Design Pattern_ refers to Gamma-like design patterns [29]: the participants argued that adopting certain patterns might lead to critical attributes which, despite their theoretical or practical impact on internal software quality, are either irrelevant or cannot be managed. Regarding System Functionality, participants reported that critical attributes are irrelevant for: _Failure_, i.e., certain circumstances associated with bug fixing - in this case, when "fixing bugs in legacy code"; _Product Delivery_, i.e., aspects of delivering a system; _Proof of Concept_, i.e., aspects associated with the implementation of source code particularly aimed at proving concepts to stakeholders during the iterative development cycles of agile processes; and _Software Requirements_, i.e., requirements in general.
Via the **Critical Attribute** macro-theme, we have found interesting insights on the interrelation between different critical attributes. About _relevance_, participants of Case A said that: i) "high class complexity leads to high class coupling," ii) "large breadth may increase the class complexity," and iii) "non-cohesive classes tend to be larger than necessary." In summary, we found three tuples of perceived interrelations: i) (high class complexity, high class coupling), ii) (large class hierarchy breadth, high class complexity), and iii) (low class cohesion, large class size), respectively. This result could support the design of refactoring tools that, aimed at enhancing code structures, optimize certain critical attributes that are interrelated. Regarding _irrelevance_, participants of Case A said that: i) "high class coupling is acceptable when coding a highly coupled entity of the Entity Relationship Diagram" and ii) "large class size is irrelevant when the methods are naturally very complex." Thus, we found two tuples of interrelations: i) (high class coupling, high entity coupling), associating attributes at the levels of class and Entity Relationship (ER) model, and ii) (large class size, high method complexity), associating attributes at the levels of class and method, respectively. Again, these results could drive the design of novel refactoring tools for enhancing code structure and design.

**A parallel between the relevance of critical attributes and design smells:** Earlier, in Section IV-A, we discussed that critical attributes may help in detecting design smells [18, 2]. Curiously, the results of Figure 6 suggest that critical attributes are closely associated with certain design smells. In particular, Case A participants suggest that low class cohesion and large class size are relevant because of their association with Duplicated Code, i.e., different code snippets realizing the same feature, and Large Class, i.e., a class overloaded with several features. This result is interesting because a previous study [20] suggests that developers often perceive Complex Class, Large Class (equivalent to God Class [1]), Long Method, and Spaghetti Code (all associated with low class cohesion and large class size) as potentially harmful to software evolution.

### _Overall results for Case B_

Table IV lists the number of notes on why each critical attribute is either relevant or irrelevant for evolving features. Large class size tied with high class coupling as the critical attributes with the highest number of notes. Curiously, these critical attributes were the only ones to receive at least one vote for irrelevant (Figure 4). As previously discussed with respect to Case A, the usefulness of size metrics for assessing internal software quality has been debated by previous work with mixed opinions [26, 27, 2, 16]. This debate is reflected in the highest number of votes for irrelevant in the entire case study (Figure 5). On the one hand, participants said that "large size is relevant if the programming screen size is small" and that "large size rarely occurs in isolation" in terms of problems associated with internal software quality. On the other hand, participants also said that "large size is irrelevant in utility classes" and "large size is irrelevant if the integrated development environment (IDE) can collapse large source code blocks." These comments particularly suggest that large class size is a problem of the development environment rather than of the system itself.

Figure 7: Themes on Why Attributes are Irrelevant for Case A
Regarding high class coupling, participants also showed different perspectives on relevance and irrelevance. On the one hand, participants reported that "high class coupling is relevant when implementing fault tolerance/error handling." They were also very specific about the metrics used for computing this critical attribute, with statements like "high class coupling is relevant when CBO is high but the class is coupled with classes at different program levels." CBO is the Coupling between Objects metric of Table I, which we displayed in our virtual mural during the focus group sessions for exemplification. With "program level," the participants clarified that they refer to a system package or module. On the other hand, participants said that "high class coupling [is irrelevant because it] may support source code reuse" and, in opposition to the previous case, that "high class coupling is irrelevant when CBO is high but the class is coupled with classes at the same program level." It is worth mentioning that, although it has mostly been seen as relevant for evolving features (Figure 4), this critical attribute received one vote for irrelevant.

The third most discussed critical attribute is low class cohesion. Participants of Case B reported that "low class cohesion makes it hard to maintain a program" and "low class cohesion makes it hard to track errors." On the other hand, from the developers' perspective, "low class cohesion is irrelevant if it affects a utility class" and "low class cohesion is irrelevant in very small programs," for instance. Comments like these suggest that the relevance of low class cohesion strongly depends on what the system implements: if the system is too simple or the class provides features to the whole system, low class cohesion is acceptable. Regardless of that, this critical attribute is curiously the one with the most votes for relevant (including one vote for very relevant), according to the data in Figure 5.

The other three critical attributes - high class complexity, large class hierarchy depth, and large class hierarchy breadth - were less discussed in comparison with the same attributes in Case A. Curiously, the overall perception of developers in Case B is quite different and valuable. Regarding attribute relevance, participants reported that "high class complexity makes it hard to implement new business rules," "large depth makes it hard to know where to implement a new program feature," and "large breadth is relevant if child classes redefine the concrete behavior inherited from their parent class." These notes add up, as they confirm how different critical attributes may hinder feature additions and enhancements, which are the basis of software evolution. In addition, all three critical attributes received at least one vote for relevant (Figure 4). It is worth mentioning that the two critical attributes regarding inheritance received neutral votes. This result suggests that, for developers considerably concerned with internal software quality (Table II), inheritance is not a major issue.

### _Thematic synthesis results for Case B_

Figure 8 is a tree showing the results of the thematic synthesis procedures applied to notes on why critical attributes are _relevant_ during software evolution. The root node of the tree corresponds to the major theme, that is, the relevance of all critical attributes altogether.
The first intermediate level corresponds to the two major categories of themes: Code Structure and Design and System Functionality. The second intermediate level corresponds to the macro-themes. We have derived seven macro-themes, of which the majority (five) are associated with code structure and its design. The leaves are the seven micro-themes found in total. We assigned in brackets the critical attributes associated with each micro-theme or macro-theme whenever the node corresponds to a leaf of the tree - i.e., the node has no variants.

With respect to Code Structure and Design, participants mentioned that critical attributes in general are relevant for: _Comprehension_, i.e., the ability to read and understand the code elements; _Critical Attribute_, i.e., analyzing and reasoning about other critical attributes; _Design Smell_, i.e., assessing or reasoning about the occurrence of Fowler-like design smells [18]; _Organization_, i.e., the way code elements are organized within the source code structure; and _Programming_, i.e., aspects of programming a system that may affect how the internal software quality is perceived. All micro-themes received intuitive names with no need for further explanation. Regarding System Functionality, participants reported only two macro-themes: _Failure Correction_, i.e., the ability to fix bugs, faults, or failures affecting the system behavior; and _Failure Detection_, i.e., the task of tracking unexpected system behaviors realized by bugs, faults, or failures in a system.

Figure 9 depicts the themes derived from the notes on why critical attributes are _irrelevant_ while evolving features (see the tree root). The first intermediate level corresponds to the categories Code Structure and Design and System Functionality. The second intermediate level corresponds to the five macro-themes identified, three of them associated with code structure and its design. The leaves correspond to the four micro-themes. We assigned in brackets the critical attributes associated with each micro-theme or macro-theme whenever the node corresponds to a leaf of the tree - i.e., the node has no variants.

Figure 8: Themes on Why Attributes are Relevant for Case B

Figure 9: Themes on Why Attributes are Irrelevant for Case B

With respect to Code Structure and Design, participants reported that critical attributes in general are irrelevant for: _Critical Attribute_, i.e., analyzing and reasoning about other critical attributes; _Organization_, i.e.,
the way code elements are organized within the source code structure; and _Programming_, i.e., aspects associated with the activity of programming a system. All micro-themes have intuitive names. Still, it is worth mentioning that _Authorship_ refers to the authorship of the source code implemented by the developers. Regarding System Functionality, participants reported that critical attributes are irrelevant in cases associated with: _Corem_, i.e., the nature of the features realized by the system; and _Utility_, i.e., classes of the system that serve as feature providers to the majority of the system - also known as utility classes.

Analyzing the **Critical Attribute** macro-theme, we derived interesting insights on the interrelation between different critical attributes. About _relevance_, participants of Case B said that: i) "high class coupling is relevant when affecting non-cohesive classes," ii) "large depth makes large breadth worse," and iii) "large size rarely occurs in isolation." Based on these quotes, we identified three tuples of interrelations: i) (high class coupling, low class cohesion), ii) (large class hierarchy depth, large class hierarchy breadth), and iii) (large class size, any critical attribute), respectively. We did not find any similarities between these interrelations and those of Case A. Regarding _irrelevance_, participants of Case B said that: i) "high class coupling is irrelevant when the class is highly cohesive," ii) "large size is irrelevant if the affected class is not complex," iii) "large size is irrelevant if the methods are cohesive," iv) "large size rarely occurs in isolation," and v) "low class cohesion is irrelevant in very small programs." Thus, we found five tuples of interrelations: i) (high class coupling, low class cohesion), ii) (large class size, high class complexity), iii) (large class size, low method cohesion), iv) (large class size, any critical attribute), and v) (low class cohesion, high system size), respectively. Again, we could not find any similarities between the interrelations of Case A and those of Case B.

**A parallel between the relevance of critical attributes and design smells:** The results of Figure 8 also suggest that critical attributes are closely associated with some design smells. As a complement to the data of Case A (Section VI-B), Case B participants suggest that large class hierarchy depth is relevant because it may lead to Duplicated Code. This result stands out because a previous study [21] suggests that Duplicated Code is often perceived as harmful to software evolution. Our study results confirm, to some extent, the practical relevance of at least five traditional design smells: Complex Class, Large Class/God Class, Long Method, and Spaghetti Code for Case A, and Duplicated Code for Case B.

## VII Related work

Some studies have investigated whether developers perceive anomalous metric values as useful degradation symptoms [30, 27, 31, 32]. One study [31] assessed factors leading to a high attractiveness of open source systems to new developers. Results suggest that developers may be discouraged from contributing to systems with anomalous values of complexity metrics. Other studies [30, 32] captured the developers' perception of the usefulness of coupling metrics. Results suggest that coupling metrics solely based on the analysis of code structures, such as CBO, are less effective than those derived from semantic aspects of the system (e.g., feature location).
Another study [27] concludes that developers often perceive anomalous metric values for complexity and size as indicators of unclear code (i.e., code that is potentially hard to understand and modify). Each metric aims at capturing a particular internal attribute (Section II-A). Thus, one could assume the aforementioned study results provide hints on how relevant the critical attributes are for software evolution in industry. Still, we could not find industry-focused studies on the perception of multiple class-level critical attributes as relevant while evolving features. We only found studies on the developer perception of design smells as useful degradation symptoms [20, 21]. A few insights on critical attributes appeared in the smell-centered studies because certain design smells are defined by combining multiple critical attributes [18, 2]. Particularly, these studies suggest that Large Class and Long Method are design smells that make it hard for developers to understand and modify code [20, 21]. This result stands out because both design smell types are often detected by combining low cohesion, high complexity, and large size [33, 8], which were perceived by our study participants as relevant (Figure 5).

## VIII Threats to validity

**Construct Validity:** We defined the case study protocol and artifacts based on empirical software engineering research guidelines, e.g., [25] and [11]. We used an extensive recent study [5] as a reference for defining our study steps and procedures. Thus, we expected to support proper data collection and analysis (Section IV-C). All six critical attributes investigated rely on the five internal attributes assessed in an extensive study [5]. By focusing on the same set of attributes, we expected to support the comparison of both quantitative and qualitative studies. Two researchers contributed with insights on how to organize and conduct the focus group sessions. Finally, our study targeted six critical attributes at the class level; these attributes have been typically used for monitoring internal software quality [5]. We are aware that the participants' background may have affected our study results, although we conducted a careful recruitment process (see Section IV-B).

**Internal Validity:** To stimulate participants to engage in the focus group sessions, we announced our commitment to donate food to charity for each engaged participant. Each focus group session lasted no longer than two and a half hours. During the focus group sessions, we instructed participants on the formal definition of each critical attribute and the metrics frequently used for capturing them. Thereby, we expected to normalize the participants' knowledge in preparation for a fruitful discussion. We are aware that our interactions with the study participants may have biased their reported perceptions, although we did our best to avoid sharing our thoughts and perceptions during the sessions. Finally, we kept video and audio records of each focus group session, with the permission of all participants, to support our posterior data collection and analysis, helping to avoid missing and incorrect data.

**External Validity:** We opted for performing case studies, which typically encompass only a small set of subjects and cases [11]. One could argue that such a limited set would have hindered the derivation of relevant study results on developers' perceptions of critical attributes.
Although we partially agree with this argument, focus groups are not intended for large-scale analyses [12]. Rather, these studies aim at promoting discussions among a few participants in such a way that controlling their participation and reaching qualitative depth (e.g., explanations) becomes possible. Finally, the two systems analyzed in this work are mainly implemented in the Java programming language, which is largely used in industry.

**Reliability:** One could argue that the developers' perception may not be the best way of establishing interrelations of critical attributes. Particularly, there may be other interrelations not covered by our study or neglected by the developers for some reason. Still, we believe that considering developers' perceptions makes sense for different reasons. For instance, developers argue that low class cohesion and high class coupling are interrelated, which is reflected in recurring strategies for detecting design smells such as Large Class [18, 2]. Similar reasoning applies to other interrelations, such as high class complexity and large class size [18, 2]. Interrelations reported by the participants may not saturate all possibilities but, still, their perceptions provide meaningful insights into how developers can best be assisted in managing critical attributes in practice. To further improve reliability and allow the replication of our analyses, all artifacts and data, including the complete transcriptions of the focus group sessions, are available in our online material [13].

## IX Study implications

**Implications for Practitioners:** _Existing techniques could help in mitigating or fully addressing critical attributes while evolving features_ - The focus group sessions aimed at confirming and complementing preliminary insights of previous studies [20, 21] on the importance of addressing critical attributes for facilitating software evolution. Our results include the validation of low class cohesion and high class complexity as ultimately relevant from the developers' perspective. Existing techniques for detecting design smells - which are usually combinations of two or more critical attributes - could be useful for assisting developers in analyzing critical attributes in their systems. There is a myriad of options in the literature for this particular purpose [1, 3, 19, 34]. In addition, the literature on anomalous metric values and critical attributes is diverse and comprehensive [14, 15, 5, 2]. Still, our Background Form in particular (Section IV-B) reminded us that developers may not be sufficiently aware of the techniques already proposed for supporting the analysis of critical attributes in practical settings. Raising such awareness is fundamental for developers to discuss and manage critical attributes, especially in cases of short time to deliver products - a recurring issue (cf. Table III and Table IV).

**Implications for Researchers:** _Recommender systems should incorporate mechanisms for driving changes that also consider practical developer intents, rather than the pure enhancement of code structures_ - Recommender systems, e.g., [3, 34], assist developers in enhancing code structures [34] and evolving software architectures [3]. In both cases, the traditional refactorings of Fowler's Refactoring book [18] are employed to facilitate software development in general. These tools rely on the assumption that code structures and designs free of degradation symptoms are favorable to adding or enhancing features.
This assumption is not incorrect _per se_. However, the ideal code structure and design suggested by these tools may be either unnecessary or too costly to achieve. Our study results suggest that, during software evolution, developers tend to concentrate effort on applying only those changes necessary to achieve their major intent. While existing recommender systems typically suggest dozens of changes at once, their practical adoption sounds unrealistic at times. This observation is especially valid when the developer wants to add or enhance features very locally in the code. Our results could help researchers in re-designing tools that address degradation symptoms - including critical attributes - with fewer changes, focused on problems that actually affect the tasks of evolving features. Tool designers could incorporate our insights on the practical relevance of critical attributes, as well as their interrelations, to assist disciplined refactorings aimed at mitigating or fully addressing critical attributes.

## X Final remarks

We conducted a qualitative case study on the perception of developers from two different development teams regarding the relevance of critical internal quality attributes when evolving software features. We also elicited and conducted a thematic analysis of the reasons why developers find each critical attribute relevant. We reveal some interrelations of critical attributes as perceived by the developers. In particular, we found that low class cohesion and high class complexity are perceived as highly relevant and that they are interrelated with other critical attributes. Refactoring tools such as [3, 34] optimize multiple critical attributes at once, while we found that not all critical attributes are equally relevant to developers. We encourage considering the perceived relevance of critical attributes while designing tools as a means to better meet developers' expectations. While the thematic analysis of the two selected cases provided valuable insights into the perceived relevance of critical internal quality attributes when evolving software features, there are inherent limitations to this first qualitative industry case study on the topic, and we plan to replicate our study. In particular, we aim at higher diversity and representativeness [35], e.g., by also recruiting participants working on projects implemented in other programming languages.
2308.07804
Quantum and Classical Combinatorial Optimizations Applied to Lattice-Based Factorization
The availability of working quantum computers has led to several proposals and claims of quantum advantage. In 2023, this has included claims that quantum computers can successfully factor large integers, by optimizing the search for nearby integers whose prime factors are all small. This paper demonstrates that the hope of factoring numbers of commercial significance using these methods is unfounded. Mathematically, this is because the density of smooth numbers (numbers all of whose prime factors are small) decays exponentially as n grows. Our experimental reproductions and analysis show that lattice-based factoring does not scale successfully to larger numbers, that the proposed quantum enhancements do not alter this conclusion, and that other simpler classical optimization heuristics perform much better for lattice-based factoring. However, many topics in this area have interesting applications and mathematical challenges, independently of factoring itself. We consider particular cases of the CVP, and opportunities for applying quantum techniques to other parts of the factorization pipeline, including the solution of linear equations modulo 2. Though the goal of factoring 1000-bit numbers is still out-of-reach, the combinatoric landscape is promising, and warrants further research with more circumspect objectives.
Willie Aboumrad, Dominic Widdows, Ananth Kaushik
2023-08-15T14:31:25Z
http://arxiv.org/abs/2308.07804v1
# Quantum and Classical Combinatorial Optimizations Applied to Lattice-Based Factorization

###### Abstract

The availability of working quantum computers has led to several proposals and claims of quantum advantage. In 2023, this has included claims that quantum computers can successfully factor large integers, by optimizing the search for nearby smooth integers (numbers all of whose prime factors are small). This paper demonstrates that the hope of factoring numbers of commercial significance using these methods is unfounded. Mathematically, this is because the density of smooth numbers decays exponentially as \(n\) grows, and numerical evidence suggests lattice-based methods are not particularly well-suited for finding these. Our experimental reproductions and analysis show that lattice-based factoring does not scale successfully to larger numbers, that the proposed quantum enhancements do not alter this conclusion, and that other simpler classical optimization heuristics perform much better for lattice-based factoring. However, many topics in this area have interesting applications and mathematical challenges, independently of factoring itself. We consider particular cases of the CVP, and opportunities for applying quantum techniques to other parts of the factorization pipeline, including the solution of linear equations modulo 2. Though the goal of factoring 1000-bit numbers is still out-of-reach, the combinatoric landscape is promising, and warrants further research with more circumspect objectives.

## 1 Introduction and Outline

The story of lattice-based factoring goes back to the 1990s, when Schnorr published work outlining the possibilities (Schnorr, 1990). This work was published in a peer-reviewed journal, and it led to interesting further work, including demonstrating that the Shortest Vector Problem (SVP) is NP-hard. However, Schnorr's method did not lead to prime factorization in practice. In 2021, Schnorr (2021) claimed to have studied a variant of the method in enough detail to assert that it would be able to factorize integers up to 2048 bits, thus cracking the current RSA encryption protocol. This work has not been published in peer-reviewed venues, but the claim remains in the preprint version. It was built on by Yan et al. (2022) and Hegade and Solano (2023) to propose quantum versions.

This article shows that neither Schnorr's factorization method nor the proposed quantum improvements perform effectively enough to factorize numbers above about \(80\) bits (on current hardware), and there is no reason to believe that these approaches will provide the most effective methods for factorizing larger numbers. Indeed, we provide a purely classical heuristic that outperforms the claimed quantum advantage, but is still not nearly efficient enough to compete with more standard sieving methods. Thus we conclude that factoring remains an exponentially hard problem using Schnorr's method, and given the status of current hardware, that systems based on the assumption that factoring is a difficult problem (such as RSA encryption) are not currently at risk.

The article is laid out as follows. Section 2 gives a background on integer factoring, enough to explain Schnorr's lattice-based method in Section 3. Section 4 explains the proposed quantum optimizations, and Section 5 describes our experiments and results.
The paper finishes with a brief summary of other attempts to optimize the factorization process, and other opportunities for quantum advantage in this area, including combinatoric challenges such as the knapsack problem, and the solution of linear equations modulo 2.

## 2 Factorization Basics -- Congruence of Squares

Most modern factorization methods are based on Dixon's method (Dixon, 1981), which itself goes back to Fermat's method. The following observations underlie these methods. Say we seek to factorize an integer \(n\). Then if we obtain \(x,y\), with \(x\not\equiv\pm y\mod n\), satisfying

\[x^{2}\equiv y^{2}\mod n, \tag{1}\]

both \(\gcd(x-y,n)\) and \(\gcd(x+y,n)\) are non-trivial factors of \(n\), which may be computed efficiently using Euclid's algorithm.1 For instance, suppose \(n=1649\), and note that \(x=1763\) and \(y=80\) satisfy

Footnote 1: See Euclid (ed. Joyce) [1996, Bk VII, Prop 2]

\[x^{2}\equiv 1453\equiv y^{2}\mod n.\]

Clearly \(x\not\equiv\pm y\mod n\), so

\[\gcd(x-y,n)=\gcd(1683,1649)=17,\]

and

\[\gcd(x+y,n)=\gcd(1843,1649)=97\]

are non-trivial factors of \(n\). Thus factorization reduces to constructing a solution to congruence (1) satisfying \(x\not\equiv\pm y\mod n\); in other words, it reduces to finding an _interesting solution_ to (1) (Pomerance, 2008). There are many _uninteresting_ solutions, i.e., those that also satisfy \(x\equiv\pm y\mod n\), like \((x,x)\) and \((x,-x)\) for any \(x\in[0,n)\), that do not lead to factorizations of \(n\). Factorization methods cannot discern interesting solutions to (1) a priori, but in practice this is not a significant impediment because the probability that an arbitrary solution to (1) is interesting is at least \(50\%\): for details, review the simple enumeration argument in Pomerance (2008).

Even Shor's quantum factorization algorithm (Shor, 1994) finds factors by constructing interesting solutions to (1). Shor's method proceeds as follows. To begin, it randomly selects an integer \(a\in(1,n)\) coprime to \(n\) and then uses a quantum algorithm, based on the quantum Fourier transform, as an efficient order-finding subroutine (Nielsen and Chuang, 2002, §§A4.3 and 5.3) to compute the smallest integer \(r\) such that

\[a^{r}\equiv 1\mod n. \tag{2}\]

If the _order_ \(r\) of \(a\) is odd, the algorithm draws a different starting value and tries again until it finds an integer with even order.2 Having produced a solution to congruence (2) with \(r\) even, Shor's algorithm obtains a solution to (1) by setting \(x=a^{r/2}\) and \(y=1\). This solution is interesting only if \(a^{r/2}\not\equiv-1\mod n\), or equivalently if \(r\equiv 2\mod 4\), and Shor's algorithm continues randomly selecting \(a\in(1,n)\) until it finds an interesting solution to (1) via \(x=a^{r/2}\) and \(y=1\).

Footnote 2: The probability that \(r\) is even is greater than \(1-\frac{1}{2^{m}}\), where \(m\) is the number of distinct prime factors of \(n\) (Nielsen and Chuang, 2002, Theorem 5.3).

Although factorization methods differ in how they obtain solutions to (1), the construction usually involves two stages: data collection and processing.

### Data Collection and Smooth Numbers

In the data collection phase, factoring methods search for certain _smooth integers_. A number is said to be \(p\)-smooth if it has no prime factors larger than \(p\). If the first \(m\) prime numbers are written as \(\{p_{1},p_{2},\ldots,p_{m}\}\), then any \(p_{m}\)-smooth number can be written as a product \(\Pi_{i=1}^{m}p_{i}^{e_{i}}\).
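As an aside, checking whether a given integer is \(p_{m}\)-smooth is plain trial division over the factor basis. The following minimal Python sketch is our own illustration (the helper name `smooth_exponents` is ours, not taken from any of the implementations discussed in this paper):

```python
def smooth_exponents(s, factor_basis):
    """Trial-divide s over the factor basis.  Returns the exponent
    vector [e_1, ..., e_m] if s is smooth over the basis, else None."""
    s = abs(s)
    if s == 0:
        return None
    exponents = []
    for p in factor_basis:
        e = 0
        while s % p == 0:
            s //= p
            e += 1
        exponents.append(e)
    return exponents if s == 1 else None

# Example: 560 = 2^4 * 5 * 7 is 7-smooth, while 561 = 3 * 11 * 17 is not.
assert smooth_exponents(560, [2, 3, 5, 7]) == [4, 0, 1, 1]
assert smooth_exponents(561, [2, 3, 5, 7]) is None
```

The cost of this test grows with the size of the factor basis, which is one reason the choice of \(m\) matters in practice.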
This pre-determined set of "small" primes is often called the _factor basis_. The data collection phase seeks \(m+1\) \(p_{m}\)-smooth numbers. We note that the size \(m\) of the factor basis is typically a hyperparameter that must be tuned.

The data collection typically involves searching a large space. For instance, the Quadratic Sieve collects smooth integers by sieving a large interval centered around \(\sqrt{n}\); the General Number Field Sieve, currently the fastest known method for factorizing integers with more than \(100\) digits, refines the search by exploiting the theory of algebraic number fields (Pomerance, 1996). The collection step accounts for the bulk of the computational load: in the record-breaking RSA-\(250\) factorization completed in February \(2020\), the sieving step accounted for roughly \(2,450\) of the \(2,700\) core-years (using a single Intel Xeon Gold \(6130\) CPU running a \(2.1\) GHz clock rate as a reference) (Boudot et al., 2022).

### Processing Relations from Smooth Numbers

In the processing step, the methods combine several congruences to produce a solution to (1). The technique goes back to the work of Maurice Kraitchik in the 1920s, as described by Pomerance (1996). For example, to factor 2041, the smallest number \(x\) such that \(x^{2}>2041\) is 46, and we have

\[46^{2}\equiv 75,\quad 47^{2}\equiv 168,\quad 49^{2}\equiv 360,\quad 51^{2}\equiv 560\quad(\text{all mod }2041).\]

These remainders all have prime factors no greater than 7:

\[75=3\cdot 5^{2},\quad 168=2^{3}\cdot 3\cdot 7,\quad 360=2^{3}\cdot 3^{2}\cdot 5,\quad 560=2^{4}\cdot 5\cdot 7.\]

It is easy to multiply these by adding the exponents, and to deduce that

\[(46\cdot 47\cdot 49\cdot 51)^{2}\equiv 2^{10}\cdot 3^{4}\cdot 5^{4}\cdot 7^{2}\pmod{2041}.\]

From here, we deduce that \(x=46\cdot 47\cdot 49\cdot 51\equiv 311\pmod{2041}\) and \(y=2^{5}\cdot 3^{2}\cdot 5^{2}\cdot 7\equiv 1416\pmod{2041}\) give a solution to (1). Lastly we compute \(\gcd(1416-311,2041)=13\), and indeed \(2041=13\times 157\).

Finding relations where the remainder modulo \(n\) is smooth is crucial, because it allows for the use of linear algebraic techniques via _index calculus_, as the following example illustrates. Suppose we fix \(P=\{p_{1},\ldots,p_{m}\}\) as the factor basis, with \(p_{j}\) denoting the \(j^{\text{th}}\) prime. In addition, suppose that in the data collection step our method found \(m+1\) integers \(x_{i}\), for \(i=1,\ldots,m+1\), such that \(x_{i}^{2}\mod n\) is \(p_{m}\)-smooth. This means there exist \(\vec{e}_{i}\in\mathbb{Z}_{\geq 0}^{m}\) such that

\[x_{i}^{2}\equiv p_{1}^{e_{i1}}\cdots p_{m}^{e_{im}}\mod n. \tag{3}\]

Using multi-index notation, we may write \(x_{i}^{2}\equiv p^{\vec{e}_{i}}\mod n\). Note that (3) implies

\[x_{i}^{2}\cdot x_{k}^{2}\equiv\prod_{j=1}^{m}p_{j}^{e_{ij}+e_{kj}}\mod n,\]

or equivalently \(x_{i}^{2}\cdot x_{k}^{2}\equiv p^{\vec{e}_{i}+\vec{e}_{k}}\mod n\). Thus for any subset \(\mathcal{I}\subseteq\{1,\ldots,m+1\}\), the product \(\prod_{i\in\mathcal{I}}x_{i}^{2}=\big(\prod_{i\in\mathcal{I}}x_{i}\big)^{2}\) is a square that is equivalent to \(p^{\sum_{i\in\mathcal{I}}\vec{e}_{i}}\) modulo \(n\). Therefore if \(\mathcal{I}\) is such that \(p^{\sum_{i\in\mathcal{I}}\vec{e}_{i}}\) is also a square, we obtain a solution to (1) via

\[x=\prod_{i\in\mathcal{I}}x_{i}\quad\text{and}\quad y=p^{\frac{1}{2}\sum_{i\in\mathcal{I}}\vec{e}_{i}}.\]

We may construct such a subset \(\mathcal{I}\) by solving a linear system.
Since

\[p^{\sum_{i\in\mathcal{I}}\vec{e}_{i}}=p^{\sum_{i=1}^{m+1}z_{i}\vec{e}_{i}}\]

with \(z_{i}=1\) if \(i\in\mathcal{I}\) and \(z_{i}=0\) otherwise, \(p^{\sum_{i=1}^{m+1}z_{i}\vec{e}_{i}}\) is a square if and only if each coordinate of \(\sum_{i=1}^{m+1}z_{i}\vec{e}_{i}\) is even. Thus we may construct \(\mathcal{I}\) by finding \(z_{1},\ldots,z_{m+1}\in\mathbb{F}_{2}\) such that

\[\sum_{i=1}^{m+1}z_{i}\vec{e}_{i}\equiv 0\mod 2. \tag{4}\]

In other words, we may obtain the desired index set \(\mathcal{I}\) by reducing \(E=\big[\vec{e}_{1}\ \cdots\ \vec{e}_{m+1}\big]\) modulo \(2\) and solving the homogeneous system \(Ez=0\). Note that a non-trivial solution is guaranteed to exist when \(E\) has more columns than rows, i.e., when we have collected at least one more smooth number than there are primes in the factor basis.

There is a clear algorithmic trade-off here: a smaller factor basis \(\{p_{i}:i=1\ldots m\}\) reduces both the number of relations needed and the complexity of deducing a solution to (1) from these relations, but it makes it harder to find candidate relations where the remainder mod \(n\) is \(p_{m}\)-smooth.

## 3 Schnorr's Lattice-Based Factoring and the Closest Vector Problem

We are now in a position to explain in detail the data collection and processing subroutines defined by Schnorr's method. In Schnorr (1990), Schnorr claimed to have devised a method that can produce smooth integers useful for factorization, by computing approximate solutions to instances of the Closest Vector Problem (CVP).

### From Factoring to the Closest Vector Problem

The connection between integer factoring and the CVP lies in the data collection phase. We use logarithms to translate the problem of finding small integers whose product is close to \(n\) into the problem of finding combinations of small integers with logarithms that sum to the logarithm of \(n\). The linearity of this formulation means that the set of possible combinations can be described as a lattice of vectors (whose dimension is given by the size of the factor basis). (In this context, a _lattice_ is the set of all integer linear combinations of a given set of basis vectors, so for a basis \(B=\{b_{1},\ldots,b_{m}\}\), the _lattice generated by \(B\)_ is \(\{\lambda_{1}b_{1}+\ldots+\lambda_{m}b_{m}:\lambda_{i}\in\mathbb{Z}\}\).) The search for useful approximate factoring relations is then formulated as the search for vectors in this lattice that are close to a target vector determined by \(\ln n\). In essence, this reduces the data collection step of the prototypical factorization algorithm to a sequence of combinatorial optimization problems.

Schnorr's method is based on the following observations. Again, suppose we seek to factor an integer \(n\), set \(p_{0}=-1\), and let \(p_{1},\ldots,p_{m}\) denote the first \(m\) primes. In addition, suppose we obtain integers \(e_{j}\) such that

\[\epsilon\coloneqq\bigg{|}\sum_{j=1}^{m}e_{j}\ln p_{j}-\ln n\bigg{|}\approx 0. \tag{5}\]

Thus if we set

\[u=\prod_{e_{j}\geq 0}p_{j}^{e_{j}}\quad\text{and}\quad v=\prod_{e_{j}<0}p_{j}^{-e_{j}}, \tag{6}\]

we obtain

\[\bigg{|}\ln\left(\frac{u}{vn}\right)\bigg{|}=\epsilon,\quad\text{which means}\quad u-vn=vn(e^{\epsilon}-1)\approx\epsilon vn.\]

The last approximation follows from Taylor's theorem. Schnorr's claim is that since \(\epsilon\) is small, \(u-vn\) is also small and therefore it is _likely_ to be \(p_{m}\)-smooth.
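Since the solution of linear equations modulo 2 is one of the subproblems we highlight as interesting in its own right, it is worth seeing how compact the classical version of this processing step is. The sketch below is our own illustration (not code from Schnorr's or Yan et al.'s pipelines): it finds a non-trivial nullspace vector of the exponent matrix \(E\) over \(\mathbb{F}_{2}\) by Gaussian elimination, and reproduces the choice of relations in the 2041 example of Section 2.2.

```python
def nullspace_vector_mod2(E):
    """Return a non-trivial 0/1 vector z with E z = 0 (mod 2).
    E is given as a list of rows (one row per prime in the factor
    basis, one column per relation), reduced here modulo 2."""
    rows = [[x % 2 for x in row] for row in E]
    ncols = len(rows[0])
    pivots = {}  # pivot column -> index of the row holding that pivot
    r = 0
    for c in range(ncols):
        pr = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pr is None:
            continue  # column c is free
        rows[r], rows[pr] = rows[pr], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
        pivots[c] = r
        r += 1
    # A free column must exist whenever there are more relations than primes.
    free = next(c for c in range(ncols) if c not in pivots)
    z = [0] * ncols
    z[free] = 1
    for c, pr in pivots.items():
        z[c] = rows[pr][free]  # back-substitution over F_2
    return z

# Exponent matrix for the relations 46^2, 47^2, 49^2, 51^2 (mod 2041)
# over the factor basis [2, 3, 5, 7], as in Section 2.2:
E = [[0, 3, 3, 4],   # exponents of 2
     [1, 1, 2, 0],   # exponents of 3
     [2, 0, 1, 1],   # exponents of 5
     [0, 1, 0, 1]]   # exponents of 7
assert nullspace_vector_mod2(E) == [1, 1, 1, 1]  # multiply all four relations
```

At scale this processing step is cheap compared with collection; the hard part is supplying the smooth pairs \((u,v)\) in the first place, which is exactly what Schnorr's lattice machinery attempts.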
Such pairs of integers play a key role: Schnorr calls a pair \((u,v)\) such that both \(u\) and \(u-vn\) are \(p_{m}\)-smooth a _fac-relation_; in Yan et al. (2022) fac-relations are known as _smooth relation pairs_.

Schnorr's method produces the coefficients in (5) by approximating solutions to the CVP, in order to obtain smooth pairs \((u,v)\) as in (6). Many different CVP instances are considered, by randomly assigning different weights to different elements of the factor basis. In particular: let \(B_{m,c}\in\mathbb{Z}^{(m+1)\times m}\) denote the _prime lattice_ generated by the columns of

\[B_{m,c}=\begin{bmatrix}f(1)&0&\cdots&0\\ 0&f(2)&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&f(m)\\ \lceil 10^{c}\ln(p_{1})\rceil&\lceil 10^{c}\ln(p_{2})\rceil&\cdots&\lceil 10^{c}\ln(p_{m})\rceil\end{bmatrix}, \tag{7}\]

where \(c\) is a tunable parameter, and \(f(j)=\log p_{\sigma(j)}\), with \(\sigma\) denoting a random permutation on \(m\) letters. Now let \(t\in\mathbb{Z}^{m+1}\) denote the _target vector_

\[t=\begin{bmatrix}0\\ \vdots\\ 0\\ \lceil 10^{c}\ln n\rceil\end{bmatrix}.\]

The point is that if \(e\in\mathbb{Z}^{m}\) is such that \(B_{m,c}e\) is the prime lattice vector closest to \(t\), the difference in (5) is as small as possible. This gives the corresponding \(u-vn\) a better chance of being smooth than most other candidates corresponding to different prime lattice vectors.

It is difficult to quantify the smoothness probability of \(u-vn\) as a function of its defining prime lattice vector, because it depends on both the estimation error \(\epsilon\) and the smooth number \(v\), and these are interrelated. Regardless, we note that amongst prime lattice points that are equidistant to the target vector \(t\), the smoothness probability of \(u-vn\) is largest for the pair with smallest \(v\); in other words, prime lattice points with smaller negative coordinates yield higher quality solutions.

While Schnorr's original work derives some performance guarantees, in the form of \(\ell_{\infty}\)- and \(\ell_{1}\)-norm bounds, they are asymptotic and rely on norms that are irrelevant to the CVP heuristic algorithms employed in practice. In particular, writing \(S(u,v)=u-vn\), Schnorr proves that if we find \(e\in\mathbb{Z}^{m}\) satisfying

\[||B_{m,c}e-t||_{\infty}\leq\ln p_{m},\quad\text{and}\quad||B_{m,c}e-t||_{1}\leq(2c-1)\ln n+o(\ln p_{m}),\]

then we have that \(|S(u,v)|=p_{m}^{o(1)}\), with the asymptotic for \(n\to\infty\) (Schnorr, 1990, Lemma 2). The conclusion suggests that if we find \(e\) such that \(B_{m,c}e\) is sufficiently close to \(t\), then \(S\) is sufficiently likely to be smooth, because it is a sufficiently small integer.

In summary, Schnorr (1990) argued that if we can solve the CVP, we can use this to find lots of useful fac-relations, and solve the factorization problem. Perhaps the most contentious point is the size of the lattice needed in order to reliably collect smooth relations: numerical evidence from our experiments, described in Section 5, suggests both Schnorr (1990) and Yan et al. (2022) vastly underestimate it. As a guideline, we use number theoretic results developed between the 1930s and 1950s to estimate the density of smooth numbers below \(n\) as \(n\) gets large (de Bruijn, 1951). Concretely, we combine Dickman's function with the Prime Number Theorem to estimate that the proportion of smooth integers below \(n\) available for collection under Yan et al. (2022)'s regime of sublinear lattice dimension is exponentially small, as follows.
To begin, let \(\Psi(n,p)\) denote the number of \(p\)-smooth integers below \(n\). Then a theorem of Dickman guarantees that for fixed \(a\), \(\Psi(n,n^{1/a})\sim n\rho(a)\), with \(\rho\) denoting the Dickman (or Dickman-de Bruijn) function. This is the continuous function satisfying the differential equation

\[x\rho^{\prime}(x)+\rho(x-1)=0 \tag{8}\]

with initial conditions \(\rho(x)=1\) for \(0\leq x\leq 1\). Therefore,

\[\frac{\Psi(n,n^{1/a})}{n}\sim\rho(a);\]

the proportion of \(n^{1/a}\)-smooth integers below \(n\) grows like \(\rho(a)\). Now the Prime Number Theorem estimates that \(p_{m}\sim m\log m\), so it follows that

\[\frac{\Psi(n,p_{m})}{n}\sim\rho\bigg{(}\frac{\log(n)}{\log(m)+\log(\log(m))}\bigg{)},\]

since \(n^{1/a}=m\log(m)\) when \(a=\log(n)/(\log(m)+\log(\log(m)))\). The plot in Figure 1 shows that when \(m=\log(n)/\log(\log(n))\) is sublinear in the bit-length of \(n\), as Yan et al. (2022) suggest, the proportion of smooth integers below \(n\) available for collection is exponentially small.

Figure 1: When the lattice dimension \(m=\log(n)/\log(\log(n))\) is sublinear in the bit-length of \(n\), the proportion of \(p_{m}\)-smooth integers available for collection is exponentially small.

### Vector Lattices and Approximate Solutions to the Closest Vector Problem

Given a lattice, the Shortest Vector Problem (SVP) is to find the shortest nonzero vector in the lattice, as measured by some norm. The Closest Vector Problem (CVP) is to find the closest lattice vector to a given target vector (Figure 2). Both of these problems also have approximate alternatives: for example, to find any vector whose distance is no more than \(1+\varepsilon\) times the minimal possible distance. The problems are closely related, though not identical. Generally, the CVP is considered to be the harder problem, partly because the SVP can be solved for a basis \(B=\{b_{1},\ldots,b_{n}\}\) if a version of the CVP can be solved for each basis vector (Micciancio, 2001; Micciancio and Goldwasser, 2002). By the early 1980s, van Emde Boas (1981) had shown that the CVP is NP-complete, and the SVP is NP-hard with the \(l_{\infty}\) norm. By the late 1990s, many more results were known, including that the SVP with the \(l_{2}\) norm is NP-hard (for randomized reductions, that is, there is a probabilistic Turing machine that reduces any problem in NP to a polynomial number of instances of the SVP) (Ajtai, 1998). As Ajtai (1998) indicates, the potential application to factoring integers, arising from Schnorr (1990) and related work, was one of the motivations for these research efforts.

Particularly important constructions in vector lattices, discovered during the 1980s, include the Lenstra-Lenstra-Lovasz (LLL) lattice basis reduction algorithm, and Babai's nearest plane algorithm. The LLL algorithm reduces a basis to an equivalent basis with short, nearly orthogonal vectors, in time polynomial in the basis length (Lenstra et al., 1982). It has become a standard simplification step, and the algorithm is supported in several software packages. Babai's nearest plane algorithm is explained in Babai (1986). Given an LLL-reduced basis for a lattice, it finds a vector whose distance from the target vector \(t\) is not greater than a factor of \(2(\frac{2}{\sqrt{3}})^{d}\) times the closest possible distance, for a lattice of dimension \(d\).

Let \(b_{1}^{*},\ldots,b_{n}^{*}\) denote an LLL-reduced basis (sorted by increasing norm) for the prime lattice \(B_{n,c}\).
Additionally, let \(G=\begin{bmatrix}g_{1}&\cdots&g_{n}\end{bmatrix}\) denote the matrix obtained by running Gram-Schmidt orthogonalization (without normalizing the columns) on \(b_{1}^{*},\ldots,b_{n}^{*}\). Then Babai's nearest plane algorithm essentially suggests we should calculate the projection of the target vector onto the span of \(G\), round the resulting coefficients to the nearest integer, and use those coefficients to return a linear combination of the LLL-reduced basis. The algorithm computes the coefficients sequentially. First set \(b^{\mathrm{opt}}\gets t\). Then for each \(j\) from \(m\) down to \(1\), compute \(\mu_{j}=\langle b^{\mathrm{opt}},g_{j}\rangle/\langle g_{j},g_{j}\rangle\), let \(c_{j}=\lceil\mu_{j}\rfloor\) denote the nearest integer to \(\mu_{j}\), and update

\[b^{\mathrm{opt}}\gets b^{\mathrm{opt}}-c_{j}b_{j}^{*}. \tag{9}\]

The name 'Babai's nearest _plane_ algorithm' is used because at step \(j\), we take the plane or, more generally, \(j\)-dimensional subspace \(\mathrm{Span}\{g_{1},\ldots,g_{j}\}\), and find the integer \(c_{j}\) such that the distance from \(c_{j}b_{j}^{*}+\mathrm{Span}\{g_{1},\ldots,g_{j-1}\}\) to the target vector \(t\) is minimized. So each step finds a nearest (hyper)plane, and the algorithm's output, the lattice vector \(\sum_{j}c_{j}b_{j}^{*}=t-b^{\mathrm{opt}}\), is close to the target vector \(t\) (the final \(b^{\mathrm{opt}}\) is the small residual).

Key theoretical results and heuristic techniques for working with vector lattice problems were thus developed during the 1980s and 1990s, during the same period that the main factoring approaches we use today were established (cf. Dixon (1981) through Pomerance (1996)). When Schnorr (1990) proposed that this application of the CVP could be the key step to enable polynomial-time prime factorization, only some of the complexity results and bounds that we know today were established. Approximating heuristics for solving the CVP account for most of the computational load in Schnorr's method. The challenge this leaves is to demonstrate whether these heuristics are effective enough, to find close-enough vectors, that generate enough useful fac-relations, to create enough modular equations, so that the processing strategy of Section 2.2 works to find solutions to (1). This challenge is discussed in the next section, and forms the main theme for much of the rest of this paper.

Figure 2: Shortest and Closest Vector Problems. For the SVP (left), the task is to find the shortest non-zero lattice vector (integer linear combination of \(b_{1}\) and \(b_{2}\)). For the CVP (right), the task is to find the lattice vector that is closest to the target blue triangle.

### Does the Schnorr Method Work?

While Schnorr's method can be implemented end-to-end, we have found no evidence that it can be used to factorize numbers that cannot already be factorized much faster using an established sieving technique. When introducing the method, Schnorr (1990) claimed that the reduction of the factoring problem to the search for fac-relations worked in polynomial time, so long as we can make heuristic assumptions about the distribution of smooth integers. Much has been clarified in the theory of lattice problems since: for example, Lemma 2 of Schnorr (1990) is proved for the \(\ell_{1}\) and \(\ell_{\infty}\) norms, which have limited practical bearing, because the most popular CVP heuristics rely on \(\ell_{2}\) measurements. This was before Ajtai (1998) demonstrated the NP-hardness (under random reduction) of the SVP under the \(\ell_{2}\)-norm.
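To ground the discussion, the nearest plane recurrence (9) takes only a few lines of code. The sketch below is our own numpy-based illustration, not code from any of the implementations discussed in this paper; it assumes the input basis has already been LLL-reduced (for instance with SageMath or fpylll), and it returns coordinates with respect to that reduced basis.

```python
import numpy as np

def gram_schmidt(B):
    """Unnormalized Gram-Schmidt orthogonalization of the columns of B."""
    G = B.astype(float)
    for j in range(B.shape[1]):
        for k in range(j):
            G[:, j] -= (G[:, j] @ G[:, k]) / (G[:, k] @ G[:, k]) * G[:, k]
    return G

def babai_nearest_plane(B, t):
    """Approximate the lattice vector closest to t, where the lattice is
    generated by the columns of the (LLL-reduced) basis B, following (9)."""
    G = gram_schmidt(B)
    residual = np.asarray(t, dtype=float).copy()
    coeffs = np.zeros(B.shape[1], dtype=int)
    for j in reversed(range(B.shape[1])):
        mu = (residual @ G[:, j]) / (G[:, j] @ G[:, j])
        coeffs[j] = int(round(mu))
        residual -= coeffs[j] * B[:, j]
    return B @ coeffs, coeffs  # approximate closest vector and its coordinates

# Tiny example: the target (4.9, 5.2) rounds to the lattice point (5, 5).
B = np.array([[2, 1], [1, 3]])
vec, coeffs = babai_nearest_plane(B, [4.9, 5.2])  # vec == array([5, 5])
```

Converting the coordinates back to the original \(B_{m,c}\) basis requires the unimodular transformation that LLL implementations typically also return.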
Schnorr has proposed extensions to these proposals, based on optimizations including pruning (Schnorr, 2013), permutation, and primal-dual reduction (Schnorr, 2021), though these are invited papers and preprints, not peer-reviewed publications.4 Claims that the methods lead to fast factoring algorithms are based on complexity analyses, and involve delicate considerations of many of the approximations and bounds in results on lattices. In addition, Schnorr (2021) drew publicity with the claim that "this destroys the RSA cryptosystem", but these claims have been disputed and largely dismissed, not in refereed publications, but in online forums.5

Footnote 4: An earlier preprint with some of the material in Schnorr (2021) was retracted; we refer to the newer preprint, “Fast Factoring Integers by SVP Algorithms, corrected”.

Footnote 5: Online discussions include _Does Schnorr’s 2021 factoring method show that the RSA cryptosystem is not secure?_ at [https://crypto.stackexchange.com/questions/88582/](https://crypto.stackexchange.com/questions/88582/). All the answers agree that it does not, some based on detailed algorithmic analyses, and some emphasizing that it’s so easy to demonstrate when a factorization algorithm works for large numbers, by presenting their factors — in the absence of which, we must assume that the algorithm doesn’t work for large numbers.

Code to analyze the extraction of suitable fac-relations at large scales has been shared by Ducas (2023), and works using the SageMath platform (The Sage Developers, 2023). The motivation here is partly to investigate the complexity of the underlying lattice operations and the accuracy of approximations, furthering earlier research (Ducas, 2018). His attempts to produce sufficient fac-relations to enable factorization demonstrate that the algorithmic analysis of these problems is fascinating and more difficult than we might intuitively expect, but that the approach does not perform well enough to compete with standard sieving methods. In our own experiments (described below), we have successfully implemented an end-to-end factorization workflow using Schnorr's methods, but it has been unable to factorize numbers larger than \(72\) bits when running overnight, whereas the quadratic sieve method has factorized \(110\)-bit and larger numbers in a matter of moments. The challenge is not only in finding fac-relations, but also in finding fac-relations with enough variety to solve the resulting system of equations: it happens that many different random lattice diagonal permutations result in the same fac-relation.

Nonetheless, the mathematical and computational results reviewed so far left a slight possibility, at least in principle. Is there some approximate solution to the CVP that is closer than the Babai nearest plane approximation, but does not suffer from the NP-hardness of solving the full CVP, so that the search for fac-relations is both tractable and fruitful enough to solve the factoring problem? This is the hope that encouraged the quantum approaches to which we turn next.

## 4 Proposed Quantum Optimizations

The claims of Schnorr (2021) have motivated attempts to factorize integers using quantum computers, including those reported in the preprints of Yan et al. (2022) and Hegade and Solano (2023).
While these works have attracted some attention, the same caveats apply as with Schnorr (2021): the methods have not demonstrated prime factors at a challenging scale, the works are not peer-reviewed, and experts in the field have expressed doubts and concerns (Aaronson, 2023). The main contribution in Yan et al. (2022) is to use the Quantum Approximate Optimization Algorithm (QAOA, Farhi et al. (2014)) to refine the CVP approximations produced by Babai's method, in the hope that this refinement leads to more useful fac-relations. In essence, QAOA serves as a sophisticated rounding mechanism: instead of greedily rounding each \(\mu_{j}\) to the nearest integer in isolation in Babai's algorithm, we take a holistic view and choose \(x_{j}\in\{0,1\}\) to minimize \[\left\|t-\left(b^{\mathrm{op}}+\sum_{j=1}^{m}x_{j}\kappa_{j}b_{j}^{*}\right) \right\|_{2}^{2},\] with \(b^{\mathrm{op}}\) denoting Babai's CVP approximation, \(\kappa_{j}=\mathrm{sign}(\mu_{j}-c_{j})\), and \(b_{j}^{*}\) denoting the \(j\)th column of the LLL-reduction of \(B_{m,c}\). We note that \(\kappa\) is known as the _coding vector_ in Yan et al. (2022). Choosing a minimizing \(x\in\{0,1\}^{m}\) is tantamount to solving a Quadratic Unconstrained Binary Optimization (QUBO) program, which Yan et al. (2022) interpret as a minimum-energy eigenstate problem via the Ising map. In particular, if we let \(\sigma_{j}^{z}\) denote the Pauli-\(Z\) gate acting on the \(j\)th qubit and set \(Z_{j}=\frac{1}{2}(I-\sigma_{j}^{z})\), the problem Hamiltonian is given by \[H=\sum_{i=1}^{m+1}\left(t_{i}-b_{i}^{\mathrm{op}}-\sum_{j=1}^{m}\kappa_{j}b_{ ij}^{*}Z_{j}\right)^{2}. \tag{10}\] Since \(H\) is diagonal with respect to the computational basis, we see that each eigenstate of \(H\) corresponds to one of the \(2^{m}\) possible roundings. In turn, this means that every \(H\)-eigenvector \(|\psi\rangle\) with lower energy than \(|0\rangle\) produces an enhanced CVP solution via \[b^{|\psi\rangle}\coloneqq b^{\mathrm{op}}+\sum_{j=1}^{m}\kappa_{j}\psi_{j}b_{ j}^{*}. \tag{11}\] The hypothesis in Yan et al. (2022) is that lower-energy eigenstates are more likely to yield smooth relation pairs, via Relation 6 with the lattice coordinates \(e\) defined by \[b^{|\psi\rangle}=B_{m,c}e. \tag{12}\] They explain in detail some of the steps to factorize an \(11\)-bit, a \(26\)-bit, and a \(48\)-bit number, using \(3\), \(5\), and \(10\) qubits respectively. The number of qubits here corresponds to the dimension of the lattice used. Following Yan et al. (2022), a preprint by Hegade and Solano (2023) has also been published, claiming that a digitized-counterdiabatic quantum computing (DCQC) algorithm outperforms QAOA at the task of refining the Babai approximations to the CVP. Hegade and Solano (2023) present results showing that the DCQC method retrieves the lowest energy state of the corresponding Hamiltonian, with greater probability than a corresponding QAOA method. Their report is much shorter than that of Yan et al. (2022), and only compares the two quantum approaches, without analyzing how this affects the rest of the factoring pipeline. For this reason, our results will be compared with those of Yan et al. (2022), which are much more comprehensively explained. Our results will also support the claim that the improvement claimed by Hegade and Solano (2023) would not be enough to materially alter our conclusions about whether any such variants will enable Schnorr's factoring method to work end-to-end on large numbers. 
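Because the Hamiltonian (10) is diagonal in the computational basis, the quantum subroutine is only ever asked to minimize a classical function of \(m\) bits, and at the lattice dimensions demonstrated so far (at most \(10\) qubits, i.e., \(2^{10}=1024\) possible roundings) the optimum can simply be enumerated. The sketch below is our own illustration of that classical baseline for the refinement in (11); it is not code from either preprint.

```python
import itertools
import numpy as np

def best_rounding(Bstar, t, b_op, kappa):
    """Enumerate all 2^m sign-adjusted roundings of (11) and return the
    candidate minimizing the distance to the target t.
    Bstar: columns of the LLL-reduced basis; b_op: Babai's approximation;
    kappa: the +/-1 coding vector.  Feasible only for small m."""
    m = Bstar.shape[1]
    best_cand, best_dist = b_op, np.linalg.norm(t - b_op)
    for bits in itertools.product((0, 1), repeat=m):
        cand = b_op + Bstar @ (kappa * np.array(bits))
        dist = np.linalg.norm(t - cand)
        if dist < best_dist:
            best_cand, best_dist = cand, dist
    return best_cand, best_dist
```

Every eigenstate that QAOA or DCQC can propose corresponds to one `bits` assignment here, so for the demonstrations published so far such a loop already attains the minimum energy of (10). The quantum methods could only become interesting at dimensions where exhaustive enumeration is infeasible, which is the regime our experiments in the next section probe.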
## 5 Factoring Experiments and Results

In this section we discuss the results of various experiments we ran using our own implementation of the methods of Schnorr (2013) and Yan et al. (2022). In addition, we developed and implemented alternative lattice-based factoring heuristics that can be understood as variations of Schnorr's original method, and bring extra insight on these claims and results.

The results are summarized in Tables 1, 2 and 3. Each row is an average from factorizing 10 randomly chosen semiprime numbers with the given bit-length. The key conclusion is that, while the quantum optimization (QAOA) obtains more smooth-relation pairs for each lattice tested than the Schnorr method itself, the simple classical optimization (Local Search) produces an even greater yield using a much easier and faster alternative; the discrepancy between the original method and the QAOA optimization is mostly explained by the fact that the latter tests multiple candidates per lattice, while in the former we only check the candidate produced by Babai's nearest plane approximation. In particular, compare the number of smooth candidates each method needed to test in order to find a factor. It is important to note that the number of lattices required for factorization scales exponentially with the input's bit-length, as the graph in Figure 3 suggests: this explains why lattice-based factoring requires exponential time. The rest of this section analyzes these results and the heuristic alternatives in much more detail.

### Simulation Parameters, Variants, and Detailed Results

Three configurable hyper-parameters are common to all the heuristics we present below: the lattice dimension \(m\) and the so-called "precision" parameter \(c\) used to define the prime lattice \(B_{m,c}\), as in Relation 7; and the length \(M\) of the factor basis used to collect \(p_{M}\)-smooth candidates. We note that while \(M=m\) in Schnorr's original proposal, Yan et al. (2022) propose using \(M>m\) to increase the probability that any given relation pair is smooth. Of course this increases the number of fac-relations that need to be discovered in the collection step, but Yan et al. (2022) claim they can reduce the overall computational load with an appropriate choice of \(M>m\). However, they provide no guidance on how to select an appropriate \(M\) beyond the three concrete examples in Table \((S5)\), which are not accompanied by any justification. Hence, the lattice dimension and number of qubits we used do not always match those used by Yan et al. (2022) exactly. Instead, our experiments and results in Table 2 were configured so that a repeatable automated process was used for all bit-lengths, so that the larger trends are more reliable. Our implementation makes the following choices by default: given \(n\), we set

\[m=\left\lceil\frac{3}{2}\frac{\log(n)}{\log(\log(n))}\right\rceil,\quad c=m/4,\quad\text{and}\quad M=m^{2}. \tag{13}\]

Numerical evidence collected from various factoring experiments, much like the ones described below, guided our choice of hyper-parameters, but we make no claims about their optimality. Notice that our default lattice dimension \(m\) is sublinear in the bit-length of \(n\), as proposed by Yan et al. (2022). As an extra check, we ran a 48-bit factorization attempt using a lattice dimension of 10, to follow as closely as possible the method of Yan et al. (2022) for this bit-length. It required more than 95000 lattices to be searched, each one of which would be a QAOA optimization job.
Yan et al. (2022) gloss over the scale of this problem, saying just "The calculations of other sr-pairs are similar and will be obtained by numerical method." This difference is crucial: they did not factor a 48-bit number on a quantum computer as claimed; instead, they performed a tiny part of a massively parallel process on a quantum computer.

### Heuristic Comparisons and the Availability of Smooth Relations

A key challenge in predicting the behavior of these methods at scale is in understanding the relationships between the distribution of smooth numbers and relations, lattice vectors, and shortest vector lengths. This section describes heuristic methods and experiments that were developed to shed light on these questions.

Figure 4 compares the performance of six different heuristics used to factor the \(30\)-bit integer \(n=612742391\) by approximating solutions to the CVP on lattices of dimension \(11\) and using \(11^{2}=121\) primes in the factor basis (which thus goes up to \(p_{121}=661\)). Each plot illustrates the candidates \(S=u-vn\) tested for smoothness: there is a green dot for each candidate that turned out to be \(p_{121}\)-smooth, and a red one for each candidate that was not. The dashed purple line indicates the bit-length of the input \(n\), while the dashed yellow line indicates the average bit-length of the candidates for smoothness tested by the Quadratic Sieve. Notice that in each case the green dots tend to cluster around the lower-left region of the plot.

Figure 3: The number of lattices of sublinear dimension needed to factor an \(n\)-bit integer scales exponentially.

\begin{table} \begin{tabular}{r r r r r r r r} \hline \hline Input bit & Lattice & Lattices & Candidates & Total & Unique & Unique SR & Time (s) \\ length & dimension & tested & extracted & SR pairs & SR pairs & per lattice \% & \\ \hline 15 & 7 & 91.1 & 1287.2 & 345.6 & 190.1 & 208.66 & 4.66 \\ 20 & 8 & 98.9 & 1486.8 & 190.0 & 139.9 & 141.48 & 4.89 \\ 25 & 9 & 179.9 & 2796.8 & 117.0 & 106.7 & 60.63 & 11.23 \\ 30 & 11 & 469.9 & 7358.8 & 122.7 & 112.5 & 24.09 & 30.83 \\ 35 & 12 & 1809.6 & 28494.7 & 142.7 & 130.3 & 7.26 & 133.85 \\ 40 & 13 & 6200 & 98119.4 & 179.3 & 163.0 & 2.64 & 409.73 \\ 45 & 14 & 25000 & 397176.9 & 215.8 & 186.5 & 0.753 & 2311.47 \\ 50 & 15 & 88340 & 1406730 & 253.8 & 215.5 & 0.245 & 9495.31 \\ \hline \hline \end{tabular} \end{table}
Table 2: Yan et al. QAOA method: Performance statistics for various bit-lengths

\begin{table} \begin{tabular}{r r r r r r r r} \hline \hline Input bit & Lattice & Lattices & Candidates & Total & Unique & Unique SR & Time (s) \\ length & dimension & tested & extracted & SR pairs & SR pairs & per lattice \% & \\ \hline 15 & 7 & 92.3 & 1476.8 & 507.2 & 194.5 & 211.13 & 0.73 \\ 20 & 8 & 97.7 & 1563.2 & 286.7 & 152.2 & 155.85 & 0.66 \\ 25 & 9 & 99.9 & 1598.4 & 143.1 & 105.0 & 105.10 & 0.63 \\ 30 & 11 & 220.0 & 3520.0 & 168.4 & 118.6 & 54.97 & 0.95 \\ 35 & 12 & 570.0 & 9120.0 & 180.4 & 130.0 & 23.18 & 2.08 \\ 40 & 13 & 1999.9 & 31998.4 & 297.7 & 176.4 & 8.82 & 4.45 \\ 45 & 14 & 5800 & 92800 & 346.4 & 189.0 & 3.31 & 11.76 \\ 50 & 15 & 17500 & 280000 & 450.6 & 216.5 & 1.24 & 37.52 \\ 55 & 16 & 63200 & 1011200 & 615.4 & 244.4 & 0.39 & 176.43 \\ 60 & 17 & 184500 & 2952000 & 958.7 & 277.3 & 0.15 & 602.54 \\ \hline \hline \end{tabular} \end{table}
Table 3: Local Search Method: Performance statistics for various bit-lengths

Thus the general trend is clear: candidates with shorter bit-length are more likely to be smooth, as predicted by the Dickman-de Bruijn function, and these tend to correspond to better CVP approximations. In other words, broadly speaking, lattice vectors that are closer to the target tend to produce candidates that are more likely to be smooth. In fact Schnorr's claims are based on this tenuous relationship and, as mentioned above, this relationship lays the foundation for Yan et al. (2022)'s promise to enhance Schnorr's methods by refining the CVP approximations using quantum computations. However, the plots in Figure 4 indicate that the situation is much more complicated. To elucidate this nuance, we must describe each heuristic in some detail.

We begin with the top row in Figure 4. The plot on the left illustrates the smoothness candidates obtained using Schnorr's method together with Babai's nearest plane approximation for the CVP, as described in Section 3. In particular, this heuristic extracts a single smooth candidate from each lattice: first we use Babai's nearest plane algorithm to find a lattice vector close to the target and then we test \(S=u-vn\), with \((u,v)\) as defined by Relation 6, for \(p_{M}\)-smoothness. (The experiments used the 'extended factor basis' for relation search, where an \(m\)-dimensional lattice is searched, and relations that are smooth for some \(p_{M}\) with \(M>m\) are accepted: this is a departure from strictly reimplementing Schnorr's method, but enables more direct comparison with the behavior of the other methods.)

Figure 4: Performance comparison for six different lattice-based heuristics for finding smooth relations. The candidate is the number \(u-vn\) from which we try to extract a smooth relation. The ideal candidates are smaller (lower down) and closer to the target vector (further left). The general trend is clear: candidates with shorter bit-length are more likely to be smooth, as predicted by the Dickman-de Bruijn function, and these tend to correspond to better CVP approximations.

The plot in the top middle illustrates the candidates obtained by refining Babai's CVP approximation using _exact_ Hamiltonian energy minimization. Concretely: first we use Babai's nearest plane algorithm to find a lattice vector close to the target, as above; next we set up the Ising Hamiltonian defined by Relation 10; and then for _each_ eigenstate with lower energy than the classical approximation state \(|0\rangle\) we test the candidate \(S=u-vn\) for \(p_{M}\)-smoothness. Again, we take \((u,v)\) as defined by Relation 6, but in this case we use the lattice coordinates corresponding to the refined CVP approximation, as explained in Relation 12. In essence, this heuristic follows Yan et al. (2022)'s proposal except that it obtains the eigenstates exactly using linear algebraic techniques instead of QAOA; in addition, it extracts a smooth candidate from each eigenstate with lower energy than \(|0\rangle\), instead of testing only the lattice coordinates corresponding to the most likely minimum-energy eigenstate candidates as determined by the QAOA. It is important to note that the size of the matrix describing the problem Hamiltonian increases exponentially, so this heuristic may only be used for "small" integers: powerful modern laptops can handle inputs with up to \(\sim 75\) bits.
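A sketch of this exact-minimization heuristic, under the same assumed conventions as the energy function in the previous sketch, makes the procedure and its exponential cost explicit:

```
import itertools
import numpy as np

def refined_vectors(t, b_op, kappa, B_star):
    # Yield the refined CVP approximation b_op + sum_j x_j kappa_j b*_j
    # (Relation 11) for every rounding x whose energy beats Babai's baseline
    # x = 0; each yielded vector gives a relation pair to test for smoothness.
    # The enumeration covers 2^m bitstrings, hence is only usable for small m.
    m = B_star.shape[1]

    def energy(x):
        return float(np.sum((t - b_op - B_star @ (np.asarray(x) * kappa)) ** 2))

    baseline = energy((0,) * m)
    for x in itertools.product((0, 1), repeat=m):
        if energy(x) < baseline:
            yield x, b_op + B_star @ (np.asarray(x) * kappa)
```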
```
Require: prime lattice \(B\), LLL-reduction \(B^{*}\), Babai CVP approximation \(b^{\mathrm{op}}\), coding vector \(\kappa\)
  \(b^{\mathrm{prev}}\gets 0,\quad b^{\mathrm{curr}}\gets b^{\mathrm{op}}\)
  while \(b^{\mathrm{curr}}\neq b^{\mathrm{prev}}\) do
    \(b^{\mathrm{prev}}\gets b^{\mathrm{curr}}\)
    Construct Hamiltonian \(H\) as in Relation 10
    Compute minimum-energy eigenstate \(|\psi^{*}\rangle\)
    Compute \(b^{|\psi^{*}\rangle}\) as in Relation 11
    \(b^{\mathrm{curr}}\gets b^{|\psi^{*}\rangle}\)
    for \(j=1\dots m\) do
      if \(\psi^{*}_{j}=1\) then
        \(\kappa_{j}\leftarrow-\kappa_{j}\)
      end if
    end for
  end while
  Compute lattice coordinates \(e\gets B\backslash b^{\mathrm{curr}}\)
  return \(S=u-vn\) with \((u,v)\) as defined by Relation 6
```
**Algorithm 1** Hill-climbing refinement heuristic

The plot on the top right in Figure 4 illustrates the results for the "hill-climbing" heuristic detailed in Algorithm 1, with a small modification: if \(b^{\mathrm{curr}}\) equals \(b^{\mathrm{prev}}\) at the end of the first iteration, we generate a new coding vector by independently and uniformly drawing from \(\{0,1\}\) and try again. Essentially, this heuristic iterates the exact Hamiltonian energy minimization routine described in the last paragraph: at each step, it updates the CVP approximation resulting from the minimum-energy eigenstate. Thus, assuming the hypothesis in Yan et al. (2022), we expect the hill-climbing heuristic to outperform all others, because it uses the best CVP refinement out of all the heuristics we propose.

Now we move on to the bottom row of plots. The plot on the bottom left describes the candidates obtained using the proposal described in Yan et al. (2022), with one modification: we use multi-angle QAOA in place of the usual QAOA. We found that assigning an independent parameter to each rotation gate in the variational ansatz allows the quantum optimization routine to obtain eigenstates with lower energy than those obtained with standard QAOA. This plot displays results computed using IonQ's Aria-1 (noisy) simulator. To produce this plot, we sampled each optimal state distribution obtained using the multi-angle QAOA subroutine 1000 times and we tested up to 16 smooth candidates for each lattice; in this case, each candidate corresponds to one of the eigenstates assigned the highest likelihood by our multi-angle QAOA subroutine via Relation 11.

The plot in the bottom middle illustrates the results of our _Local Search_ heuristic, detailed in Algorithm 2. Essentially, given a search parameter \(k\), this heuristic extracts a candidate from each of the \(2^{k}\) possible roundings of the Babai coefficients \(c_{1},\ldots,c_{k}\), as in Relation 9, which correspond to the \(k\) smallest-norm columns of the LLL-reduced lattice. The plot displays results obtained using \(k=4\) (testing \(16\) candidates per lattice) and shows that this heuristic indeed outperforms any claimed quantum advantage, as it obtains the factorization of \(n\) upon testing fewer smooth candidates than any other algorithm. Note that the Local Search heuristic does _not_ involve any quantum computations.

The plot on the right displays the results of our _Random Round_ heuristic, which extracted 16 candidates from each lattice, each corresponding to a random rounding of the \(m\) Babai coefficients \(c_{1},\ldots,c_{m}\), as in Relation 9, via Relation 11. We include this plot mainly to demonstrate that the Local Search heuristic chooses roundings intentionally, and that these indeed outperform a random selection.
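As a rough guide to what the Local Search heuristic does (the Algorithm 2 listing itself is not reproduced here), the following sketch enumerates the \(2^{k}\) roundings; the selection of the \(k\) smallest-norm columns and the floor/ceiling flips are our reading of the description above, so details may differ from our actual implementation.

```
import itertools
import numpy as np

def local_search_roundings(mu, B_star, k=4):
    # mu: the unrounded Babai coefficients; B_star: LLL-reduced basis columns.
    # For the k coefficients attached to the k smallest-norm columns, try both
    # the floor and the ceiling; all other coefficients keep Babai's rounding.
    mu = np.asarray(mu, dtype=float)
    base = np.rint(mu)
    chosen = np.argsort(np.linalg.norm(B_star, axis=0))[:k]
    for bits in itertools.product((0.0, 1.0), repeat=k):
        c = base.copy()
        for j, bit in zip(chosen, bits):
            c[j] = np.floor(mu[j]) + bit
        yield B_star @ c  # one CVP approximation, hence one relation candidate
```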
With a description of the six different factoring heuristics in mind, we are ready for a more nuanced analysis. Though in general it seems that better CVP approximations are more likely to yield smooth relations, it turns out that producing higher quality CVP solutions does not necessarily lead to faster factoring, largely because we tend to encounter the same fac-relation over and over: notice that both the Schnorr and the exact Hamiltonian minimization heuristics see the same fac-relation more than three times on average, whereas the same ratio is much closer to 1 for any of the heuristics in the bottom row. This observation undermines Yan et al. (2022)'s argument: while we may obtain better CVP approximations in some cases using quantum computations, these do not yield enough unique fac-relations any sooner. Thus we observe no quantum advantage when following the method proposed in Yan et al. (2022), even when we improve it by replacing QAOA with its multi-angle variant.

A further key point to consider is that Yan et al. (2022) vastly underestimate the number of qubits needed to reliably collect unique fac-relations. The number of qubits in Yan et al. (2022)'s proposal is equal to the dimension of the lattice used. The plot in Figure 5 shows that when the lattice dimension \(m=\lceil\frac{3}{2}\frac{\log(n)}{\log(\log(n))}\rceil\) is sublinear in the bit-length of \(n\), as proposed in Yan et al. (2022), the proportion of fac-relations extracted per lattice decreases exponentially. This means that factoring with sublinear resources still takes exponential time, because we need to test exponentially many lattices in order to collect enough fac-relations.

Figure 5: The proportion of fac-relations extracted per lattice tested decreases exponentially when the lattice dimension \(m=\lceil\frac{3}{2}\frac{\log(n)}{\log(\log(n))}\rceil\) is sublinear in the bit-length of \(n\), as proposed in Yan et al. (2022).

### Quantum Processor Optimization for QAOA Jobs

In this section we discuss the results of the QAOA experiments we conducted, in particular following the 3-qubit case of Yan et al. (2022). In this case we seek to factor \(n=1961\) using lattices of dimension \(m=3\). According to Table \((S5)\) in Yan et al. (2022), the factor basis has \(15\) primes, so we extract smooth relation candidates using lattices of dimension \(m=3\) and collect all pairs that are \(p_{15}\)-smooth, i.e. \(47\)-smooth. As in Yan et al. (2022), we fix \(c=1.5\). In addition, we consider the prime lattice

\[B_{3,\,1.5}=\left[\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&2\\ 22&35&51\end{array}\right]\]

and the target vector

\[t=\left[0,\,0,\,0,\,240\right],\]

as in Relation \((S37)\) of Yan et al. (2022). Applying the LLL-reduction algorithm, with parameter \(\delta=0.99\), to the prime lattice \(B_{3,\,1.5}\) yields the reduced matrix

\[B^{*}=\left[\begin{array}{rrr}1&-4&-3\\ -2&1&2\\ 2&2&0\\ 3&-2&4\end{array}\right],\]

as in Relation \((S40)\) of Yan et al. (2022).6

Footnote 6: Here we take a moment to note a discrepancy in Yan et al. (2022): while their Algorithm 2 claims they compute LLL reductions with parameter \(\delta=0.75\), the SageMath implementation of the LLL-reduction algorithm with this parameter yields \[\left[\begin{array}{rrr}1&-3&-4\\ -2&2&1\\ 2&0&2\\ 3&4&-2\end{array}\right]\] instead.
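Before walking through Babai's step on this example, we record a minimal numpy sketch of the nearest-plane routine (column-basis convention; the Gram-Schmidt and rounding details are standard, but the function names are ours, not part of the published pipeline). Running it on \(B^{*}\) and \(t\) above should reproduce the values of \(b^{\mathrm{op}}\) and \(\kappa\) quoted next.

```
import numpy as np

def gram_schmidt(B):
    # Orthogonalize the columns of B (no normalization).
    G = B.astype(float).copy()
    for j in range(B.shape[1]):
        for i in range(j):
            G[:, j] -= (B[:, j] @ G[:, i]) / (G[:, i] @ G[:, i]) * G[:, i]
    return G

def babai_nearest_plane(B, t):
    # Returns the lattice vector b_op close to t and the coding vector
    # kappa_j = sign(mu_j - c_j); the sign is 0 if mu_j happens to be an integer.
    G = gram_schmidt(B)
    b = np.asarray(t, dtype=float).copy()
    kappa = np.zeros(B.shape[1])
    for j in reversed(range(B.shape[1])):
        mu = (b @ G[:, j]) / (G[:, j] @ G[:, j])
        c = np.rint(mu)
        kappa[j] = np.sign(mu - c)
        b -= c * B[:, j]
    return np.asarray(t) - b, kappa

B_star = np.array([[1, -4, -3], [-2, 1, 2], [2, 2, 0], [3, -2, 4]])
t = np.array([0, 0, 0, 240])
print(babai_nearest_plane(B_star, t))  # expect b_op = [0, 4, 4, 242], kappa = [-1, -1, -1]
```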
When we apply Babai's method to approximate the closest lattice vector to \(t\), using the reduced matrix \(B^{*}\), we obtain

\[b^{\mathrm{op}}=\left[0,\,4,\,4,\,242\right]^{T}\quad\text{and}\quad\kappa=\left[-1,\,-1,\,-1\right],\]

as in Relation \((S44)\) and Table \((S3)\) in Yan et al. (2022). Using Relation 10, we obtain the Hamiltonian

\[H=-4Z_{0}Z_{1}+\frac{5}{2}Z_{0}Z_{2}+3Z_{1}Z_{2}-\frac{3}{2}Z_{0}-\frac{7}{2}Z_{1}-4Z_{2}+\frac{87}{2},\]

as in Relation \((S52)\) in Yan et al. (2022). Table 4 describes the eigen-pairs and corresponding relation pairs associated to the lowest \(4\) energies of \(H\). Figure 6 then illustrates a single layer of the quantum circuit used in the associated QAOA computation.

\begin{table} \begin{tabular}{c|c|c|c|c|c} Energy Level & Eigenvalue & Eigenstate & \((u,v)\) & \(S=u-vn\) & Is \(p_{15}\)-smooth? \\ \hline \hline 0 & 33 & 100 & (1800, 1) & \(-7\times 23\) & Yes \\ 1 & 35 & 011 & (1944, 1) & \(-17\) & Yes \\ 2 & 36 & 000 & (2025, 1) & \(2^{6}\) & Yes \\ 3 & 42 & 001 & (3645, 2) & \(-277\) & No \\ \end{tabular} \end{table}
Table 4: The eigen-pairs and corresponding relation pairs associated to the lowest \(4\) energies of \(H\).

Figure 7 plots the results of QAOA experiments conducted using quantum circuits with \(p=1\) and \(p=2\) layers, contrasting noiseless and noisy simulation. In addition, in the depth \(p=1\) case we executed the QAOA run on IonQ's Aria QPU. Our implementation features expected energy calculations that are differentiable with respect to the variational parameters, so we use the quasi-Newton BFGS optimization routine. We rely on SciPy's implementation and leverage analytical gradient evaluations using our implementation of a parameter-shift rule. For each depth \(p\), every execution uses the same initial parameter values. These parameter values were chosen amongst \(50\) sets of initial values used in "dry runs" of the experiment, where we relied on Qiskit's noiseless Aer simulator. Each set of initial values in the "dry runs" was generated uniformly at random from \([-\pi,\pi]\). The results show good agreement between the simulators and the hardware in this simple example. We note that we do see improved convergence as we increase the circuit depth to \(p=2\), but the noisy and noiseless simulations diverge further. Though indeed in theory "the quality of the optimization improves as \(p\) is increased", as stated in Farhi et al. (2014), in practice we observe that adding too many more layers can reduce the quality of the solution, because the additional noise incurred by running a deeper circuit eventually outweighs the gains.

## 6 Further Lattice Variations Attempted

As well as the different heuristic techniques whose results are documented above, other attempts to improve factorization performance included:

* Varying the strategy for choosing lattices to search. Schnorr (2021) and Yan et al. (2022) use slightly different lattice basis vectors, both derived from random permutations. We explored these options, and sampling from other distributions. This did not significantly improve results.
* Redundancy (finding the same smooth relation from many lattices) was a key problem throughout. We tried to introduce deliberate variation by choosing lattices from permutations with large \(\ell_{1}\)-norm distances from the permutations already used (a sketch of this selection rule follows the list): there was no improvement in the rate of finding new smooth relations, and at a large computational cost.
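A sketch of the \(\ell_{1}\)-distance selection rule mentioned in the second item (the distance threshold and retry budget are arbitrary choices for illustration, not the values used in our experiments):

```
import numpy as np

def far_permutation(used, m, min_dist, rng=None, tries=1000):
    # Sample a permutation of 0..m-1 whose l1 distance to every permutation
    # already used is at least min_dist; return None if the budget runs out.
    rng = rng or np.random.default_rng(0)
    for _ in range(tries):
        p = rng.permutation(m)
        if all(int(np.abs(p - q).sum()) >= min_dist for q in used):
            return p
    return None
```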
While it was infeasible to try every combination of options, these experiments further demonstrated that there is no clear path to factoring large numbers using this family of methods.

Figure 6: QAOA circuit for 3-qubit Hamiltonian, single layer (depth \(p=1\))

## 7 Possible Alternatives for Quantum Advantage

The closest vector problem is related to the knapsack problem (Salkin and De Kluyver, 1975). In the knapsack problem, the challenge is to find a combination of objects of different sizes or weights that can fit into a knapsack with a given carrying capacity: so if we add the restriction that all the coefficients \(e_{j}\) in Equation 5 must be positive, and that the sum must not be greater than \(\ln n\), this effectively transforms the CVP into a knapsack problem. We were able to use similar optimization methods to obtain good results for knapsack problems and related variants, using variational quantum circuits. These results will be described in further work.

We also considered the use of quantum computing for the data processing part of the factoring pipeline. The data processing part consists of solving a linear system of equations modulo 2. As well as factoring, modulo-2 linear systems have applications in error correction and cryptography. This potential application of quantum computing to a different part of the factoring problem is documented by Aboumrad and Widdows (2023).

## 8 Conclusion

We have carefully reviewed the methods and claims of Schnorr (2021) and Yan et al. (2022), which led to speculation that lattice-based factorization of large numbers could be tractably achieved using classical or near-term quantum computers. We also implemented these methods in a complete factorization pipeline, which supports a much more systematic analysis of the computational claims in practice.

In spite of its many interesting mathematical properties, Schnorr's method does not lead to a faster factoring algorithm than the sieving methods already available in standard libraries. Optimizations of Schnorr's method can reduce the number of lattices we need to test, but do not alter the fundamental problem, which is that smooth relations are rare and hard to find. The chief advantage of QAOA over Schnorr's method is that QAOA tests many smooth relation candidates per lattice: but this multiplicity can easily be tried with other methods, with greater improvements, as shown with the Random Round and Local Search heuristics introduced in this paper. Though Yan et al. (2022) may be correct in their assertion that parts of a factorization calculation of a 2048-bit integer may 'fit' in a QPU with 372 qubits (in the sense that one could run the QAOA job to enhance the Babai CVP approximation), it would still take exponential time to complete the factorization, because the probability of actually finding a fac-relation is exponentially small. In other words, while it would be possible to run the appropriate QAOA jobs, we would need to run exponentially many of them in order to factorize the input. This means that RSA is still safe from this kind of attack.

When analyzing claims of quantum advantage, it is important to consider systems as a whole. In this case, the maxim that "a chain is as strong as its weakest link" suggests an analysis which revealed clear flaws in Schnorr's method.
Instead, starting from the principle that "a chain is as interesting as its most interesting link" has led to considerable confusion in the community, because a small quantum optimization in one part of a process can be used to claim an overall quantum advantage, for which there is no end-to-end evidence. Nonetheless, there are potentially useful directions for quantum computing in these areas. Of these, we believe the use of quantum computers to solve modulo-2 linear systems of equations to be promising.
2307.06347
Lagrange's discrete model of the wave equation in dimension greater than one
A celebrated theorem of Lagrange states that a solution of the wave equation with one-dimensional space variable is the uniform limit, as N tends to infinity, of solutions of a second order ODE obtained from a mechanical model discretizing a string as N identical harmonic oscillators. Answering a question posed by G. Gallavotti, we generalize this result to the case of any space dimension.
Massimo Villarini
2023-07-12T17:31:07Z
http://arxiv.org/abs/2307.06347v1
# Lagrange's discrete model of the wave equation in space dimension greater than one

###### Abstract.

A celebrated theorem of Lagrange [5] concerns a discrete model, based on Newton's Second Law of dynamics, for the PDE describing the transversal motion of a string: such a mechanical model is expressed through a family of second order ODEs, depending on a discretization parameter, whose solutions Lagrange proved to converge uniformly to the solutions of the PDE as the discretization parameter tends to \(0\). Answering a question posed by Gallavotti [2], we generalize this theorem to the case of \(n\)-dimensional space variable, \(n>1\). The proof is based on the convergence analysis of the simplest finite difference numerical scheme for the wave equation. 1

Footnote 1: Massimo Villarini, Dipartimento di Scienze Fisiche, Informatiche e Matematiche, via Campi 213/b 41100, Università di Modena e Reggio Emilia, Modena, Italy E-mail: [email protected]

## 1. Introduction

In 1759 Lagrange published two memoirs on sound propagation [5] related to a scientific controversy between Euler and d'Alembert on their solutions of the PDE modelling the vibration of a string. In those memoirs Lagrange proposed a mechanical model of a discretized vibrating string, based on Newton's Second Law of dynamics, leading to a linear second order ODE with \(N\) degrees of freedom, \(N\) being the number of points of a lattice discretizing the string continuum. He succeeded in diagonalizing the matrix of the linear ODE, therefore finding its explicit analytic solution: this result seems to be specific to the spatial one-dimensional case of the vibrating string (see [2] §4, comment before Proposition 16). Letting \(N\to\infty\), _i.e._ passing to the continuum limit, he proved convergence of the explicit solutions of the ODE to solutions of the vibrating string PDE, hence reobtaining and generalizing Euler's results: we will quote this result as _Lagrange's 1759 Theorem_. This was perhaps the first rigorous example of reduction of a scalar field PDE to first principles. A complete account of Lagrange's theory can be found in the book [2] by G. Gallavotti, where the author also generalizes Lagrange's mechanical and ODE models to the case of spatial dimension \(n>1\), and proposes the natural generalization of Lagrange's 1759 Theorem to that case (see Proposition 16, §4 in [2] and the following Observation, where the sought generalization is settled as an open question). The main result of the present article, Theorem (2.1), is the generalization of Lagrange's 1759 Theorem to the case of spatial dimension \(n>1\): it answers in the affirmative the question posed implicitly by Gallavotti. The proof is based on the convergence analysis of the simplest finite difference scheme for the wave equation, cf. [3]. The same approach, namely reducing, through the convergence analysis of a suitable finite difference scheme, the PDE modelling a scalar field theory to a family of ODEs depending on a discretization parameter, turns out to be useful for the heat equation too, and will be the subject of a forthcoming article. Theorem (2.1) is stated in the next section, where it is proved in a simplified version. More general versions are proved in the subsequent sections, and finally the theorem is proved in full generality.

## 2. The homogeneous case

In this section we will state our main result, generalizing Lagrange's 1759 Theorem to the case of spatial dimension \(n>1\), and we will prove it in a simplified form.
Let \(\Omega\) be an open, connected, bounded subset of \(\mathbb{R}^{n}\), \(n\geq 1\), having boundary \(\partial\Omega\) which is a smooth \((n-1)\)-manifold: weakening of these hypotheses is briefly discussed at the end of the article. The d'Alembert (continuous) differential operator is

\[\square=\frac{\partial^{2}}{\partial t^{2}}-c^{2}\Delta_{x}\]

where

\[\Delta_{x}=\frac{\partial^{2}}{\partial x_{1}^{2}}+\cdots+\frac{\partial^{2}}{\partial x_{n}^{2}}.\]

Up to a linear change of coordinates the d'Alembert operator reduces to

\[\square=\frac{\partial^{2}}{\partial t^{2}}-\Delta_{x}\]

and in this form it will be considered throughout this article, without explicitly displaying the space variable \(x\) in the Laplace operator. The scalar variable \(t\) will be referred to as time. Let \(T>0\). We will be interested in the following mixed initial/boundary problem for the wave equation

\[(L)\begin{cases}\frac{\partial^{2}u}{\partial t^{2}}-a(x)\Delta u+\sigma(x)u=w(x,t)\;,\;in\;\Omega\times(-T,T)\\ u(x,0)=f(x)\;,\;in\;\Omega\times\{0\}\\ \frac{\partial u}{\partial t}(x,0)=g(x)\;,\;in\;\Omega\times\{0\}\\ u(x,t)\equiv h(x)\;,\;in\;\partial\Omega\times(-T,T)\end{cases}\]

The choice of the constant \(T\) is instrumental in the proof, and it does not correspond to any restriction of the domain of definition of the solution of \((L)\). The functions

\[a,\sigma:\Omega\to\mathbb{R}\]

are the coefficients of the PDE (\(a>0\) is the velocity of wave propagation, \(\sigma\) is the flexibility coefficient measuring resistance to deformation of the medium), the functions

\[f,g:\Omega\to\mathbb{R}\]

are the initial data,

\[h:\partial\Omega\to\mathbb{R}\]

is the boundary datum and

\[w:\Omega\times\mathbb{R}\to\mathbb{R}\]

is the forcing term. We suppose that initial and boundary data satisfy the _compatibility conditions_

\[(LC)\begin{cases}lim_{y\to x}f(y)=h(x)\;,\;y\in\Omega\;,\;x\in\partial\Omega\\ lim_{y\to x}g(y)=0\;,\;y\in\Omega\;,\;x\in\partial\Omega\\ \Delta_{|\partial\Omega}h=0\end{cases}\]

where \(\Delta_{|\partial\Omega}\) is the restriction of the Laplace operator with respect to \(x\) to the submanifold \(\partial\Omega\). In this section we will mainly consider the following simplified case of \((L)\)

\[(E)\begin{cases}\square u=0\;,\;in\;\Omega\times(-T,T)\\ u(x,0)=f(x)\;,\;in\;\Omega\times\{0\}\\ \frac{\partial u}{\partial t}(x,0)=g(x)\;,\;in\;\Omega\times\{0\}\end{cases}\]

_i.e._ the homogeneous, constant velocity (\(a=1\)), infinitely flexible (\(\sigma=0\)), initial data case of \((L)\). We introduce now a discretization of \((E)\). Let \(\Delta x\), \(\Delta t\) be two positive numbers, and let

\[\begin{cases}\Sigma_{\Delta x}=\{x\in\mathbb{R}^{n}:x=k\Delta x,\;k\in\mathbb{Z}^{n}\}\\ \Sigma_{\Delta t}=\{t\in\mathbb{R}:t=p\Delta t,\;p\in\mathbb{Z}\}\\ \Sigma_{\Delta x,\Delta t}=\Sigma_{\Delta x}\times\Sigma_{\Delta t}\end{cases}\]

be the corresponding lattices. We define the set \(\mathcal{A}_{T}\) of _admissible lattices_ as

\[\mathcal{A}_{T}=\{(\Delta x,\Delta t):\frac{T}{\Delta t}\in\mathbb{N},\;\frac{\Delta t}{\Delta x}\leq\frac{1}{\sqrt{n}}\}\]

The condition appearing in the definition of \(\mathcal{A}_{T}\)

\[\frac{\Delta t}{\Delta x}\leq\frac{1}{\sqrt{n}} \tag{2.1}\]

is the celebrated Courant-Friedrichs-Lewy condition, [1]. Let

\[E_{\infty}=\cup_{(\Delta x,\Delta t)\in\mathcal{A}_{T}}\Sigma_{(\Delta x,\Delta t)}\cap(\Omega\times(-T,T))\]

\(E_{\infty}\) is dense in \(\Omega\times(-T,T)\).
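Although the original argument contains no code, a minimal Python sketch of the admissibility test may help fix these definitions (the function name and the tolerance are our choices):

```
import math

def is_admissible(dx: float, dt: float, T: float, n: int, tol: float = 1e-12) -> bool:
    """Membership test for A_T: T/dt must be a positive integer and the
    Courant-Friedrichs-Lewy ratio dt/dx must not exceed 1/sqrt(n)."""
    ratio = T / dt
    divides = abs(ratio - round(ratio)) < tol and round(ratio) > 0
    cfl = dt / dx <= 1.0 / math.sqrt(n)
    return divides and cfl

# e.g. with T = 1 and n = 2: dt = 0.01, dx = 0.02 gives dt/dx = 0.5 <= 1/sqrt(2)
print(is_admissible(dx=0.02, dt=0.01, T=1.0, n=2))  # True
```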
We will consider the discrete differential operators

\[\delta_{\Delta t}u(x,t)=\frac{u(x,t+\Delta t)-u(x,t)}{\Delta t}\]

\[\delta_{\Delta t}^{-1}u(x,t)=\frac{u(x,t)-u(x,t-\Delta t)}{\Delta t}\]

\[\delta_{\overline{\Delta t}}u(x,t)=\frac{u(x,t+\Delta t)-u(x,t-\Delta t)}{2\Delta t}\]

\[\delta_{\Delta t}^{-1}\circ\delta_{\Delta t}u(x,t)=\frac{u(x,t+\Delta t)-2u(x,t)+u(x,t-\Delta t)}{\Delta t^{2}}\]

and the analogously defined discrete differential operators \(\delta_{k,\Delta x},\delta_{k,\Delta x}^{-1},\delta_{k,\Delta x}^{-1}\circ\delta_{k,\Delta x}\), where \(k=1,\ldots,n\) and _e.g._

\[\delta_{k,\Delta x}u(x,t)=\frac{u(x+\Delta x\;e_{k},t)-u(x,t)}{\Delta x}\]

where \(e_{k}\) is the \(k\)-th vector of the canonical basis in \(\mathbb{R}^{n}\). Let

\[\Sigma_{\Delta x}\cap\overline{\Omega}=\Omega_{\Delta x}\cup\partial_{\Delta x}\Omega\]

where \(x\in\Omega_{\Delta x}\) if it is a point of the lattice \(\Sigma_{\Delta x}\) which belongs to \(\Omega\) together with all its \(2n\)-neighbours, while \(x\in\partial_{\Delta x}\Omega\) if it is a point of the lattice which belongs to \(\overline{\Omega}\), the closure of \(\Omega\), such that at least one of its \(2n\)-neighbours does not belong to \(\Omega\). We define the discrete differential operator (the \((\Delta x,\Delta t)\)-approximation of the d'Alembert operator)

\[\square_{\Delta x,\Delta t}=\delta_{\Delta t}^{-1}\circ\delta_{\Delta t}-\sum_{k=1}^{n}\delta_{k,\Delta x}^{-1}\circ\delta_{k,\Delta x}\]

and we will consider the following approximated version of \((E)\)

\[(E)_{\Delta x,\Delta t}\begin{cases}\square_{\Delta x,\Delta t}v(x,t)=0\;in\;\Sigma_{\Delta x,\Delta t}\cap(\Omega\times(-T,T))\\ v(x,0)=f(x)\;in\;(\Sigma_{\Delta x}\cap\Omega)\times\{0\}\\ \delta_{\overline{\Delta t}}v(x,0)=g(x)\;in\;(\Sigma_{\Delta x}\cap\Omega)\times\{0\}\end{cases}\]

After considering this initial value problem, we will add to it the boundary condition

\[v(x,t)=0\;in\;\partial_{\Delta x}\Omega\times((-T,T)\cap\Sigma_{\Delta t})\]

hence getting an approximated version \((L)_{\Delta x,\Delta t}\) of a simplified form of \((L)\). The approximated versions of initial and mixed value problems for a PDE are referred to as _finite difference numerical schemes_ in numerical analysis. In particular the numerical scheme based on the definition of \(\square_{\Delta x,\Delta t}\) is called the _simplest numerical scheme_ for the wave equation in [3] §7. We observe that \((E)_{\Delta x,\Delta t}\), being a linear explicit scheme, always has a solution, which is unique. We will use a standard notation in the theory of evolution equations: a function

\[u:\Omega\times\mathbb{R}\to\mathbb{R}\;,\;(x,t)\to u(x,t)\]

will also be denoted as

\[u(x)(t)\]

emphasizing its interpretation as a \(t\)-parametrized curve in some space of functions defined on \(\Omega\). According to this interpretation we will sometimes write

\[\dot{u}(x)(\cdot)=\frac{\partial u}{\partial t}(x,\cdot).\]

We are ready to state our main result:

**Theorem 2.1**.: _Let \(f,a,\sigma,h\in C^{5}(\mathbb{R}^{n},\mathbb{R})\), \(g\in C^{4}(\mathbb{R}^{n},\mathbb{R})\), \(w\in C^{5}(\mathbb{R}^{n}\times(-T,T),\mathbb{R})\), with \(w\) having compact support, and consider their restrictions as the corresponding functions appearing in \((L)\)._
_Let \(f^{(p)},w^{(p)},g^{(q)}\), \(p=0,1,\ldots,5\), \(q=0,1,\ldots,4\), decay suitably fast as \(|x|\to\infty\) in order that the Fourier transforms_

\[\hat{f^{(p)}}(\alpha)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}f^{(p)}(s)e^{-i\alpha\cdot s}\;ds\]

\[\hat{g^{(q)}}(\alpha)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}g^{(q)}(s)e^{-i\alpha\cdot s}\;ds\]

_are well-defined: for instance let \(f,w\in C^{5}(\mathbb{R}^{n},\mathbb{R})\cap W^{5,1}(\mathbb{R}^{n},\mathbb{R})\) and \(g\in C^{4}(\mathbb{R}^{n},\mathbb{R})\cap W^{4,1}(\mathbb{R}^{n},\mathbb{R})\). The notation \(\hat{f}=\hat{f}^{(0)}\), \(\hat{g}=\hat{g}^{(0)}\) will be used._

_Let_

\[v^{\Delta x,\Delta t}:\Sigma_{\Delta x,\Delta t}\cap(\Omega\times(-T,T))\to\mathbb{R}\]

_be the solution of \((L)_{\Delta x,\Delta t}\), \((\Delta x,\Delta t)\in\mathcal{A}_{T}\). Then_

a.1)

\[lim_{(\Delta x,\Delta t)\to(0,0)}v^{\Delta x,\Delta t}(x,t)=u(x,t)\] (2.2)

_uniformly for \((x,t)\in E_{\infty}\)._

a.2) _The limit in (2.2) can be extended as a uniform limit to \(\Omega\times(-T,T)\)._

a.3) _The previous statements hold for the difference quotients of \(v^{\Delta x,\Delta t}\) entering in the definition of \(\square_{\Delta x,\Delta t}\), which converge uniformly in \(\Omega\times(-T,T)\) to the partial derivatives of \(u\) entering in the definition of \(\square\). Therefore \(u(x,t)\) is a \(C^{2}\)-solution of \((L)\)._

_In the homogeneous case, with \(a\equiv 1\), \(\sigma\equiv 0\), we have in \(E_{\infty}\) the analytic expression_

\[v^{\Delta x,\Delta t}(x,t)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)\cos(\beta t)+\hat{g}(\alpha)\frac{\Delta t\sin(\beta t)}{\sin(\beta\Delta t)})\;d\alpha\]

_and in \(\Omega\times(-T,T)\)_

\[u(x,t)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)\cos(|\alpha|t)+\hat{g}(\alpha)\frac{\sin(|\alpha|t)}{|\alpha|})\;d\alpha\]

_where \((\alpha,\Delta x,\Delta t)\to\beta(\alpha,\Delta x,\Delta t)\) is a real analytic function in a neighbourhood of \(\mathbb{R}^{n}\times\{0\}\times\{0\}\)._

b.1) _for \((\Delta x,\Delta t)\in\mathcal{A}_{T}\)_

\[\varphi^{\Delta x}(x)(t)=lim_{\Delta t\to 0}v^{\Delta x,\Delta t}(x,t)\]

_exists and the limit is uniform with respect to \((x,t)\in E_{\infty}\)._

b.2) _The limit in the previous statement can be extended as a uniform limit to \(\Omega\times(-T,T)\)._

b.3) _The previous statements hold for the difference quotient \(\delta_{\Delta t}^{-1}\circ\delta_{\Delta t}v^{\Delta x,\Delta t}\), which converges uniformly in \(\Omega\times(-T,T)\) to the second derivative with respect to \(t\) of \(\varphi^{\Delta x}(x)(t)\)._

_In the homogeneous case, with \(a\equiv 1\), \(\sigma\equiv 0\), in \(E_{\infty}\)_

\[\varphi^{\Delta x}(x)(t)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)\cos(\beta(\alpha,\Delta x,0)t)+\hat{g}(\alpha)\frac{\sin(\beta(\alpha,\Delta x,0)t)}{\beta(\alpha,\Delta x,0)})\;d\alpha\]

b.4)

\[lim_{\Delta x\to 0}\varphi^{\Delta x}(x)(t)=u(x,t)\]

_i.e._

\[lim_{\Delta x\to 0}lim_{\Delta t\to 0}v^{\Delta x,\Delta t}(x,t)=u(x,t)\]

_uniformly for \((x,t)\in\Omega\times(-T,T)\)._

c) _for each fixed \(x\), \(\varphi^{\Delta x}(x)(t)\) is a solution of Lagrange's ODE_

\[\begin{cases}\ddot{\xi}(x)=a(x)\sum_{k=1}^{n}\delta_{k,\Delta x}^{-1}\circ\delta_{k,\Delta x}\xi(x)-\sigma(x)\xi(x)+w(x,t)\\ \xi(x)(0)=f(x)\;,\;x\in\Omega_{\Delta x}\\ \dot{\xi}(x)(0)=g(x)\;,\;x\in\Omega_{\Delta x}\\ \xi(x)(t)\equiv 0\;,\;x\in\partial_{\Delta x}\Omega\;,\;t\in(-T,T)\end{cases}\]

_Remark 2.2_.: Statements a.1)-a.3) are a slight generalization of results by Courant _et al._ [1] and H. Lewy [6], see also [3] §7: in the quoted articles only the homogeneous case is considered and, more importantly, the ratio \(\frac{\Delta t}{\Delta x}\) is kept fixed in the convergence analysis, a condition which does not fit with statement b.4). For this reason in statements a.1)-a.3) we consider convergence analysis of \(v^{\Delta x,\Delta t}\) to the solution \(u(x,t)\) of the wave equation putting no restriction on the way \((\Delta x,\Delta t)\to(0,0)\), except its belonging to \(\mathcal{A}_{T}\). Statements b.1)-b.4) and c) are the sought generalization of Lagrange's 1759 Theorem. The hypotheses of the theorem are satisfied _e.g._ if all the functions have derivatives of any order and have compact support.

The remaining part of this section is devoted to the proof of this theorem in the case of constant velocity (\(a\equiv 1\)), infinite flexibility (\(\sigma\equiv 0\)), and absence of forcing term (\(w\equiv 0\)). We will mostly consider the case of a Cauchy problem, adding the necessary comments to deal with boundary conditions at the end of this section. The proof begins with a lemma that, though not strictly necessary, makes clear the fundamental argument leading to Theorem (2.1). Here and throughout this section we will consider complex-valued functions \(v:\mathbb{R}^{n}\to\mathbb{C}\), the possibility to get back to real-valued functions consisting in taking the real part of \(v\).

**Lemma 2.3**.:

* _Among the couples (eigenvalue/eigenvector) of \(\square\) acting on complex-valued functions defined in \(\mathbb{R}^{n}\times\mathbb{R}\) are_ \[(-\beta^{2}+|\alpha|^{2},\;e^{i(\alpha\cdot x+\beta t)})\] \(x\in\mathbb{R}^{n},\beta\in\mathbb{R},\alpha\in\mathbb{R}^{n}\)_._
* _Among the solutions of \(\square u=0\) are the functions_ \[(x,t)\to e^{i(\alpha\cdot x\pm|\alpha|t)}\]
* _The initial value problem \((E)\), with \(\Omega=\mathbb{R}^{n}\) and \(f,g\) satisfying the same hypotheses as in Theorem (2.1), has for \((x,t)\in\Omega\times(-T,T)\) the unique solution_ (2.3) \[u(x,t)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)\cos(|\alpha|t)+\frac{\hat{g}(\alpha)}{|\alpha|}\sin(|\alpha|t))\;d\alpha.\]

Proof.: Statements a), b) are a straightforward computation. Statement c) follows observing that problem \((E)\) with initial data \(f=e^{i\alpha\cdot x}\), \(g\equiv 0\) has solution

\[(x,t)\to e^{i\alpha\cdot x}\cos(|\alpha|t)\]

while problem \((E)\) with data \(f\equiv 0\), \(g=e^{i\alpha\cdot x}\) has solution

\[e^{i\alpha\cdot x}\frac{\sin(|\alpha|t)}{|\alpha|}\]

Then, using the linearity of the differential equation and the regularity assumptions on the data, which allow us to exchange the order of application between the differential operator \(\square\) and the integral, statement c) follows.

The next lemma is analogous to the previous one: we just substitute the (continuous) d'Alembert operator with its discretized version \(\square_{\Delta x,\Delta t}\), and observe that the eigenfunctions of the discretized d'Alembert operator are the same as those of its continuous version, while the eigenvalues depend analytically on the discretization parameters, hence allowing an explicit expression of the solutions of \((E)_{\Delta x,\Delta t}\) in terms of the Fourier transform of the data.
The precise statement is

**Lemma 2.4**.: _Let \((\Delta x,\Delta t)\in\mathcal{A}_{T}\)._

* _Among the couples (eigenvalue/eigenvector) of \(\square_{\Delta x,\Delta t}\) acting on complex-valued functions defined in \(\mathbb{R}^{n}\times\mathbb{R}\) are_ \[(G(\alpha,\beta^{2},\Delta x,\Delta t),\;e^{i(\alpha\cdot x+\beta t)})\] _where_ \[G(\alpha,\beta^{2},\Delta x,\Delta t)=-\frac{\sin^{2}(\frac{\beta\Delta t}{2})}{(\frac{\beta\Delta t}{2})^{2}}\beta^{2}+\sum_{k=1}^{n}\frac{\sin^{2}(\frac{\alpha_{k}\Delta x}{2})}{(\frac{\alpha_{k}\Delta x}{2})^{2}}\alpha_{k}^{2}\]
* _Among the solutions of \(\square_{\Delta x,\Delta t}v=0\) are the functions_ \[(x,t)\to e^{i(\alpha\cdot x+\beta t)}\] _where \(\beta^{2}=\beta^{2}(\alpha,\Delta x,\Delta t)\) and \(\beta(\alpha,\Delta x,\Delta t)\) is the positive real square root of the real analytic branch of solutions of \(G(\alpha,\beta^{2},\Delta x,\Delta t)=0\). The fact that \(\beta^{2}\) is real follows from (2.1)._
* _The function \((\alpha,\Delta x,\Delta t)\to\beta(\alpha,\Delta x,\Delta t)\) defined in b.1) is real analytic in a neighbourhood of \(\mathbb{R}^{n}\times\{0\}\times\{0\}\subset\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}\), therefore for any fixed \(M>0\) there exists \(R>0\) such that \(\beta(\alpha,\Delta x,\Delta t)\) is real analytic in \(\{\alpha\in\mathbb{R}^{n}:|\alpha|\leq M\}\times\{|\Delta x|<R\}\times\{|\Delta t|<R\}\), and \(\beta(\alpha,0,0)=|\alpha|\)._
* _The initial value problem \((E)_{\Delta x,\Delta t}\), with \(f,g\) satisfying the hypotheses of Theorem (2.1) and with \((\Delta x,\Delta t)\in\mathcal{A}_{T}\), has the unique solution_ \[v^{\Delta x,\Delta t}(x,t)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)\cos(\beta t)+\hat{g}(\alpha)\frac{\Delta t}{\sin(\beta\Delta t)}\sin(\beta t))\;d\alpha\] _where \(\beta=\beta(\alpha,\Delta x,\Delta t)\) and \((x,t)\in\Sigma_{\Delta x,\Delta t}\cap(\Omega\times(-T,T))\)._

Proof.: A straightforward computation gives

\[\delta_{\Delta t}^{-1}\circ\delta_{\Delta t}e^{i(\alpha\cdot x+\beta t)}=-e^{i(\alpha\cdot x+\beta t)}\frac{\sin^{2}\frac{\beta\Delta t}{2}}{(\frac{\beta\Delta t}{2})^{2}}\beta^{2}\]

and analogously

\[\delta_{k,\Delta x}^{-1}\circ\delta_{k,\Delta x}e^{i(\alpha\cdot x+\beta t)}=-e^{i(\alpha\cdot x+\beta t)}\frac{\sin^{2}\frac{\alpha_{k}\Delta x}{2}}{(\frac{\alpha_{k}\Delta x}{2})^{2}}\alpha_{k}^{2}\]

and statements a), b.1) follow. From (2.1) and

\[\begin{cases}G(\alpha,|\alpha|^{2},0,0)=0\\ \frac{\partial G}{\partial(\beta^{2})}(\alpha,|\alpha|^{2},0,0)=-1\end{cases}\]

the equation \(G(\alpha,\beta^{2},\Delta x,\Delta t)=0\) defines implicitly a _real_ analytic branch of \(\beta^{2}=\beta^{2}(\alpha,\Delta x,\Delta t)\) emanating from \(|\alpha|^{2}\), whose positive square root is \(\beta(\alpha,\Delta x,\Delta t)\): the fact that the equation \(G=0\) has real solutions follows from the Courant-Friedrichs-Lewy condition (2.1). The rest of statement b.2) follows from elementary geometric properties of the analytic set defined by \(G=0\). Finally the proof of statement c) is analogous to that of statement c) of the previous lemma.
Perhaps the only useful remark is that, knowing that

\[(x,t)\to ce^{i\alpha\cdot x}\sin(\beta t)\]

is a solution of \(\square_{\Delta x,\Delta t}v=0\) for any real constant \(c\), satisfying \(v(x,0)=0\), the condition \(\delta_{\overline{\Delta t}}v(x,0)=e^{i\alpha\cdot x}\) reads as

\[ce^{i\alpha\cdot x}\frac{\sin(\beta\Delta t)-\sin(-\beta\Delta t)}{2\Delta t}=e^{i\alpha\cdot x}\]

which determines \(c=\frac{\Delta t}{\sin(\beta\Delta t)}\) and proves c): the validity of the analytic form of \(v^{\Delta x,\Delta t}\) in the domain \(\Sigma_{\Delta x,\Delta t}\cap(\Omega\times(-T,T))\) follows from (2.1).

To prove statement a.1) of Theorem (2.1) for the simplified initial data problem \((E)\) and its approximation \((E)_{\Delta x,\Delta t}\) we must show that for any fixed \(T>0\)

\[\forall\varepsilon>0\;\exists\;\overline{\Delta x}=\overline{\Delta x}(\varepsilon,T),\overline{\Delta t}=\overline{\Delta t}(\varepsilon,T)>0\]

such that if \(\frac{\Delta t}{\Delta x}\leq\frac{1}{\sqrt{n}}\), \(0<\Delta t<\overline{\Delta t}\), \(0<\Delta x<\overline{\Delta x}\), \(\frac{T}{\Delta t}\in\mathbb{N}\), then

\[|v^{\Delta x,\Delta t}(x,t)-u(x,t)|<\varepsilon\]

uniformly for \((x,t)\in E_{\infty}\). Using the analytic expression of \(u(x,t)\) we get

\[|v^{\Delta x,\Delta t}(x,t)-u(x,t)|\leq\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}(|\hat{f}(\alpha)||\cos(\beta t)-\cos(|\alpha|t)|+|\hat{g}(\alpha)||\frac{\Delta t\sin(\beta t)}{\sin(\beta\Delta t)}-\frac{\sin(|\alpha|t)}{|\alpha|}|)\;d\alpha=\]

\[\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{|\alpha|\leq M}\Theta\;d\alpha+\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{|\alpha|>M}\Theta\;d\alpha\]

where \(M>0\). We observe that \(t\in\Sigma_{\Delta t}\cap(-T,T)\) and \(T=N\Delta t\), \(N\in\mathbb{N}\), therefore \(t=k\Delta t\), \(k\in\mathbb{Z}\), \(|k|\leq N\). Then

\[\sin(\beta t)=\sin(\beta k\Delta t)=\sin(\beta(k-1)\Delta t+\beta\Delta t)\]

therefore

\[|\frac{\sin(\beta t)}{\sin(\beta\Delta t)}|\leq 1+|\frac{\sin(\beta(k-1)\Delta t)}{\sin(\beta\Delta t)}|\]

whose iteration \(k\) times gives \(|\frac{\sin(\beta t)}{\sin(\beta\Delta t)}|\leq|k|\), hence

\[|\frac{\Delta t\sin(\beta t)}{\sin(\beta\Delta t)}|\leq|k|\Delta t\leq T \tag{2.4}\]

therefore, using also \(|\frac{\sin(|\alpha|t)}{|\alpha|}|\leq|t|\leq T\),

\[\Theta\leq 2|\hat{f}(\alpha)|+2|\hat{g}(\alpha)|T.\]

The hypotheses on \(f,g\) imply that \(|\hat{f}(\alpha)|\), \(|\hat{g}(\alpha)|\) decay sufficiently fast in order that there exists \(M=M(\varepsilon,T)\) such that

\[\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{|\alpha|>M}\Theta\;d\alpha<\frac{\varepsilon}{2}.\]

For such fixed \(M\) we must find sufficiently small \(\overline{\Delta x}\), \(\overline{\Delta t}\) such that

\[\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{|\alpha|\leq M}\Theta\;d\alpha<\frac{\varepsilon}{2}\]

if \(0<\Delta x<\overline{\Delta x}\), \(0<\Delta t<\overline{\Delta t}\), and (2.1) holds. The Fourier transforms of the data are bounded on \(\mathbb{R}^{n}\), and from Lemma (2.4) the function \((\alpha,\Delta x,\Delta t)\to\beta(\alpha,\Delta x,\Delta t)\) is analytic in a neighbourhood of \(\{|\alpha|\leq M\}\times[-\overline{\Delta x},\overline{\Delta x}]\times[-\overline{\Delta t},\overline{\Delta t}]\), and the last inequality follows from the uniform continuity in \(\mathbb{R}\) of the trigonometric functions. The proof of statement a.1) in Theorem (2.1) is concluded: for the considered simplified version of \((L)\), and with fixed ratio \(\frac{\Delta t}{\Delta x}\), it is due to Lewy [6], see also [3] §7.3. We remark that convergence is uniform for \((x,t)\in E_{\infty}\).
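As a numerical illustration of statement a.1) (ours, not part of the proof), the following Python sketch runs the simplest scheme in space dimension \(n=1\) with homogeneous boundary data and compares it with the d'Alembert solution; the grid sizes and the sample data are arbitrary choices.

```
import numpy as np

def discrete_laplacian(v, dx):
    # (delta^{-1} o delta) in space; the boundary entries are left at zero.
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    return lap

def leapfrog_wave_1d(f, g, L=1.0, T=0.25, dx=1e-3, ratio=0.5):
    # Solve box_{dx,dt} v = 0 with v(., 0) = f, centered initial velocity g,
    # and v = 0 on the boundary; ratio = dt/dx must satisfy (2.1), i.e. <= 1 for n = 1.
    dt = ratio * dx
    x = np.linspace(0.0, L, int(round(L / dx)) + 1)
    v_old = f(x)
    # First step: the scheme at t = 0 combined with the centered condition
    # (v(x, dt) - v(x, -dt)) / (2 dt) = g(x) gives a second-order accurate start.
    v = v_old + dt * g(x) + 0.5 * dt**2 * discrete_laplacian(v_old, dx)
    for _ in range(int(round(T / dt)) - 1):
        v_new = 2.0 * v - v_old + dt**2 * discrete_laplacian(v, dx)
        v_new[0] = v_new[-1] = 0.0   # enforce the homogeneous boundary datum
        v_old, v = v, v_new
    return x, v

f = lambda x: np.exp(-200.0 * (x - 0.5) ** 2)   # smooth bump, nearly zero at the boundary
g = lambda x: np.zeros_like(x)
x, v = leapfrog_wave_1d(f, g)
exact = 0.5 * (f(x + 0.25) + f(x - 0.25))        # d'Alembert solution at t = 0.25
print(np.max(np.abs(v - exact)))                 # small, and shrinking as dx -> 0
```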
The last argument, based on the splitting of the Fourier transform expression of the difference between an approximated and a limit solution into a high frequency part, uniformly estimated for given \(T\) by the fast decay of the Fourier transform of the data, and a bounded frequency part, estimated by the uniform continuity of trigonometric functions and by the analytic extension up to the \(0\)-value of the discretization parameters of the frequency function of the approximated solution, will be used several times in the proof: we will refer to it as the _frequency splitting argument_.

Let \((x,t)\in\Omega\times(-T,T)-E_{\infty}\): there exists a sequence \((x,t_{p})\in\Sigma_{\Delta x,\Delta t_{p}}\), with \((\Delta x,\Delta t_{p})\in\mathcal{A}_{T}\), such that \((x,t_{p})\to(x,t)\) and \(\Delta t_{p}=\frac{\Delta t}{2^{p}}\) for a given \(\Delta t\). Let \(v^{p}=v^{\Delta x,\Delta t_{p}}\). Then

\[|v^{p}(x,t_{p})-u(x,t)|\leq|v^{p}(x,t_{p})-u(x,t_{p})|+|u(x,t_{p})-u(x,t)|\]

The first term of the sum in the _r.h.s._ of the last inequality can be made as small as we wish, independently of \((x,t_{p})\in E_{\infty}\), just choosing \(p\) sufficiently big, as proved before. The second term in the _r.h.s._ of the last inequality, which is defined in \(\Omega\times(-T,T)\), can be made as small as we wish using the frequency splitting argument and the uniform continuity of the integrand in the analytic expression of \(u(\cdot,\cdot)\): this ends the proof of statement a.2). Incidentally, the analytic expression of the limit \(u(\cdot,\cdot)\) proves that this extension of the limit of the solutions of \((E)_{\Delta x,\Delta t}\) from \(E_{\infty}\) to \(\Omega\times(-T,T)\) is unique. This argument does not require all the regularity assumptions in Theorem (2.1): the higher regularity hypotheses are used when the same argument is applied to the difference quotients of \(v^{\Delta x,\Delta t}\) to get the conclusions in statement a.3).
To prove statements b.1)-b.3) of Theorem (2.1) we will use the analytic expression obtained in Lemma (2.4) and prove that, remembering that when \((\Delta x,\Delta t)\in\mathcal{A}_{T}\) then \(\Delta t=\frac{T}{N}\), \(N\in\mathbb{N}\),

\[lim_{\Delta t\to 0}v^{\Delta x,\Delta t}(x,t)=lim_{\Delta t\to 0}\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)\cos(\beta t)+\hat{g}(\alpha)\frac{\Delta t}{\sin(\beta\Delta t)}\sin(\beta t))\;d\alpha=\]

\[\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)\cos(\beta_{0}t)+\hat{g}(\alpha)\frac{1}{\beta_{0}}\sin(\beta_{0}t))\;d\alpha=\varphi^{\Delta x}(x)(t) \tag{2.5}\]

where the last equality is the definition of \(\varphi^{\Delta x}(x)(t)\) and

\[\beta_{0}=\beta(\alpha,\Delta x,0)=\sqrt{\sum_{k=1}^{n}\frac{\sin^{2}(\frac{\alpha_{k}\Delta x}{2})}{(\frac{\alpha_{k}\Delta x}{2})^{2}}\alpha_{k}^{2}}.\]

To prove the above equalities we must prove that

\[\forall\varepsilon>0\;\exists\overline{\Delta t}=\overline{\Delta t}(\varepsilon,T)>0\]

such that \(\forall\Delta t\in]0,\overline{\Delta t}[\) we have

\[|\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)(\cos(\beta t)-\cos(\beta_{0}t))+\hat{g}(\alpha)(\frac{\Delta t}{\sin(\beta\Delta t)}\sin(\beta t)-\frac{1}{\beta_{0}}\sin(\beta_{0}t)))\;d\alpha|<\varepsilon\]

As we did in proving statement a.1), we use the frequency splitting argument, observing that

\[|\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)(\cos(\beta t)-\cos(\beta_{0}t))+\hat{g}(\alpha)(\frac{\Delta t}{\sin(\beta\Delta t)}\sin(\beta t)-\frac{1}{\beta_{0}}\sin(\beta_{0}t)))\;d\alpha|\leq\]

\[\int_{\mathbb{R}^{n}}(|\hat{f}(\alpha)||\cos(\beta t)-\cos(\beta_{0}t)|+|\hat{g}(\alpha)||\frac{\Delta t}{\sin(\beta\Delta t)}\sin(\beta t)-\frac{1}{\beta_{0}}\sin(\beta_{0}t)|)\;d\alpha=\]

\[=\int_{\mathbb{R}^{n}}\Lambda\;d\alpha=\int_{|\alpha|\leq M}\Lambda\;d\alpha+\int_{|\alpha|>M}\Lambda\;d\alpha\]

for \(M>0\). Using that for any \(z\in\mathbb{R}\) one has \(|\frac{\sin z}{z}|\leq 1\), choosing \(t\) such that \(|t|\leq T\) and using (2.4), we get

\[|\frac{\Delta t}{\sin(\beta\Delta t)}\sin(\beta t)-\frac{1}{\beta_{0}}\sin(\beta_{0}t)|\leq 2T\]

hence there exists \(M=M(\varepsilon,T)\) such that

\[\int_{|\alpha|>M}\Lambda\;d\alpha<\frac{\varepsilon}{2}.\]

Applying the same argument used before, we conclude from the analytic extension up to \(\Delta x=0\), \(\Delta t=0\) of \(\beta(\alpha,\Delta x,\Delta t)\) that there exists a positive constant \(\overline{\Delta t}\) such that if \(\Delta t<\overline{\Delta t}\) then

\[\int_{|\alpha|\leq M}\Lambda\;d\alpha<\frac{\varepsilon}{2}\]

and b.1) has been proved. b.2) is proved in the same way we proved a.2): here it is crucial to observe that, writing

\[\varphi^{\Delta x}(x)(t)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)\cos(\beta(\alpha,\Delta x,0)t)+\hat{g}(\alpha)\frac{\sin(\beta(\alpha,\Delta x,0)t)}{\beta(\alpha,\Delta x,0)})\;d\alpha\]

such analytic expression is defined in \(\Omega\times(-T,T)\).
The proof of b.3) is then similar to that of a.3). To prove b.4) we need a lemma, analogous to Lemma (2.3) and Lemma (2.4), for the mixed continuous/discrete differential operator

\[\Diamond=\frac{d^{2}}{dt^{2}}-\sum_{k=1}^{n}\delta_{k,\Delta x}^{-1}\circ\delta_{k,\Delta x}\]

**Lemma 2.5**.:

* _Among the couples (eigenvalue/eigenvector) of \(\Diamond\) there are_ \[(-\beta^{2}+\sum_{k=1}^{n}\frac{\sin^{2}(\frac{\alpha_{k}\Delta x}{2})}{(\frac{\alpha_{k}\Delta x}{2})^{2}}\alpha_{k}^{2},\;e^{i(\alpha\cdot x+\beta t)})\] _where from (2.1) \(\beta\in\mathbb{R}^{+}\) and \(\alpha\in\mathbb{R}^{n}\)._
* _Among the solutions of \(\Diamond\xi=0\) are the functions_ \[(x,t)\to e^{i(\alpha\cdot x+\beta_{0}t)}\] _where \(\beta_{0}=\sqrt{\sum_{k=1}^{n}\frac{\sin^{2}(\frac{\alpha_{k}\Delta x}{2})}{(\frac{\alpha_{k}\Delta x}{2})^{2}}\alpha_{k}^{2}}\)_
* _The initial value problem (Lagrange's model for the homogeneous, constant velocity, infinitely flexible wave equation)_ (2.6) \[\begin{cases}\Diamond\xi(x)=0\;in\;\mathbb{R}^{n}\times\mathbb{R}\\ \xi(x)(0)=f(x)\;in\;\mathbb{R}^{n}\times\{0\}\\ \dot{\xi}(x)(0)=g(x)\;in\;\mathbb{R}^{n}\times\{0\}\end{cases}\] _with \(f,g\) as in Theorem (2.1), has solution_ (2.7) \[\varphi^{\Delta x}(x)(t)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{i\alpha\cdot x}(\hat{f}(\alpha)\cos(\beta_{0}t)+\frac{\hat{g}(\alpha)}{\beta_{0}}\sin(\beta_{0}t))\;d\alpha\] _which is defined in \(\Omega\times(-T,T)\)._

Proof.: Statements a), b) are straightforward computations. Statement c) follows computing the solutions of (2.6) with data \(f(x)=e^{i\alpha\cdot x}\), \(g\equiv 0\), respectively \(f\equiv 0\), \(g(x)=e^{i\alpha\cdot x}\): then using the linearity of the equation and the regularity assumptions on the data, which allow the passage of the derivatives up to second order of the function defined by (2.7) inside the integral in its definition, the proof of statement c) is concluded.

Statement b.4) of Theorem (2.1) for the homogeneous initial data problem, with \(a\equiv 1\), \(\sigma\equiv 0\), then follows from (2.5), (2.7). To complete the proof of Theorem (2.1) in this setting we must prove that

\[\forall\varepsilon>0\;\exists\overline{\Delta x}=\overline{\Delta x}(\varepsilon,T)>0\]

such that if \(\Delta x<\overline{\Delta x}\) then

\[|\varphi^{\Delta x}(x)(t)-u(x,t)|<\varepsilon \tag{2.8}\]

where \(u(x,t)\) is the solution of \((E)\). From (2.3) and (2.7)

\[|\varphi^{\Delta x}(x)(t)-u(x,t)|\leq\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}(|\hat{f}(\alpha)||\cos(\beta_{0}t)-\cos(|\alpha|t)|+|\hat{g}(\alpha)||\frac{\sin(\beta_{0}t)}{\beta_{0}}-\frac{\sin(|\alpha|t)}{|\alpha|}|)\ d\alpha=\]

\[\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}\Gamma\ d\alpha=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\{|\alpha|\leq M\}}\Gamma\ d\alpha+\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\{|\alpha|>M\}}\Gamma\ d\alpha\]

Once again we use the frequency splitting argument: from (2.4), \(|\frac{\sin z}{z}|\leq 1\) and \(|t|\leq T\) we get

\[\Gamma\leq 2|\hat{f}(\alpha)|+2|\hat{g}(\alpha)|T\]

therefore there exists \(M=M(\varepsilon,T)>0\) such that \(\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\{|\alpha|>M\}}\Gamma\ d\alpha<\frac{\varepsilon}{2}\).
Moreover, the boundedness of the Fourier transforms of the data, the analyticity up to \(\Delta x=\Delta t=0\) of \(\beta(\alpha,\Delta x,\Delta t)\) and \[lim_{\Delta x\to 0}\beta_{0}(\alpha,\Delta x,0)=|\alpha|,\] uniformly for \(|\alpha|\leq M\), imply the existence of \(\overline{\Delta x}=\overline{\Delta x}(\varepsilon,T)>0\) such that if \(\Delta x<\overline{\Delta x}\) then (2.8) holds. This ends the proof of Theorem (2.1) for a homogeneous, simplified, initial value problem. The case of an equally simplified homogeneous mixed initial/boundary problem is dealt with in the same way we treated the initial data problem: the boundary condition, when translated into an assignment of the values of the solutions to the mixed problem analogous to \((E)_{\Delta x,\Delta t}\), leads to the same analytic expressions of \(v^{\Delta x,\Delta t}(x,t)\), \(u(x,t)\), \(\varphi^{\Delta x}(x)(t)\) we found for the case of the purely initial value problem. One has only to avoid possible ambiguities in the definition of points in \(\Omega_{\Delta x}\) and in \(\partial_{\Delta x}\Omega\) which could occur, for instance, from the existence of "double points" in the boundary \(\partial\Omega\), see [1]. Here a double point \(x\in\partial\Omega\) is a point such that for any ball \(B\) centered at \(x\) the set \(B\cap\Omega\) is not connected. In any case, the conclusions for the homogeneous, constant velocity, infinitely flexible case with mixed initial/boundary condition are, for the case of smooth boundary \(\partial\Omega\), those of Theorem (2.1): they can be easily extended to the case of corners in the boundary of \(\Omega\) when the boundary has no double points. _Remark 2.6_.: The interpretation of the solution \(u(x,t)\) of, say, \((E)\) as the iterated limit \[lim_{\Delta x\to 0}lim_{\Delta t\to 0}v^{\Delta x,\Delta t}(x,t)=u(x,t)\] could suggest an analogous property obtained by inversion of the order of limits, whose possible interpretation is the convergence of the Euler broken line approximation of the evolution ODE equivalent to the wave equation: this claim is actually false, in general, since the Courant-Friedrichs-Lewy condition (2.1) clearly shows that the roles played by the spatial and time discretization parameters are not symmetric. ## 3. The inhomogeneous case In this section we will prove Theorem (2.1) in the same simplified version considered in the previous one, except for the substitution in \((E)\) of the homogeneous differential equation with the inhomogeneous one \[\square u=w\ in\ \Omega\times(-T,T).\] As in the previous section we will mainly pay attention to the initial data case. We will reduce the inhomogeneous Cauchy problem to a family of homogeneous ones _via_ the variation of constants method (Duhamel's principle), hence reducing its proof to that of the homogeneous case. 
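Before carrying out this reduction, the variation of constants formula can be sanity-checked on a single Fourier mode, for which the wave equation collapses to \(y''+\beta^{2}y=w(t)\) with \(y(0)=y'(0)=0\). The Python sketch below is our own illustration; the forcing term is an arbitrary smooth placeholder.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar Duhamel check (ours): the variation-of-constants solution of
#   y'' + beta^2 y = w(t),  y(0) = y'(0) = 0,
# is y(t) = int_0^t sin(beta (t - s)) / beta * w(s) ds. We compare it with
# a direct numerical integration of the ODE.
beta = 3.0
w = lambda s: np.exp(-s) * np.cos(2 * s)     # arbitrary smooth forcing

def duhamel(t, n=4000):
    s = np.linspace(0.0, t, n)
    return np.trapz(np.sin(beta * (t - s)) / beta * w(s), s)

sol = solve_ivp(lambda t, y: [y[1], -beta ** 2 * y[0] + w(t)],
                (0.0, 2.0), [0.0, 0.0], rtol=1e-10, atol=1e-12)
print(abs(sol.y[0, -1] - duhamel(2.0)))      # agreement to quadrature accuracy
```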
Firstly we write the inhomogeneous form of \((E)\) as an equivalent first order system of PDEs \[(EV)\begin{cases}\dot{\xi}(x)=A\xi(x)+W(x)\;in\;\Omega\times(-T,T)\\ \xi_{1}(x)(0)=f(x)\;in\;\Omega\\ \xi_{2}(x)(0)=g(x)\;in\;\Omega\end{cases}\] where \(\xi={}^{t}(\xi_{1},\xi_{2})\), \(A\xi(x)={}^{t}(\xi_{2},\Delta\xi_{1})\) and \(W(x)(\cdot)={}^{t}(0,w(x)(\cdot))\). The discretized version of this problem, which is equivalent to \((E)_{\Delta x,\Delta t}\), is \[(EV)_{\Delta x,\Delta t}\begin{cases}\delta_{\Delta t}\xi(x)(t)=A_{\Delta x}\xi(x)(t)+W(x)(t)\;in\;\Omega\times(-T,T)\\ \xi_{1}(x)(0)=f(x)\;in\;\Omega\\ \xi_{2}(x)(0)=g(x)\;in\;\Omega\end{cases}\] where \(A_{\Delta x}\) is obtained from \(A\) substituting the Laplace operator with respect to the spatial variable with its discretized version \(\sum_{k=1}^{n}\delta_{k,\Delta x}^{-1}\circ\delta_{k,\Delta x}\) and all the functions appearing in \((EV)_{\Delta x,\Delta t}\) are evaluated in the lattices \(\Sigma_{\Delta x,\Delta t}\), \(\Sigma_{\Delta x}\), \((\Delta x,\Delta t)\in\mathcal{A}_{T}\). Let \(\eta^{\Delta x,\Delta t}(x)(t)\) be the solution of the homogeneous case (\(W\equiv 0\)) of \((EV)_{\Delta x,\Delta t}\): from the theory developed in the previous section it has the form \[\eta^{\Delta x,\Delta t}(x)(t)=\int_{\mathbb{R}^{n}}\mathcal{W}^{\Delta x,\Delta t}(x,\alpha)(t)\begin{pmatrix}\hat{f}(\alpha)\\ \hat{g}(\alpha)\end{pmatrix}\;d\alpha\] where \[\mathcal{W}^{\Delta x,\Delta t}(x,\alpha)(t)=\frac{e^{i\alpha\cdot x}}{(2\pi)^{\frac{n}{2}}}\begin{pmatrix}\cos(\beta t)&\frac{\Delta t\sin(\beta t)}{\sin(\beta\Delta t)}\\ -\beta\sin(\beta t)&\cos(\beta t)\frac{\beta\Delta t}{\sin(\beta\Delta t)}\end{pmatrix}\] \(\beta=\beta(\alpha,\Delta x,\Delta t)\) being defined in Lemma (2.4). Analogously the solution of \((EV)\) in the homogeneous case is \[\eta(x)(t)=\int_{\mathbb{R}^{n}}\mathcal{W}(x,\alpha)(t)\begin{pmatrix}\hat{f}(\alpha)\\ \hat{g}(\alpha)\end{pmatrix}\;d\alpha\] where \[\mathcal{W}(x,\alpha)(t)=\frac{e^{i\alpha\cdot x}}{(2\pi)^{\frac{n}{2}}}\begin{pmatrix}\cos(|\alpha|t)&\frac{\sin(|\alpha|t)}{|\alpha|}\\ -|\alpha|\sin(|\alpha|t)&\cos(|\alpha|t)\end{pmatrix}.\] With these notations statement \(a.1)\) of Theorem (2.1), proved in the present homogeneous case in the previous section, implies \[lim_{\Delta x,\Delta t\to(0,0)}\mathcal{W}^{\Delta x,\Delta t}(x,\alpha)(t)=\mathcal{W}(x,\alpha)(t)\] uniformly for \((x,t)\in E_{\infty}\). We define the mixed discrete/continuous Cauchy problem \[(EV)_{\Delta x}\begin{cases}\dot{\xi}(x)(t)=A_{\Delta x}\xi(x)(t)+W(x)(t)\;in\;\Omega\times(-T,T)\\ \xi_{1}(x)(0)=f(x)\;in\;\Omega\\ \xi_{2}(x)(0)=g(x)\;in\;\Omega\end{cases}\] whose solution is \[\varphi^{\Delta x}(x)(t)=\int_{\mathbb{R}^{n}}\mathcal{W}^{\Delta x}(x,\alpha)(t)\begin{pmatrix}\hat{f}(\alpha)\\ \hat{g}(\alpha)\end{pmatrix}\;d\alpha\] In the previous section we proved that \[\begin{cases}lim_{\Delta t\to 0}\mathcal{W}^{\Delta x,\Delta t}(x,\alpha)(t)=\mathcal{W}^{\Delta x}(x,\alpha)(t)\\ lim_{\Delta x\to 0}\mathcal{W}^{\Delta x}(x,\alpha)(t)=\mathcal{W}(x,\alpha)(t)\end{cases}\] uniformly for \((x,t)\in E_{\infty}\). By linearity of the wave equation, to prove Theorem (2.1) in the inhomogeneous case it is sufficient to prove it when the initial data are \(f\equiv 0\), \(g\equiv 0\). 
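For a single mode, and dropping the scalar factor \(e^{i\alpha\cdot x}/(2\pi)^{n/2}\), the matrix \(\mathcal{W}(x,\alpha)(t)\) above is the fundamental matrix of \(\xi'=A\xi\) with \(A=\begin{pmatrix}0&1\\-|\alpha|^{2}&0\end{pmatrix}\). The following minimal Python check (our own illustration) verifies the two properties used in the computation below, \(\mathcal{W}'(t)=A\mathcal{W}(t)\) and \(\mathcal{W}(0)=I\).

```python
import numpy as np

# Check (ours): W(t) = [[cos(bt), sin(bt)/b], [-b sin(bt), cos(bt)]]
# satisfies W'(t) = A W(t) with A = [[0, 1], [-b^2, 0]] and W(0) = I,
# where b stands for |alpha| in a single Fourier mode.
b, t, h = 1.7, 0.9, 1e-6

def W(t):
    return np.array([[np.cos(b * t), np.sin(b * t) / b],
                     [-b * np.sin(b * t), np.cos(b * t)]])

A = np.array([[0.0, 1.0], [-b ** 2, 0.0]])
dW = (W(t + h) - W(t - h)) / (2 * h)          # centered numerical derivative
print(np.max(np.abs(dW - A @ W(t))))          # ~1e-10
print(np.max(np.abs(W(0.0) - np.eye(2))))     # exactly 0
```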
By the classical theory of variation of constants applied to \((EV)_{\Delta x,\Delta t}\) we get that the solution of \((EV)_{\Delta x,\Delta t}\) with such initial data is \[\begin{cases}(x,t)\to\int_{0}^{t}\mathcal{W}^{\Delta x,\Delta t}(x)(t-s)W(x)(s)\;ds\\ \mathcal{W}^{\Delta x,\Delta t}(x)(t-s)W(x)(s)=\int_{\mathbb{R}^{n}}\mathcal{W}^{\Delta x,\Delta t}(x,\alpha)(t-s)W(x,\alpha)(s)\;d\alpha\end{cases}\] where _e.g._\(W(x,\alpha)(s)\) is the Fourier transform of \(W(x)(s)\) with respect to the \(x\)-variable. Analogously the solution of \((EV)_{\Delta x}\) with null initial data is \[\begin{cases}(x,t)\to\int_{0}^{t}\mathcal{W}^{\Delta x}(x)(t-s)W(x)(s)\;ds\\ \mathcal{W}^{\Delta x}(x)(t-s)W(x)(s)=\int_{\mathbb{R}^{n}}\mathcal{W}^{\Delta x}(x,\alpha)(t-s)W(x,\alpha)(s)\;d\alpha.\end{cases}\] Theorem (2.1) implies that if \((\Delta x,\Delta t)\in\mathcal{A}_{T}\), uniformly for \((x,t)\in E_{\infty}\) \[lim_{(\Delta x,\Delta t)\to(0,0)}\int_{0}^{t}\mathcal{W}^{\Delta x,\Delta t}(x)(t-s)W(x)(s)\;ds=\int_{0}^{t}\mathcal{W}(x)(t-s)W(x)(s)\;ds\] \[lim_{\Delta t\to 0}\int_{0}^{t}\mathcal{W}^{\Delta x,\Delta t}(x)(t-s)W(x)(s)\;ds=\int_{0}^{t}\mathcal{W}^{\Delta x}(x)(t-s)W(x)(s)\;ds\] \[lim_{\Delta x\to 0}\int_{0}^{t}\mathcal{W}^{\Delta x}(x)(t-s)W(x)(s)\;ds=\int_{0}^{t}\mathcal{W}(x)(t-s)W(x)(s)\;ds.\] On the other hand, the regularity assumptions on the data and the forcing term imply that \[\frac{d}{dt}\int_{0}^{t}\mathcal{W}(x)(t-s)W(x)(s)\;ds=W(x)(t)+\int_{0}^{t}\frac{d}{dt}\mathcal{W}(x)(t-s)W(x)(s)\;ds\] and the definition of \(\mathcal{W}(x)(t)\) implies that \[\begin{cases}\frac{d}{dt}\mathcal{W}(x)(t)c=A\mathcal{W}(x)(t)c\\ \mathcal{W}(x)(0)c=c\end{cases}\] therefore \[\frac{d}{dt}\Big(\mathcal{W}(x)(t)c+\int_{0}^{t}\mathcal{W}(x)(t-s)W(x)(s)\;ds\Big)=A\mathcal{W}(x)(t)c+W(x)(t)+\int_{0}^{t}A\mathcal{W}(x)(t-s)W(x)(s)\;ds=\] \[A\Big(\mathcal{W}(x)(t)c+\int_{0}^{t}\mathcal{W}(x)(t-s)W(x)(s)\;ds\Big)+W(x)(t),\] _i.e._ the function \[t\to\mathcal{W}(x)(t)c+\int_{0}^{t}\mathcal{W}(x)(t-s)W(x)(s)\ ds\] is a solution of \((EV)\). The special case when \(c=0\) proves Theorem (2.1) for initial data \(f\equiv 0,g\equiv 0\), and therefore it ends the proof of the theorem in the inhomogeneous, constant velocity, infinitely flexible, initial data case. Finally, applying the same argument explained at the end of the previous section extends the conclusion to the mixed initial/boundary case. ## 4. Conclusion of the proof of the main theorem To finish the proof of Theorem (2.1) we write the velocity function in the equation in \((L)\) as \[a(x)=1+b(x)\] and write the solution \(u(x,t)\) of \((L)\) as \[u(x,t)=\phi(x,t)+v(x)\] where \(v(\cdot)\) is a solution of \[(EL)\begin{cases}b(x)\Delta v=\sigma(x)v\;in\;\Omega\\ v(x)=h(x)\;in\;\partial\Omega\end{cases}\] and \(\phi(\cdot,\cdot)\) is a solution of \[(L)^{\prime}\begin{cases}\square\phi=w\;in\;\Omega\times(-T,T)\\ \phi(x,0)=f(x)-v(x)\;in\;\Omega\times\{0\}\\ \frac{\partial\phi}{\partial t}(x,0)=g(x)\;in\;\Omega\times\{0\}\\ \phi(x,t)=0\;in\;\partial\Omega\times(-T,T).\end{cases}\] The compatibility conditions for \((L)^{\prime}\) follow from those of \((L)\). 
The problems \((EL)\), \((L)^{\prime}\) have discretized versions \[(EL)_{\Delta x}\begin{cases}b(x)\sum_{k=1}^{n}\delta_{k,\Delta x}^{-1}\circ\delta_{k,\Delta x}v=\sigma(x)v\;in\;\Sigma_{\Delta x}\cap\Omega\\ v(x)=h(x)\;in\;\partial\Omega_{\Delta x}\end{cases}\] and \[(L)^{\prime}_{\Delta x,\Delta t}\begin{cases}\square\phi=w\;in\;\Sigma_{\Delta x,\Delta t}\cap\Omega\times(-T,T)\\ \phi(x,0)=f(x)-v(x)\;in\;\Sigma_{\Delta x}\cap\Omega\times\{0\}\\ \frac{\partial\phi}{\partial t}(x,0)=g(x)\;in\;\Sigma_{\Delta x}\cap\Omega\times\{0\}\\ \phi(x,t)=0\;in\;\partial\Omega_{\Delta x}\times(\Sigma_{\Delta t}\cap(-T,T)).\end{cases}\] The existence of the solution \(v^{\Delta x}:\Sigma_{\Delta x}\cap\Omega\to\mathbb{R}\) of \((EL)_{\Delta x}\) was originally proved in [1] and it is a standard result in the numerical theory of elliptic PDEs. The existence of the solution of \((L)^{\prime}_{\Delta x,\Delta t}\) follows from the explicit nature of the numerical scheme and from [1], and \[lim_{(\Delta x,\Delta t)\to(0,0)}\phi^{(\Delta x,\Delta t)}(x,t)=\phi(x,t)\] is proved in the previous sections. Lagrange's discrete mechanical model equation \[(L)^{\prime}_{\Delta x}\begin{cases}\ddot{\xi}(x)=\sum_{k=1}^{n}\delta_{k,\Delta x}^{-1}\circ\delta_{k,\Delta x}\xi(x)+w(x)\;in\;(\Sigma_{\Delta x}\cap\Omega)\times(-T,T)\\ \xi(x)(0)=f(x)-v^{\Delta x}(x)\;in\;\Sigma_{\Delta x}\cap\Omega\\ \dot{\xi}(x)(0)=g(x)\;in\;\Sigma_{\Delta x}\cap\Omega\\ \xi(x)(t)\equiv 0\;in\;\partial\Omega_{\Delta x}\times(-T,T)\end{cases}\] has solution \(\varphi^{\Delta x}(x)(t)\) which, as proved in the previous sections, satisfies \[\begin{cases}lim_{\Delta t\to 0}\phi^{\Delta x,\Delta t}(x,t)=\varphi^{\Delta x}(x)(t)\\ lim_{\Delta x\to 0}\varphi^{\Delta x}(x)(t)=\phi(x,t)\end{cases}\] uniformly for \((x,t)\in\Omega\times(-T,T)\), hence \[lim_{\Delta x\to 0}(\varphi^{\Delta x}(x)(t)+v^{\Delta x}(x))=u(x,t)\] uniformly for \((x,t)\in\Omega\times(-T,T)\). The proof of Theorem (2.1) is concluded. As a final comment we observe that, as mentioned at the end of the second section, Theorem (2.1) is valid even if \(\partial\Omega\) has corners (_i.e._ points where the tangent space to the boundary does not exist, but a tangent cone does), provided it has no "double points"; see the definition in the second section. Moreover, the fact that the domain of dependence of the solution \(u(\cdot,\cdot)\) of \((L)\) is finite implies that the hypothesis of compactness of the support of \(w\) is actually inessential, see _e.g._ the footnote at the end of §5.3 in [4].
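As a closing illustration (ours, not part of the paper), the explicit scheme behind \((L)^{\prime}_{\Delta x,\Delta t}\) can be run in one space dimension for the homogeneous problem with homogeneous Dirichlet boundary data; the standing-wave data \(f(x)=\sin(\pi x)\), \(g\equiv 0\) are an illustrative choice. Under the Courant-Friedrichs-Lewy constraint \(\Delta t\leq\Delta x\) of (2.1), the lattice solution approaches the exact one as the discretization is refined.

```python
import numpy as np

# Explicit leapfrog scheme for u_tt = u_xx on (0, 1) with u = 0 on the
# boundary, run under the CFL constraint dt <= dx (cf. condition (2.1)).
# The discrete solution is compared with the exact standing wave
# u(x, t) = sin(pi x) cos(pi t) for f = sin(pi x), g = 0 (our test data).
def leapfrog(nx, cfl=0.9, T=1.0):
    dx = 1.0 / nx
    dt = cfl * dx
    nt = int(round(T / dt))
    x = np.linspace(0.0, 1.0, nx + 1)
    u_prev = np.sin(np.pi * x)                  # u(x, 0) = f(x)
    # First step from g = 0 via Taylor expansion: u(dt) ~ f + dt^2/2 * f''.
    u = u_prev.copy()
    u[1:-1] += 0.5 * (dt / dx) ** 2 * (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2])
    for _ in range(nt - 1):
        u_next = np.zeros_like(u)               # boundary values stay 0
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + (dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return x, u, nt * dt

for nx in (50, 100, 200):
    x, u, t_end = leapfrog(nx)
    err = np.max(np.abs(u - np.sin(np.pi * x) * np.cos(np.pi * t_end)))
    print(nx, err)        # the error shrinks as the lattice is refined
```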
2301.04806
RF Injection Locking of THz Metasurface Quantum-Cascade VECSEL
RF injection locking and spectral broadening of a terahertz (THz) quantum-cascade vertical-external-cavity surface-emitting laser (QC-VECSEL) are demonstrated. An intra-cryostat VECSEL focusing cavity design is used to enable continuous-wave lasing with a cavity length over 30 mm, which corresponds to a round-trip frequency near 5 GHz. Strong RF current modulation is injected into the QC-metasurface electrical bias to pull and lock the round-trip frequency. The injection locking range at various RF injection powers is recorded and compared with injection locking theory. Moreover, the lasing spectrum broadens from 14 GHz in free-running mode to a maximum spectral width around 100 GHz with 20 dBm of injected RF power. This experimental setup is suitable for further exploration of active mode-locking and picosecond pulse generation in THz QC-VECSELs.
Yu Wu, Christopher A. Curwen, Mohammad Shahili, John L. Reno, Benjamin S. Williams
2023-01-12T04:29:05Z
http://arxiv.org/abs/2301.04806v1
# RF Injection Locking of THz Metasurface Quantum-Cascade VECSEL ###### Abstract RF injection locking and spectral broadening of a terahertz (THz) quantum-cascade vertical-external-cavity surface-emitting laser (QC-VECSEL) are demonstrated. An intra-cryostat VECSEL focusing cavity design is used to enable continuous-wave lasing with a cavity length over 30 mm, which corresponds to a round-trip frequency near 5 GHz. Strong RF current modulation is injected into the QC-metasurface electrical bias to pull and lock the round-trip frequency. The injection locking range at various RF injection powers is recorded and compared with injection locking theory. Moreover, the lasing spectrum broadens from 14 GHz in free-running mode to a maximum spectral width around 100 GHz with 20 dBm of injected RF power. This experimental setup is suitable for further exploration of active mode-locking and picosecond pulse generation in THz QC-VECSELs. ## 1 Introduction The terahertz (THz) spectral region has a need for high-resolution, high-speed spectroscopy techniques, as many gas phase polar molecular species have strong characteristic rotational lines there. Examples include applications in industrial and environmental monitoring,[1, 2] chemical detection and identification,[3, 4] and combustion diagnostics.[5] The quantum cascade (QC) laser is well suited for spectroscopic applications as it has been demonstrated as a compact, electrically pumped semiconductor source which gives high power, broadband, coherent THz radiation.[6, 7, 8] Its inherently high optical nonlinearity induces self-phase locking through four-wave mixing, which promotes the generation of spontaneous frequency combs; these have been demonstrated in waveguide-based Fabry-Perot[9, 10, 11] and ring QC-lasers.[12, 13] Based on that, THz dual-comb spectroscopy has been demonstrated, surpassing the precision and speed of traditional Fourier spectrometers by several orders of magnitude.[14, 15, 16, 17] In separate experiments, THz quantum-cascade lasers have recently been implemented in the vertical-external-cavity surface-emitting laser (VECSEL) architecture, which exhibits watt-level output power, near diffraction-limited beam quality, and \(\sim\)20% continuous fractional single-mode tuning [18, 19, 20]. The key component of a QC-VECSEL is an amplifying reflectarray metasurface of metal-metal waveguide antennas that are loaded with QC-gain material. It is further paired with a partially transmissive output coupler to form the laser cavity. In contrast to ridge-waveguide QC-lasers, experiments have shown that QC-VECSELs tend to operate in a single-mode regime despite having large gain bandwidths. For example, we have developed an intra-cryostat focusing VECSEL cavity to reduce the intra-cavity diffraction loss and enable continuous-wave (CW) lasing at 3.4 THz with a cavity length of \(\sim\)30 mm [21]. Even though the gain bandwidth of the metasurface used was at least 100 times larger than the free spectral range, only a single lasing mode was observed. This is mainly due to a lack of spatial hole burning within the QC-VECSEL metasurface which suppresses multi-mode instabilities [22], although it is perhaps compounded by the fact that no effort towards dispersion engineering has yet been attempted. Figure 1: (a) Schematic of the QC-VECSEL based on an intra-cryostat focusing cavity design. (b) Scanning electron microscopy image of the fabricated QC-metasurface. The inset shows the dimension and E-field distribution in a single ridge antenna. 
(c) FEM simulated active metasurface reflectance, output coupler transmittance and GDD contributed by the two components. Shaded area indicates the frequency range where lasing is observed. (d) Schematic of the experimental setup for RF injection locking. THz lasing spectrum (e) and beat note spectra (f) of the free-running QC-device in the presence of optical feedback are collected at a DC bias of 0.235 mA, where optical feedback is provided by the moving FTIR mirror. The beat note spectra are measured in both Average (solid) and Max Hold modes (dashed) with an RBW of 2 kHz. Despite these challenges, there is strong interest in achieving active mode-locked QCLs and frequency combs within the QC-VECSEL architecture. Radiofrequency (RF) current modulation of QC-lasers has been demonstrated to promote the generation of sidemodes; by injecting an RF signal near the cavity round-trip frequency, the generated sidemodes will lock existing adjacent free-running lasing modes or seed new ones, which allows for the stabilization and tuning of frequency comb states.[23; 24; 25; 26] RF modulation and injection locking is also an important mechanism for active mode-locking in QC-lasers; pulses as narrow as 4-5 ps have been reported.[27; 28] In this article, we demonstrate the emergence of spectral broadening and multimoding in a THz QC-VECSEL as we inject strong RF current modulation into the QC-metasurface at a frequency close to the cavity round-trip frequency; at the same time, round-trip frequency pulling and locking to the injected RF signal are observed. The lasing bandwidth and injection locking range increase monotonically with the injected RF power. Lasing modes spanning \(\sim\)100 GHz are demonstrated under an injected RF power of 20 dBm at 4852.7 MHz, along with a locking range of \(\sim\)5 MHz. ## 2 Sample and experimental setup The QC-VECSEL used for all measurements is based on an intra-cryostat focusing cavity design, as sketched in Figure 1(a). An off-axis paraboloid (OAP) mirror with a focal length of 12.7 mm is introduced into the VECSEL cavity to reduce the intra-cavity diffraction loss and enable CW lasing using small metasurfaces in long lasing cavities.[21] The QC-metasurface used in this paper is the same one as reported in ref [21], with a small bias area of diameter \(d=0.4\) mm for reduced injection current. It is designed to be resonant at 3.3 THz and consists of an array of ridges of width 12.2 \(\upmu\)m repeated with a period of 41.7 \(\upmu\)m (Figure 1(b)). An output coupler with \(R_{\mathrm{OC}}\approx 95\%\) is paired with the metasurface to form a laser cavity. Both components are dispersive and contribute to the group delay dispersion (GDD) over one round trip - it exhibits a maximum value exceeding 0.35 ps\({}^{2}\) in the frequency range where lasing occurs. The simulated spectral responses of the metasurface and the output coupler are plotted in Figure 1(c) based on full-wave 2D finite-element (FEM) electromagnetic reflectance simulation (Ansys HFSS). Detailed information on the active region design and simulation parameters can be found in the Supporting Information. The experimental setup for RF injection locking is depicted in Figure 1(d). All the measurements were performed in vacuum at a temperature of 77 K. We note that formable semi-rigid coaxial cable is used within the cryostat up to the chip carrier package (see Supporting Information). 
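Two of the numbers above can be tied together with a quick back-of-the-envelope script; this is our own consistency check, treating the VECSEL as a bare two-mirror Fabry-Perot cavity, and the assumed effective metasurface reflectance is a placeholder rather than a measured value. A \(\sim\)31 mm cavity gives a round-trip frequency near 4.8 GHz, consistent with the beat note reported below, and with the \(\sim\)95% output coupler the cold-cavity linewidth comes out at a few tens of MHz, the same order as the coupled-cavity estimate quoted in the conclusion.

```python
import numpy as np

# Back-of-the-envelope cavity numbers (our own sketch; R_ms is an assumed
# placeholder for the effective metasurface reflectance, not a paper value).
c = 2.998e8                     # speed of light, m/s
L = 31e-3                       # external cavity length, m
R_oc, R_ms = 0.95, 0.97         # output coupler / assumed metasurface reflectance

f_rt = c / (2 * L)              # round-trip (free spectral range) frequency
dnu = -f_rt * np.log(R_oc * R_ms) / (2 * np.pi)   # simple Fabry-Perot linewidth
print(f"round-trip frequency ~ {f_rt / 1e9:.2f} GHz")   # ~4.8 GHz
print(f"cold-cavity linewidth ~ {dnu / 1e6:.0f} MHz")   # tens of MHz
```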
Due to impedance mismatch between the 50\(\Omega\) SMA port and the QC-device, the spectrum analyzer collects not only the generated beat note from the QC-device \(f_{\rm BN}\) (blue arrow), but also the RF injection signal reflected at the interface of SMA/QC-device \(f_{\rm RF}\) (green arrow). In the free-running case, although only single-mode lasing was observed using the same QC-device in ref [21], here we note that the existence of optical feedback can induce multi-mode operation.[29, 30, 31] Due to the temporally varying small feedback from the scanning Fourier-transform infrared spectrometer (FTIR, Nicolet 8700) mirrors, we observed at least two lasing modes separated by \(\sim\)14 GHz in the emission spectrum (Figure 1(e)). Additional low intensity sidemodes may also exist but cannot be resolved by the limited FTIR resolution of 7.5 GHz. This phenomenon is similar to that observed in ref [30], where additional lasing modes are observed in a Mid-IR QC-laser under tilted optical feedback. Through nonlinear mixing of the free-running lasing modes, a weak electrical beat note signal is observed. It is collected using the spectrum analyzer (Agilent N9020a) both in Average mode and in Max Hold mode over 30 seconds, which indicates a round-trip frequency \(f_{\rm BN}\approx\) 4852.7 MHz and an equivalent cavity length around 31 mm (Figure 1(f)). A narrow 8-kHz -3dB linewidth in Average mode and 60-kHz linewidth in Max Hold mode of the intermodal beat note is on the same order as the free-running linewidth of the single THz lasing mode measured in [21]; it is likely contributed by only two (or a few) lasing modes. Furthermore, recent studies of optical feedback have highlighted its effect on THz QC-laser combs.[32, 33, 34] Here, we experimentally demonstrate that optical feedback plays an important role in determining not only the free-running beat note frequency but also the injection locking range (see Supporting Information). ## 3 RF injection locking With the knowledge of an accurate round-trip frequency, we are able to systematically study the modulation-dependent behavior of this QC-device. We swept the RF modulation frequency around the round-trip frequency at various modulation powers from -20 dBm to 20 dBm. All RF powers indicated in this paper refer to the nominal output level of the RF synthesizer (Hewlett-Packard 83650B) or after a 20 dBm amplifier (Hewlett-Packard 8349B). The DC bias is fixed at a current of 0.235 mA (\(\approx\)1.17\(\times I_{\rm th}\)), and the THz emission spectra as well as the intermodal beat note are collected and plotted in Figure 2. At the lowest power level of -20 dBm, the spectral map in Figure 2(a) clearly shows that the beat note is pulled toward the injection signal and finally locked. A locking range of 30 kHz is demonstrated, which increases with respect to the RF injection power. Starting from an RF power of -2.5 dBm, injection locking occurs before the beat note is fully pulled to meet the injected signal \(f_{\rm RF}\) (Figure 2(b)); at the same time, lasing bandwidth broadening is observed in the THz emission spectra. This spectral broadening increases with respect to RF power as shown in Figure 2(d). The maximum RF injection power used in this measurement is 20 dBm, limited by the maximum allowable power of the bias-Tee. THz emission and RF beat note spectral maps in this case are plotted in Figure 2(e-f). The maximum spectral broadening occurs at \(f_{\mathrm{RF}}=\) 4852.7 MHz with lasing modes spanning around 100 GHz (Figure 2(g)).
However, due to the limited FTIR resolution of 7.5 GHz, we were not able to spectrally resolve individual lasing modes. The corresponding power and voltage vs. current (_P-I-V_) curves are plotted in Figure 2(h) (solid curve). A maximum output power around 10 mW was collected using a pyroelectric detector (GentecEO). Compared with the _P-I_ characteristic in the free-running case (dashed curve), the output power, as well as the lasing threshold, is slightly lower. It is noticed from Figure 2(e) that the symmetry of the lasing spectrum is highest at \(f_{\mathrm{RF}}=\) 4852.7 MHz, where the maximum bandwidth is observed with relatively low THz output power obtained from the _P-I_ curve. At injection frequencies above/below this value, the optical power increases - still smaller than that in the free-running case - and concentrates toward the lower/higher portion of the spectrum. This phenomenon is similar to that reported in ref [26]; a possible explanation, based on the phase mismatch between the modulation period and the group round-trip time, can be found in ref [35]. Figure 2: Beat note spectral map under constant RF injection power of -20 dBm (a) and -2.5 dBm (b) with RF modulation frequency sweeping around the round-trip frequency. (c) Experimental injection locking range at different RF injection powers (blue stars), following a 0.5-slope dependence in log–log scale (red dashed line). The free-running beat note frequency was shifted from \(\sim\) 4842 MHz in (a-c) to \(\sim\) 4853 MHz in (d-h) as the movement of the cryostat changes the amount of optical feedback. (d) THz lasing spectra at increasing RF power when \(f_{\mathrm{RF}}\) is fixed at 4852.7 MHz. (e) Lasing spectral and (f) beat note maps of the device under constant RF injection power of 20 dBm. The estimated locking range is pointed out by the red arrows. The maximum spectral broadening occurs at \(f_{\mathrm{RF}}=\) 4852.7 MHz (white dashed line) and the THz lasing spectrum and _P-I-V_ curves in this case are plotted in (g-h). In Figure 2(f), although there is no beat note pulling observed, it is notable that the emission spectrum undergoes a distinct change as the beat note disappears (pointed out by red arrows) - it is believed that this is a signature of injection locking and occurs in our measurements under different RF powers. The experimental locking range at various RF injection powers is plotted in Figure 2(c). To analyze the phenomenon of RF injection locking, Adler's equation is commonly used with a locking bandwidth given by:[23, 36] \[\Delta\nu=\frac{2\nu_{0}}{Q}\sqrt{\frac{P_{inj}}{P_{0}}}, \tag{1}\] where \(Q\) is the cold-cavity quality factor, \(\nu_{0}\) and \(P_{0}\) are the frequency and power of a free-running longitudinal mode, while \(P_{\mathrm{inj}}\) is the power of the injected sideband induced by RF injection. Adler's equation indicates a square root dependence of the locking bandwidth on the RF power and fits our experimental results well at low RF powers (red dashed line). However, our experimental locking range deviates from Adler's equation towards higher values under strong RF modulation. This may indicate the limitation of Adler's equation in explaining RF injection locking, especially in the case when multiple new lasing modes are excited at RF powers \(>\) -2.5 dBm. Adler's equation assumes a weak injection signal where the amplitude perturbation induced by the injection signal is not considered; a more rigorous derivation of the locking range is therefore needed.
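The square-root dependence in Eq. (1) is what produces the 0.5-slope reference line in Figure 2(c). The short Python sketch below makes the scaling explicit; it is our own illustration, and the \(\nu_{0}/Q\) value and the power-to-sideband coupling constant are placeholders, not quantities fitted to the measured data.

```python
import numpy as np

# Sketch (ours) of the Adler scaling of Eq. (1):
#   locking range = 2 (nu0 / Q) sqrt(P_inj / P0),
# which is a straight line of slope 0.5 on a log-log plot of locking range
# versus injected power. nu0/Q and the coupling below are illustrative.
nu0_over_Q = 70e6                      # cold-cavity linewidth estimate, Hz
P_dBm = np.arange(-20.0, 2.5, 2.5)     # nominal synthesizer power, dBm
P_mW = 10 ** (P_dBm / 10)
coupling = 1e-9                        # assumed power-to-sideband conversion
locking_range = 2 * nu0_over_Q * np.sqrt(coupling * P_mW)

slope = np.polyfit(np.log10(P_mW), np.log10(locking_range), 1)[0]
print(f"log-log slope: {slope:.2f}")   # 0.50 by construction
```

Only the slope is meaningful here; the absolute level depends on the unknown conversion from synthesizer power to sideband power, which is exactly the quantity degraded by the impedance mismatch discussed in the conclusion.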
Moreover, we studied the behavior of this QC-device at various DC biases ranging from the lasing threshold to near the NDR point. Figure 3(a) shows the lasing spectra under 20 dBm RF injection at a frequency of \(f_{\mathrm{RF}}=4852.7\) MHz. Significant spectral broadening is observed at all the applied biases, and the lasing bandwidth increases only slightly with respect to the bias current as more modes are brought above the lasing threshold. Figure 3: (a) THz lasing spectra at various biases when an RF signal at 4852.7 MHz is injected into the QC-device; the RF power used is 20 dBm to obtain significant spectral broadening. (b) Injection locking range as a function of bias current under -15 dBm RF power in the cases of different length/strength/angle of optical feedback. As a next step, the effects of device bias on the injection locking bandwidth were investigated. We swept the RF modulation frequency around the round-trip frequency at a fixed injection power of -15 dBm and measured the injection locking bandwidth at various biases. A small injection power was used so that the locking range could be more clearly observed. Additionally, we repeated such bias sweeps while providing different magnitude, phase, and angle of feedback light from an external mirror; the corresponding locking ranges are indicated by different colored curves in Figure 3(b). Our experimental observation reveals that the relationship between locking range and device bias is related to the condition of optical feedback, i.e. feedback length (phase), strength and tilt angle. This is significantly different from previous demonstrations using ridge-waveguide QC-lasers, where the locking range became smaller with increasing bias [23, 37]. In our system, we could make a simple assumption that there are two free-running modes, where mode \(\omega_{1}\) is induced by optical feedback around the main lasing peak \(\omega_{0}\) and is locked by the RF-excited sideband of the latter. The ratio \(P_{\text{inj}}/P_{0}\) in Adler's equation can be estimated as the ratio of the free-running power of mode \(\omega_{0}\) and that of mode \(\omega_{1}\) as the injected RF power is fixed, which determines the injection locking range. How the locking range changes is therefore determined by how the relative power of the two lasing modes develops with respect to bias. Unfortunately, this cannot be observed experimentally owing to the limited resolution of our FTIR. In theory, the spectral characteristics of the device versus applied bias are expected to be affected by the changes of threshold gain induced by optical feedback and the alignment of compound-cavity modes formed in the external cavity with respect to gain, which is related to not only the length and strength of optical feedback, but also the tilt angle of the external mirror [31]. To fully understand this phenomenon, a theoretical study of laser dynamics and instabilities of QC-VECSELs under optical feedback and systematic experiments of the RF-injected system with well-controlled, adjustable optical feedback will be needed and are beyond the scope of this paper. ## 4 Discussion and conclusion The injection locking range obtained in this paper is considerably smaller compared with those demonstrated in RF injection-locked Fabry-Perot waveguide QC-lasers at the same level of RF power [23, 25]. One of the reasons is that QC-VECSELs have higher quality factors compared with ridge waveguide QC-lasers. 
Our VECSEL has a 31 mm-long external cavity and low loss from the \(\sim\)95% reflectance output coupler; using a coupled-cavity model we estimate a cold-cavity linewidth of \(\nu_{0}/Q\approx 70\) MHz. This is around 300 times smaller than a value of 25 GHz estimated in ref [23]. In addition, intrinsic and technical issues with our QC-VECSEL setup result in a low efficiency of RF power transfer at \(\sim\)4.8 GHz from the synthesizer to the QC-metasurface bias terminal. First, due to parasitic capacitances contributed by unbiased regions, the QC-metasurface itself exhibits a larger RC time-constant compared with a narrow ridge waveguide. Second, the electrical packaging has not been optimized for RF operation, where wire bonds and wire bonding pads contribute parasitic inductance and capacitance respectively. Consequently, there is a huge impedance mismatch between the 50\(\Omega\) SMA port and the QC-device; the resulting transmittance of the RF signal through the SMA/QC-package boundary is simulated to be \(\sim\)4% at a target frequency of 4.8 GHz (see Supporting Information), and only part of this power will be applied to modulate the gain material. To make things worse, an additional \(\sim\)8 dB of RF attenuation has been characterized, accounting for losses through the cables and directional coupler from the synthesizer to the SMA connector. In contrast to other demonstrations of ridge waveguide QC-lasers using RF coplanar probes,[23, 38] RF launchers,[39] or custom high-frequency PCB mounts[28] to achieve modulation of QC-lasers up to 35 GHz, the microwave rectification technique indicates a significant roll-off at frequencies higher than 3 GHz in our QC-device (see Supporting Information). In conclusion, we demonstrate RF injection locking in a THz QC-VECSEL based on an intra-cryostat focusing cavity design. Round-trip frequency pulling and locking against an RF injection signal are observed. Furthermore, the RF amplitude modulation leads to broadening of the lasing spectrum up to a spectral width of 100 GHz. This is particularly notable, as multi-mode lasing in QC-VECSELs has been extremely difficult to achieve due to the lack of spatial hole burning within the metasurface; before now at most 9 lasing modes had been observed.[22] There are several obvious avenues for improvement. First, RF attenuation and impedance mismatch severely limit the modulation efficiency, and strong RF reflections impede the detection of the electrical beat note signal using a spectrum analyzer. This can be improved by optimizing the electrical packaging of the QC-device, i.e. reducing the capacitance and inductance portion of the equivalent circuit by 1) redesigning the QC-metasurface with reduced unbiased area and an improved RF feed structure; 2) replacing the electrical contact pad with a well-designed PCB 50\(\Omega\) transmission line feed up to the edge of the metasurface chip with minimal wire bond length. Second, we note that no particular effort to provide dispersion compensation has been attempted here; further engineering of GDD within the QC-VECSEL cavity may be needed to increase the lasing across the entire \(\sim\)1 THz gain bandwidth. 
Finally, given measurements of ridge-waveguide THz QC-lasers under strong RF modulation, it is quite likely that this device is generating short pulses in an active mode-locking regime.[27, 28] Further characterization techniques such as shifted-wave interference Fourier-transform spectroscopy (SWIFTs)[10, 40, 41] or asynchronous electro-optical sampling will be needed to recover the time-domain structure of the field.[42, 43] Supporting Information is available from the author. The authors thank David Burghoff, Andres Forrer, Giacomo Scalari, and Stefano Barbieri for valuable conversations. Microfabrication was performed at the UCLA Nanoelectronics Research Facility, and wire bonding was performed at the UCLA Center for High Frequency Electronics. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. Partial funding was provided by the National Science Foundation (2041165), and the National Aeronautics and Space Administration (80NSSC19K0700).
2307.16154
Testing the validity of the surface approximation for reactions induced by weakly bound nuclei with a fully quantum-mechanical model
We examine the validity of surface approximation for breakup reactions using a fully quantum-mechanical model proposed by Ichimura, Austern, and Vincent (IAV). Analogous to the semi-classical picture, we introduce radial cut-offs to scattering waves in the IAV framework, which we refer to as IAV-cut. Systematic calculations are conducted for nonelastic breakup reactions induced by $^6$Li and deuterons at various incident energies. A comparison between the results obtained from IAV and IAV-cut is performed. The excellent agreement observed between IAV and IAV-cut in $^{6}$Li induced reactions, regardless of incident energy and target nuclei, signifies their insensitivity to the inner part of the scattering wave function, thus providing validation for the semi-classical picture. However, for deuteron induced breakup reactions, the IAV-cut results exhibit a suppression in the cross section, suggesting a strong dependence on the interior wave functions. This suppression is further enhanced as the incident energy increases.
Junzhe Liu, Jin Lei, Zhongzhou Ren
2023-07-30T07:42:24Z
http://arxiv.org/abs/2307.16154v2
Testing the validity of the surface approximation for reactions induced by weakly bound nuclei with a fully quantum-mechanical model ###### Abstract We examine the validity of surface approximation for breakup reactions using a fully quantum-mechanical model proposed by Ichimura, Austern, and Vincent (IAV). Analogous to the semi-classical picture, we introduce radial cut-offs to scattering waves in the IAV framework, which we refer to as IAV-cut. Systematic calculations are conducted for nonelastic breakup reactions induced by \({}^{6}\)Li and deuterons at various incident energies. A comparison between the results obtained from IAV and IAV-cut is performed. The excellent agreement observed between IAV and IAV-cut in \({}^{6}\)Li induced reactions, regardless of incident energy and target nuclei, signifies their insensitivity to the inner part of the scattering wave function, thus providing validation for the semi-classical picture. However, for deuteron induced breakup reactions, the IAV-cut results exhibit a suppression in the cross section, suggesting a strong dependence on the interior wave functions. This suppression is further enhanced as the incident energy increases. ## I Introduction The breakup of a nucleus into two or more fragments is an important mechanism among various channels of nuclear reactions. With the recent advancements in radioactive beam facilities, measuring the breakup reactions of rare atomic nuclei is now feasible [1]. This development has greatly enhanced our understanding of nuclear properties such as binding energy, spectroscopic factors, and angular momentum [2]. Presently, coupled-channel methods, which can deal with the excitation of internal degrees of freedom, have been widely used to calculate the cross sections of rare nuclei induced reactions [3]. In some experiments, weakly bound nuclei are produced to bombard with a target, ultimately fragmenting into two separate components. From an experimental perspective, determining all particles simultaneously and specifying the final states of each fragment are challenging. Alternatively, if the experiment is designed to detect only one of the fragments inclusively, the process can be simplified to \(a+A\to b+B^{*}\), where the projectile \(a\) is assumed to have a two-body structure (\(b+x\)) and \(B^{*}\) represents any possible state of the \(x+A\) system. This process of inclusive breakup has been extensively studied in experimental research [4; 5; 6; 7; 8]. If the three particles, \(b\), \(x\), and \(A\), remain in their ground state after the breakup, the corresponding process is referred to as elastic breakup (EBU). Breakup accompanied by target excitation, fusion between \(x\) and \(A\), and any possible mass rearrangement between \(x\) and \(A\) is referred to as nonelastic breakup (NEB). Precise calculations for NEB are necessary, for example, to examine the semi-classical approach that has been widely applied to knockout reactions [9], and in the surrogate method applied to study nuclear synthesis and the chemical evolution of stars [10]. Therefore, the evaluation of NEB cross sections is of great value both theoretically and experimentally. In 1985, Hussein and McVoy (HM) derived one of the earliest closed-form formulae for the inclusive breakup cross section [11]. HM's derivation provided deep insight through the summation over all \(x\)-\(A\) states. 
By utilizing the Glauber approximation to analyze scattering waves, they obtained an appealing and intuitive form with a clear probability interpretation of the breakup reaction. This evaluation of the NEB cross section is exclusively dependent on the asymptotic properties (\(S\)-matrix) between the fragments (\(b\) or \(x\)) and the target. This is a consequence of employing the semi-classical Glauber approximation. The HM model, along with structure calculations, finds extensive application in spectroscopic studies of one-nucleon removal reactions [9; 12; 13]. Furthermore, D. Baye and colleagues developed the dynamical eikonal model to address dissociation cross sections [14]. Rather than employing the adiabatic approximation used in the standard eikonal model for phase shift evaluation, they numerically solve a semi-classical time-dependent Schrodinger equation using straight-line trajectories. This model has found application in investigating reactions involving halo nuclei [15; 16]. The transfer to continuum (TC) model is another successful semi-classical approach used to evaluate the NEB cross section [17]. In the TC model, the transfer amplitude between the initial and final states is calculated using a time-dependent approach [18]. This transfer amplitude is evaluated using the asymptotic part of the initial bound state and the final continuum state. The main principle of this semi-classical approximation is to utilize the classical trajectory for approximating the relative motion between the projectile and the target. This semi-classical TC method has been widely applied to a large number of breakup reactions, from stable to exotic projectiles [19; 20]. In spite of the tremendous success attained by the previously mentioned semi-classical models, research has already been conducted to establish a quantum-mechanical model. In the early 1980s, Udagawa and Tamura (UT) [21] developed their NEB formalism using DWBA, while Austern and Vincent (AV) [22] carried out a similar derivation. After a long-standing dispute between these two groups, the equivalence of these two derivations was finally proved in Ichimura, Austern, and Vincent (IAV)'s work [23]. Due to computational limitations, this model was not implemented numerically until recently [24; 25; 26], and its validity has finally been tested through a number of applications [27; 28]. This fully quantum-mechanical model starts from the effective three-body Hamiltonian, makes no assumptions on the trajectory, and maintains the conservation laws naturally. EBU is a process in which all fragments and the target remain properly separated, which allows us to assume that the cross section of this process depends solely on the asymptotic part of the wave functions. Therefore, the EBU cross sections are unaffected by the interior part of the wave functions [29]. Nevertheless, this conclusion may not apply to NEB, because NEB contains the fusion channel between the fragments and the target, and the calculation requires a short-range imaginary part of the optical potential to describe this fusion process. However, the HM model with the Glauber approximation ignores the inner part of the scattering wave function due to the semi-classical approximation, and it lacks a direct comparison to fully quantum-mechanical approaches. Here we present a study on this surface approximation1, by introducing a radial cut-off to the scattering functions, where no other semi-classical assumptions need to be taken. 
We refer to this cut-off method in the IAV framework as IAV-cut. By varying the cut-off radius, we study the sensitivity to the inner wave functions, and thus test the validity of these semi-classical interpretations of reaction processes. Footnote 1: In this study, we refer to surface approximation as a procedure that relies on the asymptotic properties of the scattering wave function. The paper is organized as follows. In Sec. II we review the formalism of the IAV model and present our surface approximation through the radial cut-off. In Sec. III we apply this cut-off to several inclusive reactions induced by \({}^{6}\)Li and deuterons. Finally, in Sec. IV we summarize the main results of this work and outline some future developments. ## II Theoretical framework In this section, we briefly review the IAV model [23; 30] and define the corresponding surface approximation within it. The inclusive breakup reaction under study takes the form \[a(=b+x)+A\to b+B^{*}, \tag{1}\] where the projectile \(a\) has a two-body structure \((b+x)\), \(b\) is the detected particle, and \(B^{*}\) denotes any possible final state of the \(x+A\) system. In the IAV model, fragment \(b\) is called the spectator, and fragment \(x\) is called the participant. The IAV model gives the NEB cross section \[\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}\Omega_{b}\mathrm{d}E_{b}}\bigg{|}_{\mathbf{post}}^{\mathbf{NEB}}=-\frac{2}{\hbar v_{a}}\rho_{b}(E_{b})\langle\psi_{x}(\mathbf{k_{b}})|W_{x}|\psi_{x}(\mathbf{k_{b}})\rangle, \tag{2}\] where \(v_{a}\) is the projectile-target relative velocity, \(\rho_{b}(E_{b})=\mu_{b}k_{b}/[(2\pi)^{3}\hbar^{2}]\) is the density of states for particle \(b\), \(\mu_{b}\) and \(k_{b}\) are the reduced mass and wave number, respectively, \(W_{x}\) is the imaginary part of \(U_{x}\) which describes \(x\)+\(A\) elastic scattering, and \(\psi_{x}\) is the so-called \(x\)-channel wave function, which is obtained by solving the inhomogeneous differential equation \[(E_{x}-K_{x}-U_{x})\psi_{x}(\mathbf{k_{b}},\mathbf{r_{x}})=\langle\mathbf{r_{x}}\chi_{b}^{(-)}(\mathbf{k_{b}})|V_{\mathrm{post}}|\chi_{a}^{(+)}\phi_{a}\rangle, \tag{3}\] where \(E_{x}=E-E_{b}\), \(K_{x}\) is the kinetic energy operator for the relative motion between fragment \(x\) and target \(A\), \(\chi_{b}^{(-)}\) is the scattering wave function with incoming boundary condition describing the scattering of \(b\) in the final channel with respect to the \(x\)+\(A\) subsystem, \(V_{\mathrm{post}}=V_{bx}+U_{bA}-U_{bB}\) is the post form transition operator, where \(V_{bx}\) is the potential binding the two clusters \(b\) and \(x\) in the initial composite nucleus \(a\), \(U_{bA}\) is the fragment-target optical potential, \(U_{bB}\) is the optical potential in the final channel, \(\chi_{a}^{(+)}\) is the distorted wave describing the \(a\)+\(A\) elastic scattering with an outgoing boundary condition, and \(\phi_{a}\) is the initial ground state of the projectile \(a\). To simplify the calculations, we ignore intrinsic spins. As for the angle integrated NEB cross section, we have the partial wave expansion form of Eq. (2), \[\frac{\mathrm{d}\sigma}{\mathrm{d}E_{b}}\bigg{|}_{\mathbf{post}}^{\mathbf{NEB}}=-\frac{1}{2\pi\hbar v_{a}}\rho_{b}(E_{b})\frac{1}{2l_{bx}+1}\sum_{l_{a}l_{b}l_{x}}\int\mathrm{d}r_{x}r_{x}^{2}|\mathcal{R}_{l_{a}l_{b}l_{x}}(r_{x})|^{2}W_{x}(r_{x}), \tag{4}\] where \(\mathcal{R}_{l_{a}l_{b}l_{x}}\) represents the radial part in the partial wave expansion of \(\psi_{x}\). 
The variables \(l_{a}\), \(l_{b}\), \(l_{bx}\) and \(l_{x}\) represent the relative angular momenta between \(a\) and \(A\), \(b\) and \(B^{*}\), \(b\) and \(x\), and \(x\) and \(A\), respectively. Specifically, \(l_{bx}\) is determined by the initial bound state of the projectile, while the maximum values of \(l_{a}\), \(l_{b}\), and \(l_{x}\) are selected to ensure the convergence of the cross section. More details of the IAV model can be found in Ref. [24] and its Appendix. When the Coulomb interaction is taken into consideration, the incoming and outgoing distorted waves have the partial wave expansions \[\langle r_{a}l_{a}m_{a}|\chi_{a}^{(+)}(\mathbf{k_{a}})\rangle=\frac{4\pi}{k_{a}r_{a}}i^{l_{a}}e^{i\sigma_{l_{a}}}u_{l_{a}}(r_{a})\left[Y_{l_{a}}^{m_{a}}(\hat{k_{a}})\right]^{*}, \tag{5}\] \[\langle\chi_{b}^{(-)}(\mathbf{k_{b}})|r_{b}l_{b}m_{b}\rangle=\frac{4\pi}{k_{b}r_{b}}i^{-l_{b}}e^{i\sigma_{l_{b}}}u_{l_{b}}(r_{b})Y_{l_{b}}^{m_{b}}(\hat{k_{b}}), \tag{6}\] where \(k_{a}\) and \(k_{b}\) are the relative wave numbers of the incident and outgoing channels, and \(\sigma_{l_{a}}\) and \(\sigma_{l_{b}}\) are the Coulomb phase shifts. One important numerical task is to determine the radial wave function \(u_{l_{a}}(u_{l_{b}})\), and our surface approximation method focuses on the cut-off of these wave functions. In particular, we set the radial wave function \(u_{l_{a}}(u_{l_{b}})\) to zero below a specific cut-off radius. The surface approximation in nuclear reactions often means using the asymptotic behavior of the wave function to calculate the cross section [31, 32]. Instead of solving the radial Schrodinger equations, others use the eikonal method to calculate the phase shift [11]. According to the unitarity of the \(S\)-matrix, the cross section of NEB, which represents the absorption of participant \(x\) by target \(A\), can be expressed using \(S\)-matrices [33]. The key point of this kind of surface approximation is to extract the reaction information from the asymptotic behavior (\(S\)-matrix). In other words, only the exterior part of the scattering wave function influences the cross sections. However, in the IAV model, the key step is to solve the inhomogeneous differential equation to obtain the \(x\)-channel wave function. This wave function is subsequently used for computing the NEB cross section. As suggested by Baur [29], one suitable surface approximation for the IAV model may be replacing the wave function with some suitable form. The simplest one is the asymptotic form \[u_{l}(r)\approx\frac{i}{2}\left[H_{l}^{(-)}(r)-S_{l}H_{l}^{(+)}(r)\right]. \tag{7}\] However, due to the irregularity of this asymptotic form at the origin, it cannot be implemented numerically. In order to prevent the divergence at the origin and maintain the boundary condition of the wave function, we introduce a radial cut-off to the scattering wave functions both in the entrance and the exit channel consistently, which is mentioned by Baur [29] as well, \[u_{l_{a}}(r)=u_{l_{b}}(r)=0\qquad r<R_{\text{cut}}, \tag{8}\] where \(R_{\text{cut}}\) is the cut-off radius, which is chosen according to the interaction radius of the optical potential. It is important to note that implementing this cut-off will result in a discontinuity in the wave functions \(u_{l_{a}}(r)\) and \(u_{l_{b}}(r)\) at the cut-off radius. In the subsequent discussion, we will refer to the calculation using this cut-off method as IAV-cut. 
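A concrete reading of Eq. (8): on a radial grid, the IAV-cut prescription is a single masking operation applied identically to the entrance- and exit-channel waves. The Python sketch below is our own illustration; the wave function is a mock placeholder rather than an actual optical-model solution, and the cut-off radius is set from the geometric prescription \(R_{0}=r_{0}A_{T}^{1/3}\) discussed in the next section, with a representative \(r_{0}\).

```python
import numpy as np

# Sketch (ours) of the IAV-cut operation of Eq. (8). The wave function u
# is a mock placeholder; r0 is a representative optical-model geometry
# parameter, not a value quoted in this paper.
A_T, r0 = 208, 1.25
R_cut = r0 * A_T ** (1 / 3)               # ~7.4 fm for a 208Pb target

r = np.linspace(0.0, 40.0, 2001)          # radial grid, fm
u = np.sin(1.2 * r) * r / (1.0 + r)       # mock partial wave u_l(r)

def apply_cut(u, r, R_cut):
    """Return a copy of u with u(r) = 0 for r < R_cut (discontinuous at R_cut)."""
    u_cut = u.copy()
    u_cut[r < R_cut] = 0.0
    return u_cut

u_la_cut = apply_cut(u, r, R_cut)         # entrance channel
u_lb_cut = apply_cut(u, r, R_cut)         # exit channel, cut consistently
```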
The angular integrated NEB cross section can be directly obtained by using the radial component of the \(x\)-channel wave function, \(\mathcal{R}_{l_{a}l_{b}l_{x}}\). Therefore, it is crucial to investigate the impact of the cut-off, especially when summing over \(l_{b}\) and \(l_{x}\) and only retaining the dependence on \(l_{a}\), which represents the angular momentum between the projectile and target. We denote this new radial part wave function as \(R_{l_{a}}(r_{x})\), and its modulus square is related to \(\mathcal{R}_{l_{a}l_{b}l_{x}}(r_{x})\) by \[|R_{l_{a}}(r_{x})|^{2}=\sum_{l_{b}l_{x}}|\mathcal{R}_{l_{a}l_{b}l_{x}}(r_{x})|^{2}. \tag{9}\] Then the angular integrated NEB cross section can be obtained by \[\begin{split}\frac{\text{d}\sigma}{\text{d}E_{b}}\bigg{|}_{\text{post}}^{\text{NEB}}=&-\frac{1}{2\pi\hbar v_{a}}\rho_{b}(E_{b})\frac{1}{2l_{bx}+1}\\ &\times\sum_{l_{a}}\int\text{d}r_{x}r_{x}^{2}|R_{l_{a}}(r_{x})|^{2}W_{x}(r_{x}).\end{split} \tag{10}\] ## III Application In this section, we present systematic calculations for the inclusive breakup induced by \({}^{6}\)Li and deuteron projectiles and compare the results of IAV with IAV-cut. The choice of cut-off radii is critical for our implementation. A cut-off radius that is too small will have little impact due to the removal of only a small part of the wave function. However, a cut-off radius that is too large will obscure the interacting details between the two nuclei, leading to a significant decrease in the cross section. In the global optical potential model [34, 35, 36, 37], the radius parameter that we used in the current study takes the form \[R_{0}=r_{0}\times A_{T}^{1/3}, \tag{11}\] where \(r_{0}\) is the geometric parameter of the optical potential and \(A_{T}\) is the mass number of the target. The parameter \(R_{0}\) represents the effective range of the nuclear force. The mass number of the projectile is often omitted in the fitting of the global optical potential. Consequently, we select the cut-off radius to be of a similar order of magnitude as \(R_{0}\). This selection of the cut-off radius accounts for the variability in the effective range of interaction across different target nuclei. ### Convergence of the numerical method As previously mentioned, introducing a radial cut-off for the wave function leads to a discontinuity at the cut-off radius \(R_{\text{cut}}\). In numerical calculations, the Gaussian quadrature method is widely used to save computing time when integrating the wave function. However, accurately capturing this discontinuity at the cut-off radius \(R_{\text{cut}}\) often requires additional quadrature points. Consequently, using an excessive number of grid points in the integration significantly increases computer memory usage and computation time. Moreover, the Gaussian quadrature method is characterized by having more grid points at the upper and lower limits of the integration interval compared to the equally spaced grid points of Simpson's rule and the trapezoidal rule. As a result of implementing a radial cut-off for the wave function, some quadrature points near the origin in the Gaussian method do not contribute to the overall integration result since the integrand becomes zero due to the cut-off. This wastes computational resources. 
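The following sketch (ours; the integrand is a smooth stand-in for the actual source-term integrand) illustrates both the waste and the remedy adopted next: Gauss-Legendre nodes spread over the full interval \([0,R_{\max}]\) converge slowly on an integrand that jumps at \(R_{\text{cut}}\), while mapping the same nodes onto \([R_{\text{cut}},R_{\max}]\) restores the usual fast convergence.

```python
import numpy as np

# Sketch (ours): Gauss-Legendre quadrature of an integrand zeroed below
# R_cut. Nodes over [0, R_max] straddle the jump and converge slowly;
# starting the interval at R_cut restores fast convergence. f is a smooth
# toy integrand, not the actual IAV source term.
f = lambda r: np.exp(-0.1 * r) * np.sin(r) ** 2
R_cut, R_max = 6.0, 40.0

def gauss(a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)
    r = 0.5 * (b - a) * x + 0.5 * (b + a)             # map nodes to [a, b]
    vals = np.where(r >= R_cut, f(r), 0.0)            # integrand with cut
    return 0.5 * (b - a) * np.sum(w * vals)

ref = gauss(R_cut, R_max, 400)                        # accurate reference
for n in (25, 50, 100):
    print(n, abs(gauss(0.0, R_max, n) - ref),         # nodes straddle the jump
             abs(gauss(R_cut, R_max, n) - ref))       # nodes start at R_cut
```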
To improve the numerical efficiency, when evaluating the source term, which is the inhomogeneous term on the right-hand side of Eq. (3), we choose the integration region2 to start from \(R_{\rm cut}\), rather than setting the wave functions to zero and carrying out the integral from the origin. By reselecting the integration interval in this way, we no longer calculate the parts that do not contribute to the cross section, thus achieving rapid convergence of the integration result with fewer integration points. Footnote 2: More details of the evaluation of this source term can be seen in Eq. (13) of Ref. [38] and its appendix. The variable \(r_{\rm bx}\) in Eq. (13) of Ref. [38] is expressed by the coordinate set \((r_{x},r_{b})\) as presented in Eq. (45) in the appendix. The choice of \(r_{b}\) starting from \(R_{\rm cut}\) will break the continuity of the wave function in \(r_{\rm bx}\) as well, so a test of convergence is necessary. To test the validity of this method, we consider the \({}^{28}\)Si\((d,pX)\) reaction at an incident kinetic energy of 30 MeV in the lab frame and a relative kinetic energy between \(p\) and \({}^{29}\)Si\({}^{*}\) of 10 MeV in the CM frame. We plot the differential cross section \({\rm d}\sigma/{\rm d}E\) in Fig. 1 as a function of the number of Gaussian quadrature points. The figure shows that increasing the number of quadrature points from 25 to 50 results in a sharp drop in the cross section, but further increasing the number of points leads to little change in the cross section and good convergence. Our test also demonstrated that to achieve the same convergence using Simpson's rule (shown as the dotted horizontal line), we needed to employ a minimum of 1000 grid points, which is significantly greater than the number of quadrature points (\(>\)100) required for the Gaussian method. The good convergence shown in the figure supports our choice of the integration region. Similar rapid convergence can be achieved in other reaction systems as well. Figure 1: NEB cross section of \({}^{28}\)Si\((d,pX)\) at \(E_{d}=30\) MeV with relative outgoing energy \(E_{p}=10\) MeV, as a function of the number of Gaussian quadrature points. The dotted horizontal line represents the cross section with 1000 Simpson's grid points. ### Application to \((^{6}{\rm Li},\alpha X)\) Since the Glauber approximation is widely used in heavy ion induced knockout reactions [39; 12], we present studies on these reactions with IAV-cut to test the validity of the surface approximation. Here we consider the calculations for the \({}^{208}\)Pb\((^{6}{\rm Li},\alpha X)\) reaction. We treat \({}^{6}\)Li as an \(\alpha\)+\(d\) cluster in the following discussion. The incoming channel optical potential, which describes the \({}^{6}\)Li+\({}^{208}\)Pb elastic scattering, is taken from Ref. [34]. Besides, the \(\alpha\)+\({}^{210}\)Bi\({}^{*}\) and \(\alpha\)+\({}^{208}\)Pb interactions are adopted from Ref. [35], and the \(d\)+\({}^{208}\)Pb interaction is taken from Ref. [36]. The potential binding fragments \(\alpha\) and \(d\) in the initial composite projectile is assumed to take the Woods-Saxon (WS) form with the following parameter set: \(a_{v}=0.7\) fm and \(r_{v}=1.15\) fm. The depth of this WS potential is fitted to reproduce the experimental binding energy of \({}^{6}\)Li. The nominal Coulomb barrier for this system is around 30.1 MeV [40]. The model space needed for converged solutions of the IAV model contains partial waves \(l\leq 90\) in the \({}^{6}\)Li+\({}^{208}\)Pb and \(\alpha\)+\({}^{210}\)Bi\({}^{*}\) relative motion, and \(l\leq 40\) in the \(d\)+\({}^{208}\)Pb channel at \(E_{\rm lab}=100\) MeV. The model space we chose was large enough to ensure the convergence of the NEB cross section. 
For the \({}^{208}\)Pb\((^{6}{\rm Li},\alpha X)\) reaction, the radius parameter of the imaginary part of the optical potential between \({}^{6}\)Li and \({}^{208}\)Pb is \(R_{0}=9.08\) fm [34], so we choose the cut-off radius to be 4 fm, 6 fm, and 10 fm according to the previous discussion on the selection of the cut-off parameter. It is important to note that a cut-off is applied consistently to both the incoming channel scattering wave function of \({}^{6}\)Li+\({}^{208}\)Pb and the outgoing channel scattering wave function of \(\alpha\)+\({}^{210}\)Bi\({}^{*}\). The differential cross section of this system at \(E_{\rm lab}=100\) MeV as a function of the outgoing energy of \(\alpha\) particles in the CM frame is presented in Fig. 2. The solid line corresponds to results from the IAV model, while the dot-dashed, dashed, and dotted lines represent the cases for IAV-cut with cut-off radii of 4 fm, 6 fm, and 10 fm, respectively. First we notice that the four curves share the same shape, and the peaks are located around the same outgoing energy. We observe that for outgoing kinetic energies lower than 50 MeV or higher than 75 MeV, the four curves almost overlap, suggesting that the effect of the cut-off radius is minimal. However, between 50 MeV and 75 MeV, an increasing difference can be observed as the cut-off radius increases. Nevertheless, even in the worst case (i.e., with a 10 fm cut-off), the difference is still smaller than the typical experimental uncertainty. Interestingly, we can also see that the difference between the 4 fm cut and 6 fm cut cases is very small and almost invisible in the figure. This indicates that the corresponding part from 4 to 6 fm of the wave function does not affect the cross section significantly. These results suggest that IAV-cut produces satisfactory outcomes compared to the original IAV calculation. Figure 2: The angular integrated differential NEB cross section of \({}^{208}\)Pb(\({}^{6}\)Li, \(\alpha X\)) as a function of the outgoing energy \(E_{\alpha}\) in the CM frame, at a laboratory energy of 100 MeV. The solid line represents the result from the IAV model, whereas the dot-dashed, dashed, and dotted lines correspond to the cases with cut-off radii of 4 fm, 6 fm, and 10 fm for IAV-cut, respectively. Given the substantial importance of the angular momentum dependency of cross sections, we proceeded to examine the partial wave distribution of the cross sections and the effects of this cut-off method on them. The projectile-target angular momentum distribution of the integrated NEB cross section for the same reaction is shown in Fig. 3. Figure 3: Integrated NEB cross section, as a function of the relative angular momentum between \({}^{6}\)Li and \({}^{208}\)Pb, for the \({}^{208}\)Pb(\({}^{6}\)Li, \(\alpha X\)) reaction at \(E_{\rm lab}=100\) MeV. The solid line corresponds to results from the IAV model, while the dot-dashed, dashed, and dotted lines represent the cases for IAV-cut with cut-off radii of 4 fm, 6 fm, and 10 fm, respectively. It is observed that the peaks are located at approximately the same value of \(l_{a}\), which stands for the relative angular momentum between \({}^{6}\)Li and \({}^{208}\)Pb, and all curves exhibit the same bell-shaped distribution as described in [41]. The cross section shows a strong absorption effect of the \({}^{6}\)Li+\({}^{208}\)Pb interaction for low partial waves (\(l_{a}\leq 20\)), thus leading to zero NEB cross section. Furthermore, the impact of the cut-off is only evident for partial waves within the range \(20\leq l_{a}\leq 55\), with higher partial waves remaining unaffected. Mathematically, wave functions corresponding to large angular momentum states are equal to zero at sufficiently small radii because it is hard to penetrate the high centrifugal barrier. As a result, setting the inner part of these wave functions to zero will not change the calculation of the cross section. These results illustrate a consistent agreement between cases using various radial cut-offs and the direct calculation based on the IAV model. 
In this context, the term surface approximation implies that the NEB cross section remains unaffected by the inner part of the incoming scattering wave of \({}^{6}\)Li+\({}^{208}\)Pb and the outgoing scattering wave of \(\alpha\)+\({}^{210}\)Bi\({}^{*}\). We pick out the \(l_{a}\) = 20, 48, and 55 cases and draw \(|R_{l_{a}}|^{2}\) at a relative outgoing kinetic energy of \(E_{\alpha}\) = 64 MeV in the CM frame in Figs. 4 (a), (b) and (c), respectively. The partial waves with \(l_{a}\) = 20, 48, and 55 belong to the distinct regions discussed in Fig. 3: the strong absorption region where the NEB cross section is close to zero, the maximum of \(\sigma_{l}\) where the difference between the results obtained by the IAV model and IAV-cut is significant, and the region where the centrifugal barrier plays an influential role. The solid lines represent the IAV results, while the dotted lines denote the IAV-cut results with 10 fm cut-offs. First, the wave functions shown in Fig. 4(a) do not exhibit a significant difference. Due to the strong absorption effect of the \({}^{6}\)Li+\({}^{208}\)Pb interaction, the probability flux for the low angular momentum component is diverted to the fusion channel, resulting in a relatively small wave function for the breakup process. Thus, this low partial wave makes a minimal contribution to the NEB cross section. As for the \(l_{a}\) = 48 case shown in Fig. 4(b), a small difference occurs in the range of 15 \(\sim\) 25 fm and near 5 fm, while the asymptotic part is unaltered. Since the EBU cross section is evaluated via the boundary conditions (\(S\)-matrices) [33], this surface approximation is valid for EBU calculations as well. Additionally, as shown in Fig. 4(c), no apparent difference appears in the wave functions at \(l_{a}=55\). This finding is consistent with the results in Fig. 3, which also show no clear changes in the high partial wave components. As discussed in Eq. (10), NEB cross sections can be evaluated by \(|R_{l_{a}}|^{2}W_{x}\), which is the product of the modulus square of the wave function and the imaginary part of the \(d\)+\({}^{208}\)Pb optical potential. The products for the \(l_{a}\) = 20, 48 and 55 cases are presented in Figs. 4 (d), (e) and (f), respectively. The solid lines represent the IAV results, while the dotted lines denote the IAV-cut results with 10 fm cut-offs. It can be observed that for all three partial waves in Figs. 4 (d), (e) and (f),
the value of the product goes to zero rapidly in the region \(r>15\) fm, owing to the short-range character of the potential, thus making no contribution to the cross section. As a result, any changes to the wave function outside 15 fm have no impact on the calculation of the NEB cross section. The difference shown in panel (d) is significant; however, the magnitude of the quantity depicted in the figure is small, so its contribution to the NEB cross section is negligible. Panels (e) and (f) show no apparent difference between the IAV result and the IAV-cut result for the \(l_{a}\) = 48 and 55 cases. This observation supports the conclusion that a cut-off on the scattering wave functions \(u_{l_{a}}\) and \(u_{l_{b}}\) does not alter the calculation of NEB cross sections in this reaction.

### Application to \((d,pX)\)

We now study deuteron induced inclusive breakup reactions to further investigate the validity of the surface approximation. First, we carry out the calculation for the \({}^{208}\)Pb(\(d,pX\)) reaction at \(E_{\rm lab}=70\) MeV. The proton-target and neutron-target interactions were adopted from the global parametrization of Koning and Delaroche (KD02) [37]. The incoming channel interaction between the deuteron and the target is adopted from Ref. [36]. For the interaction binding the proton and the neutron in the projectile, we considered the Gaussian form

\[V(r)=V_{0}\exp(-r^{2}/a^{2}), \tag{12}\]

where \(a=1.484\) fm, and \(V_{0}\) is fitted to reproduce the experimental binding energy of the deuteron. The model space needed for converged solutions contains partial waves \(l\leq 38\) in the \(d\)+\({}^{208}\)Pb and \(p\)+\({}^{209}\)Pb\({}^{*}\) relative motion, and \(l\leq 18\) in the \(n\)+\({}^{208}\)Pb channel at \(E_{\rm lab}=70\) MeV.
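As an aside, the fit of \(V_{0}\) in Eq. (12) is a standard bound-state shooting problem. The sketch below uses our own constants and discretization, not the code of this work: it integrates the s-wave radial equation with the Numerov method and bisects on the depth until the bound state sits at the deuteron binding energy of 2.224 MeV; for this range the result is about \(-72\) MeV.

```python
import numpy as np

hbarc = 197.327            # MeV fm
mu = 938.918 / 2.0         # n-p reduced mass in MeV/c^2 (average nucleon mass / 2)
E = -2.224                 # deuteron binding energy in MeV (negative: bound state)
a = 1.484                  # Gaussian range in fm, as in Eq. (12)

r = np.linspace(1e-6, 20.0, 4000)
h = r[1] - r[0]

def u_at_rmax(V0):
    """Numerov integration of u'' = (2 mu / hbarc^2) (V(r) - E) u, outward."""
    f = (2.0 * mu / hbarc**2) * (V0 * np.exp(-(r / a)**2) - E)
    u = np.zeros_like(r)
    u[0], u[1] = 0.0, 1e-6              # regular solution near the origin
    c = h**2 / 12.0
    for i in range(1, len(r) - 1):
        u[i + 1] = ((2.0 * (1.0 + 5.0 * c * f[i]) * u[i]
                     - (1.0 - c * f[i - 1]) * u[i - 1]) / (1.0 - c * f[i + 1]))
    return u[-1]

# The tail of the outward solution flips sign as V0 crosses the depth that
# supports a bound state exactly at E; bisect on that sign change.
lo, hi = -100.0, -40.0                  # bracket assumed to contain one root
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if u_at_rmax(mid) * u_at_rmax(lo) > 0.0:
        lo = mid
    else:
        hi = mid
print(f"fitted V0 = {0.5 * (lo + hi):.2f} MeV")  # about -72 MeV for this range
```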
Similar to the \({}^{6}\)Li induced cases, here we examine the partial wave dependence of the NEB cross section for this reaction. Fig. 5 presents the integrated NEB cross section as a function of the \(d\)+\({}^{208}\)Pb relative angular momentum. The solid line corresponds to results from the IAV model, while the dot-dashed, dashed, and dotted lines represent the cases for IAV-cut with cut-off radii of 4 fm, 6 fm, and 8 fm, respectively. The radius parameter of the imaginary part of the optical potential between \(d\) and \({}^{208}\)Pb is \(R_{0}=7.87\) fm [36]. From this figure, we observe that the difference between the lines occurs only in the low partial waves (\(l\leq 20\)). When the cut-off radius increases, the variation in the difference also increases. Compared to the \({}^{6}\)Li induced reactions, this deuteron induced reaction is more sensitive to the inner part of the scattering wave, because the suppression of the cross section is enhanced gradually as the cut-off radius increases. Furthermore, in contrast to the previous cases of \({}^{6}\)Li, no strong absorption effect is observed, owing to the significant contribution of the low partial wave components to the NEB cross section. These results illustrate that, without a strong absorption effect, the cut-off of the low partial wave components ultimately manifests itself in the NEB cross section.

Figure 5: Integrated NEB cross section, as a function of the relative angular momentum between \(d\) and \({}^{208}\)Pb, for the \({}^{208}\)Pb(\(d,pX\)) reaction at \(E_{\rm lab}=70\) MeV. The solid line corresponds to results from the IAV model, while the dot-dashed, dashed, and dotted lines represent the cases for IAV-cut with cut-off radii of 4 fm, 6 fm, and 8 fm, respectively.

To account for this difference compared to the previous \({}^{6}\)Li induced cases, we also depict \(|R_{l_{a}}|^{2}\), which in this case is the modulus square of the radial part of the \(n+^{208}\)Pb wave function, for \(l_{a}=10\), \(15\), and \(20\) at the outgoing kinetic energy \(E_{p}=38\) MeV in the CM frame. The results are shown in Fig. 6. For the \(l_{a}=10\) and \(l_{a}=15\) cases shown in Figs. 6(a) and (b), there is a noticeable difference between the wave functions obtained from the IAV model and from IAV-cut. Specifically, the wave function obtained from IAV-cut within \(10\) fm shows a significant reduction. This effect is particularly prominent in the \(l_{a}=10\) case. This demonstrates that, contrary to the previous situation, the inner part of the \(n+^{208}\)Pb wave function with low angular momentum is highly sensitive to the interior part of the scattering functions \(u_{l_{a}}\) and \(u_{l_{b}}\). However, in the \(l_{a}=20\) case, there is no clear distinction between the results obtained from the IAV model and IAV-cut. This is due to the presence of a strong centrifugal barrier, which reduces the scattering wave function inside the barrier to almost zero. The products of the imaginary part of the \(n+^{208}\)Pb potential and the modulus square of the \(n+^{208}\)Pb wave function are also presented in Figs. 6 (d), (e) and (f). Similar to the \({}^{6}\)Li induced cases, the values of the product go to zero rapidly beyond the effective range of the nuclear force. It can be observed from Figs. 6 (d) and (e) that the results obtained from IAV-cut are significantly suppressed compared to those obtained from the IAV model. This suppression leads to a reduction in the NEB cross section for IAV-cut. Consistent with the results in Fig. 5, no obvious difference can be seen in Fig. 6 (f) for the high partial wave components. These results explain the suppression of the cross section in Fig. 5. The surface approximation is therefore not as appropriate for this deuteron induced reaction as for the previous \({}^{6}\)Li induced cases, where the strong absorption effect occurs. Nevertheless, the asymptotic behavior of the wave functions remains unchanged, which indicates that this surface approximation is still valid for EBU calculations.

Figure 6: The modulus square of the \(n+^{208}\)Pb wave function \(|R_{l_{a}}|^{2}\) defined in Eq. (9) for (a) \(l_{a}=10\), (b) \(l_{a}=15\), and (c) \(l_{a}=20\), for the \({}^{208}\)Pb(\(d,pX\)) reaction at \(E_{\text{lab}}=70\) MeV and relative outgoing kinetic energy \(E_{p}=38\) MeV. The solid line represents the IAV result, while the dotted line corresponds to the IAV-cut case with \(8\) fm cut-off. The product of the imaginary part of the \(n+^{208}\)Pb potential and the wave function for (d) \(l_{a}=10\), (e) \(l_{a}=15\), and (f) \(l_{a}=20\).

In the previous \({}^{6}\)Li induced case, where the IAV-cut and IAV models yield almost identical results, the validity of the surface approximation is supported for both the scattering wave functions \(u_{l_{a}}\) and \(u_{l_{b}}\). However, in the case of the deuteron, it is important to investigate whether the cross section suppression observed in IAV-cut results from the cut-off of both wave functions or only one. We computed the integrated cross section by exclusively applying a cut-off to either the entrance channel scattering wave function \(u_{l_{a}}\) or the exit channel scattering wave function \(u_{l_{b}}\). The results are presented in Table 1. The first column gives the cut-off radius, the second column displays the cross section with a cut-off applied exclusively to the incoming channel wave function \(u_{l_{a}}\), while the third column shows the cross section with a cut-off applied exclusively to the exit channel wave function \(u_{l_{b}}\). The fourth column presents the results obtained by consistently applying cut-offs to both the incoming and exit channel wave functions. The original IAV result of the integrated cross section is \(486\) mb.

\begin{table} \begin{tabular}{c|c|c|c} \hline cut-off & cut \(u_{l_{a}}\) & cut \(u_{l_{b}}\) & cut both \\ \hline \(4\) fm & \(469\) & \(470\) & \(467\) \\ \(6\) fm & \(449\) & \(441\) & \(440\) \\ \(8\) fm & \(413\) & \(428\) & \(410\) \\ \hline \end{tabular} \end{table}

Table 1: Integrated NEB cross section \(\sigma_{\rm NEB}\) (mb) of the \({}^{208}\)Pb(\(d,pX\)) reaction at \(E_{\text{lab}}=70\) MeV for different cut-off radii.
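For orientation, the reductions in Table 1 relative to the 486 mb IAV value correspond to the relative deviation \(\delta\) introduced in Eq. (13) of the next subsection; a one-line check:

```python
# Relative deviation of the "cut both" column of Table 1 from the
# full IAV value of 486 mb (the quantity delta of Eq. (13) below).
sigma_iav = 486.0                            # mb
sigma_cut = {4: 467.0, 6: 440.0, 8: 410.0}   # mb, keyed by cut-off radius in fm
for r_cut, sigma in sigma_cut.items():
    delta = abs(sigma_iav - sigma) / sigma_iav * 100.0
    print(f"R_cut = {r_cut} fm: delta = {delta:.1f}%")
```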
Table 1 shows that the cross sections obtained from cutting \(u_{l_{a}}\), cutting \(u_{l_{b}}\), and cutting both of them are of comparable magnitudes for different cut-off radii. This indicates that the cut-off behaves similarly for both \(u_{l_{a}}\) and \(u_{l_{b}}\), suggesting that neither cut-off dominates over the other. Applying a cut-off to either \(u_{l_{a}}\) or \(u_{l_{b}}\) ultimately reduces the cross section compared to the IAV model. In other words, both scattering wave functions contribute concurrently to the overall influence on the cross section. This can be further discussed from a theoretical perspective. Since the wave functions in the entrance and exit channels are represented in two different sets of Jacobi coordinates, calculating the inhomogeneous terms in Eq. (3) requires a coordinate transformation between these two sets. In these integrals, the two coordinate variables are not completely independent. Therefore, when the product of the two scattering wave functions from the incoming and outgoing channels is integrated, any cut-off applied to one wave function also excludes the region near the origin of the other wave function. Consequently, the excluded region of the other wave function does not contribute to the final determination of the cross section, regardless of whether this part is cut or not. In summary, the contributions of the internal parts of the incident and outgoing scattering wave functions cannot be separated when calculating the cross section.

### Discussion

Based on the previous calculations and a comparison between the IAV and IAV-cut models, we conclude that the surface approximation is valid for \({}^{6}\)Li-induced breakup reactions but does not yield satisfactory results for deuteron-induced cases. In order to further investigate the validity of the surface approximation in the IAV framework, we performed systematic calculations of \({}^{6}\)Li- and deuteron-induced inclusive breakup reactions considering different incident energies and target masses. We carry out the calculations for the \({}^{28}\)Si\((d,pX)\) reactions at \(E_{\rm lab}\) = 2, 6, 10, 20, 30, 60, and 100 MeV, the \({}^{208}\)Pb\((d,pX)\) reactions at \(E_{\rm lab}\) = 20, 30, 50, 70, and 100 MeV, the \({}^{28}\)Si\((^{6}\)Li\(,\alpha X)\) reactions at \(E_{\rm lab}\) = 5, 20, 30, 40, 50, and 100 MeV, and the \({}^{208}\)Pb\((^{6}\)Li\(,\alpha X)\) reactions at \(E_{\rm lab}\) = 30, 40, 60, 80 and 100 MeV.
The numerical computation of the NEB cross section with the IAV model is a heavy task. This is primarily due to the slow convergence of the wave functions for many partial waves, the large memory required to store these wave functions, and the need for more grid points to ensure the convergence of the numerical integration. As a consequence, our systematic analysis is computationally intensive and has reached our computing limitations. Thus, we were only able to examine a restricted range of incident energies and target masses, and our conclusions are applicable only under these limited circumstances. Here we introduce the relative deviation of the integrated NEB cross section to quantify the difference between IAV and IAV-cut:

\[\delta=\frac{|\sigma_{\rm IAV}-\sigma_{\rm cut}|}{\sigma_{\rm IAV}}\times 100\%, \tag{13}\]

where \(\sigma_{\rm cut}\) is the integrated NEB cross section calculated with IAV-cut and \(\sigma_{\rm IAV}\) is the result computed directly with the IAV model. Figure 7 shows the relative deviation for the deuteron-induced cases discussed above at different incident energies. Panels (a) and (b) correspond to the \({}^{28}\)Si\((d,pX)\) and \({}^{208}\)Pb\((d,pX)\) cases, respectively. The cut-off radii are selected based on the optical potential parameters and are included in the figures. We use 2 and 4 fm cut-offs for the \({}^{28}\)Si\((d,pX)\) reactions and 4, 6, and 8 fm cut-offs for the \({}^{208}\)Pb\((d,pX)\) reactions. The lines with circle, square, diamond, and plus symbols represent the cases with 2, 4, 6, and 8 fm cut-offs, respectively. This figure demonstrates an upward trend as the incident kinetic energy increases. With increasing kinetic energy, the relative deviations become more considerable, ranging from a few percent to as much as 25% in the most extreme scenario. Moreover, the relative deviations remain moderate for low incident energies near or below the Coulomb barrier, where the strong Coulomb force prevents the wave function from penetrating the interior region. Therefore, setting these wave functions to zero does not affect the calculation of the NEB cross section. Additionally, the figure shows that the relative deviations are noticeably smaller with a 2 fm cut-off in panel (a) and a 4 fm cut-off in panel (b). In other words, the surface approximation with these cut-offs is still valid in the NEB calculation.

Figure 7: Relative deviation of the integrated NEB cross section for (a) \({}^{28}\)Si\((d,pX)\) and (b) \({}^{208}\)Pb\((d,pX)\). Cut-off radii are selected according to the potential parameters, which are included in the figure. Different symbols represent cases with different cut-off radii.

Figure 8 illustrates the relative deviation for the \({}^{6}\)Li induced cases mentioned above at different incident energies. Panels (a) and (b) present the \({}^{28}\)Si\((^{6}\)Li\(,\alpha X)\) and
\({}^{208}\)Pb(\({}^{6}\)Li, \(\alpha X\)) cases, respectively. We use 2 and 4 fm cut-offs for the \({}^{28}\)Si(\({}^{6}\)Li, \(\alpha X\)) reactions and 4, 6, and 10 fm cut-offs for the \({}^{208}\)Pb(\({}^{6}\)Li, \(\alpha X\)) reactions. The lines with circle, square, diamond, and star symbols represent the cases with 2, 4, 6, and 10 fm cut-offs, respectively. In Fig. 8 (a), we can observe a decreasing trend in the relative deviation. Specifically, as the kinetic energy increases, the projectile follows a classical trajectory, and a strong absorption effect of the \({}^{6}\)Li+\({}^{28}\)Si interaction occurs, both of which are key assumptions in semi-classical approaches. Figures 8(a) and (b) both demonstrate that the overall relative deviations of the cross section are less than 5%, which is within the typical experimental uncertainty, confirming the validity of the surface approximation in these systems.

Figure 8: Relative deviation of the integrated NEB cross section for (a) \({}^{28}\)Si(\({}^{6}\)Li, \(\alpha X\)) and (b) \({}^{208}\)Pb(\({}^{6}\)Li, \(\alpha X\)). Cut-off radii are selected according to the potential parameters, which are included in the figure. Different symbols represent cases with different cut-off radii.

The comparison between Fig. 7 and Fig. 8 reveals that, in the considered circumstances, the surface approximation for \((d,p)\) reactions is not as applicable as it is for (\({}^{6}\)Li, \(\alpha X\)) reactions. This is evidenced by the deviation values in Fig. 7 being up to 20% higher than those in Fig. 8. To further investigate the differences between \({}^{6}\)Li and deuteron-induced breakup reactions, we reintroduce the angular dependence of the incoming scattering wave function in order to make a direct comparison with the semi-classical trajectory picture. The expansion can be written as

\[\chi^{(+)}_{a}(\mathbf{r})=\sum_{l}i^{l}(2l+1)\frac{u_{l}(r)}{kr}P_{l}(\cos\theta), \tag{14}\]

where the Coulomb phase shift has already been inserted into the radial function \(u_{l}\). Figure 9 depicts the modulus square of the wave function \(|\chi^{(+)}_{a}|^{2}\) in the x-z plane for the elastic scattering of the \({}^{6}\)Li+\({}^{208}\)Pb reaction at three different bombarding energies. This figure is arranged as a heatmap in polar coordinates, where lighter colors represent a larger probability of finding a particle, according to the probability interpretation of the wave function. At 30 MeV, \({}^{6}\)Li is incapable of penetrating \({}^{208}\)Pb and instead is diffracted before reaching the target. A clearly defined classical trajectory becomes apparent in Figs. 9(b) and (c), because as the scattering angle decreases, the peak of the probability density forms a straight line. Additionally, Figs. 9(b) and (c) demonstrate that the probability of detecting a particle in the forward region is exceptionally low, as there are no bright points in the forward region. This lack of probability in the forward region demonstrates the strong absorption effect of the \({}^{6}\)Li+\({}^{208}\)Pb interaction, in which low angular momentum components are fully fused into the target and do not contribute to the NEB cross section. These results affirm the validity of introducing radial cut-offs at high incident energies in the IAV framework. As a comparison, the wave function for the elastic scattering of \(d+^{208}\)Pb is depicted in Fig. 10. Unlike the previous case, there is strong interference at forward angles in all the panels of Fig. 10, showing strong wave-like characteristics. This strong distortion of the scattering wave functions lacks a correspondence to the classical trajectory picture, and thus the surface approximation fails. In addition, there is no evidence of a strong absorption effect of the \(d+^{208}\)Pb interaction, as there are many bright points at forward angles. Comparing the wave functions of these reactions makes it straightforward to establish the validity of the surface approximation based on whether the incoming channel elastic scattering process has a clear correspondence to a classical trajectory.
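The expansion in Eq. (14) is straightforward to evaluate numerically. As a sanity check of such a reconstruction, the sketch below uses the free radial functions \(u_{l}(r)=kr\,j_{l}(kr)\) - an assumption for illustration only, since the actual \(u_{l}\) come from the optical-model solution - for which the sum recovers the plane wave \(e^{ikz}\) on an x-z grid; replacing \(u_{l}\) with distorted waves would produce heatmaps like Figs. 9 and 10.

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

# Toy check of the partial-wave expansion, Eq. (14): with the free radial
# functions u_l(r) = k r j_l(k r), the sum reconstructs the plane wave e^{ikz}.
k, l_max = 1.0, 60
x = np.linspace(-15.0, 15.0, 121)
z = np.linspace(-15.0, 15.0, 121)
X, Z = np.meshgrid(x, z)
R = np.sqrt(X**2 + Z**2) + 1e-12          # guard against division by zero
cos_theta = Z / R

chi = np.zeros_like(R, dtype=complex)
for l in range(l_max + 1):
    u_l = k * R * spherical_jn(l, k * R)  # free radial function
    chi += 1j**l * (2 * l + 1) * u_l / (k * R) * eval_legendre(l, cos_theta)

# |chi|^2 should be ~1 everywhere for a plane wave; structure appears only
# once distorting nuclear and Coulomb potentials are included.
print("max deviation of |chi|^2 from 1:", np.abs(np.abs(chi)**2 - 1).max())
```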
## IV Conclusion

We present a study of the nonelastic breakup reactions induced by weakly bound nuclei with the fully quantum-mechanical model of Ichimura, Austern, and Vincent. Corresponding to the classical picture of trajectories in reaction processes, we introduce a radial cut-off to investigate the validity of the surface approximation on a fully quantum-mechanical basis. With a proper selection of the cut-off radius, we apply this surface approximation to the (\({}^{6}\)Li, \(\alpha X\)) and \((d,pX)\) reactions. We observed that the approximated cross sections computed with cut-offs for the \(({}^{6}\mathrm{Li},\alpha X)\) reactions exhibit good overall agreement with the accurate calculations. These results indicate that the NEB cross section is insensitive to the inner wave function, and the semi-classical picture is valid. For \((d,pX)\) reactions, a non-negligible loss of cross section was observed after the cut-off, suggesting a strong dependence on the inner wave functions at low energies in the IAV framework. Setting the inner wave function to zero in \({}^{6}\mathrm{Li}\) induced reactions has little effect on the cross section due to the strong absorption of small angular momentum components in the entrance channel. However, in the case of deuteron induced reactions, the distortion caused by the nuclear and Coulomb forces does not correspond to a semi-classical trajectory picture. Consequently, there is a relatively stronger dependence on the inner scattering wave function in the IAV framework. However, these conclusions are based on a very limited set of systems. Further studies involving higher incident energies and more targets are called for. We plan to optimize our computer code so that we can conduct calculations for reactions involving heavier targets and higher energies, which are currently beyond our computing capability.

###### Acknowledgements.

This work has been supported by the National Natural Science Foundation of China (Grants No. 12105204, No. 12035011, and No. 11975167) and by the Fundamental Research Funds for the Central Universities.
2306.01701
Statistical field theory for nonlinear elasticity of polymer networks with excluded volume interactions
Polymer networks formed by cross linking flexible polymer chains are ubiquitous in many natural and synthetic soft-matter systems. Current micromechanics models generally do not account for excluded volume interactions except, for instance, through imposing a phenomenological incompressibility constraint at the continuum scale. This work aims to examine the role of excluded volume interactions on the mechanical response. The approach is based on the framework of the self-consistent statistical field theory of polymers, which provides an efficient mesoscale approach that enables the accounting of excluded volume effects without the expense of large-scale molecular modeling. A mesoscale representative volume element is populated with multiple interacting chains, and the macroscale nonlinear elastic deformation is imposed by mapping the end-to-end vectors of the chains by this deformation. In the absence of excluded volume interactions, it recovers the closed-form results of the classical theory of rubber elasticity. With excluded volume interactions, the model is solved numerically in three dimensions using a finite element method to obtain the energy, stresses, and linearized moduli under imposed macroscale deformation. Highlights of the numerical study include: (i) the linearized Poisson's ratio is very close to the incompressible limit without a phenomenological imposition of incompressibility; (ii) despite the harmonic Gaussian chain as a starting point, there is an emergent strain-softening and strain-stiffening response that is characteristic of real polymer networks, driven by the interplay between the entropy and the excluded volume interactions; and (iii) the emergence of a deformation-sensitive localization instability at large excluded volumes.
Pratik Khandagale, Timothy Breitzman, Carmel Majidi, Kaushik Dayal
2023-06-02T17:23:31Z
http://arxiv.org/abs/2306.01701v1
Statistical Field Theory for Nonlinear Elasticity of Polymer Networks with Excluded Volume Interactions

###### Abstract

Polymer networks formed by cross-linking flexible polymer chains are ubiquitous in many natural and synthetic soft matter systems. Current micromechanics models generally do not account for excluded volume interactions except, for instance, through imposing a phenomenological incompressibility constraint at the continuum-scale. This work aims to examine the role of excluded volume interactions on the mechanical response. The approach is based on the framework of the self-consistent statistical field theory of polymers, which provides an efficient mesoscale approach that enables the accounting of excluded volume effects without the expense of large-scale molecular modeling. A mesoscale representative volume element is populated with multiple interacting chains, and the macroscale nonlinear elastic deformation is imposed by mapping the end-to-end vectors of the chains by this deformation. In the absence of excluded volume interactions, it recovers the closed-form results of the classical theory of rubber elasticity. With excluded volume interactions, the model is solved numerically in 3-dimensions using a finite element method to obtain the energy, stresses, and linearized moduli under imposed macroscale deformation. Highlights of the numerical study include: (1) the linearized Poisson's ratio is very close to the incompressible limit without a phenomenological imposition of incompressibility; (2) despite the harmonic Gaussian chain as a starting point, there is an emergent strain-softening and strain-stiffening response that is characteristic of real polymer networks, driven by the interplay between the entropy and the excluded volume interactions; and (3) the emergence of a deformation-sensitive localization instability at large excluded volumes.

## 1 Introduction

A wide variety of soft-matter based systems are emerging as important for engineering and scientific applications, and have been the focus of research using both modeling and experiments, e.g. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. Polymer-network based materials such as elastomers and hydrogels are often at the heart of these soft-matter systems. An important question for both fundamental understanding and application is to predict the nonlinear elastic properties of polymer networks starting from a micromechanical model of individual chains.

The physics of polymer network elasticity is governed by the conformational entropy of polymer chains and the inter-segment excluded volume interactions. These contributions can be roughly thought of as short-range and nonlocal interactions, respectively. The short-range interactions are associated with Gaussian polymer chain response and depend on the relative configurations of adjacent segments in a chain. In contrast, the nonlocal interactions are due to the interaction between polymer segments that are nearby in space but nonlocal topologically (i.e., in terms of their position along the chain (Fig. 1)). While there are several useful phenomenological nonlinear elastic frame-indifferent models, e.g. Mooney-Rivlin [33], Ogden [34] and Gent [35], they lack a clear connection to the molecular structure of the polymer network.
An important class of physics-based approaches to study the elasticity of polymer networks is based on considering multiple Gaussian chains and then averaging over the chains in different ways. These include the 3-chain model by James and Guth [36], the 4-chain model by Flory and Rehner [37] and Treloar [38], the affine full-network model by Treloar [39], the 8-chain model by Arruda and Boyce [40], the non-homogeneous deformation based model by Wu and van Der Giessen [41], and the non-affine microsphere model by Miehe [42]; the recent work by Grasinger [43] provides a new perspective in which these myriad models are shown to be special cases of a general approach. While these models have provided important insights and predictions, they do not account for the nonlocal excluded volume effects. Consequently, incompressibility of the polymer network must be added as a phenomenological continuum-scale approximation of the missing mesoscale physics. Another class of physics-based molecular-statistical approaches are the constrained junction and constrained segment theories, which aim to account for constraints arising due to chain entanglements. Constrained junction theories, e.g. [44, 45, 46, 47, 48], apply topological constraints on the fluctuations of chain cross-link junctions. Constrained segment theories, e.g. [49, 50, 51, 52, 53], which are consistent with the tube model of rubber elasticity, incorporate constraints on the polymer segments along the chain contour. However, it is not easy to incorporate the nonlocal excluded volume interactions in these approaches.

### The Proposed Approach

Our approach is composed of 2 key elements: first, the statistical field theory of polymers, which provides an established and efficient approach to account for the physics of polymer chain elasticity as well as excluded volume interactions [54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70]; and, second, the use of the 8-chain network averaging model that provides a nonlinearly-elastic frame-indifferent approach to coarse-grain to the continuum scale [40]. An important work in this direction is [71], wherein a network with a simplified square lattice topology was studied using the field theory approach to understand copolymers.

We begin by considering a representative volume element (RVE) of the polymer network. A typical mesoscale RVE consists of several interacting polymer chains whose configurations are randomly distributed. While it is a significant challenge to account for this randomness, we follow the 8-chain RVE-averaging approach of Arruda and Boyce (Fig. 2, [40]) in approximating the RVE in the undeformed state as composed of 8 polymer chains connecting the center of a cube to each of the corners. The RVE then deforms under the action of the macroscopic deformation tensor \(\mathbf{F}\), i.e., the chain end-to-end vectors are mapped by \(\mathbf{F}\) from the undeformed to the deformed state (Fig. 3, [72]). An important element of [40] is that the RVE is oriented such that the cube is aligned along the principal directions of the stretch tensor \(\mathbf{U}\), where \(\mathbf{U}\) is the tensor square root of \(\mathbf{F}^{T}\mathbf{F}\) and can be obtained through the polar decomposition \(\mathbf{F}=\mathbf{R}\mathbf{U}\) where \(\mathbf{R}\in SO(3)\). Given the mapping of the end-to-end vectors of the chains, the polymer field theory is then used to compute the partition function of the deformed state, from which we can find the free energy and stress.
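To make the kinematics of this construction concrete, a short sketch follows (assuming numpy/scipy; all identifiers are ours, not from [40]): it computes \(\mathbf{U}\) from a given \(\mathbf{F}\), orients the undeformed cube along the principal directions of \(\mathbf{U}\), and maps the eight end-to-end vectors by \(\mathbf{F}\).

```python
import numpy as np
from scipy.linalg import sqrtm

F = np.array([[1.5, 0.3, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.8]])          # example macroscale deformation gradient

U = sqrtm(F.T @ F).real                  # right stretch tensor, F = R U
eigvals, eigvecs = np.linalg.eigh(U)     # principal stretches and directions

# Undeformed RVE: unit cube centered at the origin, edges along the
# principal directions of U; chains run from the center to the corners.
half = 0.5
corners = np.array([[sx, sy, sz] for sx in (-half, half)
                                 for sy in (-half, half)
                                 for sz in (-half, half)])
X1 = corners @ eigvecs.T                 # end-to-end vectors, undeformed state
x1 = X1 @ F.T                            # mapped end-to-end vectors, x1 = F X1

for i, (X, x) in enumerate(zip(X1, x1), 1):
    print(f"chain {i}: |X| = {np.linalg.norm(X):.3f} -> |x| = {np.linalg.norm(x):.3f}")
```

Because the corners sit symmetrically in the principal frame, all eight chains experience the same stretch; this equal-stretch property is what makes the 8-chain average particularly convenient.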
Following [59], we use the continuous Gaussian chain model for a single polymer chain. Next, we consider chain segments that interact pairwise in real space - and nonlocally in terms of position along the chain contour (Fig. 1) - through a pairwise interaction potential of mean force; these are given by Dirac potentials to model excluded volume effects. Given the inter-segment interaction and end-to-end vectors, the framework of polymer field theory enables us to compute, using the self-consistent scheme, the partition function and consequently the free energy of the RVE. We notice that because the ends of the polymer chain are constrained by the macroscale deformation \(\mathbf{F}\), this leads to a restricted ensemble. Further, nonlinear elasticity provides the Piola-Kirchhoff stress tensor \(\mathbf{P}\) as the energy-conjugate of \(\mathbf{F}\), enabling us to compute the stress-deformation response of the polymer network.

Key results from the model are as follows. In the absence of excluded volume interactions, we find that the closed-form orientationally-averaged elastic response matches classical rubber elasticity [72]. Considering excluded volume interactions, closed-form solutions appear impossible, and we develop a 3-d finite element method (FEM) implementation to self-consistently solve the equations of the polymer field theory. We find that the linearized Poisson's ratio \(\nu\simeq 0.4943\), which is very close to the incompressible limit \(\nu\to 0.5\), without a phenomenological imposition of incompressibility, and that the elastic moduli are in line with typical polymer network gels. Further, despite the harmonic Gaussian chain as a starting point, there is an emergent strain-softening and strain-stiffening response that is characteristic of real polymer networks, driven by the interplay between the excluded volume interactions and the entropy; it does not require chains with limiting extensibility - such as the inverse Langevin approximation - to model this behavior. Finally, we find the emergence of a deformation-sensitive localization instability at large values of the excluded volume parameter.

**Structure of the Paper.** Section 2 formulates the model. Section 3 summarizes the finite element approach for the self-consistent solution. Section 4 presents numerical results showing the predictions of the model.

Figure 1: Excluded volume interactions are nonlocal in terms of the segment coordinates.

## 2 Model Formulation

### Deformation of a Single Polymer Chain

We use the Continuous Gaussian Chain model for a single polymer chain [58]. In the undeformed state, the coarse-grained trajectory of the \(\alpha\)-th polymer chain is represented as a continuous 3-d space curve \(\mathbf{R}_{\alpha}(s)\), where \(s\) is the contour coordinate along the chain, scaled such that \(0\leq s\leq 1\). The position vectors of the beginning and end points of the chain in the undeformed state are \(\mathbf{X}_{\alpha}^{0}\) and \(\mathbf{X}_{\alpha}^{1}\). The chain is deformed under the deformation gradient \(\mathbf{F}\). In the deformed state, \(\mathbf{r}_{\alpha}(s)\) is a 3-d curve that represents the coarse-grained trajectory of the \(\alpha\)-th chain, as shown in Figure 4. The position vectors of the beginning and end of the chain in the deformed state are \(\mathbf{x}_{\alpha}^{0}\) and \(\mathbf{x}_{\alpha}^{1}\).
Following [72], we assume that the chain end-to-end vector is mapped by the macroscale deformation \(\mathbf{F}\):

\[\mathbf{x}_{\alpha}^{1}-\mathbf{x}_{\alpha}^{0}=\mathbf{F}\left(\mathbf{X}_{\alpha}^{1}-\mathbf{X}_{\alpha}^{0}\right) \tag{2.1}\]

We note that the affine deformation assumption depends strongly on the assumption that there are no entanglements [73, 74, 75, 76].

Figure 2: The 8-chain approximation is obtained by averaging over a volume element that aligns the polymer chains along the principal directions of the deformation.

#### 2.a.1 Partition Function and Average Segment Density

Consider the \(\alpha\)-th chain, consisting of \(N\) coarse-grained polymer segments each of length \(a\), under the influence of a field \(w(\mathbf{x})\) that will be used to account for the excluded volume interactions [59]. From [58], the partition function, \(Q_{\alpha}[w;\mathbf{F}]\), and the average segment density, \(\langle\hat{\rho}_{\alpha}(\mathbf{x};\mathbf{F})\rangle\), are:

\[Q_{\alpha}[w;\mathbf{F}]=\frac{1}{V}\int\,\mathrm{d}\mathbf{x}\ q_{\alpha}(\mathbf{x},\mathbf{x}_{\alpha}^{0},s)\ q_{\alpha}^{*}(\mathbf{x},\mathbf{x}_{\alpha}^{1},1-s), \tag{2.2}\]

\[\langle\hat{\rho}_{\alpha}(\mathbf{x};\mathbf{F})\rangle=\frac{1}{VQ_{\alpha}[w;\mathbf{F}]}\int\limits_{0}^{1}\,\mathrm{d}s\ q_{\alpha}(\mathbf{x},\mathbf{x}_{\alpha}^{0},s)\ q_{\alpha}^{*}(\mathbf{x},\mathbf{x}_{\alpha}^{1},1-s). \tag{2.3}\]

Here, \(q_{\alpha}(\mathbf{x},\mathbf{x}_{\alpha}^{0},s)\) and \(q_{\alpha}^{*}(\mathbf{x},\mathbf{x}_{\alpha}^{1},1-s)\) are the partial partition functions for the two chain fragments, one from \(0\) to \(s\) and the other from \(1\) to \(s\), respectively, as shown in Figure 4. \(q_{\alpha}(\mathbf{x},\mathbf{x}_{\alpha}^{0},s)\) is obtained by solving the following PDE with the initial condition:

\[\frac{\partial q_{\alpha}(\mathbf{x},\mathbf{x}_{\alpha}^{0},s)}{\partial s}=\frac{a^{2}N}{6}\nabla^{2}q_{\alpha}(\mathbf{x},\mathbf{x}_{\alpha}^{0},s)-w(\mathbf{x})q_{\alpha}(\mathbf{x},\mathbf{x}_{\alpha}^{0},s),\quad q_{\alpha}(\mathbf{x},\mathbf{x}_{\alpha}^{0},s)\Big{|}_{s=0}=(aN^{1/2})^{3}\ \delta(\mathbf{x}-\mathbf{x}_{\alpha}^{0}). \tag{2.4}\]

Similarly, \(q_{\alpha}^{*}(\mathbf{x},\mathbf{x}_{\alpha}^{1},s^{\prime})\) is obtained by solving the same PDE as in (2.4), but with the initial condition corresponding to keeping the other end fixed:

\[\frac{\partial q_{\alpha}^{*}(\mathbf{x},\mathbf{x}_{\alpha}^{1},s^{\prime})}{\partial s^{\prime}}=\frac{a^{2}N}{6}\nabla^{2}q_{\alpha}^{*}(\mathbf{x},\mathbf{x}_{\alpha}^{1},s^{\prime})-w(\mathbf{x})q_{\alpha}^{*}(\mathbf{x},\mathbf{x}_{\alpha}^{1},s^{\prime}),\quad q_{\alpha}^{*}(\mathbf{x},\mathbf{x}_{\alpha}^{1},s^{\prime})\Big{|}_{s^{\prime}=0}=(aN^{1/2})^{3}\ \delta(\mathbf{x}-\mathbf{x}_{\alpha}^{1}). \tag{2.5}\]

The initial conditions above correspond to the physical constraint that the beginning and end points of the \(\alpha\)-th chain are fixed at the given spatial positions \(\mathbf{x}_{\alpha}^{0}\) and \(\mathbf{x}_{\alpha}^{1}\), respectively.
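A minimal 1-d finite-difference analogue of (2.4) may clarify the structure of these propagator equations; the sketch below is a toy discretization of our own, not the FEniCS implementation described in Section 3. It marches \(q\) in \(s\) with a Crank-Nicolson step and approximates the delta-function initial condition by a narrow Gaussian:

```python
import numpy as np

# Toy 1-d version of Eq. (2.4): dq/ds = D q_xx - w(x) q, with D = a^2 N / 6.
a, N = 1.0, 100
D = a**2 * N / 6.0
x = np.linspace(-40.0, 40.0, 401)
dx = x[1] - x[0]
ds = 0.01                                 # 100 steps in s, as in Section 3
w = 0.05 * np.exp(-(x / 10.0)**2)         # hypothetical chemical-potential field

# Second-derivative matrix with zero (absorbing) boundary values.
n = len(x)
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
A = D * L - np.diag(w)                    # generator of the s-evolution

I = np.eye(n)
step_minus = I - 0.5 * ds * A             # Crank-Nicolson: solve (I - ds/2 A) q_new
step_plus = I + 0.5 * ds * A              #                = (I + ds/2 A) q_old

q = np.exp(-(x - 0.0)**2 / (2 * 0.5**2))  # narrow Gaussian standing in for delta(x - x0)
for _ in range(100):                      # march s from 0 to 1
    q = np.linalg.solve(step_minus, step_plus @ q)

print("q(x, s=1) peak:", q.max(), "at x =", x[np.argmax(q)])
```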
#### 2.a.2 Reduction to Classical Rubber Elasticity

In the absence of excluded volume interactions, obtained by setting \(w(\mathbf{x})\equiv 0\), we can find closed-form solutions for \(q_{\alpha}\) and \(q_{\alpha}^{*}\):

\[q_{\alpha}(\mathbf{x},\mathbf{x}_{\alpha}^{0},s)=\left(\frac{3}{2\pi s}\right)^{3/2}\exp\left(-\frac{3|\mathbf{x}-\mathbf{x}_{\alpha}^{0}|^{2}}{2a^{2}Ns}\right), \tag{2.6}\]

\[q_{\alpha}^{*}(\mathbf{x},\mathbf{x}_{\alpha}^{1},1-s)=\left(\frac{3}{2\pi(1-s)}\right)^{3/2}\exp\left(-\frac{3|\mathbf{x}-\mathbf{x}_{\alpha}^{1}|^{2}}{2a^{2}N(1-s)}\right). \tag{2.7}\]

Figure 4: Single polymer chain fixed at both ends.

The partition function \(Q_{\alpha}[w;\mathbf{F}]\Big{|}_{w=0}\) in (2.2) evaluates to the classical Gaussian distribution in 3-d:

\[Q_{\alpha}[w;\mathbf{F}]\Big{|}_{w=0}\propto\left(\frac{3}{\sqrt{\pi}}\right)^{3}\left(\frac{a^{2}N}{6}\right)^{3/2}\exp\left(-\frac{3}{2a^{2}N}|\mathbf{x}_{\alpha}^{1}-\mathbf{x}_{\alpha}^{0}|^{2}\right). \tag{2.8}\]

Because the chains do not interact, the free energy of the \(\alpha\)-th polymer chain, \(H_{\alpha}\), is obtained from \(Q_{\alpha}\) using \(H_{\alpha}=-k_{B}T\log Q_{\alpha}\) to be:

\[H_{\alpha}[w;\mathbf{F}]\Big{|}_{w=0}=\frac{1}{2}\left(\frac{3k_{B}T}{a^{2}N}\right)\left|\mathbf{F}\left(\mathbf{X}_{\alpha}^{1}-\mathbf{X}_{\alpha}^{0}\right)\right|^{2}-\left(\frac{3k_{B}T}{2}\right). \tag{2.9}\]

To account for the fact that chains are randomly oriented, we next average \(H_{\alpha}[w;\mathbf{F}]\Big{|}_{w=0}\) over all possible orientations of the chain end-to-end vector by integrating (2.9) over all orientations. That is, keeping \(\mathbf{F}\) fixed, we integrate \(\left(\mathbf{X}_{\alpha}^{1}-\mathbf{X}_{\alpha}^{0}\right)\) over the sphere of appropriate radius. The resulting expression for the orientationally-averaged free energy, \(H_{\alpha}^{avg}[w;\mathbf{F}]\Big{|}_{w=0}\), is:

\[H_{\alpha}^{avg}[w;\mathbf{F}]\Big{|}_{w=0}=\frac{k_{B}T}{2}\left(\mathrm{tr}(\mathbf{F}^{T}\mathbf{F})-3\right). \tag{2.10}\]

This result recovers the classical rubber elasticity result [72], also known as the incompressible Neo-Hookean elastic strain energy.
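The orientational average behind (2.10) can be checked numerically: sampling end-to-end vectors of fixed length \(aN^{1/2}\) uniformly over the sphere and averaging the chain free energy (2.9) recovers the Neo-Hookean form. A sketch follows (energies in units of \(k_{B}T\); the deformation gradient is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
a, N = 1.0, 100
F = np.diag([1.4, 0.9, 0.85])            # example deformation gradient

# Uniform directions on the sphere; end-to-end length fixed at a N^(1/2).
n_samples = 200_000
v = rng.normal(size=(n_samples, 3))
v *= a * np.sqrt(N) / np.linalg.norm(v, axis=1, keepdims=True)

# Chain free energy (2.9) in units of kT, averaged over orientations.
Fv = v @ F.T                             # mapped end-to-end vectors F v
H = 1.5 / (a**2 * N) * np.einsum('ij,ij->i', Fv, Fv) - 1.5
print("Monte Carlo average:", H.mean())
print("Closed form (2.10): ", 0.5 * (np.trace(F.T @ F) - 3.0))
```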
### Deformation of the Polymer Network

The pairwise excluded volume interactions are introduced through the field \(w(\mathbf{x})\) following [59]. We introduce \(\bar{u}(|\mathbf{x}-\mathbf{x}^{\prime}|)\), which is the pairwise interaction potential of mean force for two segments located at spatial coordinates \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\). The corresponding partition function for the polymer network in the deformed state, \(Z(\mathbf{F})\), in the field-theoretic setting is:

\[Z(\mathbf{F})\propto\int\,\mathrm{D}\rho\int\,\mathrm{D}w\,\exp\bigg{(}-\frac{H[w,\rho;\mathbf{F}]}{k_{B}T}\bigg{)}, \tag{2.11}\]

where \(H[w,\rho;\mathbf{F}]\) is the effective Hamiltonian of the polymer network, and has the expression:

\[\frac{H[w,\rho;\mathbf{F}]}{k_{B}T}=-\int\,\mathrm{d}\mathbf{x}\;w(\mathbf{x})\rho(\mathbf{x})+\;\frac{1}{2k_{B}T}\int\,\mathrm{d}\mathbf{x}\int\,\mathrm{d}\mathbf{x}^{\prime}\;\rho(\mathbf{x})\;\bar{u}(|\mathbf{x}-\mathbf{x}^{\prime}|)\;\rho(\mathbf{x}^{\prime})-\log\left(Q_{1}[w;\mathbf{F}]\ldots Q_{n}[w;\mathbf{F}]\right). \tag{2.12}\]

The auxiliary fields \(w(\mathbf{x})\) and \(\rho(\mathbf{x})\) are interpreted as the fluctuating chemical potential field generated internally because of the inter-segment interactions and the fluctuating density of the polymer network, respectively [59]. \(Q_{\alpha}[w;\mathbf{F}]\) is the partition function for the \(\alpha\)-th chain in the polymer network under the influence of \(w(\mathbf{x})\), and is calculated using (2.2). The first term in (2.12) is the energy of interaction between the density and the chemical potential. The second term is the inter-segment interaction energy. The third term is the entropic contribution due to chain stretching. The total Helmholtz free energy of the polymer network in the deformed state, \(H(\mathbf{F})\), is evaluated from the partition function \(Z(\mathbf{F})\) in (2.11) using:

\[H(\mathbf{F})=-k_{B}T\log Z(\mathbf{F}). \tag{2.13}\]

In the deformed state, the average segment density of the polymer network, \(\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle\), is obtained as:

\[\begin{split}\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle\propto&\frac{1}{Z(\mathbf{F})}\int\,\mathrm{D}\rho\int\,\mathrm{D}w\,\Bigg{[}\exp\left(\int d\mathbf{x}\;w(\mathbf{x})\rho(\mathbf{x})-\frac{1}{2k_{B}T}\int\,\mathrm{d}\mathbf{x}\int\,\mathrm{d}\mathbf{x}^{\prime}\rho(\mathbf{x})\bar{u}(|\mathbf{x}-\mathbf{x}^{\prime}|)\rho(\mathbf{x}^{\prime})\right)\\ &\left(\sum_{i=1}^{n}\left(\left(\int\limits_{0}^{1}\,\mathrm{d}s\;q_{i}(\mathbf{x},\mathbf{x}_{i}^{0},s)\;q_{i}^{*}(\mathbf{x},\mathbf{x}_{i}^{1},1-s)\right)\;\prod_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{n}Q_{k}[w;\mathbf{F}]\right)\right)\Bigg{]}.\end{split} \tag{2.14}\]

### Strain Energy Density of the Polymer Network

To obtain the continuum elastic response using nonlinear elasticity, we introduce the elastic energy density (per unit undeformed volume) \(W(\mathbf{F})\) and treat the RVE as a continuum material point. This allows us to connect the total free energy \(H(\mathbf{F})\) evaluated for the RVE to \(W(\mathbf{F})\) at the corresponding spatial location:

\[W(\mathbf{F})=\frac{H(\mathbf{F})}{V}, \tag{2.15}\]

where \(V\) is the volume of the RVE in the undeformed state. We can then find the Piola-Kirchhoff stress tensor \(\mathbf{P}\) and the fourth-order elasticity tensor \(\mathbb{C}=[C_{ijkl}]\) using:

\[\mathbf{P}=\frac{\partial W}{\partial\mathbf{F}}, \tag{2.16}\]

\[C_{ijkl}=\frac{\partial^{2}W}{\partial F_{ij}\partial F_{kl}}\Bigg{|}_{\mathbf{F}=\mathbf{I}}, \tag{2.17}\]

where \(\mathbf{I}\) is the second-order identity tensor.
Applying (2.16), and substituting from (2.11), (2.12), (2.13), (2.15), we have:

\[\mathbf{P}= \frac{\partial W}{\partial\mathbf{F}}=-\frac{k_{B}T}{V}\frac{1}{Z(\mathbf{F})}\frac{\partial Z(\mathbf{F})}{\partial\mathbf{F}}=-\frac{k_{B}T}{V}\frac{1}{Z(\mathbf{F})}\int\ \mathrm{D}\rho\int\ \mathrm{D}w\exp\left(-\frac{H[w,\rho;\mathbf{F}]}{k_{B}T}\right)\frac{\partial H[w,\rho;\mathbf{F}]}{\partial\mathbf{F}}\left(-\frac{1}{k_{B}T}\right) \tag{2.18}\]
\[= -\frac{k_{B}T}{V}\frac{1}{Z(\mathbf{F})}\int\ \mathrm{D}\rho\int\ \mathrm{D}w\exp\left(-\frac{H[w,\rho;\mathbf{F}]}{k_{B}T}\right)\sum_{\alpha=1}^{n}\frac{\partial}{\partial\mathbf{F}}\log Q_{\alpha}[w;\mathbf{F}].\]

Then, defining the stress operator for the \(\alpha\)-th chain, \(\hat{\mathbf{P}}_{\alpha}\), as:

\[\hat{\mathbf{P}}_{\alpha}:=-\frac{\partial}{\partial\mathbf{F}}\log Q_{\alpha}[w;\mathbf{F}], \tag{2.19}\]

we can write \(\mathbf{P}\) as the statistical average [58, Section 4.1.3] of \(\hat{\mathbf{P}}_{\alpha}\):

\[\mathbf{P}=\frac{k_{B}T}{V}\left(\frac{1}{Z(\mathbf{F})}\int\ \mathrm{D}\rho\int\ \mathrm{D}w\exp\left(-\frac{H[w,\rho;\mathbf{F}]}{k_{B}T}\right)\sum_{\alpha=1}^{n}\hat{\mathbf{P}}_{\alpha}\right)=\frac{k_{B}T}{V}\sum_{\alpha=1}^{n}\langle\hat{\mathbf{P}}_{\alpha}\rangle. \tag{2.20}\]

### Mean-field Assumption

The functional integration over the fields \(w\) and \(\rho\) in (2.11) and (2.14) makes it expensive to evaluate \(H(\mathbf{F})\) and \(\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle\). Therefore, it is common to use a mean-field assumption to simplify the functional integration in the expression for \(Z(\mathbf{F})\) in (2.11) [58]. This assumption implies that the functional integration over the fields \(w\) and \(\rho\) is dominated by the mean-fields \(\bar{w}\) and \(\bar{\rho}\), respectively. The mean-fields \(\bar{w}\) and \(\bar{\rho}\) are obtained by requiring the effective Hamiltonian \(H[w,\rho;\mathbf{F}]\) in (2.12) to be stationary with respect to variations in \(w(\mathbf{x})\) and \(\rho(\mathbf{x})\). This gives the self-consistent mean-field conditions:

\[\frac{\delta H[w,\rho;\mathbf{F}]}{\delta w}\bigg{|}_{w=\bar{w}}=0, \tag{2.21}\]

\[\frac{\delta H[w,\rho;\mathbf{F}]}{\delta\rho}\bigg{|}_{\rho=\bar{\rho}}=0. \tag{2.22}\]

Using the mean-field assumption, \(Z(\mathbf{F})\) in (2.11) simplifies to:

\[Z(\mathbf{F})\ \approx\exp\left(-\frac{H[\bar{w},\bar{\rho};\mathbf{F}]}{k_{B}T}\right), \tag{2.23}\]

where \(H[\bar{w},\bar{\rho};\mathbf{F}]\) is the effective Hamiltonian in (2.12) evaluated using the mean-fields \(\bar{w}\) and \(\bar{\rho}\). Using (2.13) and (2.23), the total free energy of the polymer network, \(H(\mathbf{F})\), under the mean-field assumption is:

\[\frac{H(\mathbf{F})}{k_{B}T}=-\int\ \mathrm{d}\mathbf{x}\ \bar{w}(\mathbf{x})\bar{\rho}(\mathbf{x})\ +\ \frac{1}{2k_{B}T}\int\ \mathrm{d}\mathbf{x}\int\ \mathrm{d}\mathbf{x}^{\prime}\ \bar{\rho}(\mathbf{x})\ \bar{u}(|\mathbf{x}-\mathbf{x}^{\prime}|)\ \bar{\rho}(\mathbf{x}^{\prime})-\log\left(Q_{1}[\bar{w};\mathbf{F}]\cdots Q_{n}[\bar{w};\mathbf{F}]\right). \tag{2.24}\]

Further, the average segment density, \(\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle\), in (2.14) simplifies to:

\[\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle\approx\sum_{\alpha=1}^{n}\langle\hat{\rho}_{\alpha}(\mathbf{x};\mathbf{F})\rangle\Big{|}_{w=\bar{w}}, \tag{2.25}\]

where \(\langle\hat{\rho}_{\alpha}(\mathbf{x};\mathbf{F})\rangle\) is the average segment density of the \(\alpha\)-th chain in the polymer network, obtained using (2.3).
### Excluded Volume Interaction

Polymer segments in the network are considered to interact with each other according to a pairwise interaction potential of mean force \(\bar{u}\) whose physical origin is due to excluded volume effects. We account for the excluded volume effects by modeling a pairwise inter-segment interaction using a simple Dirac delta potential of mean force [57, 77]:

\[\bar{u}(|\mathbf{x}-\mathbf{x}^{\prime}|)=k_{B}T\ u_{0}\ \delta(|\mathbf{x}-\mathbf{x}^{\prime}|), \tag{2.26}\]

where \(u_{0}\) is the excluded volume parameter. This form of the inter-segment interaction potential assumes the presence of a solvent in a low-density polymer network system [49, 57]; the solvent mediates the interactions among the polymer segments. For \(u_{0}>0\), implying repulsion between the segments, the excluded volume potential \(\bar{u}\) in (2.26) is positive-definite and has an inverse; following [58], this simplifies the field theory equations in (2.11) to:

\[Z(\mathbf{F})\propto\int\ \mathrm{D}w\exp\left(-\frac{H[w;\mathbf{F}]}{k_{B}T}\right), \tag{2.27}\]

where \(H[w;\mathbf{F}]\) is the effective Hamiltonian of the polymer network in the simplified field theory:

\[\frac{H[w;\mathbf{F}]}{k_{B}T}=\frac{1}{2u_{0}}\int\ \mathrm{d}\mathbf{x}\ (w(\mathbf{x}))^{2}-\log(Q_{1}[w;\mathbf{F}]\cdots Q_{n}[w;\mathbf{F}]). \tag{2.28}\]

Equations (2.27) and (2.28) present the simplified field theory for the deformation of the polymer network that is used in this work. The partition function in (2.27) is evaluated using the mean-field assumption as:

\[Z(\mathbf{F})\approx\exp\left(-\frac{H[\bar{w};\mathbf{F}]}{k_{B}T}\right), \tag{2.29}\]

where \(H[\bar{w};\mathbf{F}]\) is the effective Hamiltonian in (2.28) evaluated using the mean-field \(\bar{w}\). The mean-field \(\bar{w}\) is obtained by solving the stationarity condition for the effective Hamiltonian \(H[w;\mathbf{F}]\):

\[\frac{\delta H[w;\mathbf{F}]}{\delta w}\Big{|}_{w=\bar{w}}=0. \tag{2.30}\]

For the assumed form of the excluded volume interaction potential as in (2.26), there is alternatively an expression for the average segment density [58]:

\[\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle=\frac{1}{u_{0}}\langle w(\mathbf{x})\rangle=\frac{\bar{w}(\mathbf{x})}{u_{0}} \tag{2.31}\]

where \(\langle w(\mathbf{x})\rangle\) is the statistical average of the fluctuating field \(w(\mathbf{x})\), and we use that under the mean-field assumption \(\langle w(\mathbf{x})\rangle=\bar{w}(\mathbf{x})\). Finally, using (2.29) and (2.13), we obtain the total free energy of the polymer network, \(H(\mathbf{F})\), for the simplified field theory as:

\[\frac{H(\mathbf{F})}{k_{B}T}=\frac{1}{2u_{0}}\int\ \mathrm{d}\mathbf{x}(\bar{w}(\mathbf{x}))^{2}-\log(Q_{1}[\bar{w};\mathbf{F}]\cdots Q_{n}[\bar{w};\mathbf{F}]), \tag{2.32}\]

where \(\bar{w}(\mathbf{x})\) is obtained by self-consistently solving (2.31) and the mean-field condition in (2.30).

### Representative Volume Element Averaging: 8-Chain Model

A typical polymer network consists of a large number of cross-linked polymer chains (Fig. 2) with random orientations at each continuum point, and is very challenging to solve directly. To simplify this problem, we adopt the 8-chain model for the RVE [40]. The 3-d RVE is assumed to be a cube in the undeformed configuration, with 8 chains running between the center and each of the corners.
The cube is assumed to be oriented along the principal directions of the macroscale right stretch tensor \(\mathbf{U}\), where \(\mathbf{U}\) is the tensor square root of \(\mathbf{F}^{T}\mathbf{F}\), i.e., the positive-definite factor of the right polar decomposition of \(\mathbf{F}\). We assume that each chain begins (\(s=0\)) at the center of the cube, which is also taken to be the origin, and the chains terminate (\(s=1\)) at the corners. Denoting the terminating points of the chains in the undeformed and deformed state, respectively, by \(\mathbf{X}_{1}^{1},\ldots\mathbf{X}_{8}^{1}\) and \(\mathbf{x}_{1}^{1},\ldots\mathbf{x}_{8}^{1}\), the relation between the end-to-end vectors in the undeformed and deformed configurations from (2.1) is:

\[\mathbf{x}_{\alpha}^{1}=\mathbf{F}\ \mathbf{X}_{\alpha}^{1},\qquad\alpha=1,\ldots 8. \tag{2.33}\]

For a given value of \(\mathbf{F}\), the right stretch tensor \(\mathbf{U}\) is used to orient the cube, and the equation above provides the initial conditions for the partial partition functions \(q\) and \(q^{*}\) of each chain in (2.4) and (2.5).

## 3 Numerical Method

Since the model with excluded volume interactions is not amenable to simple closed-form solutions, we turn to numerical solutions. The goal is to evaluate the total free energy \(H(\mathbf{F})\) and the average segment density \(\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle\). Our numerical method has the following steps, with a sketch given after the list:

1. We first generate an initial field \(w(\mathbf{x})=w^{0}(\mathbf{x})\). The initial guess can be based on heuristics when possible.
2. We next solve for the average segment density \(\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle\) - using \(q\) and \(q^{*}\) obtained by solving (2.4) and (2.5) with the given \(w(\mathbf{x})\) - and the total free energy \(H(\mathbf{F})\) using (2.32).
3. The field \(w\) is updated using (2.31) as \(w(\mathbf{x})=u_{0}\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle\).
4. In turn, we update \(\langle\hat{\rho}(\mathbf{x};\mathbf{F})\rangle\) and \(H(\mathbf{F})\) as above.

This iteration continues until we reach convergence, which we define as a relative change in the total free energy of less than \(0.1\%\).
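A stripped-down 1-d illustration of steps 1-4 follows (again a toy discretization of our own, with simple mixing added for stability, standing in for the FEM machinery described next):

```python
import numpy as np

# Toy 1-d self-consistent loop: w -> (q, q*) -> rho -> w = u0 * rho, cf. (2.31).
a, N, u0 = 1.0, 100, 0.05
D = a**2 * N / 6.0
x = np.linspace(-40.0, 40.0, 201)
dx = x[1] - x[0]
ds, n_s = 0.01, 100                        # 100 contour steps, as in the text
x0, x1 = -10.0, 10.0                       # fixed chain ends

n = len(x)
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2

def propagate(w, x_end):
    """March the partial partition function in s from a narrow Gaussian at x_end."""
    A = D * lap - np.diag(w)
    minus = np.eye(n) - 0.5 * ds * A       # Crank-Nicolson factors
    plus = np.eye(n) + 0.5 * ds * A
    q = np.empty((n_s + 1, n))
    q[0] = np.exp(-(x - x_end)**2 / (2 * 0.5**2))
    for k in range(n_s):
        q[k + 1] = np.linalg.solve(minus, plus @ q[k])
    return q

w = np.zeros(n)                            # step 1: initial field
for it in range(200):
    q = propagate(w, x0)                   # step 2: fragments from both ends
    qs = propagate(w, x1)
    rho = (q * qs[::-1]).sum(axis=0) * ds  # composition rule, cf. (2.3)
    rho /= rho.sum() * dx                  # normalize to one chain
    w_new = u0 * rho                       # step 3: field update, cf. (2.31)
    if np.max(np.abs(w_new - w)) < 1e-8:
        break
    w = 0.7 * w + 0.3 * w_new              # simple mixing for stability
print(f"converged after {it + 1} iterations")
```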
For the solution of \(q\) and \(q^{*}\) in (2.4) and (2.5), we use the finite element method (FEM) in the open-source FEniCS framework [78]. The spatial domain is discretized using first-order Lagrange family finite elements. The integration along the chain with respect to \(s\) in (2.4) and in (2.5) is performed using the implicit Crank-Nicolson finite difference method with \(100\) steps for \(s\in(0,1)\). We test convergence of the FEM discretization as in Figure 5(b).

While the RVE averaging nominally requires only 8 chains, these chains interact not only with each other, but also with other chains that are not contained in the RVE. To account for this, we use periodic images of the RVE; we find that one image on each face of the cubic RVE is sufficient, giving us 27 cubes over which we must perform various integrations; Figure 5(a) shows a schematic projection of this in 2-d, where the highlighted central RVE is used for the energy calculations. When the deformation is applied, the image RVEs are deformed following the central RVE. Similarly, \(w(\mathbf{x})\) is defined over the larger cluster of RVEs for performing the integrations, but need only be solved on the central RVE using periodicity.

Figure 5: (a) Front view schematic of a larger physical domain in 3-d that consists of 27 RVEs, each with 8 chains. The central 8-chain RVE (highlighted) is used for the free energy calculation. (b) The convergence of the energy as the mesh is refined, shown by plotting the free energy of a single chain without an external field as a function of mesh size. The converged mesh size of \(35^{3}\) elements is used for the numerical computations. Note that the saddle point nature of the problem can lead to non-monotonic convergence.

## 4 Elastic Response with Excluded Volume Interactions

In the calculations reported here, we use the following model parameters: total chain contour length \(L=0.12\,\mu\mathrm{m}\), number of polymer segments in a single chain \(N=100\), excluded volume parameter \(u_{0}=0.005\ v_{seg}\) where \(v_{seg}=a^{3}\) is the volume of an individual monomer segment, and temperature \(T=303\,{\rm K}\).
In all of the subsequent calculations discussed below, we set the undeformed state to correspond to the free energy minimizing, stress-free state.

Figure 6: The total free energy as a function of the relative stretch for \(u_{0}=0.001\ v_{seg}\) and \(u_{0}=0.005\ v_{seg}\).

### Volumetric and Shear Response, and Near-Incompressibility

We use the strain energy density \(W(\mathbf{F})\) to obtain the mesoscale elastic response using nonlinear elasticity; specifically, (2.16) and (2.17) are used to obtain the stress-stretch response and the elastic moduli, respectively. We assume below that the network can be treated as approximately isotropic despite the cubic symmetry of the 8-chain model. To obtain the bulk and shear moduli, we impose deformations of the form: \[\mathbf{F}_{v}=\begin{bmatrix}\lambda&0&0\\ 0&\lambda&0\\ 0&0&\lambda\end{bmatrix}\quad,\quad\mathbf{F}_{s}=\begin{bmatrix}1&\kappa_{s}&0 \\ 0&1&0\\ 0&0&1\end{bmatrix}. \tag{4.1}\] The bulk and shear moduli, \(K\) and \(G\), respectively, can be computed using: \[K=\frac{1}{9}\frac{\partial^{2}W}{\partial\lambda^{2}}\Big{|}_{\lambda\approx 1 }\quad,\quad G=\frac{\partial^{2}W}{\partial\kappa_{s}^{2}}\Big{|}_{\kappa_{s} \approx 0}. \tag{4.2}\] We find \(K=52.06\,\mathrm{kPa}\) and \(G=0.60\,\mathrm{kPa}\). Using isotropic linearized elasticity, this gives the Poisson's ratio \(\nu=\frac{3K-2G}{6K+2G}=0.4943\) and the elastic modulus \(E=\frac{9KG}{3K+G}=1.79\,\mathrm{kPa}\). These elastic moduli are consistent with polymer network gels [87, 88, 89, 90, 22]. We highlight that \(K\) is 2 orders of magnitude larger than \(G\), and \(\nu\) is very close to the incompressible limit of \(0.5\).

We next examine the shear stress vs. shear strain curve. In principle, the shear stress can be computed using \(\tau=\frac{\partial W}{\partial\kappa_{s}}\). To avoid noise from numerical differentiation, we fit \(W\) by a polynomial and then differentiate the polynomial to obtain the curve shown in Figure 7.

Figure 7: Shear stress \(\tau\) vs. shear strain \(\kappa_{s}\) in simple shear. The segment densities for the RVE at various stretches are shown in the insets. We notice that the chains have higher concentrations at the ends when the deformation is small, but are more uniformly distributed as the deformation increases. Note that the RVE itself does not appear to be sheared because the chain-averaging approach aligns the averaging RVE along the principal directions that correspond to the maximum and minimum elongation directions (Fig. 2).
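The post-processing of this subsection, namely the second derivatives for the moduli in (4.2) and the polynomial-fit differentiation used for the stress curve, can be sketched as follows. The sampled energies below are illustrative quadratics seeded with the reported moduli, standing in for the field-theory output.

```python
import numpy as np

lam = np.array([0.98, 0.99, 1.00, 1.01, 1.02])
kappa = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])
K_true, G_true = 52.06e3, 0.60e3                        # Pa, values reported in the text
W_vol = 0.5 * 9.0 * K_true * (lam - 1.0) ** 2           # W ~ (9K/2)(lam - 1)^2 near lam = 1
W_shear = 0.5 * G_true * kappa ** 2                     # W ~ (G/2) kappa^2 near kappa = 0

# second derivative via a quadratic polynomial fit (avoids raw finite-difference noise)
d2 = lambda W, x: np.polyder(np.polyfit(x, W, 2), 2)[0]
K = d2(W_vol, lam) / 9.0
G = d2(W_shear, kappa)
nu = (3 * K - 2 * G) / (6 * K + 2 * G)
E = 9 * K * G / (3 * K + G)
print(f"K = {K/1e3:.2f} kPa, G = {G/1e3:.2f} kPa, nu = {nu:.4f}, E = {E/1e3:.2f} kPa")
```

Running this recovers \(\nu=0.4943\) and \(E=1.79\,\mathrm{kPa}\), matching the values quoted above; differentiating the fitted polynomial once instead of twice gives the stress curve of Figure 7.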
### Extensional Response: Emergent Strain-Softening and Strain-Stiffening

We next examine extensional loading where the deformation has the form: \[\mathbf{F}_{e}=\begin{bmatrix}\lambda_{1}&0&0\\ 0&\lambda_{2}&0\\ 0&0&\lambda_{3}\end{bmatrix} \tag{4.3}\] Here, we consider \(\lambda_{1}\) as the extensional stretch of interest. We make 2 different choices for the transverse stretches \(\lambda_{2}\) and \(\lambda_{3}\): the "constrained case", where they are constrained such that \(\lambda_{2}=\lambda_{3}=1\); and the "volume-preserving case", where they are set to be volume-preserving such that \(\lambda_{2}=\lambda_{3}=\lambda_{1}^{-\frac{1}{2}}\). Note that the second case is approximately equivalent to having no transverse stress. To obtain the extensional stress, we use \(\sigma=\frac{\partial W}{\partial\lambda_{1}}\). Figure 8 compares the stress-strain response of these cases; we notice that the constrained case has significantly higher stresses and tangent moduli. Figure 9 shows the evolution of the chain density with stretch for the constrained case.

Figure 8: The stress-strain curves for the constrained and volume-preserving extensional loadings. We notice that the stresses and the tangent moduli are both significantly larger when the system is constrained to undergo deformations that do not preserve the volume.

Figure 9: Extensional stress \(\sigma\) vs. extensional stretch \(\lambda_{1}\) for the constrained case from Figure 8. The segment densities for the RVE at various stretches are shown in the insets. As with shearing, we notice that the chains have higher concentrations at the ends when the deformation is small, but are more uniformly distributed as the deformation increases.

From Figures 8 and 9, we notice that both cases show a pronounced strain-softening and strain-stiffening behavior that is characteristic of many real polymer networks such as polymeric gels. However, Gaussian chains do not show such behavior, and it is typical to use chains with limiting extensibility, such as the inverse Langevin approximation, to model it. Here, we find that it is a consequence of the competition between the excluded volume interactions and the entropy. Figure 10 shows the decomposition of the free energy \(W\) into entropic \(W_{entropic}\) and excluded volume interaction \(W_{interaction}\) contributions. We observe that the excluded volume contribution is smaller than the entropic contribution in both cases. Further, we notice that \(W_{entropic}\) monotonically increases with \(\lambda_{1}>1\), consistent with the stretching of the Gaussian polymer chains; however, \(W_{interaction}\) monotonically decreases with \(\lambda_{1}>1\), consistent with the chains becoming more oriented and hence having fewer excluded volume interactions. In the approximate range \(1.5<\lambda_{1}<2.5\), where we see strain softening, the excluded volume interaction decreases faster than the entropic contribution rises, causing softening. For \(\lambda_{1}>3\), we have the opposite trend: the entropic contribution rises faster than the excluded volume interaction decreases, causing stiffening. In summary, strain softening occurs because of the initial decrease in excluded volume interactions, and subsequent strain stiffening occurs because of the later dominance of entropic effects.

Figure 10: The free energy \(W\) is decomposed into entropic \(W_{entropic}\) and excluded volume interaction \(W_{interaction}\) contributions for the constrained and volume-preserving cases. The symbols show the simulation results, and the lines show best fits that are differentiated to obtain stress-strain curves.

### Effect of Chain Length

Figure 11 shows the effect of chain contour length on the elastic moduli of the polymer network. The chain contour length \(L\) is varied from \(0.01\,\mu\mathrm{m}\) to \(0.3\,\mu\mathrm{m}\) while keeping \(N\) fixed. We observe that both the elastic modulus \(E\) and the shear modulus \(G\) decrease with increased chain contour length. An increase in chain contour length corresponds to an increase in the average molecular weight (\(M_{c}\)) between the cross-links, and these results are consistent with experiments that show that an increase in \(M_{c}\) corresponds to a decrease in the elastic moduli [95, 96]. The range of elastic and shear moduli obtained by varying the chain contour length is consistent with experimental values for polymer network soft matter such as elastomers and polymeric gels [97, 98, 99, 100, 22, 87, 101]. A scaling check of this dependence is sketched after Figure 11.

Figure 11: Elastic and shear moduli as a function of chain contour length; as expected from polymer theory, these scale as \(a^{-3}\) [103].
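As a simple consistency check of the scaling noted in the Figure 11 caption, one can fit a power law to modulus-vs-segment-size data. The samples below are synthetic, generated to follow the expected law, so the sketch only illustrates the fitting procedure, not the computed moduli.

```python
import numpy as np

# With N fixed, the segment size is a = L / N, and the text states the moduli
# scale as a^{-3}. The G samples are illustrative placeholders.
N = 100
L = np.linspace(0.01, 0.3, 8)           # contour lengths (micrometers)
a = L / N                                # segment size
G = 2.0e-9 * a ** (-3.0)                 # illustrative modulus samples

slope = np.polyfit(np.log(a), np.log(G), 1)[0]   # power-law exponent from a log-log fit
print(f"fitted scaling exponent: {slope:.2f} (expected -3)")
```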
### Interactions between Deformation and an Excluded Volume Instability

We next examine an instability driven by an increase in the excluded volume parameter. For computational feasibility (we aim to numerically confirm that the instability is sharp through a large number of calculations near the instability), we focus on 2-d systems; however, a few representative calculations suggest that 3-d is qualitatively similar. In 2-d, the undeformed RVE is a square with 4 chains. Figure 12 presents the average segment density field for various equi-biaxial stretches and various excluded volume parameter values. For a fixed RVE stretch of \(L_{uc}/L_{uc}^{0}=1\), where \(L_{uc}^{0}=\sqrt{2}\ aN^{1/2}\) in 2-d, we observe an instability at \(u_{0}\approx 0.7\ v_{seg}\) (\(v_{seg}=a^{2}\) in 2-d), leading to the localization of chains. Physically, the chains strongly repel each other and hence are highly restricted in the volume available to them. The instability is symmetry breaking, in that the originally square-symmetric chain configuration transitions to localize either vertically or horizontally; in our numerical simulations, we find that the direction is selected essentially randomly by numerical noise. As noted above, the instability is a sharp transition. We examine the effect of an imposed equi-biaxial stretch by setting \(L_{uc}/L_{uc}^{0}\) to \(1.5\) and \(2\), and find that the critical values of \(u_{0}\) for the instability are, respectively, \(u_{0}\approx 0.8\ v_{seg}\) and \(u_{0}\approx 1.2\ v_{seg}\). This coupling between the deformation and chain localization suggests new routes to obtain patterning in polymer networks. A simple order parameter for detecting the localization is sketched after Figure 12.

Figure 12: Excluded volume-driven instability observed in 2-d, and the effect of equi-biaxial deformation. Each subfigure shows the average segment density plotted over 9 RVEs, with each subfigure corresponding to different values of the biaxial stretch \(L_{uc}/L_{uc}^{0}\) and \(u_{0}\). The instability corresponds to a sharp transition in the chain configuration: it goes from being fairly uniform away from the crosslinking point to being concentrated along the horizontal and vertical directions.
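One simple way to quantify the symmetry breaking in Figure 12 is an anisotropy order parameter comparing the segment mass in central vertical and horizontal bands of the density field. The density below is a synthetic stand-in for a converged field, and the band half-width of 0.2 is an arbitrary choice.

```python
import numpy as np

n = 64
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-(X / 0.15) ** 2)          # synthetic field: mass concentrated in a vertical band

strip = np.abs(x) < 0.2                 # central band along each axis
m_vert = rho[strip, :].sum()            # mass in the vertical band (rows near x = 0)
m_horz = rho[:, strip].sum()            # mass in the horizontal band (columns near y = 0)
psi = (m_vert - m_horz) / (m_vert + m_horz)
print(f"anisotropy order parameter psi = {psi:.3f} (0 = symmetric, +/-1 = fully localized)")
```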
## 5 Discussion

We have used the statistical field theory of polymers in combination with the 8-chain network averaging approach to study the mechanical response of polymer networks. The framework of polymer field theory provides a physics-based approach to accounting for excluded volume interactions, which are imposed phenomenologically in micromechanical models. In the absence of excluded volume interactions, we find that the closed-form orientationally-averaged elastic response matches classical rubber elasticity [72]. With excluded volume effects, self-consistent numerical solutions using finite elements find that the predicted elastic moduli are in line with typical polymer network gels; in particular, the linearized Poisson's ratio \(\nu\simeq 0.4943\) is very close to the incompressible limit \(\nu\to 0.5\), without a phenomenological imposition of incompressibility.

Though the equilibrium state depends on the value of \(u_{0}\), the incompressible behavior is independent of the specific value of \(u_{0}\) for the values studied here. This can be physically understood by considering that \(\nu\) is computed around the equilibrium stress-free state. Due to entropic effects, the chains tend to reduce their end-to-end distance and would collapse to a point, while the excluded volume effects prevent the chains from collapsing completely. The equilibrium state is achieved when these opposing effects balance. Around this equilibrium state, we find \(\nu\simeq 0.5\), which reflects the role of the solvent in preventing further reduction in volume. Despite the seeming presence of voids or open space in the density fields, the chains are constrained by the excluded volume interactions; the voids will be occupied by the solvent, which makes the system close to incompressible since the solvent cannot leave the polymer network upon deformation. This is consistent with the results of [104], wherein it was found that \(\nu\simeq 0.5\) at short times, when the solvent has not had time to diffuse out of the polymer network; at longer times, the solvent diffuses out and the long-time equilibrium value of \(\nu\) depends on the shear modulus. While we have not considered this time-dependent behavior here, it is related to similar effects in poromechanics, namely the Terzaghi and Mandel model problems, wherein the stress response is closely tied to the drainage of the pore fluid [105, 106, 107, 108, 109].

Another interesting finding is that, despite the harmonic Gaussian chain as a starting point, there is an emergent strain-softening and strain-stiffening response that is characteristic of real polymer network gels, driven by the interplay between the excluded volume interactions and the entropy; it does not require chains with limiting extensibility, such as the inverse Langevin approximation, to model this behavior. We also find the emergence of a deformation-sensitive localization instability at large values of the excluded volume parameter. A natural question for future work is to examine the interplay between chain-scale instabilities such as microbuckling [110, 111, 112] and the network-scale instabilities observed here. We highlight, however, that those examples of instabilities are in fibrous networks, and it is possible that the instabilities observed here occur because of the regular network structure that has been assumed and will not appear in random networks. An important challenge, however, is that the isolated harmonic Gaussian chain does not display buckling or other instabilities; other nonlinear chain models are required to capture this behavior. Further, as noted in [8], electrical field interactions provide an effective compressive stiffness, and can induce new types of instabilities [12]. Incorporating chain models that go beyond the Gaussian approximation in polymer field theory is an interesting theoretical question. Along similar lines, while we capture strain stiffening and strain softening without the use of a chain model with limiting extensibility, such as the inverse Langevin model, it would be interesting to incorporate such models in polymer field theory to enable studying the interplay between entropic, excluded volume, and limited extensibility effects. The concept of chain topology or entanglement is an important aspect that is not taken into consideration in this study.
As highlighted in [73, 74], these effects can play a significant role in the response of polymer networks. The mean-field framework employed in this study is unable to account for such effects directly; however, our inclusion of excluded volume effects provides some insight into the effects of entanglement. Additionally, we have observed that even without accounting for entanglements, excluded volume effects give rise to many interesting physical characteristics that are relevant to the response of real polymer networks. It should be noted that while entanglements are crucial for the large-deformation response, linearized properties such as the Poisson's ratio are expected to be relatively unaltered.

### Code Availability

A version of the code developed for this work is available at [https://github.com/pkhandag/polymer-network.git](https://github.com/pkhandag/polymer-network.git)

### Acknowledgments

We thank Carlos Garcia Cervera for useful discussions; NSF XSEDE for computing resources provided by the Pittsburgh Supercomputing Center; AFRL for hosting visits by Kaushik Dayal; NSF (DMREF 1921857, DMS 2108784), ONR (N00014-18-1-2528), BSF (2018183), and AFOSR (MURI FA9550-18-1-0095) for financial support; and the anonymous reviewers for comments that improved the paper significantly.
2305.17433
A Unified Framework for Slot based Response Generation in a Multimodal Dialogue System
Natural Language Understanding (NLU) and Natural Language Generation (NLG) are the two critical components of every conversational system that handle the task of understanding the user by capturing the necessary information in the form of slots and generating an appropriate response in accordance with the extracted information. Recently, dialogue systems integrated with complementary information such as images, audio, or video have gained immense popularity. In this work, we propose an end-to-end framework with the capability to extract necessary slot values from the utterance and generate a coherent response, thereby assisting the user to achieve their desired goals in a multimodal dialogue system having both textual and visual information. The task of extracting the necessary information is dependent not only on the text but also on the visual cues present in the dialogue. Similarly, for generation, the previous dialogue context comprising multimodal information is significant for providing coherent and informative responses. We employ a multimodal hierarchical encoder using pre-trained DialoGPT and also exploit the knowledge base (Kb) to provide a stronger context for both tasks. Finally, we design a slot attention mechanism to focus on the necessary information in a given utterance. Lastly, a decoder generates the corresponding response for the given dialogue context and the extracted slot values. Experimental results on the Multimodal Dialogue Dataset (MMD) show that the proposed framework outperforms the baseline approaches in both tasks. The code is available at https://github.com/avinashsai/slot-gpt.
Mauajama Firdaus, Avinash Madasu, Asif Ekbal
2023-05-27T10:06:03Z
http://arxiv.org/abs/2305.17433v1
# A Unified Framework for Slot based Response Generation in a Multimodal Dialogue System

###### Abstract

Natural Language Understanding (NLU) and Natural Language Generation (NLG) are the two critical components of every conversational system that handle the task of understanding the user by capturing the necessary information in the form of slots and generating an appropriate response in accordance with the extracted information. Recently, dialogue systems integrated with complementary information such as images, audio, or video have gained immense popularity. In this work, we propose an end-to-end framework with the capability to extract necessary slot values from the utterance and generate a coherent response, thereby assisting the user to achieve their desired goals in a multimodal dialogue system having both textual and visual information. The task of extracting the necessary information is dependent not only on the text but also on the visual cues present in the dialogue. Similarly, for generation, the previous dialogue context comprising multimodal information is significant for providing coherent and informative responses. We employ a multimodal hierarchical encoder using pre-trained DialoGPT and also exploit the knowledge base (Kb) to provide a stronger context for both tasks. Finally, we design a slot attention mechanism to focus on the necessary information in a given utterance. Lastly, a decoder generates the corresponding response for the given dialogue context and the extracted slot values. Experimental results on the Multimodal Dialogue Dataset (MMD) show that the proposed framework outperforms the baseline approaches in both tasks. The code is available at [https://github.com/avinashsai/slot-gpt](https://github.com/avinashsai/slot-gpt).

Keywords: Conversational AI, Multimodal Dialogue System, Response Generation, DialoGPT

## 1 Introduction

Advancement in Artificial Intelligence (AI) has opened up new frontiers in conversational agents. Human-machine interaction is an essential application of AI, helping humans in their day-to-day lives. Progress in AI has led to the creation of personal assistants like Apple's Siri, Amazon's Alexa, and Microsoft's Cortana, which assist humans in their everyday work. The machines' capability to comprehend and complete the user's goals has empowered researchers to build advanced dialogue systems. Dialogue systems that help users solve critical tasks, such as selecting appropriate restaurants and hotels or booking movie tickets, have gained immense popularity in the field of artificial intelligence (AI). Through a well-designed conversation system acting as an efficient personal assistant, users can easily accomplish everyday tasks through natural language interactions. With the growth in AI, the latest progress in deep learning has encouraged many neural conversational systems [60, 76, 29]. A typical goal-oriented dialogue system comprises several key modules: (i) a natural language understanding (NLU) component that identifies the domain and intent and extracts slot information from the user utterance [77, 82, 8, 47]; (ii) a dialogue state tracker (DST) that predicts the current dialogue state [90, 56]; (iii) a dialogue policy that regulates the next system action given the current state [74, 49]; and (iv) a natural language generator (NLG) module that outputs a response given the semantic frame [62, 61, 71, 38]. These modules occur in a pipeline in every robust dialogue system.
Therefore, such a pipeline is somewhat time-consuming and computationally expensive. With the progress in AI, the integration of information from different modalities, such as text, image, audio, and video, has been known to provide complete information for building effective end-to-end dialogue systems [58, 27, 39, 37], bringing together the different areas of computer vision (CV) and natural language processing (NLP). Hence, a multimodal dialogue system bridges the gap between vision and language, ensuring interdisciplinary research. Multimodal conversational systems complete existing dialogue systems by providing necessary information that is lacking in unimodal systems, as the visual (in the case of images and videos) and audio information helps build robust systems. In [58], the authors proposed a multimodal dialogue dataset having textual and image information for the fashion domain. From the dataset, it is clear that image information is necessary for selecting the right clothes and accessories for different individuals.

Figure 1: NLU and NLG modules in a Dialogue System

### Motivation and Contribution

As demonstrated in Figure 1, the primary goal of every NLU component is to extract necessary information in the form of slots from the user utterance, while the ultimate goal of the NLG module is to respond to the user based on the extracted semantic information. Both tasks are complementary; hence, the information extracted by the NLU is significant for generating the correct response in the NLG unit. Instead of performing these tasks separately in a pipeline manner, researchers have recently focused on performing them simultaneously to improve the performance of both [68, 18]. We take a step forward in our current work by proposing an end-to-end system that can concurrently extract the necessary slot information from the user utterance and provide the corresponding system response in a multimodal dialogue setting. This is more challenging, as the slot information is not entirely dependent on the textual utterance but also on the visual information. Hence, for better response generation, extraction of the correct semantic information from the current dialogue context is crucial. Slots are crucial as they provide the key semantic information for a better understanding of the user utterance. To provide informative responses to the user, it is important to capture the semantic information in the form of slots; based on the slot values, the response generation module can provide responses that are informative and engaging. The proposed end-to-end framework first captures the slot information, and then uses this slot information, captured from both text and image, as input for the generation module. The key contributions of our current work are three-fold:

* We propose the task of simultaneously performing two critical components of every conversational system, i.e. NLU and NLG, in a multimodal dialogue system employing information from both text and images.
* We design a slot attention-based hierarchical generation system using pre-trained DialoGPT.
* Our proposed system achieves the best performance compared to the existing and baseline approaches in both tasks.

The rest of the paper is structured as follows. In Section 2, we present a brief review of the existing literature. We describe the methodology in Section 3 and the comparison methods in Section 4.
In Section 5, we provide the details of the dataset used and its statistics, followed by implementation details and evaluation metrics. Experimental results are presented in Section 6 along with a detailed analysis, including error analysis. Finally, in Section 7 we conclude with future directions of research.

## 2 Related Work

In any dialogue framework, Natural Language Generation (NLG) is a classic problem. With the fast growth of Artificial Intelligence (AI), there has been a trend in recent times to develop multimodal dialogue systems by combining text with image, audio and video modalities. A brief review of prior work on unimodal and multimodal dialogue systems, for both slot filling and response generation, is provided below.

### Slot Filling

Several deep learning architectures have been employed for extracting essential information in the form of slots from a given utterance. The authors in [16] investigated deep belief networks (DBN) for slot filling on the ATIS dataset. In [42], the authors investigated Elman and Jordan-type RNNs for slot filling. In [41], several hybrid variants of RNNs were proposed, due to the stronger ability of RNNs to capture dependencies compared to traditional models such as Conditional Random Fields (CRF). In [86], lexical, syntactic and word-class features were used as input to an RNN for the SLU task of slot filling. The authors in [85] used transition features to improve RNNs and sequence-level optimization criteria from CRFs to explicitly capture output-label dependencies. The authors in [84] used deep LSTMs along with regression models to obtain the output-label dependency for slot filling. The usage of kernel deep convex networks (K-DCN) was investigated in [15] for slot filling. In [92], a focus mechanism for an encoder-decoder framework was proposed for slot filling on the ATIS dataset. The authors in [89] introduced a generative network based on the sequence-to-sequence model along with a pointer network for slot filling. In [63], an attention-based encoder-decoder framework was employed for slot filling. In [52], a pre-trained language model was employed in an RNN framework for the slot-filling task. An attention-based RNN framework with pre-trained word embeddings was proposed in [81] for identifying slots on the ATIS and MEDIA datasets. In [26], an adversarial multi-task model combining a bi-directional language model with a slot tagging model was used for identifying slots on the ATIS dataset. An adversarial framework was used in [34] for learning common representations across multiple domains for slot-filling tasks. In [93], the authors applied transfer learning to slot filling, an essential task of language understanding. The authors in [77] encoded lexicon information as features in a long short-term memory (LSTM) network for slot filling. With advancements in AI, multimodality has been incorporated into conversational systems to make them more robust and complete. Recently, the authors in [88] used an adaptive attention mechanism to extract the necessary slot values in a multimodal dialogue system.

### Response Generation

#### Unimodal Dialogue System

Deep learning has brought significant improvements to dialogue generation, and deep neural models are very effective in modelling dialogues, as seen in [73, 62].
In [66], a context-sensitive neural language approach was presented where, given the textual conversational background, the model chooses the most likely answer. To capture the context of the previous queries by the users, the authors in [65] proposed a hierarchical framework capable of preserving past information. Sequence-to-sequence (seq2seq) neural models often generate incomplete and boring responses, such as "I don't know", "Okay", "Yes", "No", etc.; hence, bringing diversity to responses is an extremely challenging and interesting research problem for every conversational agent. Similarly, to preserve the dependencies among the utterances, a hierarchical encoder-decoder framework was investigated in [60, 61]. The authors in [83] extended the hierarchical encoder-decoder framework by adding a latent variable for understanding the intentions of the conversations in a task-oriented dialogue system. Lately, memory networks [40] infused with pointer networks have been intensely investigated for capturing the contextual information in dialogues for response generation. Hierarchical pointer networks [54] have also been employed for response generation in task-oriented dialogues. The authors in [80] incorporated a global encoder and a local decoder to share external knowledge in a task-oriented dialogue setup. The ability to infuse knowledge into responses was achieved by using a Bag-of-sequence memory unit [55] for generating coherent responses in goal-oriented dialogue systems. The authors in [57] proposed a multi-level memory framework for task-oriented dialogues. A memory-augmented framework with the ability to extract meaningful information during training for better response generation has been explored in [71]. With the release of MultiWoz [6], a task-oriented dialogue dataset, several works have focused on multi-domain dialogue generation. The authors in [5] used a pre-trained language model for dialogue generation. A hierarchical graph framework employing the dialogue acts of the utterances was investigated for dialogue generation in [9]. The meta-learning approach [43, 50] has been applied to different datasets to increase domain adaptability for generating responses. To increase the ability to memorize the dialogue context, the authors in [79] used a memory-to-sequence framework and the pointer generator for response generation. A multi-task framework to enhance the performance of natural language generation was investigated in [91]. In [10], working memory was employed for dialogue generation; the working memory interacts with two long-term memories that capture the dialogue history and the knowledge base tuples for informative response generation. Recently, a heterogeneous memory network [33] has been explored for response generation, with the capability to simultaneously use the dialogue context, the user utterance, and the knowledge base. A dynamic fusion technique has been employed in [51] to share features across different domains for better generation.

#### Multimodal Dialogue System

Research in dialogue systems has recently shifted towards incorporating different sources of information, such as images, audio, video, and text, in order to build robust systems. The research reported in [13, 45, 14, 23, 19] has been useful in narrowing the gap between vision and language. In [45], an Image Grounded Conversations (IGC) task was proposed, where conversations are natural and focused upon a shared image.
Similarly, the authors in [13] introduced the task of visual dialogue, which requires an AI agent to hold a meaningful dialogue with humans in natural, conversational language about visual content. Recently, video and textual modalities were investigated with the release of the DSTC7 dataset in [28], which used a multimodal transformer network to encode videos and incorporate information from the different modalities. Similarly, in [27, 4, 32], the DSTC7 dataset has been used for generation by incorporating audio and visual features. The release of the Multimodal Dialogue (MMD) dataset [58], having conversations in the fashion domain in both text and images, has facilitated response generation in a multimodal setup. Several works on the MMD dataset, reported in [2, 1, 30], used the hierarchical encoder-decoder model to generate responses by capturing information from text, images, and the knowledge base. Recently, [7] proposed attribute-aware and position-aware attention for generating textual responses. The authors in [12] used a hierarchical attention mechanism for generating responses on the MMD dataset. In [20], the authors proposed a stochastic method for generating diverse responses in a multimodal dialogue setup. The task of multi-domain, multi-modal, aspect-controlled response generation was introduced in [21]. Lately, researchers have focused on jointly addressing NLU and NLG tasks in a unimodal framework [72, 68, 18] for improving the performance of both tasks. The authors in [72] proposed a generative model which couples NLU and NLG through a shared latent variable. Similarly, in [68] a new learning framework was designed for language understanding and generation on top of dual supervised learning, providing a way to exploit the duality. Our current work differs from the existing NLU and NLG works as we intend to build a comprehensive framework that extracts the necessary slot information and generates the appropriate response adhering to the elicited slot information in a multimodal framework. The task becomes more complex as visual cues in the form of images are also crucial for providing the complete context for both tasks, along with the textual information.

## 3 Methodology

In this section, we discuss the problem statement followed by the baseline and the proposed methodology.

### Problem Definition

In this paper, we address the task of extracting slot values from the user utterance and generating informative and relevant textual responses according to the conversational history in a multimodal dialogue setting. The dialogues consist of textual utterances along with multiple images. More precisely, given a user utterance \(U_{p}=u_{p,1},u_{p,2},...,u_{p,j}\), a set of images \(I_{p}=img_{p,1},img_{p,2},...,img_{p,j^{\prime}}\), and the dialogue history \(H_{p}=(U_{1},I_{1}),(U_{2},I_{2}),...,(U_{p-1},I_{p-1})\), we focus on extracting the slot information from \(U_{p}\) and \(I_{p}\) and simultaneously generating an interesting, informative, context-aware response \(Y_{p}=(y_{p,1},y_{p,2},\ldots,y_{p,k})\) instead of template-like generic and monotonous responses, such as _I don't know, Yes, No, Similar to..._, etc. This will enhance human-machine conversations by keeping the users engaged. Here, \(p\) is the \(p^{th}\) turn of a given dialogue, while \(j\) is the number of words in a given textual utterance and \(j^{\prime}\) is the number of images in a given utterance. Note that in every turn the number of images \(j^{\prime}\leq 5\); for text-only turns, vectors of zeros are used in place of the image representation.
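The structures in the problem definition can be summarized by the following illustrative Python types; the names are ours, not from the released code.

```python
from dataclasses import dataclass, field
from typing import List

MAX_IMAGES_PER_TURN = 5

@dataclass
class Turn:
    utterance: List[str]                             # tokens u_{p,1..j}
    images: List[str] = field(default_factory=list)  # image ids img_{p,1..j'}

    def __post_init__(self):
        # zero vectors later stand in for missing images, per the text
        assert len(self.images) <= MAX_IMAGES_PER_TURN

@dataclass
class Dialogue:
    history: List[Turn]   # H_p = (U_1, I_1), ..., (U_{p-1}, I_{p-1})
    current: Turn         # (U_p, I_p): input to slot extraction and generation

turn = Turn("will neck-tie having 25 cm size be paired well ?".split(), ["img_1", "img_2"])
print(len(turn.utterance), len(turn.images))
```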
Note that in every turn, the number of images \(j^{\prime}\leq 5\), so in the case of only text, vectors of zeros are considered in place of image representation. ### Multimodal Hierarchical Encoder Decoder: We construct a generative model for response generation, an extension of the recently introduced Hierarchical Encoder-Decoder (HRED) architecture [61, 60]. Figure 2: Architecture of the proposed model As opposed to a standard sequence-to-sequence model [69], the dialogue context among the utterances is captured by adding utterance-level RNN (Recurrent Neural Network) over the word-level RNN, increasing the efficacy of the encoder to capture the hierarchy in dialogue. The multimodal HRED (MHRED) is built upon the HRED to include text and image information in a single framework. The critical components of MHRED are the utterance encoder, image encoder, context encoder, and decoder. #### 3.0.1 Utterance Encoder: Given an utterance \(U_{p}\), we use bidirectional Gated Recurrent Units (BiGRU) [11] to encode each word \(n_{p,i}\), where \(i\in(1,2,3,.....k)\) having \(d\)-dimensional embedding vectors into the hidden representation \(h_{U,p,i}\) as follows: \[\begin{split}\overrightarrow{h_{U,p,i}}&=GRU_{U,f} (n_{p,i},\overrightarrow{h_{U,p,i-1}})\\ \overleftarrow{h_{U,p,i}}&=GRU_{U,b}(n_{p,i}, \overleftarrow{h_{U,p,i-1}})\\ h_{U,p,k}^{txt}&=[\overrightarrow{h_{U,p,i}}, \overleftarrow{h_{U,p,i}}]\end{split} \tag{1}\] here \(\overrightarrow{h_{U,p,i}}\) represent the utterance representation in the forward direction while \(\overleftarrow{h_{U,p,i}}\) represents in the backward direction. The overall representation of the utterance is given by \(h_{U,p,k}^{txt}\). #### 3.0.2 Image Encoder: A pre-trained VGG-16 [64] having a 16-layer deep convolutional neural network (CNN) trained on more than one million images present in the ImageNet dataset is used for encoding the images. It can classify images into 1000 object categories, such as dresses, shoes, animals, keyboards, mouse, etc. As a result, the network can learn rich features from a wide range of images. Here, it is also used to extract the "local" image representation for all the images in the dialogue turns and concatenates them together. The concatenated image vector is passed through the linear layer to form the global image context representation as given below: \[\begin{split} T_{p,i}=VGG(I_{p,i})\\ T_{p}=Concat(T_{p,1},T_{p,2},\dots,T_{p,j^{\prime}})\\ h_{I,p}^{img}=ReLU(W_{I}T_{p}+b_{I})\end{split} \tag{2}\] where \(W_{I}\) and \(b_{I}\) are the trainable weight matrix and biases. In every turn, the number of images \(i\leq 5\), so in the case of only text, vectors of zeros are considered in place of image representation. #### 3.0.3 Context Encoder: The final hidden representations from both image and text encoders are concatenated for each turn and are given as input to the context-level GRU. A hierarchical encoder is built to model the conversational history on top of the image and text encoder. The decoder GRU is initialized by the final hidden state of the context encoder. \[h_{w,p}^{ctx}=GRU_{w}([h_{U,p,k}^{txt};h_{I,p}^{img}],h_{w,p-1}) \tag{3}\] where \(h_{w,p}^{ctx}\) is the final hidden representation of the context for a given turn. #### 3.2.2 Decoder: In the decoding section, we build another GRU for generating the words sequentially based on the hidden state of the context GRU and the previously decoded words. We use input feeding decoding and the attention [36] mechanism for enhancing the performance of the model. 
#### Decoder

For decoding, we build another GRU that generates words sequentially based on the hidden state of the context GRU and the previously decoded words. We use input-feeding decoding and the attention mechanism [36] to enhance the performance of the model. Using the decoder state \(h_{q,t}^{dec}\) as the query vector, the attention layer is applied to the hidden state of the context encoder. The context vector and the decoder state are concatenated and used to calculate the final probability distribution over the output tokens. \[h_{q,t}^{dec}=GRU_{d}(y_{p,t-1},h_{q,t-1});\] \[\alpha_{t,m}=softmax(h_{w,p}^{ctx}{}^{T}W_{f}h_{q,t})\] \[c_{t}=\sum_{m=1}^{k}\alpha_{t,m}h_{w,p}^{ctx}, \tag{4}\] \[\tilde{h}_{t}=tanh(W_{\tilde{h}}[h_{q,t};c_{t}]);\] \[P(y_{t}|y_{<t})=softmax(W_{S}\tilde{h}_{t})\] where \(W_{f}\), \(W_{S}\) and \(W_{\tilde{h}}\) are trainable weight matrices.

### Proposed Approach

To further improve the MHRED model's performance, we propose to apply slot attention to the utterances. The goal is to focus on the slot values in a user utterance, which are crucial for generating an appropriate system response; if the model fails to attend to vital slot information, it generates an inappropriate response. For example, for the user utterance "_Will **neck-tie** having **25 cm** size be paired well with any of these?_", the slot values **neck-tie** and **25 cm** are crucial to understanding the user utterance. Furthermore, in a multimodal dialogue system, the user often refers to the images generated in the previous system responses, and the model should account for this subtle but very essential information. For example, in the user utterance "_Show me more in style as in the **4th** image_", the model must understand that the user is referring to the **4th** image; any failure in doing so will produce an inappropriate system response. Hence, in our proposed approach, we employ mechanisms to improve the performance of slot identification. The architecture of the proposed model is shown in Figure 2.

**Slot Attention:** We employ self-attention on the output \(h_{U,p,k}^{txt}\) from the utterance encoder, as in the final line of Equation 1. We refer to Slot Attention as SA in the rest of the paper. Let \(h_{U,p,k}^{txt}\) be the Key (K), Query (Q) and Value (V): \[SA(K,Q,V)=softmax(\frac{QK^{\text{T}}}{\sqrt{d_{k}}})V. \tag{5}\] where \(d_{k}\) is the hidden dimension size of \(h_{U,p,k}^{txt}\). The output from Slot Attention is concatenated with the Image Encoder's output, and the concatenated output is sent as input to the Context Encoder.

#### Knowledge Base (KB)

The knowledge base encoder used in our framework is the same as in [2]. The knowledge base of the MMD dataset contains information about contextual queries and celebrities endorsing various products and brands. Hence, to provide this additional information to our proposed model, we employ self-attention on the KB input to achieve more focused information as follows: \[\begin{gathered} h_{s}^{query}=n_{p}^{query}(h_{n-1}^{query},k_ {l,t})\\ h_{f}^{ent}=n_{p}^{ent}(h_{n-1}^{ent},d_{l,t})\\ h_{net}^{kb}=[h_{s}^{query},h_{f}^{ent}]\end{gathered} \tag{6}\] Let \(h_{net}^{kb}\) be the Key (K), Query (Q) and Value (V): \[SA(K,Q,V)=softmax(\frac{QK^{\text{T}}}{\sqrt{d_{k}}})V. \tag{7}\] where \(d_{k}\) is the hidden dimension size of \(h_{net}^{kb}\). We use the attended KB output and the decoder input as the combined input at each time step of the decoder. As the knowledge base (KB) input remains intact for a particular dialogue context, we concatenate the KB input with the decoder input in a similar manner as [2].
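A minimal PyTorch sketch of the self-attention of Equations 5 and 7 (the same operation, applied to the utterance and KB representations respectively) is given below; the attention weights it returns are the scores later reused for slot prediction.

```python
import torch
import torch.nn.functional as F

def slot_attention(h):
    """Scaled dot-product self-attention with K = Q = V = h, per Eq. (5)/(7)."""
    # h: (batch, positions, d_k) hidden states
    d_k = h.size(-1)
    scores = torch.matmul(h, h.transpose(1, 2)) / d_k ** 0.5
    weights = F.softmax(scores, dim=-1)      # softmax(QK^T / sqrt(d_k))
    return torch.matmul(weights, h), weights

attended, weights = slot_attention(torch.randn(2, 10, 512))
print(attended.shape, weights.shape)         # (2, 10, 512) (2, 10, 10)
```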
#### Pretraining DialoGPT (P-GPT)

In a dialogue system, understanding contextual information is crucial to performance. Pre-trained language models have achieved state-of-the-art results on several Natural Language Understanding (NLU) tasks [53], and pre-training dialogue systems significantly improves generation performance [25]. Therefore, we pre-train on the Multimodal Dialogue (MMD) dataset using DialoGPT [87]. The input to DialoGPT is the concatenation of the previous system response, the current user utterance, and the current system response; this helps the model learn long-range contextual information effectively. We use DialoGPT-small with a context size of 2. The pre-trained contextual embeddings are passed as input to the Text Encoder and the KB Encoder.

**Slot Prediction:** For slot prediction, we take the scores obtained by applying the softmax layer on the dot product between Query (Q) and Key (K) in the self-attention for a given user utterance to find the distribution over the slot values.

**Training and Inference:** The generation model is trained using teacher-forced cross-entropy [78] at every decoding step to minimize the negative log-likelihood under the model distribution. We define \(\hat{y}=\hat{y}_{1},\hat{y}_{2},\hat{y}_{3},\ldots,\hat{y}_{m}\) as the ground truth for the given input sequence: \[J(\theta)=-\sum_{t=1}^{m}\log p(\hat{y}_{t}|y_{1},y_{2},\ldots,y_{t-1}) \tag{8}\] where \(y_{1},y_{2},\ldots,y_{t-1}\) are the previously generated tokens.

## 4 Comparison Methods

In this section, we compare against other existing techniques, comprising both state-of-the-art methods and other baselines.

### State-of-the-Art Models

**Seq2Seq:** An encoder-decoder framework with attention, which is a standard baseline in machine translation and generation [69]. The input to the encoder is the dialogue history, and the decoder generates the next dialogue turn.

**HRED:** The first hierarchical encoder-decoder architecture proposed for text-based dialogue systems, and a standard baseline for unimodal systems [59]. It follows the same input-output format as the Seq2Seq model.

**MHRED:** The multimodal hierarchical encoder-decoder, the first model proposed for multimodal dialogue systems; images are provided as input along with text [58]. The input to the encoder is the concatenation of image and text features, and the decoder generates the next dialogue turn.

**UMD:** A user-guided attention model that considers the hierarchical product taxonomy and users' attention to products, built upon the MHRED architecture [12]. The attention model focuses on user preferences for the products and generates responses based on them.

**OAM:** A novel position- and attribute-aware attention mechanism that learns an enhanced image representation conditioned on the user utterance. The model can generate appropriate responses while preserving the position and attribute information [7].

**MAGIC:** Multimodal diAloG system with adaptIve deCoders (MAGIC) first judges the response type by understanding the user intention, then applies an adaptive decoder to generate apposite responses [46].

**MATE:** Based on the standard transformer architecture. In the encoding stage, the transformer encoder encodes information from the multimodal input. Generation is a two-stage process based on the transformer decoder: in the first stage, the focus is on the encoded information; in the second stage, responses are refined by incorporating domain knowledge into the output of the first stage [24].
**LXMERT:** In LXMERT [70], the authors build a large-scale Transformer model that consists of three encoders: an object-relationship encoder, a language encoder, and a cross-modality encoder. We concatenate all the images and feed them as input to the visual (object-relationship) encoder, use the language encoder for utterance representation, and, finally, use the cross-modality encoder to capture the final utterance representation for both text and images.

### Baseline Models

To show the effectiveness of the proposed components, we implement the models without these components in the architecture.

**Unimodal Baselines:** To show that multimodal architectures perform better, we compare against unimodal architectures in which only text serves as input to the models.

**Without Kb:** Models that do not use the knowledge base as input at the decoder; these serve as baselines for the models that do use the knowledge base.

**Without Slot Attention:** Models in which Slot Attention (SA) is not applied, used to demonstrate the contribution of the Slot Attention component.

**Without Pretrained DialoGPT (P-GPT) representations:** To test our hypothesis that pre-trained DialoGPT improves performance, we perform experiments without the pre-trained DialoGPT representations as input, showing the performance difference between models with and without P-GPT.

## 5 Dataset and Experiments

In this section, we provide the details of the dataset used for the experiments, implementation details, evaluation metrics and the results obtained.

### Dataset Description

Our research is based on the Multimodal Dialogue (MMD) dataset [58], consisting of 150k chat sessions between customers and sales agents. During the sequences of customer-agent interactions, domain-specific information in the fashion domain was collected. The dialogues integrate text and image knowledge into a conversation, bringing together different modalities to create a sophisticated dialogue system. The dataset presents new challenges for multimodal, goal-oriented dialogue systems, with complex user utterances. Detailed information on the MMD dataset is presented in Table 1. For experimentation, the authors of [58] "unroll" the different images to incorporate only one image per utterance. Though computationally lean, this method falls short of capturing multimodality over a context of multiple images and text. Therefore, in our study, we use a different version of the dataset, as outlined in [1, 2], that captures a larger number of images in the concatenated context vector for each turn of a dialogue. The motivation is that multiple images are required to provide correct responses to the users.

### Implementation Details

#### DialoGPT Pretraining

We used the DialoGPT-small model for pre-training. The previous system response, the current user utterance and the current system response are concatenated into a single sentence separated by a special token, with the goal of capturing long-range contextual information. Adam is used as the optimizer with a learning rate of 5e-5 and a batch size of 16. The pre-training is performed for 3 epochs, with the maximum value of the gradient norm set to 1.0. The model's weights are initialized with the pre-trained weights from the DialoGPT paper [87], and the loss function is the same as the one used in that paper.
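A sketch of the pretraining input format described above is shown below; the separator token is an assumed stand-in, since the paper does not name the special token it uses.

```python
SEP = "<|endoftext|>"   # assumed separator; the paper only says "a special token"

def build_pretraining_example(prev_system, user_utterance, curr_system):
    """Concatenate a context-size-2 window into one DialoGPT training sentence."""
    return SEP.join([prev_system, user_utterance, curr_system])

example = build_pretraining_example(
    "Here are some brown leather jackets.",
    "Will a neck-tie having 25 cm size be paired well with any of these?",
    "Yes, the 2nd one pairs well with a slim 25 cm tie.",
)
print(example)
```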
#### Model Training

All the implementations are done using the PyTorch framework. The input embedding dimension is 512 for randomly initialized word embeddings and 768 for pre-trained contextual embeddings. The hidden size for all the layers is 512. A dropout [67] of 0.3 is applied on the Slot Attention for all T-HRED models and 0.5 for all M-HRED models. All the models are trained for 15 epochs with a batch size of 256. AdamW [35] is used as the optimizer with a learning rate of 0.0001 for all the models. For image representation, a 4096-dimensional FC6 layer from the VGG-19 network [64], trained on ImageNet, is used.

\begin{table} \begin{tabular}{|c|c c c|} \hline **Dataset Statistics** & **Train** & **Valid** & **Test** \\ \hline _Number of dialogues_ & 105,439 & 22,595 & 22,595 \\ _Avg. turns per dialogue_ & 40 & 40 & 40 \\ _No. of utterances with text response_ & 1.54M & 331K & 330K \\ _Avg. words per text response_ & 14 & 14 & 14 \\ _No. of utterances with image response_ & 904K & 194K & 193K \\ \hline \end{tabular} \end{table} Table 1: Dataset statistics of the Multimodal Dialogue (MMD) dataset

### Automatic Evaluation Metrics

To evaluate the models at the relevance and grammatical level, we report results using standard metrics like Rouge-L [31] and BLEU-1, 2, 3 and 4 [48]. For comparison with the existing approaches, we report the NIST metric [17] in a similar manner as [24]. To evaluate the slot value extraction performance, we use the traditional metrics of F1 score and accuracy, similarly to [77, 88]; a minimal sketch of these slot metrics is given after Table 2. For accuracy and F1 scores, percentage values \(\geq\)70% and <80% represent fair results, \(\geq\)80% and <90% represent good models, and \(\geq\)90% represents very good/excellent models.

### Human Evaluation Metrics

We recruit six annotators (in a similar manner as [62, 71]) from a third-party company, all having high-level language skills. We sampled 500 responses per model for evaluation, with the utterance and the conversational history provided alongside each generation. First, we evaluate the quality of the responses on two conventional criteria: _Fluency_ and _Relevance_. We also compute slot consistency for our proposed task, which determines whether the generated response is consistent with the predicted slot information. These are rated on a five-point scale, where 1, 3, and 5 indicate unacceptable, moderate, and excellent performance, respectively, while 2 and 4 are used for in-between cases. We compute Fleiss' kappa [22] to measure inter-rater consistency. The Fleiss' kappa scores for Fluency and Relevance are 0.53 and 0.49, indicating moderate agreement; for Slot Consistency, we obtain a kappa of 0.65, indicating substantial agreement.

## 6 Result and Analysis

This section presents the experimental results for both tasks, along with the necessary analysis of the responses generated by the baseline models and the proposed methodology.

\begin{table} \begin{tabular}{c|c|c|c} \hline \hline **Model Description** & & **Accuracy** & **F1 Score** \\ \hline **Unimodal Baselines** & _P-GPT HRED + SA_ & 57.8 & 57.4 \\ & _P-GPT HRED + Kb + SA_ & 59.4 & 58.6 \\ \hline **Multimodal Baselines** & _P-GPT MHRED + SA_ & 60.5 & 59.2 \\ \hline **Proposed Approach** & _P-GPT MHRED + SA + Kb_ & **62.4** & **61.3** \\ & _P-GPT MTrans + SA + Kb_ & **65.1** & **63.8** \\ & _P-GPT Mul-Trans + SA + Kb_ & **66.3** & **64.7** \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results on slot extraction. Here, P-GPT is the pre-trained DialoGPT.
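For concreteness, a minimal sketch of plausible slot-extraction metrics (exact-match accuracy per utterance and micro-averaged F1 over slot-value pairs, following the spirit of [77, 88]) is given below; the example data is illustrative, and the exact formulation in the paper may differ.

```python
def slot_scores(gold, pred):
    """Exact-match accuracy and micro-F1 over (slot, value) pairs."""
    tp = sum(len(set(g) & set(p)) for g, p in zip(gold, pred))
    n_pred = sum(len(p) for p in pred)
    n_gold = sum(len(g) for g in gold)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    acc = sum(set(g) == set(p) for g, p in zip(gold, pred)) / len(gold)
    return acc, f1

gold = [[("color", "brown"), ("material", "leather")], [("size", "25 cm")]]
pred = [[("color", "brown"), ("material", "leather")], [("size", "30 cm")]]
acc, f1 = slot_scores(gold, pred)
print(f"accuracy = {acc:.2f}, F1 = {f1:.2f}")   # accuracy = 0.50, F1 = 0.67
```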
Figure 3: Attention visualization

### Slot Prediction Results

The results for the slot values are provided in Table 2, which shows that the proposed approach outperforms all the existing baseline models (both unimodal and multimodal); these improvements are statistically significant. With the addition of external knowledge, the slot performance improves significantly, thereby assisting the framework in capturing the correct values from the given user utterance. Besides, the pre-trained DialoGPT embeddings help provide a stronger context for the task, with a gain of approximately 3% over the baseline _MHRED + Kb_ model. This justifies that the pre-trained embeddings and external knowledge, along with slot attention, are crucial for extracting the correct slot values and in turn assist generation. In Figure 3, we provide a few attention visualizations of user utterances to show that the slot attention designed to extract the correct values is efficient in capturing the correct information. The slot attention in Example 1 is able to focus on the colour _brown_ and the material _leather_, providing the right slot values. Similarly, in Example 2 the proposed framework correctly attends to the colour information _purple grey_ and the position of the image (\(1^{st}\)) in the given user utterance. By utilizing transformers as the encoder, we see a boost in performance, with an improvement of 2% in F1 score and around 3% in accuracy for the slot prediction task. The obvious reason behind the improvement is the ability of the transformer network to better capture the contextual information of a given utterance in comparison to the MHRED model, which uses a GRU as the basic encoding cell. The proposed approach achieves a performance gain of more than 6% and 11% in accuracy with respect to the unimodal baselines with and without the knowledge information, respectively. Similarly, by adding the multimodal information using the transformer networks, there is an increase of 3% compared to the _P-GPT MHRED + SA + Kb_ framework. This shows that the transformer approach captures better semantic knowledge than the encoder-decoder method.

### Response Generation Results

**Results on Automatic Evaluation:** In Table 3, we present the results of the automatic evaluation. As already stated, we report BLEU-1, 2, 3 and 4, as BLEU measures the n-gram overlap between the generated response and the gold response, which helps measure whether the extracted slot values improve generation. From the results shown in the table, it is evident that our proposed framework performs significantly better than the baseline models. As the primary objective was to ensure that slot extraction helps generation, the results justify that by capturing the correct values with the slot attention mechanism, we achieve better performance on all the metrics. There is a consistent improvement of about 1 point for both the baseline and the proposed framework when slot attention is incorporated, showing that slot knowledge makes the generated response more informative by adding the correct slot values to the responses. Multimodality in the form of images plays a crucial role in building robust systems.
In our framework, visual information in the form of images has been incorporated, and from the table it is evident that there is a slight improvement in performance as opposed to the unimodal frameworks having only textual information. Finally, we employ pre-trained DialoGPT embeddings to enhance the performance of the overall generation process. From the table, it is visible that by utilizing the pre-trained embeddings there is a gain in performance for the proposed approach and the baselines. In particular, there is approximately a 1-point improvement in the final model with DialoGPT embeddings compared to the framework without any pre-trained embeddings. This ensures that pre-training is beneficial in capturing better context, thereby providing stronger dialogue information to generate informative and coherent responses. The multimodal information provides complementary information that is not present in the textual modality, thereby improving overall performance. By employing a transformer along with a knowledge base, but without the slot information and pre-trained embeddings, we see that the model performs better than the _MHRED + Kb_ model in terms of Rouge-L and BLEU scores. By adding the slot information and pre-trained embeddings, the performance of the model improves significantly.

\begin{table} \begin{tabular}{c|c|c|c|c c c c c} \hline \multicolumn{2}{c|}{**Model Description**} & **SA** & **P-GPT** & **BLEU-1** & **BLEU-2** & **BLEU-3** & **BLEU-4** & **Rouge-L** \\ \hline \multirow{6}{*}{**Unimodal**} & \multirow{3}{*}{_HRED_} & - & - & 0.624 & 0.535 & 0.475 & 0.425 & 0.666 \\ & & \(\surd\) & - & 0.635 & 0.544 & 0.483 & 0.433 & 0.668 \\ & & \(\surd\) & \(\surd\) & 0.638 & 0.547 & 0.486 & 0.436 & 0.671 \\ \cline{2-9} & \multirow{3}{*}{_HRED + Kb_} & - & - & 0.646 & 0.560 & 0.503 & 0.456 & 0.685 \\ & & \(\surd\) & - & 0.657 & 0.565 & 0.505 & 0.460 & 0.682 \\ & & \(\surd\) & \(\surd\) & 0.659 & 0.571 & 0.507 & 0.460 & 0.683 \\ \hline \multirow{3}{*}{**Multimodal**} & \multirow{3}{*}{_MHRED_} & - & - & 0.630 & 0.541 & 0.478 & 0.430 & 0.669 \\ & & \(\surd\) & - & 0.636 & 0.545 & 0.484 & 0.434 & 0.668 \\ & & \(\surd\) & \(\surd\) & 0.638 & 0.547 & 0.487 & 0.437 & 0.672 \\ \hline \multirow{9}{*}{**Proposed**} & \multirow{3}{*}{_MHRED + Kb_} & - & - & 0.649 & 0.563 & 0.503 & 0.455 & 0.685 \\ & & \(\surd\) & - & 0.659 & 0.571 & 0.512 & 0.463 & 0.688 \\ & & \(\surd\) & \(\surd\) & **0.662** & **0.574** & **0.514** & **0.465** & **0.690** \\ \cline{2-9} & \multirow{3}{*}{_MTrans + Kb_} & - & - & 0.665 & 0.569 & 0.515 & 0.462 & 0.693 \\ & & \(\surd\) & - & 0.673 & 0.579 & 0.527 & 0.470 & 0.711 \\ & & \(\surd\) & \(\surd\) & **0.684** & **0.587** & **0.535** & **0.479** & **0.724** \\ \cline{2-9} & \multirow{3}{*}{_Mul-Trans + Kb_} & - & - & 0.671 & 0.575 & 0.524 & 0.473 & 0.704 \\ & & \(\surd\) & - & 0.679 & 0.582 & 0.531 & 0.481 & 0.713 \\ & & \(\surd\) & \(\surd\) & **0.690** & **0.591** & **0.544** & **0.493** & **0.733** \\ \hline \end{tabular} \end{table} Table 3: Results using automatic evaluation metrics. Here, P-GPT is the pre-trained DialoGPT; the HRED, MHRED, HRED + Kb, and MHRED + Kb frameworks without SA and P-GPT are similar to [1, 2], respectively.

In the _Mul-Trans + Kb_ framework, we use multimodal transformers in
the sense that for utterance representation we use transformers, while the image representation obtained from VGG-19 is fed as input to a transformer network in a similar manner as [3]. Here, the utterance information from the transformer along with the output of the image representation from the transformers is concatenated to obtain the context of the entire utterance representation, covering both textual and visual knowledge. The representations obtained from both transformers are then used for generating the response. As evident from Table 3, the model with transformer representations for both the textual and visual modalities achieves a performance improvement over the _MTrans + Kb_ framework, which uses a transformer network only for utterance representation. We also compare our framework with the LXMERT [70] model: while it performs better than all the baselines, our proposed network still outperforms LXMERT. This is because LXMERT captures object-oriented features accompanied by captions for a single image, whereas in our case there are no captions and there are multiple images per dialogue, unlike in the LXMERT framework. Also, certain utterances do not have visual information in most of the dialogues. Evidently, the performance of the _Mul-Trans + Kb_ model is significantly better than that of the RNN networks due to the capability and efficacy of the transformers in capturing better-contextualized representations using multi-head attention and feed-forward networks. The Rouge-L and BLEU scores are the highest for the proposed _Mul-Trans + Kb_ model compared to all the baselines. \begin{table} \begin{tabular}{c|c|c c c c|c} \hline \hline \multicolumn{2}{c|}{**Model**} & \multicolumn{5}{c|}{**BLEU**} & \multirow{2}{*}{**NIST**} \\ \cline{3-3} \multicolumn{2}{c|}{} & **1** & **2** & **3** & **4** \\ \hline **Unimodal** & _Seq2Seq_[69] & 35.39 & 28.15 & 23.81 & 20.65 & 3.3261 \\ **Baselines** & _HRED_[59] & 35.44 & 26.09 & 20.81 & 17.27 & 3.1007 \\ \hline & _MHRED_[58] & 32.60 & 25.14 & 23.21 & 20.52 & 3.0901 \\ & _UMD_[12] & 44.97 & 35.06 & 29.22 & 25.03 & 3.9831 \\ **Multimodal** & _OAM_[7] & 48.30 & 38.24 & 32.03 & 27.42 & 4.3236 \\ **Baselines** & _MAGIC_[46] & 50.71 & 39.57 & 33.15 & 28.57 & 4.2135 \\ & _MATE_[24] & 56.55 & 47.89 & 42.48 & 38.06 & 6.0604 \\ & _LXMERT_[70] & 64.32 & 51.33 & 45.33 & 42.76 & 7.3855 \\ \hline & _P-GPT + MHRED + SA + (Joint Training)_ & **66.20** & **57.40** & **51.40** & **46.50** & **6.3164** \\ **Proposed** & _P-GPT + MTrans + SA + (Joint Training)_ & **68.40** & **58.70** & **53.50** & **47.90** & **8.1629** \\ **Approach** & _P-GPT + Mul-Trans + SA + (Joint Training)_ & **69.00** & **59.10** & **54.40** & **49.30** & **8.5371** \\ \hline \hline \end{tabular} \end{table} Table 4: Results of our proposed approach and the existing baselines **Comparison to the Existing Approaches:** In Table 4, we present the evaluation results of our proposed framework in comparison to the existing approaches. From the table, it is clearly evident that the use of the slot values improves the generation performance compared to the existing approaches that do not employ slot information for generation. The BLEU-4 score shows an improvement of more than 20 points compared to the unimodal baselines, such as the Seq2Seq and HRED networks. By using the images, the existing approaches show a notable gain in performance over the unimodal baselines.
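For reference, the BLEU-n figures reported in Tables 3 and 4 can in principle be reproduced with standard toolkits. The sketch below uses NLTK's corpus-level BLEU; the tokenization and smoothing choices are assumptions rather than the paper's exact evaluation setup.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One hypothesis with one reference; real evaluation would iterate over the test set.
references = [[["the", "trousers", "have", "cotton", "material"]]]
hypotheses = [["the", "trousers", "have", "cotton", "polyester", "material"]]

smooth = SmoothingFunction().method1  # avoids zero scores on short responses
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))  # uniform weights up to order n
    score = corpus_bleu(references, hypotheses, weights=weights,
                        smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```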
By using the pre-trained GPT embeddings and slot information, we outperform the best-performing framework [24] by around 8 points in BLEU-4. From this, it can be concluded that slot information assists in correctly responding to user demands and providing interesting and informative responses. The _MHRED_ baseline [58] merely concatenates the textual and visual information for generating responses, which yields lower BLEU scores in comparison to the proposed framework. In UMD [12], the authors used an attention-guided hierarchical recurrent encoder-decoder framework for generating responses; an enhanced visual representation, obtained with the help of a taxonomy attribute tree, was also used for correct response generation. By explicitly using the slot information in the _MHRED_ network, our approach outperforms the UMD framework, giving a boost of more than 20 points in the BLEU-4 score. The improvement is mainly due to the usage of pre-trained DialoGPT embeddings and slot attention, which provide enhanced contextual information compared to the recurrent encoders. \begin{table} \begin{tabular}{l|c|c|c|c c c} \hline \multicolumn{2}{c|}{**Model**} & **SA** & **P-GPT** & **F** & **R** & **SC** \\ \hline \multirow{4}{*}{**Unimodal Baselines**} & \multirow{2}{*}{_HRED_} & - & - & 3.43 & 3.27 & 2.91 \\ & & \(\surd\) & - & 3.51 & 3.36 & 3.03 \\ & & \(\surd\) & \(\surd\) & 3.57 & 3.43 & 3.12 \\ \cline{2-7} & \multirow{2}{*}{_HRED + Kb_} & - & - & 3.61 & 3.49 & 3.17 \\ & & \(\surd\) & - & 3.69 & 3.55 & 3.23 \\ & & \(\surd\) & \(\surd\) & 3.78 & 3.63 & 3.38 \\ \hline \multirow{2}{*}{**Multimodal Baselines**} & \multirow{2}{*}{_MHRED_} & - & - & 3.85 & 3.72 & 3.49 \\ & & \(\surd\) & - & 3.91 & 3.80 & 3.57 \\ & & \(\surd\) & \(\surd\) & 3.99 & 3.85 & 3.68 \\ \hline \multirow{4}{*}{**Proposed Approaches**} & \multirow{2}{*}{_MHRED + Kb_} & - & - & 4.08 & 3.87 & 3.71 \\ & & \(\surd\) & - & 4.15 & 3.92 & 3.78 \\ & & \(\surd\) & \(\surd\) & **4.16** & **4.02** & **3.82** \\ \cline{1-1} \cline{2-7} & \multirow{2}{*}{_MTrans + Kb_} & - & - & 4.29 & 4.02 & 3.87 \\ & & \(\surd\) & - & 4.36 & 4.18 & 4.05 \\ & & \(\surd\) & \(\surd\) & **4.42** & **4.20** & **4.13** \\ \cline{1-1} \cline{2-7} & \multirow{2}{*}{_Mul-Trans + Kb_} & - & - & 4.33 & 4.14 & 3.96 \\ & & \(\surd\) & - & 4.42 & 4.25 & 4.12 \\ \cline{1-1} & & \(\surd\) & \(\surd\) & **4.53** & **4.37** & **4.22** \\ \hline \end{tabular} \end{table} Table 5: Evaluation results using human evaluation metrics. Here, SA: Slot Attention, F: Fluency, R: Relevance, SC: Slot Consistency The _MTrans_ framework yields superior performance, proving the efficacy of Transformers as opposed to the recurrent networks. The OAM [7] network focuses on the attributes and position of the images, and employs the MFB fusion technique to obtain the non-linear interaction between the modalities for generating coherent responses, with a NIST score of 4.3236. Our proposed transformer-based approach attains around a 4-point gain in the NIST score in contrast to the OAM framework. Though MATE [24] exploits the transformer as encoder-decoder, our proposed approach still performs better in comparison. This is primarily because of the slot attention mechanism that focuses on the correct attributes of the product and makes the responses coherent, informative and interactive. In Table 6, we present the evaluation results of different frameworks on the SIMCC dataset [44] along with our proposed model.
As shown in the table, our proposed framework performs slightly better than the best-performing _MN_ (memory network) for the SIMCC-Furniture data and _HRE_ (Hierarchical Recurrent Encoder) for the SIMCC-Fashion data, respectively. One of the main reasons is that we use slot-based attention, which helps in focusing on the attributes, and the transformer framework, which is more robust than the RNN framework. The effectiveness of our proposed framework is thus evidenced on another multimodal dataset, _viz._ SIMCC, showing that it can be applied to other similar datasets. #### Results of Human Evaluation: Along with the automatic evaluation, we also report the results of the manual evaluation in Table 5. The results of the manual evaluation are in consonance with the automatic evaluation results. The fluency of the proposed framework is the highest in comparison to all the baselines. By providing pre-trained embedding information, slot attention, and the external knowledge base, the responses are complete and thereby grammatically fluent. The proposed framework's relevance score is the maximum, ensuring that the responses are coherent with the given dialogue context. \begin{table} \begin{tabular}{c|c|c} \hline \hline **Models** & **SIMCC-Furniture** & **SIMCC-Fashion** \\ \hline _LSTM_ & 0.022 & 0.022 \\ _HAE_ & 0.075 & 0.059 \\ _HRE_ & 0.075 & 0.079 \\ _MN_ & 0.084 & 0.065 \\ _T-HAE_ & 0.044 & 0.051 \\ \hline _Mul-Trans_ & \multirow{2}{*}{0.086} & \multirow{2}{*}{0.080} \\ _(our)_ & & \\ \hline \hline \end{tabular} \end{table} Table 6: Results of BLEU score on SIMCC dataset As can be seen from the table, the relevance scores of the multimodal frameworks are higher than those of the unimodal networks, as image information helps provide the full context for the generation of coherent responses. Also, the inclusion of the knowledge base improves the score in all the baselines and the proposed network. As our current work's primary objective is to generate more informative responses in accordance with the extracted slot information, we see that the scores of the slot consistency metric increase with the incorporation of slot attention. Also, the models having the knowledge base information outperform the other frameworks, mainly because the external knowledge constitutes the important slot information for a particular dialogue, hence boosting the generation of responses. Utilizing transformers instead of GRUs improves the quality of responses, as is evident from the results depicted in Table 5. The fluency of the responses jumps from a score of _4.16_ to _4.42_, proving the efficacy of Transformers in generating better responses. Similarly, the responses are relevant to the conversational history, making them consistent with the ongoing dialogue. As the primary goal of our work is to make the responses interactive by using the correct slot information, we see that, compared to the _MHRED_ network, _MTrans_ shows a performance gain in the slot consistency metric as well. The _Mul-Trans_ model, in consonance with the automatic evaluation results, shows better performance compared to all the baselines in terms of manual evaluation as well. Visual representations from transformers help provide better responses that are fluent and contextually coherent as well. Previously, the authors of [68] employed a dual learning mechanism to jointly address the NLU and NLG tasks in a dialogue system. Similarly, in [72], the authors investigate a generative RNN framework for both tasks.
Direct comparison to the existing frameworks has not been shown in our current work, firstly because these networks are trained solely on textual data. Secondly, in our work, we focus on extracting the slot information from the user utterance to facilitate the network in enhancing the generation of the next textual system response. In contrast, in the existing frameworks, NLU and NLG are performed on a single utterance only (i.e., the extracted slot is used to regenerate the same utterance, while we generate the next response in the dialogue). Therefore, our work is novel, as we exploit multimodal sources for both the NLU and NLG tasks in a user-system dialogue setting. ### Case Studies and Error Analysis In Figure 4, we provide a few examples generated by our proposed framework and the baselines. Figure 4: Examples of generated responses by baseline and proposed framework. In Example 1 from the figure, the proposed approach generates a more informative and relevant response by including the correct slot values, such as _block heels_, as desired by the user. Similarly, in the second example, the ability to generate the correct brand _Nike_ and type _t-shirts_ of the product makes the response diverse and interactive as opposed to the responses generated by the baseline networks. The baseline networks without the slot information tend to generate safer responses that lack specific pattern and brand information. From the examples generated by the proposed approach, it is clear that the correct slot information assists in generating better responses and providing the specified and desired products to the user, thereby increasing customer satisfaction and retention. We closely analyze the outputs of the generated responses to understand the errors made by the proposed dialogue generation framework. The common errors made by the model are: * **Erroneous image selection:** The model sometimes fails to select the correct images when the contextual information spans more than 5 turns, thereby generating incorrect responses in some cases. There are also cases where, due to the discussion of multiple images in the conversational history, wrong images get selected, making the responses incorrect. * **Additional information:** The model sometimes generates additional/extra information in the case of attributes for a few products. For example, **Gold:**_The material of the trousers is cotton._, **Predicted:**_The trousers have cotton polyester material with check patterns._ This is mainly because the conversation history contains this additional information, which also gets incorporated into the responses. * **Repetition:** Sometimes the baseline and proposed frameworks generate words that are repeated throughout the response. Also, unknown words, due to fewer instances in the training data, get generated as <unk> tokens in the responses. For example, **Gold:**_The 3rd image belongs to the Fossil brand with a blue dial._, **Predicted:**_The 3rd image has <unk> <unk> <unk>._ * **Slot mismatch:** Sometimes, due to multiple slot values in the ground-truth response and the conversational history, the proposed framework gets confused and generates responses that have incorrect slot information. For example, **Gold:**_The red synthetic material top with bell sleeves will look good with the trousers._, **Predicted:**_The trousers have red material with bell patterns._ The above-mentioned errors could be minimized by including better image encoders to capture visual representations.
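The repetition and unknown-token errors catalogued above can be flagged automatically during error analysis. The following is a minimal heuristic sketch (not the authors' tooling); the token set and repetition threshold are illustrative assumptions.

```python
import re
from collections import Counter

def flag_generation_errors(response: str, max_repeat: int = 2):
    """Heuristic checks mirroring the error taxonomy above: <unk> leakage
    and token repetition. Thresholds are illustrative assumptions."""
    tokens = response.lower().split()
    errors = []
    if any(t in {"<unk>", "unk"} for t in tokens):
        errors.append("unknown-token leakage")
    counts = Counter(tokens)
    repeated = [t for t, c in counts.items() if c > max_repeat and re.match(r"\w+", t)]
    if repeated:
        errors.append(f"repetition: {repeated}")
    return errors

print(flag_generation_errors("the 3rd image has <unk> <unk> <unk>"))
```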
In addition, fusion techniques that can capture non-linear interactions between the modalities could help with errors of the _slot mismatch_ type. For _repetition_ errors, we plan to investigate better pre-trained language models, which could also handle the additional-information issue. ## 7 Conclusions and Future Work With the progress in artificial intelligence, dialogue systems have reached new paradigms. Narrowing the gap between vision and language, multimodal conversational systems have gained immense popularity. Complementary information in the form of images, audio, or videos added to unimodal (text) systems has helped build robust systems. Task-oriented dialogue systems focus on helping humans achieve their desired goals. Response generation is a crucial component in every dialogue system. Our current work emphasizes the task of generating responses in a multimodal dialogue system. In this paper, we have proposed an end-to-end framework capable of extracting slot values from user utterances and generating a suitable response. For improving the performance of slot extraction, we apply a self-attention mechanism on the utterances so that the appropriate slot values are attended to. In addition, we also employ self-attention on the Knowledge Base (KB) and use this attended representation to assist in generation. Furthermore, we pre-train the DialoGPT language model on the Multimodal Dialog dataset. This effectively learns the previous dialogue context along with the current user utterance and system response. Contextual embeddings trained using DialoGPT are passed as the input to the model. We evaluated our proposed approach on the Multimodal Dialog dataset and have shown significant performance improvement. Our proposed approach focuses on the essential slot information, improving generation on both qualitative and quantitative metrics. Our current approach focuses mostly on the text and less on the images. In the future, we wish to enhance the performance by proposing methodologies that better utilize information from the images. These methods include fusion techniques capable of incorporating information from both image and text, thereby improving the performance of multimodal dialogue systems.
2304.00266
Hidden Layer Interaction: A Co-Creative Design Fiction for Generative Models
This paper presents a speculation on a fictive co-creation scenario that extends classical interaction patterns with generative models. While existing interfaces are restricted to the input and output layers, we suggest hidden layer interaction to extend the horizonal relation at play when co-creating with a generative model's design space. We speculate on applying feature visualization to manipulate neurons corresponding to features ranging from edges over textures to objects. By integrating visual representations of a neural network's hidden layers into co-creation, we aim to provide humans with a new means of interaction, contributing to a phenomenological account of the model's inner workings during generation.
Imke Grabe, Jichen Zhu
2023-04-01T09:13:36Z
http://arxiv.org/abs/2304.00266v2
# Hidden Layer Interaction: A Co-Creative Design Fiction for Generative Models ###### Abstract. This paper presents a speculation on a fictive co-creation scenario that extends classical interaction patterns with generative models. While existing interfaces are restricted to the input and output layers, we suggest _hidden layer interaction_ to extend the horizonal relation at play when co-creating with a generative model's design space. We speculate on applying feature visualization to manipulate neurons corresponding to features ranging from edges over textures to objects. By integrating visual representations of a neural network's hidden layers into co-creation, we aim to provide humans with a new means of interaction, contributing to a phenomenological account of the model's inner workings during generation. generative AI, co-creation, post-phenomenology, design fiction ## 1. Introduction A certain mystique surrounds the latent space of generative AI models. Recent art projects like the immersive installations from Refik Anadol's Machine Hallucination series illustrate how exploring latent images can be a captivating human experience. When humans are exposed to "a world of the otherwise unseen" (Han et al., 2013, p.1), new phenomenological properties emerge, as Seberger and Slaughter point out for the case of deepfakes. Aiming to make sense of this relation, Benjamin et al. apply post-phenomenology to analyze how machine learning models influence our "phenomenological horizon" (Bergman et al., 2013; Bergman et al., 2013). They introduce the concept of _horizonal relations_ to describe how a probabilistic machine learning model mediates between a human and the world. Knowing that feature visualization can reveal how parts of a neural network represent certain features, one might wonder how making sense of latent space's hidden properties could influence the horizonal relation coined by generative models when humans co-create with them.
While researchers have found ways to make sense of latent space's underlying properties in generative models (Bergman et al., 2013; Bergman et al., 2013), its hidden layers are not a part of the interaction in co-creation (Bergman et al., 2013). For example, one can find directions in latent space corresponding to semantic features (Bergman et al., 2013; Bergman et al., 2013), based on which humans might co-create with a generative model via features that translate back into the input vector (Golovolov et al., 2004). Or, one can investigate how objects are encoded in GANs' internal representations (Bergman et al., 2013), serving as a backbone for rewriting their weights in an application that lets users rearrange objects at the output-layer level (Bergman et al., 2013). In both cases, however, human users interact with the input or output layer, respectively, which does not provide them with a sense of how changes feed back into the hidden layers. Insight into a co-creator's actions matters, as it informs our understanding of the decisions taken in the creative process. This opens up our speculation of whether insight into a generator's hidden workings could contribute to co-creative processes. Design fiction can be useful for envisioning future interactions with generative AI (Grabes et al., 2018; Grabes et al., 2018). In this paper, we use speculative design (Bahdan et al., 2018) as a way to imagine a co-creative scenario that involves interaction with the hidden representations of a generative AI. More specifically, we imagine a reconstrained design (Bahdan et al., 2018) by combining existing technological elements, namely generative AI and feature visualization, to give human users a better sense of how a generative model's design space is constructed. By asking _How can interacting with hidden layers affect our experience of generative models?_, we aim to investigate whether feature visualization can expand the horizonal relation at play when co-creating with generative AI. ## 2. Method Speculative design can serve as a method to overcome preconceptions and dogmas underlying technological development (Bahdan et al., 2018). A characteristic of the recent development of generative AI is that we tend to compare the models' functionality to humans and expect similar behavior. However, as AI models come with inherently different capabilities from humans, we risk restricting their development to one stringent direction without asking what novel modes of co-creation the technology might bring to the table. By recognizing the constraints underlying our thinking when imagining new technological applications, we might imagine 'alternative presents' in a reconstrained world (Bahdan et al., 2018). One practical aspect differentiating generative models from human brains is that one can 'look inside' them by printing out the activations on their layers. However, these values are challenging to make sense of. Here, feature visualization can be used as a tool to shed light on what hidden layers react to. By newly arranging technological elements (Bahdan et al., 2018), we speculate on a co-creation scenario that uses this insight. In doing so, we apply an alternative motivation, namely to imagine a co-creator that gives us access to and lets us intervene in how its creation process is constructed. ## 3.
Current World: Closed Horizonal Relation Humans and generative AI can co-create following different patterns, ranging from simply prompting random generation over exploring design alternatives to manipulating the design space (Humans and Bennig, 2017; Grabes et al., 2018; Grabes et al., 2018). In the visual domain, latent variable models, such as Generative Adversarial Networks (GANs), use the input or output layer as the main access point for interaction. At the input level, humans can prompt GANs with a latent code that holds an encoding, such as a text description or other attributes (Grabes et al., 2018; Grabes et al., 2018). Through interaction, alteration of the latent code lets humans traverse the model's design space (Grabes et al., 2018). The generation process can also be influenced by intervening at the output-layer level. Bau et al. (Bau et al., 2018) suggest rearranging elements in generated images to change the weights of the associated memory on the hidden layers. In other words, the output layer is the interface for making changes to the inside of the black-boxed network by rewriting its construction. In the terminology of evolutionary biology, one might say that we can either interact with a model via its phenotype, like in Bau et al.'s example, or via the genotype when manipulating the input vector, e.g., through encoded conditioning or by changing its variables. The layers in between remain undisclosed to the human co-creator. As generative models like GANs are probabilistic models and not rule-based, intuiting their inner workings is an impossible task. Benjamin et al. (Benjamin et al., 2018) use post-phenomenology to analyze how humans might experience the interaction with probabilistic models, more specifically how machine learning uncertainty functions as a design material. The authors formalize the phenomenological relationship by introducing _horizonal relations_, depicted by the term "ML \(\sim\) World", where the tilde operator describes the inference of the ML model from the world. Interacting with machine learning models can then be formalized as "Human \(\rightarrow\) Tech \(\sim\) World", where the arrow is the interpretation of the world based on the inferred model, through another technology such as the computer. This shows how machine learning models can "'populate' the world that humans experience with ready-made yet uncertain entities" (Bartos et al., 2017, p.12). In other words, we read the world through the model's inference space (Bartos et al., 2017). Applied to generative design, humans navigate an inferred design space when co-creating with latent variable models. With our fictive design application, we speculate on enriching the horizonal relation underlying the design space by giving humans a sense of the hidden layers' workings through direct interaction with them. ## 4. Speculative World: Hidden Layer Interaction In the following, we present our speculation on the fictive scenario of _hidden layer interaction_, in which humans can edit the parameters inside a generative model via a visual interface. More specifically, we rearrange existing technological elements from the realm of computer vision and generative AI anew to come up with an alternative mode of interacting with a generative model. Our motivation is to extend existing co-creation patterns by having humans interact with a model beyond its input and output layers.
Central to our speculation, we suggest using feature visualization to investigate which parts of a neural network capture certain output features. Researchers have used the method to show how different layers in neural networks for image processing encode edges, textures, patterns, parts, and objects (Krizhevsky et al., 2014). Through optimization, we imagine visualizing what a particular neuron 'sees.' In other words, we could find neurons that activate strongly in connection to specific visual features. These features then act as the visual representation of the linked neurons. By manipulating the visual representation linked to the neuronal activation, the user is imagined to 'draw with neurons' in this activation space. The interaction with the generative model is imagined as follows. The user is presented with a selection of hidden layers of different abstraction levels, ranging from edges to objects in the network. They can choose to 'pull up' one of those layers to change the neuronal activation on it. Here, they see facets that a feature captures, e.g., different geometric orientations for a low-abstraction layer representing edges, or different motifs for a higher-abstraction layer representing objects. When interacting with a chosen layer, the user can alter the strength of a feature facet via an interface similar to adjusting the exposure when editing an image. E.g., on a layer representing texture as a feature, they might increase the activation of some textures while decreasing the activation of others. The visual representation of the texture's facets acts as a handle to change the activation of the corresponding neurons. Via these handles for visual features covering different levels of abstraction in the neural network, human users undertake internal modifications by changing the activation of neurons. By observing how neuronal changes affect the output of a generative model, they experience the roles of neurons distributed across the network, what behavior they cause, and how they relate. Hence, humans learn to think into the generative algorithm by looking backward in 'time' into the generation process. We argue that giving humans a sense of how a prompt at the input layer transforms through the multi-layered structure into an output affects how they experience the co-creative process by changing the horizonal relation towards the design space they navigate. Integrating the inner workings of neural networks into co-creative processes lets the human user step inside their artificial co-creator. ## 5. Conclusion We presented a co-creative generative design fiction making hidden representations experienceable in an interactive interface through feature visualization. More specifically, we propose interacting through a generative model's inner workings in a future co-creation scenario, giving human actors a sense of how a generative AI's design space is constructed. With our speculation, we aim to discuss the phenomenological relation underlying the experience of co-creating with generative models.
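The feature-visualization step imagined in Section 4 corresponds, technically, to activation maximization: optimizing an input by gradient ascent so that a chosen hidden unit fires strongly. A minimal sketch follows; `layer_activations_fn`, the image resolution, and the optimizer settings are assumptions standing in for a hook into a real network.

```python
import torch

def visualize_neuron(layer_activations_fn, neuron_index: int,
                     steps: int = 200, lr: float = 0.05) -> torch.Tensor:
    """Gradient-ascent feature visualization: optimize an input image so that
    one hidden unit fires strongly. `layer_activations_fn` is any callable
    mapping an image batch to the activations of the chosen hidden layer --
    a stand-in assumption for a hook into a real generator or classifier."""
    img = torch.randn(1, 3, 128, 128, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        act = layer_activations_fn(img)        # e.g., shape (1, channels, h, w)
        loss = -act[0, neuron_index].mean()    # negative -> ascent on activation
        loss.backward()
        opt.step()
    return img.detach()                        # the feature the neuron 'sees'
```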
2310.03828
Serrated plastic flow in slowly-deforming complex concentrated alloys: universal signatures of dislocation avalanches
Under plastic flow, multi-element high/medium-entropy alloys (HEAs/MEAs) commonly exhibit complex intermittent and collective dislocation dynamics owing to inherent lattice distortion and atomic-level chemical complexities. Using atomistic simulations, we report on an avalanche study of slowly-driven model face-centered cubic (fcc) NiCoCrFeMn and NiCoCr chemically complex alloys aiming for microstructural/topological characterization of associated dislocation avalanches. The results of our avalanche simulations reveal a close correspondence between the observed serration features in the stress response of the deforming HEA/MEA and the incurred slip patterns within the bulk crystal. We show that such correlations become quite pronounced within the rate-independent (quasi-static) regime exhibiting scale-free statistics and critical scaling features as universal signatures of dislocation avalanches.
Kamran Karimi, Amin Esfandiarpour, Stefanos Papanikolaou
2023-10-05T18:22:53Z
http://arxiv.org/abs/2310.03828v1
Serrated plastic flow in slowly-deforming complex concentrated alloys: universal signatures of dislocation avalanches ###### Abstract Under plastic flow, multi-element high/medium-entropy alloys (HEAs/MEAs) commonly exhibit complex intermittent and collective dislocation dynamics owing to inherent lattice distortion and atomic-level chemical complexities. Using atomistic simulations, we report on an avalanche study of slowly-driven model face-centered cubic (fcc) NiCoCrFeMn and NiCoCr chemically complex alloys aiming for microstructural/topological characterization of associated dislocation avalanches. The results of our avalanche simulations reveal a close correspondence between the observed serration features in the stress response of the deforming HEA/MEA and the incurred slip patterns within the bulk crystal. We show that such correlations become quite pronounced within the rate-independent (quasi-static) regime exhibiting scale-free statistics and critical scaling features as universal signatures of dislocation avalanches. ## I Introduction The serrated response is a commonly observed phenomenon in a broad class of driven systems [1]. Examples include crackling sounds due to plasticity [2; 3; 4] or brittle fracture [5; 6; 7], Barkhausen noise in ferromagnetism [8] and even stick-slip dynamics of earthquakes at geological scales [9], just to name a few. Under a sufficiently slow driving rate, serrations refer to highly-intermittent and irrecoverable dynamics that a quiescent (but driven) system undergoes, beyond its threshold, as a certain form of relaxation. As for serrated plastic flow in deforming crystalline solids [10], the relaxation process typically occurs through slip bursts mainly due to the spontaneous nucleation/depinning of dislocations that exhibit a collective motion within the bulk leading to the so-called dislocation avalanches. The scale-free nature of dislocation avalanches --featuring a broad range of time, length, and energy scales [11]-- may indeed suggest some form of criticality/universality within the context of yielding transition [12]. The notion of universality is not always strictly defined in light of close ties between avalanche statistics and dislocations' substructure as well as their complex mechanisms of nucleation, glide, and interactions which are believed to show certain non-universal features [13; 14], depending on various factors such as crystalline phase, lattice orientation, chemical composition, specimen size, temperature, and deformation rate sensitivity. Given the above considerations, how could we infer such complex dislocation patterns and underlying interaction mechanisms by probing statistics of dislocation avalanches? The question posed here has very practical implications in the context of nanomechanical testing methods that, together with in-situ imaging techniques, can give us rich knowledge and insights about nanoscopic origins of plasticity. The existing literature has a wealth of experimental information on serration features of driven crystalline metals and associated microstructural signatures across a broad range of laboratory settings (see [15] and references therein). This includes a large suite of nano/micro scale mechanical tests (e.g. uniform tension, nano/micro pillars, nano-indentation) that are typically supplemented by acoustic emission (AE) measurements of intermittent bursts and/or in(ex)-situ microscopy. 
A rather generic observation is the power-law distributed magnitude of the latter, \(P(S)\propto S^{-\tau}\), but with a scaling exponent \(\tau\) that is typically affected by the underlying mechanisms at play and shows deviations from the mean-field estimate \(3/2\)[1]. The experimental range mostly observed for pure metals (Ni, Al, Cu, Au, Mo, and Nb) is between \(\tau=1.5-1.9\)[15; 16], bearing in mind that various metrics have been proposed as avalanche size depending on the type of experimentation and the associated observables. In-situ electron-microscopy-based investigations have mainly centered on establishing meaningful links between the occurrence of plastic avalanches and the coinciding microstructural evolution. In this context, significant size effects have been commonly identified in both stress serration features and dislocation morphologies, with the latter mainly arising from surface-induced limitations of deformation sources and augmented annihilation mechanisms [17; 18; 19; 20]. Rate effects [21] have been consistently reported to influence dislocation glide mechanisms and associated relaxation processes, which were shown to systematically alter the statistics of slip avalanches [22; 23]. Overall, the aforementioned studies suggest certain _indirect_ (but insightful) links between serrated flow features and dislocation slip patterns in pure crystalline metals. Such correlations become even more challenging within the framework of high/medium-entropy alloys (HEAs/MEAs), bearing in mind inherent atomic-level complexities, due to underlying lattice distortions, which are known to be the dominant source of HEAs/MEAs' exceptional properties [24; 25]. Unlike conventional alloys, these complex concentrated alloys possess a rugged energy landscape giving rise to intrinsic randomness in local Peierls stresses and, therefore, unusual pinning patterns and jerky glide dynamics of roughened dislocations [26; 27]. A relevant study by Hu et al. [28] demonstrated the accumulation of dislocation bands and pile-ups in a compressed HEA nanopillar, owing to the complex interplay with random obstacles, which significantly differs from the surface-induced annihilation mechanism observed in pure fcc metals. Another complication arises from compositional/microstructural heterogeneities (local chemical ordering [29; 30], nano-precipitation [31], deformation-induced phase transition [32]) that interplay with the dynamics of dislocations in often unpredictable and inextricable ways [33] in chemically complex alloys. Prior applications of AE tests, as the gold standard in the field, mainly reported certain critical features of jerky plastic flow and associated strain bursts in HEAs [34; 35; 36] but did not fully succeed in directly associating their temporal dynamics with bulk substructural features [37]. In-situ characterization of nanoscale deformation patterns in combination with AE experiments has been practically challenging due to the complex microstructural origins of acoustic signals, highly-specialized instrumentation, and sample-preparation difficulties. Here in this study, our aim is to explore such microstructure-property correlations in the single-phase face-centered cubic (fcc) NiCoCrFeMn and NiCoCr as two exemplary HEA and MEA.
Our motivation for revisiting plasticity in deforming Cantor and NiCoCr alloys is twofold: _i_) we aim to replicate laboratory-based investigations of dislocation avalanches using model simulation systems of NiCoCrFeMn and NiCoCr alloys; _ii_) we seek potential microstructural footprints in avalanche statistics to gain further understanding of the underlying atomic-level deformation mechanisms. To this end, we perform atomistic simulations of model Cantor and NiCoCr alloys under uniaxial tension and analyze serration features of the stress response together with the incurred slip patterns within the bulk sample. As a comparative study, we further investigate avalanche properties in pure Ni, which lacks the chemical/microstructural heterogeneity element (owing to lattice distortion) present in the Cantor and NiCoCr alloys but still features nontrivial avalanche properties. We, in particular, probe deformation rate effects on the spatio-temporal evolution of slip events and their statistics at room temperature. We find that dislocation avalanches in the slowly driven HEA and MEA exhibit a scale-free process characterized by asymptotic power-law regimes and critical scaling exponents that govern serrated plastic flow but show minimal variations with respect to the chemical composition. Our findings indicate that the morphology of microstructural changes is strongly rate-dependent and exhibits meaningful correlations with avalanche size statistics at slow driving rates. The paper's layout is as follows. In Sec. II, we describe the numerical setup, sample preparation, loading protocols, and relevant simulation details including interatomic forces and the tensile-test description. Section III presents our simulation results relevant to investigations of dislocation avalanches, followed by a phase analysis of the microstructure as well as their potential correlations under different deformation rates. In this context, Sec. III.1 examines avalanche statistics (size and duration) to characterize their rate-dependence. Microstructural signatures of dislocation avalanches are discussed in Sec. III.2 and III.3. Section IV presents relevant discussions and conclusions. Figure 1: **a)** The evolution of the normal stress \(\sigma_{zz}\) and spontaneous stress rate \(\partial_{t}\sigma_{zz}\) with time \(t\) at \(T=300\) K and \(\dot{\epsilon}_{zz}=10^{8}\) s\({}^{-1}\) corresponding to the Cantor alloy. **b)** A magnified view of a) with the \(i\)-th avalanche starting at \(t_{i}\) and having the duration of \(\mathcal{T}_{i}\). **c)** Fraction of atoms \(\rho_{\rm hcp}\) and \(\rho_{\rm bcc}\) with hcp and bcc arrangement versus time. **d)** A magnified view of c). **e)** Illustration of the uniaxial setup with the formation of the hcp clusters of atoms within [111] slip planes during an avalanche. **f)** Crystallographic orientation of a hcp cluster of size \(s_{\rm hcp}\) described by the azimuthal angle \(\theta\) and polar angle \(\phi\). ## II Methods & Protocols We performed molecular dynamics simulations in LAMMPS [38] by implementing atomistic samples of size \(N=10,000\) within a three-dimensional periodic cell. We prepared cubic samples with dimension \(L=40\) Å along the \(x[100]\), \(y[010]\), and \(z[001]\) directions. The NPT ensembles were implemented via a Nose-Hoover thermostat and barostat with relaxation time scales \(\tau_{d}^{\rm therm}=10\) fs and \(\tau_{d}^{\rm bar}=100\) fs (\(1\) fs \(=10^{-15}\) s). We also set the discretization time step to \(\Delta t=1.0\) fs.
Samples were initially prepared via an energy minimization at \(T=0\) K (at a fixed pressure) and subsequently thermalized at room temperature (\(T=300\) K) and constant pressure \(P=0\) bar for the duration of \(100\) ps prior to loading. The interatomic forces were derived from the modified embedded-atom method potential developed recently by Choi et al. [39]. Tensile tests were carried out by deforming the \(z\) dimension of the simulation box at constant strain rates \(\dot{\epsilon}_{zz}=10^{8}-10^{10}\) s\({}^{-1}\) with \(P_{xx}=P_{yy}=0\). We also checked that the stress response and associated statistics are almost rate-independent below \(\dot{\epsilon}_{zz}=10^{8}\) s\({}^{-1}\) corresponding to a quasi-static limit. ## III Results We performed a series of tensile tests on model Cantor, NiCoCr, and Ni alloys at different deformation rates and room temperature. The evolution of the (normal) stress \(\sigma_{zz}\) with the tensile strain \(\epsilon_{zz}\) and the associated rate \(\partial_{t}\sigma_{zz}\) are plotted in Fig. 1(a) at \(T=300\) K and \(\dot{\epsilon}_{zz}=10^{8}\) s\({}^{-1}\) corresponding to NiCoCrFeMn. Upon yielding, the mechanical response is characterized by a stick-slip-type behavior with abrupt force drops preceded by longer stress build-up periods as in Fig. 1(b). Similarly, the stress rate exhibits an intermittent dynamics with quiescent periods (i.e., \(\partial_{t}\sigma_{zz}\simeq 0\)) that are frequently interrupted by fairly short-lived bursts of events. The latter are typically accompanied by slips across close-packed \(\{111\}\) atomic planes in a fcc structure to form hcp layers within stacking fault regions as in Fig. 1(e) and (f). Figure 1(c) and (d) displays the fraction of atoms with hcp(bcc) structure \(\rho_{\rm hcp(bcc)}\) and its evolution with time \(t\). In what follows, we identify individual stress avalanches and probe their statistics (i.e., size and duration) showing non-trivial correlations with the structure of slip planes at low strain rates. ### Avalanche Analysis: Size & Duration We define the avalanche size as the magnitude of the stress drop \(S=-\int_{t_{i}}^{t_{i}+\mathcal{T}_{i}}\partial_{t}\sigma_{zz}\ dt\) corresponding to event \(i\) initiated at \(t_{i}\) with duration \(\mathcal{T}_{i}\) as illustrated in Fig. 1(b). During the avalanche period, the stress rate exceeds a threshold value set by the median rate, \(-\partial_{t}\sigma_{zz}\geq\dot{\sigma}_{\rm th}\) with \(\dot{\sigma}_{\rm th}={\rm median}(-\partial_{t}\sigma_{zz})\). We also checked the robustness of our results against variations in \(\dot{\sigma}_{\rm th}\) (data not shown). To remove the thermal noise from the stress signal, we use the optimal (Wiener) filtering (see the Supplementary Materials for further details) and ensure that our avalanche analysis is performed on sufficiently smooth timeseries. We systematically gather statistics of avalanches incurred at the strain interval \(\epsilon_{zz}=0.2-1.0\) within the steady-state flow regime. To improve the collected statistics, we consider fairly large statistical ensembles with order \(10-100\) realizations per deformation rate. The avalanche analysis is performed on an extensive dataset, typically including around \(10^{3}-10^{4}\) avalanches in each case, to ensure the robustness and accuracy of the estimated scaling exponents. 
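A minimal sketch of the avalanche extraction described above, together with a maximum-likelihood estimate of the size exponent \(\tau\), is given below. It assumes an already-smoothed (e.g., Wiener-filtered) stress signal; the Clauset-style continuous MLE is a standard estimator, not necessarily the authors' exact fitting procedure.

```python
import numpy as np

def avalanches(time, stress):
    """Detect stress-drop avalanches as in Sec. III.1: events where the drop
    rate exceeds the median of -d(sigma)/dt; the size S is the integrated
    stress drop over the event and T its duration."""
    rate = np.gradient(stress, time)
    thresh = np.median(-rate)
    active = -rate >= thresh
    sizes, durations = [], []
    i = 0
    while i < len(active):
        if active[i]:
            j = i
            while j < len(active) and active[j]:
                j += 1
            sizes.append(-np.trapz(rate[i:j], time[i:j]))   # S = -int d(sigma)
            durations.append(time[j - 1] - time[i])
            i = j
        else:
            i += 1
    return np.array(sizes), np.array(durations)

def powerlaw_exponent(S, S_min):
    """Continuous maximum-likelihood estimate of tau for P(S) ~ S^-tau
    above the cut-off S_min."""
    s = S[S >= S_min]
    return 1.0 + len(s) / np.sum(np.log(s / S_min))
```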
Figure 2(a-f) displays the scatter plot of the avalanche size \(S\) and event duration \(\mathcal{T}\), scaled by \(\dot{\epsilon}_{zz}^{-1}\), as well as size distributions \(P(S)\) at \(T=300\) K and various rates \(\dot{\epsilon}_{zz}\) corresponding to the Cantor alloy, NiCoCr, and pure Ni. The scatter plots in Fig. 2(a), (c), and (e) demonstrate that, statistically speaking, larger avalanches tend to have longer durations, with a scaling behavior that may be described on average as \(\langle S\rangle\propto\mathcal{T}^{\gamma}\) with \(\gamma\simeq 1.0\). The observed scaling regime appears to be fairly limited at the slowest rate \(\dot{\epsilon}_{zz}=10^{8}\) s\({}^{-1}\), with the duration tending to saturate at large avalanche sizes, possibly due to size effects. In Fig. 2(b), the size distribution associated with the slowest rate in the Cantor alloy decays as a power-law \(P(S)\propto S^{-\tau}\) (above a certain cut-off size \(S_{c}\simeq 10^{-1}\) GPa) which spans at least two decades in \(S\) and seems to be well-predicted by the mean-field estimate \(\tau=3/2\)[1]. As \(\dot{\epsilon}_{zz}\) is increased toward \(10^{10}\) s\({}^{-1}\), the size distributions tend to exhibit a steeper fall-off, with a nearly exponential-like drop at the fastest deformation rate. We observe very similar trends for avalanche statistics in NiCoCr, as in Fig. 2(c) and (d), in terms of the rate dependence, except for a comparatively limited power-law scaling regime associated with \(P(S)\) at \(\dot{\epsilon}_{zz}=10^{8}\) s\({}^{-1}\). The scaling between the size and duration in the case of pure Ni closely resembles that of the two alloys, as in Fig. 2(e). However, the avalanche size exponent in Fig. 2(f) is notably shallower than the mean-field prediction, \(\tau<3/2\), at the slowest rate but tends to become more mean-field-like at the intermediate strain rate. We note that the observed power-law behavior in Ni appears to be quite sensitive to the filtering process, and slight variations in the relevant parameters lead to a better agreement with theoretical predictions (refer to Fig. S3(c)). ### Microstructural Analysis: Crystal Phase, Cluster Statistics, and Slip Planes As a structural metric associated with dislocation avalanches, we identified atomic structure types via the common neighbor analysis implemented in OVITO [40], seeking atoms in hexagonal close-packed (hcp) and body-centered cubic (bcc) arrangements. The hcp atoms are associated with stacking faults, which are bounded by partial dislocations in a face-centered cubic (fcc) structure. We investigated statistics of hcp clusters (including size and orientation) and sought their correlations with stress avalanches. Here a cluster is defined as a set of adjacent atoms with the same structural type --hcp in this study. As a basic statistical property, \(P(s_{\rm hcp})\) denotes the probability distribution function associated with the number of clusters containing \(s_{\rm hcp}\) atoms. The radius of gyration associated with a cluster of size \(s_{\rm hcp}\) may also be defined via \(r_{g}^{2}=\sum_{i=1}^{s_{\rm hcp}}|\vec{r}_{i}-\vec{r}_{0}|^{2}/s_{\rm hcp}\) with the center of mass \(\vec{r}_{0}=\sum_{i=1}^{s_{\rm hcp}}\vec{r}_{i}/s_{\rm hcp}\). Figure 3(a), (c), and (e) illustrate that \(s_{\rm hcp}\propto r_{g}^{d_{f}}\) with fractal dimension \(d_{f}\simeq 2.0\). This almost agrees with Fig. 1(e) and (f) in the sense that, on average, hcp-type clusters tend to form fairly planar structures.
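The per-cluster radius of gyration and the fractal-dimension fit can be computed directly from atom positions and cluster labels (e.g., as exported from OVITO). The sketch below ignores periodic images, an assumption that breaks down for system-spanning clusters.

```python
import numpy as np

def gyration_radii(positions, cluster_ids):
    """Per-cluster radius of gyration r_g, with r_g^2 the mean squared
    distance to the cluster's center of mass. Returns (sizes, radii)."""
    sizes, radii = [], []
    for cid in np.unique(cluster_ids):
        pts = positions[cluster_ids == cid]
        com = pts.mean(axis=0)
        rg = np.sqrt(((pts - com) ** 2).sum(axis=1).mean())
        sizes.append(len(pts))
        radii.append(rg)
    return np.array(sizes), np.array(radii)

def fractal_dimension(sizes, radii):
    """Fit s_hcp ~ r_g^{d_f} on log-log axes; d_f close to 2 indicates
    planar (platelet-like) clusters."""
    mask = radii > 0
    d_f, _ = np.polyfit(np.log(radii[mask]), np.log(sizes[mask]), 1)
    return d_f
```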
At the slowest rate \(\dot{\epsilon}_{zz}=10^{8}\) s\({}^{-1}\), the proposed scaling seems to be quite consistent with the observations for NiCoCr and Ni in Fig. 3(c) and (e), whereas the Cantor alloy in Fig. 3(a) exhibits a slightly larger scatter in measurements, likely due to microstructural heterogeneities. Figure 3(b), (d), and (f) plot \(P(s_{\rm hcp})\) at different strain rates. We note that the cluster size distributions develop fairly long tails with decreasing \(\dot{\epsilon}_{zz}\), due to system-spanning slip planes, with a decay that can be best described by a power-law \(P(s_{\rm hcp})\propto s_{\rm hcp}^{-\tau_{c}}\) over at least two decades in \(s_{\rm hcp}\). Here the distributions show a meaningful rate-dependence, with the trend already observed for the avalanche size distributions in Fig. 2(b) and (d). Nevertheless, we see a slower-than-predicted decay of \(P(s_{\rm hcp})\) at the slowest rate --\(\tau_{c}<\tau_{\rm pred}=2\) as inferred from percolation theory [41]-- for the three metals, which crosses over to the predicted (mean-field) behavior at faster deformation rates. Figure 3: Cluster size statistics at different strain rates \(\dot{\epsilon}_{zz}\) associated with NiCoCrFeMn, NiCoCr, and Ni. **a, c, e)** Scatter plot of cluster size \(s_{\rm hcp}\) and associated radius of gyration \(r_{g}\). **b, d, f)** Cluster size distribution \(P(s_{\rm hcp})\). The dashdotted lines denote power laws **a, c, e)** \(s_{\rm hcp}\propto r_g^{d_{f}}\) with \(d_{f}=2\) and **b, d, f)** \(P(s_{\rm hcp})\propto s_{\rm hcp}^{-\tau_{c}}\) with \(\tau_{c}=2\). The error bars indicate standard errors. The data are shifted vertically for the sake of clarity. Having analyzed the size distributions of hcp clusters, we now turn to their crystallographic orientation relationship. The latter is described based on the azimuthal angle \(\theta\) and polar angle \(\phi\) measured from the lattice coordinate frame as in Fig. 1(f). The density plots presented in Fig. 4 correspond to orientation maps \((\theta,\phi)\) at three different strain rates. Within these maps, the black (solid) circles denote the \(\{111\}\) and \(\{100\}\) orientations corresponding to the undeformed crystals as in Fig. 1(e). Our data in Fig. 4(a) confirm that, statistically speaking, hcp glide planes tend to align with four different sets of \(\{111\}\) close-packed planes in the Cantor alloy. The density maps also suggest a fair amount of activation in the vicinity of the \(\{100\}\) family, which is mostly due to the loading-induced _reorientation_ of the slip planes. We note that uniaxial tension is performed normal to the \((001)\) plane with \(\phi=0^{\circ},180^{\circ}\). There exists a certain amount of data scatter, possibly attributed to clusters being of very small size \(s_{\rm hcp}\ll 10\) and/or numerical artifacts arising from the non-planar topology of hcp clusters. An increase of the deformation rate in Fig. 4(b) and (c) tends to reorient hcp planes to a larger extent, as illustrated by the broader and/or denser distributions around \(\{100\}\). As for NiCoCr in Fig. 4(d), a relatively insignificant reorientation of slip planes appears at the slowest rate but intensifies with increasing rates in Fig. 4(e) and (f). Another observation is the preferential reorientation around the \(\{110\}\) family of planes (i.e., \(\phi=90^{\circ}\) and \(\theta=\pm 45^{\circ},\pm 135^{\circ}\)). In the case of pure Ni in Fig.
4(g), (h), and (i), the metal appears to exhibit fairly consistent features with the Cantor alloy but with a slightly weaker reorientation of slip planes. ### Correlation Analysis: Dislocation Avalanches & Microstructure We carried out a correlation analysis between avalanche sizes \(S\) and associated changes in the fraction of hcp atoms \(\rho_{\rm hcp}\) incurred over the duration of individual avalanches (refer to Fig. 1(c) and (d)). The latter is defined as \(\Delta\rho_{\rm hcp}=\int_{t_{i}}^{t_{i}+\mathcal{T}_{i}}|\partial_{t}\rho_{\rm hcp}|\ dt\) corresponding to the \(i\)-th avalanche at \(t_{i}\) with duration \(\mathcal{T}_{i}\). Figure 4: Orientation density maps \((\theta,\phi)\) associated with hcp glide planes at various deformation rates \(\dot{\epsilon}_{zz}\) corresponding to **a-c)** Cantor alloy, **d-f)** NiCoCr, and **g-i)** pure Ni. The (black) symbols denote different crystallographic planes as in Fig. 1. The red and blue colors denote high and low densities, respectively. We also obtained the (linear) correlation coefficient \(c_{XY}=\langle\hat{X}\hat{Y}\rangle\) between the two observables \(X=\log_{10}S\) and \(Y=\log_{10}\Delta\rho_{\rm hcp}\). Here \(\hat{X}\) indicates the deviation from the mean \(\langle X\rangle\), normalized by the standard deviation \({\rm std}(X)\) associated with each variable. The above analysis was repeated for the bcc arrangement, with the results shown as the scatter plots of Fig. 5 at multiple \(\dot{\epsilon}_{zz}\). Our data in Fig. 5(a) and (b) exhibit a large scatter in the Cantor alloy, but the observed trend indicates meaningful variations between the two sets of observables at the slowest driving rate \(\dot{\epsilon}_{zz}=10^{8}\) s\({}^{-1}\). Here, (positive) correlations between \(S\) and \(\Delta\rho_{\rm hcp}\) (or \(\Delta\rho_{\rm bcc}\)) imply that avalanches of large size typically correspond to considerable microstructural changes, most likely associated with fcc-to-hcp and fcc-to-bcc phase transformations. \(\Delta\rho_{\rm hcp}\) features a relatively stronger association with \(S\), as demonstrated by larger correlation coefficients \(c_{XY}\), suggesting that plastic avalanches are presumably rooted in partial slips and the formation of layers of hcp stacking in fcc NiCoCrFeMn. With increasing \(\dot{\epsilon}_{zz}\) toward \(\dot{\epsilon}_{zz}=10^{10}\) s\({}^{-1}\), such correlations become less pronounced, in particular between avalanche sizes \(S\) and \(\Delta\rho_{\rm bcc}\). Our correlation analyses associated with NiCoCr and Ni, as in Fig. 5(c-f), reveal similar correlation patterns compared with the Cantor alloy. ## IV Conclusions & Discussions We initiated this study with the aim of _i_) replicating empirical observations on the serrated stress response in plastically deforming Cantor and NiCoCr alloys and _ii_) inferring structural signatures of dislocation avalanches and associated rate effects. As for _i_), our atomistic simulations have closely reproduced experimental data on the scale-free nature of dislocation avalanches in HEAs. This scale-invariance has been evidenced by robust scaling features (under slow "quasi-static" drive) described by asymptotic power-law distributions and associated critical exponents that, in general, match empirical evaluations as well as mean-field predictions. More specifically, the avalanche size exponents we measure are fairly compatible with mean-field estimates (\(\tau=\frac{3}{2}\)[1; 12; 42]) and within the experimentally reported range 1.3\(-\)2.0 [42; 43] in chemically complex alloys.
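For completeness, the standardized correlation coefficient \(c_{XY}\) defined in the correlation analysis of Sec. III.3 reduces to the Pearson coefficient of the log-transformed observables; a minimal sketch:

```python
import numpy as np

def log_correlation(S, d_rho):
    """c_XY between X = log10(S) and Y = log10(d_rho), standardized as in the
    text; equivalent to the Pearson coefficient of the log observables."""
    X, Y = np.log10(S), np.log10(d_rho)
    Xh = (X - X.mean()) / X.std()
    Yh = (Y - Y.mean()) / Y.std()
    return (Xh * Yh).mean()
```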
Notably, NiCoCr exhibits a fairly restricted scaling regime associated with the avalanche size distributions, possibly attributed to strong heterogeneities and large atomic misfits [44] driving this alloy away from criticality. One should also take the above estimates with a grain of salt, as various metrics have been utilized as avalanche size in the literature, including (but not limited to) slip magnitude, stress drop, and emitted (acoustic) energies. The observed rate effects on the statistics of dislocation avalanches are in line with experimentation on HEAs/MEAs that indicates dynamical cross-overs between different serration types at a certain range of deformation rates (see [10] and references therein). Beyond a certain rate threshold, the three deforming metals undergo a dynamical transition into a subcritical state characterized by non-critical, exponential-like statistics of avalanches. To discern the role of local lattice distortions associated with the Cantor and NiCoCr alloys, we have additionally probed statistics of avalanches in pure Ni, showing fairly consistent features with the former metals in terms of the overall rate dependency. Nevertheless, we have found a relatively shallow decay of avalanche size distributions within the quasi-static regime exhibiting a non-mean-field behavior, \(\tau<\frac{3}{2}\), which was also reported in two-dimensional dislocation dynamics simulations [45]. This might be indicative of the relative abundance of big avalanches over small ones in the pure metal, possibly due to the absence of chemical/structural disorder. In certain alloys, this heterogeneity element may act as effective obstacles against propagating avalanches and, therefore, strongly influence their statistical properties, often resulting in the dominance of smaller-size avalanches. Figure 5: Correlations between avalanche size and associated change in crystal structures corresponding to NiCoCrFeMn, NiCoCr, and pure Ni. Scatter plot of avalanche size \(S\) and the creation/annihilation ratio of **a, c, e)** hcp structure \(\Delta\rho_{\rm hcp}\) and **b, d, f)** bcc structure \(\Delta\rho_{\rm bcc}\) at various rates \(\dot{\epsilon}_{zz}\). Here \(c_{XY}\) denotes the corresponding correlation coefficient of \(\Delta\rho_{\rm hcp}\) and \(\Delta\rho_{\rm bcc}\) with \(S\). We conjecture that the disorder strength and associated length might be relevant parameters that govern the critical exponent \(\tau\) and its deviation from the mean-field prediction. In line with _ii_), our microstructural analyses have demonstrated nontrivial scaling features associated with the dynamics and topology of slip planes that could be better understood in the context of a percolation transition. In this framework, we find robust power-law distributed cluster sizes with a scaling exponent \(\tau_{c}\simeq 1.0\) at slow driving rates, which is shallower than the mean-field prediction \(\tau_{c}^{\rm mf}=2\)[41] in the studied metals but crosses over to the latter exponent at intermediate rates, beyond which the cluster size distribution enters a subcritical regime, analogous to the avalanche size distributions. We note that Ni features a non-mean-field scaling for both the avalanche size \(S\) and cluster size \(s_{\rm hcp}\) distributions in the rate-independent regime. As for the Cantor alloy (and to some degree the NiCoCr alloy), the former distribution indicates an asymptotic mean-field behavior. This is rather counter-intuitive, as one would naively expect the two observables to intercorrelate strongly.
Our speculation is that, due to the underlying disorder in the HEA (and/or MEA), plastic avalanches may primarily occur as a result of the accumulation of broadly distributed yet _randomly_-triggered individual slips. By contrast, the observed trends associated with pure Ni might be indicative of spatial and/or temporal correlations between the latter. Our correlation analysis may establish a direct mapping between avalanches of a certain size and the incurred slip patterns at slow strain rates. Such a close correspondence between the two variables is not maintained at finite deformation rates, which could be taken as another indication that high strain rates and/or stresses tend to drive these systems away from criticality. Our findings on the strong coupling between the temporal and spatial evolution of dislocation avalanches may contribute to ongoing efforts within the materials science and physics communities that aim to infer the underlying morphology and microstructural changes by solely probing the mechanical signals. Recently, state-of-the-art machine learning models have emerged as robust computational tools to classify/reconstruct the bulk microstructure based on feature extraction from surface measurements (e.g., frequency content, magnitudes, signal duration, and energy scales). Combined with in-situ imaging techniques, such surrogate models and their predictions can provide valuable insight into the microstructural origins of plasticity in a timely, efficient, and accurate manner. ###### Acknowledgements. This research was funded by the European Union Horizon 2020 research and innovation program under grant agreement no. 857470 and from the European Regional Development Fund via the Foundation for Polish Science International Research Agenda PLUS program grant no. MAB PLUS/2018/8.
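For readers who wish to reproduce the correlation analysis of Sec. III, the computation reduces to a few lines of array arithmetic. The sketch below is our illustration (with synthetic stand-in data rather than the MD trajectories) of the per-avalanche change \(\Delta\rho_{\rm hcp}\) and the correlation coefficient \(c_{XY}\) as defined in the text:

```python
import numpy as np

def delta_rho(rho, t, t_i, T_i):
    """Discretized version of the per-avalanche change: the integral of
    |d rho / dt| over the avalanche window [t_i, t_i + T_i], i.e. the total
    variation of rho(t) accumulated during the avalanche."""
    window = rho[(t >= t_i) & (t <= t_i + T_i)]
    return np.abs(np.diff(window)).sum()

def correlation_coefficient(S, d_rho):
    """c_XY = <X_hat * Y_hat> for X = log10(S), Y = log10(delta rho), where
    X_hat is the deviation from the mean normalized by the std (Sec. III)."""
    X, Y = np.log10(S), np.log10(d_rho)
    X_hat = (X - X.mean()) / X.std()
    Y_hat = (Y - Y.mean()) / Y.std()
    return np.mean(X_hat * Y_hat)

# Synthetic avalanche catalog standing in for the MD output: power-law-ish
# sizes S, and microstructural changes correlated with S plus noise.
rng = np.random.default_rng(0)
S = rng.pareto(1.5, size=2000) + 1.0
d_rho_hcp = S ** 0.8 * rng.lognormal(0.0, 0.5, size=S.size)
print(f"c_XY = {correlation_coefficient(S, d_rho_hcp):.2f}")
```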
2307.00736
Ultra-high Q alumina optical microresonators in the UV and blue bands
UV and visible photonics enable applications ranging from spectroscopic sensing to communication and quantum information processing. Photonic structures in these wavelength regimes, however, tend to experience higher loss than their IR counterparts. Particularly in the near-UV band, on-chip optical microresonators have not yet achieved a quality factor beyond 1 million. Here we report ultra-low-loss photonic waveguides and resonators patterned from alumina thin films prepared by a highly scalable atomic layer deposition process. We demonstrate an ultra-high Q factor of 1.5$\,\times\,$10$^6$ at 390 nm, a record value at UV bands, and 1.9$\,\times\,$10$^6$ at 488.5 nm.
Chengxing He, Yubo Wang, Carlo Waldfried, Guangcanlan Yang, Jun-Fei Zheng, Shu Hu, Hong X. Tang
2023-07-03T03:47:19Z
http://arxiv.org/abs/2307.00736v2
# Ultra-high Q alumina optical microresonators in the UV and blue bands ###### Abstract UV and visible photonics enable applications ranging from spectroscopic sensing to communication and quantum information processing. Photonic structures in these wavelength regimes, however, tend to experience higher loss than their IR counterparts. Particularly in the near-UV band, on-chip optical microresonators have not yet achieved a quality factor beyond 1 million. Here, we report ultra-low-loss photonic waveguides and resonators patterned from alumina thin films prepared by a highly scalable atomic layer deposition process. We demonstrate an ultra-high Q factor of \(1.5\times 10^{6}\) at 390 nm, a record value at UV bands, and \(1.9\times 10^{6}\) at 488.5 nm. 1Department of Electrical Engineering, Yale University, New Haven, CT 06520, USA 2Entegris Inc., Billerica, MA 01821, USA 3Energy Sciences Institute, Yale University, West Haven, CT 06516, USA *[email protected] ## 1 Introduction UV and visible band integrated photonics has witnessed rapid progress in recent years. Applications such as atomic clocks [1], biochemical sensing [2, 3], visible light communications [4], quantum sensing [5, 6], and quantum information processing based on trapped ions [7, 8] and atoms [9, 10] all call for UV and blue band photonic integrated circuits with high scalability and low loss. Yet, low-loss photonics at short wavelengths remains difficult to achieve, as material absorption dramatically increases when the photon energy approaches the bandgap of a material, and Rayleigh scattering scales as \(\lambda^{-4}\). One approach to reducing the loss at short wavelengths is to use very thin silicon nitride (Si\({}_{3}\)N\({}_{4}\)) waveguides cladded with low-loss silica [9, 11] so that the propagation mode is weakly confined. In this way, absorption inherent to the Si\({}_{3}\)N\({}_{4}\) waveguide core can be diluted, while the scattering loss induced by waveguide sidewall roughness is also reduced due to the reduced sidewall height. However, so far, devices employing Si\({}_{3}\)N\({}_{4}\) as the waveguide core still show strong absorption in the UV and blue bands [12, 13]. Further reduction of the waveguide loss at short wavelengths requires a new waveguide core material that exhibits a large bandgap while still maintaining a higher refractive index than the low-loss cladding material, usually silica, to provide good confinement. One such candidate is AlN, which has a large bandgap of 6.2 eV. In our previous work, we employed single-crystalline AlN as the waveguide core [14] and demonstrated a quality factor of 210 k at 390 nm, which was a significant advance for devices operating in the near-UV bands, but still below the state of the art achieved by IR and near-visible optical resonators. An alternative to AlN for short-wavelength passive photonic integrated platforms is amorphous alumina. Recent progress in deposition methods has greatly improved the quality of amorphous alumina films, which now demonstrate a bandgap comparable to that of bulk sapphire (7.0\(-\)8.3 eV for ALD alumina [15, 16] vs. 8.8 eV [17]). Many reports on amorphous alumina films deposited via either reactive sputtering or atomic layer deposition (ALD) have also confirmed very low loss at short wavelengths (< 0.3 dB/cm at 405 nm) [18, 19, 20], as well as their compatibility with photonic platforms, either standalone [21] or combined with other materials [22].
Recently, near-UV extended cavity diode lasers (ECDLs) were demonstrated by interfacing low-loss alumina waveguides with InGaN semiconductor amplifiers [23]. Thanks to the amorphous microstructure of these alumina films, the associated deposition process places no requirements on the lattice structure of the substrate on which the film is grown, thus relaxing requirements on the substrate material. Furthermore, both the ALD and reactive sputtering processes are CMOS compatible, paving the way for CMOS integration with amorphous alumina-based photonics. In this letter, we leverage an industrial ALD process to grow alumina as the waveguide core. This highly scalable process is capable of providing uniform growth coverage to substrates over 20" in diameter and can coat hundreds of 4" wafers in a single batch. Because the absorption of UV and blue light is low in alumina, the propagation mode can be fully supported in the waveguide core, minimizing the scattering loss at the top and bottom surfaces of the alumina film. With a new waveguide core material and corresponding design principles, our resonators demonstrate an ultra-high Q of \(1.5\times 10^{6}\) at 390 nm and \(1.9\times 10^{6}\) at 488.5 nm. These are the highest quality factors reported at the corresponding wavelengths for resonators featuring a high-confinement design, including previously demonstrated alumina resonators [18, 23]. ## 2 Design and Simulation We employ a shallow-etch geometry to minimize sidewall scattering loss. The resonators are air-cladded to promote refractive index contrast with the alumina core, reducing the etch depth needed for confinement. The etch depth is optimized by simulating the radiation loss of waveguides subject to different etch depths in Lumerical. For ring resonators with a 400 \(\upmu\)m radius, it was found that the radiation loss can be suppressed to less than 0.06 dB/cm at 488.5 nm and less than 0.001 dB/cm at 390 nm when the etch depth is greater than 80 nm out of the 400 nm thick alumina film. During fabrication, the etch depth is targeted at 100 nm, deep enough that the resonators are not radiation-loss limited. The width of the waveguide is set to 4.5 \(\upmu\)m so that the outer sidewall provides most of the confinement for the propagation mode, and the overlap between the propagation mode, particularly the TE00 mode, and the inner sidewall is minimal, further reducing scattering loss. It should be noted that this wide waveguide allows for multiple propagation modes; however, as Fig. 2 shows, by optimizing the bus-to-resonator coupling, the coupling to the higher-order modes can be greatly suppressed. We utilized an under-coupled point-coupling design to reduce coupling-induced loss and probe the intrinsic alumina resonator loss. The coupling geometry between the straight bus waveguide and the ring resonator is optimized by varying the bus waveguide width and the gap between the bus waveguide and the ring resonator. We use a combination of simulations with the FIMMWAVE software and experimental data to optimize the parameters. The optimal coupling condition for 390 nm light is a 0.65 \(\upmu\)m wide bus waveguide and a 1 \(\upmu\)m gap between the bus waveguide and the ring resonator.
For 488.5 nm light, the optimum coupling condition is a 0.75 \(\upmu\)m wide bus waveguide and a 1.1 \(\upmu\)m gap. We also fabricated and measured microrings with smaller gaps, which provide stronger coupling. ## 3 Device fabrication and results The fabrication started with 4 \(\upmu\)m of wet thermal oxide grown on silicon wafers. The test wafers were then coated at Entegris by applying a blanket layer of atomic layer deposition (ALD) amorphous alumina. The ALD deposition of the alumina coating was performed by sequential cycling of TMA / H\({}_{2}\)O with pulsing times between 0.05 s and 0.15 s and nitrogen purge times between 18 s and 20 s at temperatures between 180 \({}^{\circ}\)C and 250 \({}^{\circ}\)C, using a 20" diameter crossflow thermal ALD coating system custom-built at Entegris in a class 10,000 clean room. The growth rate of this deposition recipe is approximately 1.1 Å / cycle. The thickness of the alumina coating was measured to be a nominal 420 nm, determined by spectroscopic reflectometry using an Angstrom Sun SR300 system. The ring resonators and associated bus waveguides were defined with a 100 kV electron-beam lithography system (Raith EBPG 5200+) with a negative FOx-16 resist. To mitigate electron charging effects due to the highly insulating alumina and silica layers, 200 nm of poly(4-styrene sulfonic acid) (PSSA) was spun on top of the FOx-16 resist before 10 nm of gold was sputtered to provide grounding for stray electrons. The PSSA is water soluble to aid the removal of the gold after e-beam lithography. After the removal of the gold by dipping in water, the chip was developed in 25 percent tetramethylammonium hydroxide (TMAH) developer. The pattern was then transferred to the alumina layer using an Oxford PlasmaPro 100 Cobra Inductively Coupled Plasma Reactive Ion Etching (ICP-RIE) system with a BCl\({}_{3}\)-based etching recipe. Leftover FOx-16 resist was then removed from the chip by dipping the chip in 10:1 buffered oxide etch for 10 seconds. To further reduce the absorptive loss in the alumina waveguide, the chip was annealed in the atmosphere at 500 \({}^{\circ}\)C (for the 390 nm chip) and 600 \({}^{\circ}\)C (for the 488.5 nm chip) for 5 hours to achieve the lowest loss while avoiding crystallization, which has been reported to take place above 800 \({}^{\circ}\)C [24, 25]. To characterize the ring resonators, we construct a sweeping blue/UV laser by frequency doubling a Ti-Sapphire laser (M2 SolsTiS, 700-1000 nm) to 390 nm and 488.5 nm. The Ti-Sapphire laser is locked to an external cavity, ensuring a <50 kHz linewidth, and the wavelength of the laser is precisely determined by a 0.1 pm resolution wavemeter, or 0.05 pm resolution after frequency doubling. To create a sweeping \(\sim\)390 nm laser, the \(\sim\)780 nm pump laser from the Ti:Sapphire laser is coupled to a lithium triborate (LBO) doubling crystal in a resonant cavity (M2 ECD-X). To create a sweeping \(\sim\)488.5 nm laser, the \(\sim\)977 nm pump laser from the Ti:Sapphire laser is sent through a Magnesium-doped Periodically Poled Lithium Niobate (MgO:PPLN, Covesion MSHG 976-0.5-30) crystal to frequency double to 488.5 nm. The MgO:PPLN crystal is put in an oven, whose temperature is adjusted as the frequency of the pump laser is scanned to maintain the phase-matching condition of the MgO:PPLN crystal for maximal frequency-doubling efficiency. Figure 1: (a) Schematic of the ALD chamber, capable of processing hundreds of 4" wafers at the same time.
(b) Microscope image of patterned ring resonators and interconnection waveguides. Inset: bus waveguide and ring resonator coupling region. (c) SEM image of a cleaved waveguide facet and EDS analysis revealing the elemental mapping of (d) Al and (e) Si. It should be noted that the output power from the Ti:Sapphire laser is wavelength dependent as the spacing of the etalon in the resonant cavity is continuously tuned. The extended transmittance spectrum covering two FSRs is therefore stitched from four continuous scans around 390 nm (five around 488 nm), with the etalon spacing being retuned for maximum power output at the beginning of each piecewise scan. Fig. 2 shows the transmittance of the alumina ring resonator at \(\sim\) 390 nm and \(\sim\) 488.5 nm, respectively. For the extended transmittance spectrum, the pump Ti:Sapphire laser is scanned at 500 MHz/s, corresponding to a scan speed of 1 GHz/s after frequency doubling. For the zoomed-in resonances depicted in the insets, to ensure high wavelength resolution, the pump Ti:Sapphire laser is scanned at 200 MHz/s (400 MHz/s after frequency doubling). Multiple sets of resonance peaks can be observed for both the 390 nm and 488.5 nm cases, as the 4.5 \(\upmu\)m wide waveguide supports multiple TE transmission modes. Despite this, the coupling to the TE00 mode is optimized while the coupling to the other modes is suppressed. At 390 nm, the TE00 modes exhibit an FSR of 65.6 GHz, while the TE10 modes have an FSR of 66.3 GHz, as predicted by the 400 \(\upmu\)m radius ring geometry. Even higher TE modes are not prominent. One of the TE00 resonance peaks demonstrates a loaded Q factor of 1.2 M with an extinction ratio of 2.4 dB. Using the formula \(Q_{\mathrm{int}}=\frac{2Q_{\mathrm{L}}}{1+\sqrt{10^{-\mathrm{ER}/10}}}\) (here \(Q_{\mathrm{int}}\) stands for the intrinsic Q, \(Q_{\mathrm{L}}\) for the loaded Q, and ER for the extinction ratio in dB) for under-coupled conditions, we obtain an intrinsic Q of 1.5 M for this resonance. For the TE modes at \(\sim\) 488.5 nm, the FSR is 68.5 GHz for the TE00 modes and 68.6 GHz for the TE10 modes, with one of the TE00 resonance peaks demonstrating a high loaded Q of 1.4 M and an extinction ratio of 3.2 dB, corresponding to an intrinsic Q of 1.9 M. The current Q of our device is likely limited by the residual absorption of alumina and the scattering from the remaining alumina sidewall roughness, as the radiation-loss-limited Q for the resonator is calculated to be beyond \(10^{10}\) for both the TE00 and TE10 modes at wavelengths shorter than 500 nm. Since the modal absorption for the TE00 and TE10 modes is the same, the Q difference between the two sets of resonances can be attributed to coupling loss and scattering loss. The waveguide is also capable of transmitting TM modes; however, the confinement of the bus waveguide is weak for TM modes, and the radiation-loss-limited Q for the TM modes of the ring resonator is calculated to be <10 M. Thus, we did not perform any further measurements of TM mode transmittance. In Fig. 3, we compare the performance of our alumina ring resonator to other recent works on UV and blue band photonics. At wavelengths larger than 450 nm, low-confinement Si\({}_{3}\)N\({}_{4}\) ring resonators with > 1.5 mm radii still hold the record for quality factors [9, 11].
At shorter wavelengths, absorption inherent to the Si\({}_{3}\)N\({}_{4}\) waveguide core drastically impacts the performance of these devices. AlN was the star material for nanophotonic devices operating in the UV-blue band, and progress in AlN film quality boosted the quality factor of AlN-based resonators to up to \(2.1\times 10^{5}\) at 390 nm [14]. Alumina film deposited with reactive sputtering and ALD boasts an even larger bandgap than AlN and raised the quality factor record to \(4.7\times 10^{5}\) at 405 nm [23, 18]. With an ALD-deposited alumina film and optimized geometry, our alumina ring resonator raises the quality factor record in the UV band once again, to \(1.5\times 10^{6}\) at 390 nm. From the Q measurements, the propagation loss of the current ring resonator can be derived to be 0.84 dB/cm at 390 nm and 0.51 dB/cm at 488.5 nm based on the expression \(\alpha=4.343\times\frac{2\pi n_{g}}{Q_{\mathrm{int}}\lambda}\), where \(n_{g}\) is the group refractive index obtained through \(n_{g}=\frac{c}{2\pi R_{\mathrm{ring}}\times\mathrm{FSR}}\). The superior performance of the ring resonator in this paper can be attributed to the implementation of low-loss amorphous alumina as the waveguide core material, as well as the ring geometry, which utilizes shallow etching to reduce the scattering loss. ## 4 Conclusion In conclusion, we demonstrate ultra-high-Q UV and blue band ring resonators featuring low-loss ALD alumina as the waveguide core and an optimized geometry in which the propagation mode is strongly confined within the alumina waveguide core. This work pushes the intrinsic Q record of high-confinement ring resonators to 1.5 M at 390 nm and 1.9 M at 488.5 nm, corresponding to propagation losses of only 0.84 dB/cm and 0.51 dB/cm, respectively. Our results present an important solution in terms of material choice and waveguide design to achieve low-loss integrated photonics in the UV and blue bands. Figure 2: Transmittance spectra of alumina ring resonators at (a) 390 nm and (b) 488.5 nm showing two sets of transmitted TE modes. Insets, top left: simulated TE00 propagation modes in the ring resonators at 390 nm and 488.5 nm, respectively. Insets, bottom left: ring resonators under test at 390 nm and 488.5 nm, respectively. Insets, right: zoomed-in views of the TE00 resonances demonstrating the highest loaded and intrinsic Q of 1.2 M / 1.5 M at 390 nm and 1.4 M / 1.9 M at 488.5 nm, respectively. \(Q_{\mathrm{L}}\) stands for loaded Q, ER for extinction ratio in dB, and \(Q_{\mathrm{int}}\) for intrinsic Q. Funding. This work is funded in part by the Office of Naval Research (ONR) grant N00014-20-1-2693. The materials used in this work were developed under the support of the Department of Energy under grant No. DE-SC0019406. Acknowledgments. The authors thank Michael Rooks, Yong Sun, Lauren McCabe and Kelly Woods for support in the cleanroom and assistance in device fabrication. Disclosures. The authors declare no conflicts of interest. Data availability. Data is available upon reasonable request.
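As a numerical cross-check of the relations used above, the short script below (our illustration, not the authors' code) evaluates \(Q_{\rm int}\), \(n_{g}\), and \(\alpha\) from the quoted 390 nm values. Because the printed inputs are rounded, \(Q_{\rm int}\) comes out near \(1.4\times 10^{6}\) rather than the reported \(1.5\times 10^{6}\), while the loss formula reproduces the quoted 0.84 dB/cm:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def q_intrinsic(q_loaded, er_db):
    """Under-coupled ring: Q_int = 2 Q_L / (1 + sqrt(10^(-ER/10)))."""
    return 2.0 * q_loaded / (1.0 + math.sqrt(10.0 ** (-er_db / 10.0)))

def group_index(radius_m, fsr_hz):
    """n_g = c / (2 pi R_ring FSR)."""
    return C / (2.0 * math.pi * radius_m * fsr_hz)

def loss_db_per_cm(n_g, q_int, wavelength_m):
    """alpha = 4.343 * 2 pi n_g / (Q_int lambda), converted to dB/cm."""
    return 4.343 * 2.0 * math.pi * n_g / (q_int * wavelength_m) / 100.0

# 390 nm TE00 numbers quoted in the text (rounded):
n_g = group_index(400e-6, 65.6e9)                  # ~1.82
print(f"n_g ~ {n_g:.2f}")
print(f"Q_int ~ {q_intrinsic(1.2e6, 2.4):.2e}")    # ~1.4e6 from rounded inputs
print(f"alpha ~ {loss_db_per_cm(n_g, 1.5e6, 390e-9):.2f} dB/cm")  # ~0.84
```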
2307.07059
Vertex-based Networks to Accelerate Path Planning Algorithms
Path planning plays a crucial role in various autonomy applications, and RRT* is one of the leading solutions in this field. In this paper, we propose the utilization of vertex-based networks to enhance the sampling process of RRT*, leading to more efficient path planning. Our approach focuses on critical vertices along the optimal paths, which provide essential yet sparser abstractions of the paths. We employ focal loss to address the associated data imbalance issue, and explore different masking configurations to determine practical tradeoffs in system performance. Through experiments conducted on randomly generated floor maps, our solutions demonstrate significant speed improvements, achieving over a 400% enhancement compared to the baseline model.
Yuanhang Zhang, Jundong Liu
2023-07-13T20:56:46Z
http://arxiv.org/abs/2307.07059v1
# Vertex-Based Networks to Accelerate Path Planning Algorithms ###### Abstract Path planning plays a crucial role in various autonomy applications, and RRT* is one of the leading solutions in this field. In this paper, we propose the utilization of vertex-based networks to enhance the sampling process of RRT*, leading to more efficient path planning. Our approach focuses on critical vertices along the optimal paths, which provide essential yet sparser abstractions of the paths. We employ focal loss to address the associated data imbalance issue, and explore different masking configurations to determine practical tradeoffs in system performance. Through experiments conducted on randomly generated floor maps, our solutions demonstrate significant speed improvements, achieving over a 400% enhancement compared to the baseline model. Yuanhang Zhang + Jundong Liu School of Electrical Engineering and Computer Science Ohio University Path Planning, RRT*, Vertex, FCN Footnote †: Thanks to Ohio University Research Committee (OURC) for funding. ## 1 Introduction Path planning aims to determine a feasible route for an autonomous agent to travel from a starting point to a target location within an environment while avoiding obstacles. This process has a wide range of applications across various domains. The common goal of path planning is to discover a route that is safe, efficient, and smooth. Traditional path planning algorithms can be grouped into two primary categories: grid search-based and sampling-based. Among grid-search algorithms, the A* algorithm [1] is one of the most prominent solutions, capable of guaranteeing the finding of an optimal path if one exists; however, it may encounter difficulties in high-dimensional state spaces. Sampling-based algorithms, such as _Rapid Random-exploring Trees_ (RRT) [2] and _Optimal Rapid Random-exploring Trees_ (RRT*) [3], operate by randomly selecting states from the state space, rather than investigating all possible states, therefore speeding up the exploration process. RRT uniformly samples states within the state space while gradually building a tree structure of these states. RRT* enhances RRT by reorganizing the tree, granting it probabilistic completeness and asymptotic optimality. Recently, machine learning-based approaches have been proposed to address the intricate challenges associated with path planning. These approaches can generally be categorized into supervised learning (SL) and reinforcement learning (RL) methods. SL-based solutions perform perception and decision-making simultaneously, predicting control policies directly from raw input images [4]. RL-based methods, on the other hand, rely on human-designed reward functions, allowing learning agents to explore policies through trial and error [5]. While promising, learning-based path planning solutions often lack theoretical guarantees on performance. Moreover, SL requires annotated data, which can be difficult or expensive to acquire. The latest RRT-based solutions, including informed RRT* [6] and connect RRT [7], while using traditional strategies, are regarded as the state-of-the-art solutions in the field. This can be attributed to their flexibility in handling changes in the environment and their capability to navigate high-dimensional state spaces. Moreover, RRT* has the guarantee of asymptotic optimality and probabilistic completeness, which ensures that the solution achieves optimality under specific conditions.
However, RRT* solutions suffer from sensitivity to the initial solution and slow convergence to the optimal solution. To overcome these limitations, several network-based solutions have been proposed to speed up the sampling process. Trained on optimal paths, Neural RRT* [8] and Motion Planning Networks [9] predict the probability distribution of the path to achieve faster sampling. Neural Informed RRT* [10] guides RRT* tree expansion using an offline-trained neural network during online planning. Although considerable speed-ups have been demonstrated in comparison to the original RRT* algorithm, the aforementioned acceleration networks commonly take the entire A* search space as the target area and estimate probabilities based on the proximity to optimal paths. Moreover, they may struggle with highly dynamic environments or those with rapidly changing obstacles, as the planning may become quickly outdated. In this paper, we propose to enhance the speed-up of the sampling process by shifting the target areas from the neighborhood of optimal paths to that of vertices (corners or turning points). Our design is based on the rationale that critical vertex points in the optimal paths provide an insightful and adequate abstraction of the paths, while requiring much less space. Focusing on vertices, however, results in a side effect that the training data would be highly imbalanced. We address this issue using focal loss [11] in this work. We also explore different thresholding setups for the network outputs to examine the system tradeoffs in performance. ## 2 Background Rapidly-Exploring Random Trees (RRTs) comprise a family of path planning algorithms that depend on incremental sampling. The RRT algorithm [2] starts with a single-vertex tree that represents the initial state and has no edges. In each iteration, the algorithm generates a state \(x_{\text{rand}}\) from a uniform sampling of the search space and tries to link it to the nearest vertex \(x_{\text{nearest}}\) in the tree. If this linkage is feasible, the Steering function manipulates \(x_{\text{rand}}\) to produce \(x_{\text{new}}\). The new state \(x_{\text{new}}\) and new edge \((x_{\text{nearest}},x_{\text{new}})\) are then added to the growing tree. The RRT* algorithm [3] introduces two additional procedures: the \(\mathrm{Extend}(G,x_{\text{new}})\) function and the \(\mathrm{Rewire}(G)\) process. During the \(\mathrm{Extend}\) procedure, RRT* searches for optimal parent vertices around \(x_{\text{new}}\) within a certain radius. After integrating \(x_{\text{new}}\) into the tree, RRT* rewires neighbor vertices to assess whether a path through \(x_{\text{new}}\) can provide a lower cost than the current path. The procedure of the RRT* algorithm is illustrated in Algorithm 1. The ObstacleFree\((x_{\text{nearest}},x_{\text{new}})\) function determines if the line segment connecting \(x_{\text{nearest}}\) and \(x_{\text{new}}\) is obstacle-free. RRT* is asymptotically optimal, which means that as the sampling iterations approach infinity, the path converges to the optimal path.
``` Input: \(x_{\text{init}},x_{\text{goal}},Map\) Output: \(G=(V,E)\) \(V\leftarrow\{x_{\text{init}}\};E\leftarrow\emptyset\) ; for \(i=1,\cdots,n\) do \(x_{\text{rand}}\leftarrow\) UniformSample() ; \(x_{\text{nearest}}\leftarrow\) Nearest(\(G=(V,E),x_{\text{rand}}\)) ; \(x_{\text{new}}\leftarrow\) Steer(\(x_{\text{nearest}},x_{\text{rand}}\)) ; if \(\mathrm{ObstacleFree}(x_{\text{nearest}},x_{\text{new}})\) then Extend(\(G,x_{\text{new}}\)) ; Rewire(\(G\)) ; return \(G=(V,E)\) ; ``` **Algorithm 1** RRT* The Neural RRT* algorithm [8] trains a CNN model on successful path planning cases to generate a nonuniform sampling distribution. For a given task, the trained network can predict the probability distribution of the optimal path on the map, which can guide and speed up the sampling process. ## 3 Method In this work, we propose a method to improve the Neural RRT* category by redirecting the sampling guidance from the neighborhoods of optimal paths to key vertices. To achieve this, we train a neural network called _VertexNet_, and subsequently integrate it with the RRT* algorithm. ### VertexNet Our VertexNet is designed to predict the likelihood of each pixel being a vertex on the optimal path, which we refer to as _vertex-ness_. We approach this task as an image mapping problem and address it using a fully convolutional network, as depicted in Fig. 1. The input to VertexNet consists of an RGB image representing a floor map, where obstacles, source, and target points are differentiated by distinct colors. The ground truth is a corresponding vertex map extracted from the A* optimal path. The output of VertexNet is a vertex-ness map, which will subsequently be integrated into the RRT* algorithm to guide the sampling process. VertexNet is modified from the U-Net [12], primarily by adopting ResNet34, a residual network [13], as the backbone for the encoder. This modification aims to enhance the network's ability to capture important features from the input images and facilitate effective training. The updated encoder consists of basic blocks as described in [13], each of which contains two 2-dimensional convolutional layers, two batch normalization layers, and one Rectified Linear Unit (ReLU) activation. In total, our network has 54 weighted layers and 41,221,168 parameters, among which 19,953,520 are trainable. The network takes floor map images of size \(200\times 200\) as inputs. Figure 1: Illustration of the proposed VertexNet. Best viewed on screen. #### 3.1.1 Ground Truth and Training Objective The ground-truth images in our work are generated following a three-step process. In the first step, we employ the A* algorithm to determine the optimal path for a floor map. Next, we extract a number of vertex points on the optimal path based on their vertex-ness (being corners or turning points). Finally, we create the ground-truth images by setting the pixels of the selected vertices to 0s, while the remaining pixels are set to 1s. Vertices are chosen among the pixels on the path based on specific criteria. Specifically, only the end-points of line segments are considered as vertices, while intermediate points are excluded. Since the paths are generated within discrete image domains, the identification of straight lines becomes crucial. Straight lines can consist of line segments with consistent directions, as in Fig. 2(a). The red dash-lines in Fig. 2(b) could also be straight in a continuous domain, as their directions are not changed. Fig.
2(c) shows a similar case where we look ahead for three steps. It should be noted that the points \(x_{i}\) in all three cases should not be classified as turning points. Let \(X_{\text{path}}=\{x_{0},x_{1},\cdots,x_{T}\}\) be a path going through a sequence of positions within an image domain. To determine whether a point \(x_{i}\in X_{\text{path}}\) should be considered a turning point, we examine whether there is a change in direction for each of the three step sizes in Fig. 2. In essence, we define a vertex as follows: \[X_{\text{vertex}}=\{x_{i}\in X_{\text{path}}\mid\Delta(x_{i}-x_{i-1})\wedge\Delta(x_{i}-x_{i-2})\wedge\Delta(x_{i}-x_{i-3})\},\] where \(\Delta\) denotes the presence of a change in direction with a certain step size. #### 3.1.2 Loss Function As very few pixels in each ground-truth image are selected as vertices, the ground-truth images tend to be predominantly blank, as the vast majority (over 99%) of the pixels have an intensity of 1. This intensity imbalance creates a challenge for image mapping problems. To tackle this issue, we adopt the focal loss [11] as the objective function for our VertexNet: \[\mathrm{FL}(p_{\text{t}})=-(1-p_{\text{t}})^{\gamma}\log(p_{\text{t}}).\] Compared to the cross-entropy loss, the focal loss adds a modulating factor \((1-p_{\text{t}})^{\gamma}\). This factor assigns varying weights to samples based on their difficulty of classification. Easy samples, which are more likely to be correctly classified, receive reduced weights, while harder samples are assigned higher weights. In our dataset, as the non-vertex pixels constitute the majority, they are categorized as easy samples, resulting in reduced contributions through the modulating factor. The focusing hyperparameter \(\gamma\) is adjustable, and based on empirical observations, we set it to \(2\) in our experiments. ### VertexNet RRT* After VertexNet is trained, it is integrated into the RRT* algorithm to enhance the sampling process. The integration process follows a similar design to that of Neural RRT* [8]. Taking a floor map as the input, the non-uniform sampler generated by VertexNet works together with the uniform sampler from RRT* to make informed sampling decisions. VertexNet plays a crucial role by providing probabilistic guidance for selecting the next sampling point, thus improving the overall sampling efficiency. Meanwhile, the uniform sampler ensures that the integrated algorithm retains its original properties of probabilistic completeness and asymptotic optimality. The resulting algorithm is referred to as VertexNet RRT*, as outlined in Algorithm 2. Each sampler is assigned a \(50\%\) probability of being invoked, which is determined by the Rand() function. The Nearest, Extend, Rewire, and Steer functions are all the same as in the original RRT* algorithm.
``` Input: \(x_{\text{init}},x_{\text{goal}},Map,\text{VertexNet}\) Output: \(G=(V,E)\) \(\mathcal{O}=\text{VertexNet}(Map,x_{\text{init}},x_{\text{goal}})\) ; \(V\leftarrow\{x_{\text{init}}\};E\leftarrow\emptyset\) ; for \(i=1,\cdots,n\) do if \(\text{Rand}()>0.5\) then \(x_{\text{rand}}\leftarrow\text{VertexNetSample}(\mathcal{O})\) ; else \(x_{\text{rand}}\leftarrow\text{UniformSample}()\) ; \(x_{\text{nearest}}\leftarrow\text{Nearest}(G=(V,E),x_{\text{rand}})\) ; \(x_{\text{new}}\leftarrow\text{Steer}(x_{\text{nearest}},x_{\text{rand}})\) ; if \(\text{ObstacleFree}(x_{\text{nearest}},x_{\text{new}})\) then Extend(\(G,x_{\text{new}}\)) ; Rewire(\(G\)) ; return \(G=(V,E)\) ; ``` **Algorithm 2** VertexNet RRT* Figure 2: Three cases of straight lines within the pixel domain. #### 3.2.1 Masked VertexNet RRT* To explore different configurations, we develop a masked version of the VertexNet RRT* algorithm. In this variant, a mask is applied to the sampling probability distribution generated by VertexNet. The mask functions by setting probabilities below a specific threshold value, denoted as \(\tau\), to zero. In the original VertexNet RRT* algorithm, even pixels with low probabilities have a chance of being chosen as vertex points. However, with the applied mask, probabilities below the threshold value are disregarded, leading to a higher likelihood of sampling actual vertex points. ## 4 Experiments **Data** We generated a total of 10,000 maps for our experiments. Each map is paired with 12 randomly chosen start points and 12 randomly chosen goal points, resulting in a dataset of 1,440,000 planning instances in total. Fig. 3 shows five maps with different levels of complexity. In our experiments, \(70\%\) of the maps were used for training, and \(30\%\) for testing. To ensure the simulation of diverse map scenarios, our map generator randomly selects a variable number of obstacles, including shapes like triangles, circles, squares, bars, and U-shaped obstacles. The orientation of each obstacle is also chosen randomly. Each map has a dimension of \(200\times 200\) pixels, and integer values are assigned to each pixel based on their respective classes. Traversable space is denoted by 0, obstacles are represented by 1, start points are labeled as 2, and goal points are labeled as 3. Figure 3: Five floor maps with different complexities. In each map, the green star denotes the start state and the red cross is the destination. ### Sampling Probability Comparisons The key design of our VertexNet RRT* lies in the modification of the training objective to focus on the turning points of the optimal paths. This modification aims to reduce the sampling space required for the RRT* algorithm. By comparing the probability distribution predictions from models trained with the vertex-based objective versus the path-based objective, we can assess the effectiveness of our approach. Fig. 4 illustrates a representative example, where the optimal path and the extracted vertices are displayed in Fig. 4(a) and (b), respectively. Fig. 4(c) shows the probability distribution trained using the optimal paths in Neural RRT*, while Fig. 4(d) shows the probability distribution trained using our VertexNet. As evident, the latter has far fewer bright areas, indicating that the sampling space of VertexNet RRT* is greatly reduced compared to Neural RRT*. Figure 4: Sampling distributions of Neural RRT* and VertexNet RRT*. Dark areas depict high probabilities; light areas show low probabilities. (a) The blue line is the optimal path. (b) Blue dots are the vertices on the optimal path. (c) Sampling probability distribution from Neural RRT*. (d) Sampling distribution from our VertexNet RRT*.
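The hybrid sampling step of Algorithm 2 and its masked variant (Sec. 3.2.1) amount to only a few lines. The sketch below is our illustration, not the authors' released code; the map size and the (row, col) pixel convention follow Sec. 4, and we assume at least one pixel survives the mask:

```python
import numpy as np

def sample_point(vertexness, tau=None, rng=None):
    """One sampling decision of (Masked) VertexNet RRT*: with probability
    0.5 sample uniformly, otherwise draw a pixel from the VertexNet
    vertex-ness map; if a threshold tau is given, probabilities below tau
    are zeroed out first (the masked variant of Sec. 3.2.1)."""
    rng = rng or np.random.default_rng()
    h, w = vertexness.shape
    if rng.random() > 0.5:                       # VertexNet-guided sampler
        p = vertexness.astype(float).copy()
        if tau is not None:
            p[p < tau] = 0.0                     # apply the mask
        p = p.ravel() / p.sum()                  # renormalize to a pmf
        return divmod(int(rng.choice(p.size, p=p)), w)   # (row, col)
    return int(rng.integers(0, h)), int(rng.integers(0, w))  # uniform

# Toy 200x200 vertex-ness map with two "hot" candidate vertices:
heat = np.full((200, 200), 1e-4)
heat[50, 60] = heat[120, 80] = 0.9
print(sample_point(heat, tau=0.5, rng=np.random.default_rng(0)))
```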
### Path Planning Results and Analysis We performed experiments on four different algorithms, namely RRT*, Neural RRT*, our VertexNet RRT*, and Masked VertexNet RRT*. We use the abbreviations RRT*, NRRT*, VNRRT* and M-VNRRT* in the upcoming presentation. The evaluations were conducted for both _initial solutions_, where each algorithm terminates upon reaching the destination, and _optimal solutions_, where each algorithm terminates after finding the optimal path. We conducted experiments to find _initial solutions_ on the five individual maps in Fig. 3, as well as 1,000 random maps selected from the test set. Each individual map underwent 1,000 trials for each algorithm to find an initial solution. The results for the random maps were averaged across the 1,000 maps. The performance of the algorithms was evaluated based on _Path Length_ and _Time Cost_, as summarized in Table 1 and Table 2. Among the models, our VNRRT* algorithm demonstrated the fastest performance in finding the initial solution in the Random Maps setting, as indicated in Table 2. Moreover, the path length achieved by VNRRT* was comparable to that of the baseline Neural RRT*. We further evaluated the performance of the algorithms in finding _optimal solutions_. Due to the time-consuming nature of finding optimal solutions, we reduced the number of experimental trials to 100. Among the algorithms tested, the M-VNRRT* \(\tau\)=0.5 algorithm consistently delivered the best results in the experiments conducted on Maps 1-4 and Random Maps. On the other hand, our VNRRT* algorithm achieved the best performance on Map 5, as summarized in Table 3. Fig. 5 shows the speed improvements of the algorithms over RRT* in finding optimal solutions. Our VNRRT* outperformed NRRT* in almost all the experiments, except for Map 2, where its performance was 5% worse. However, the M-VNRRT* demonstrated relative improvements compared to NRRT* in all experiments, particularly in Map 3, Map 4, and Random Maps, where the improvements were 118.59%, 234.03%, and 422.37%, respectively. As the models' performance varies on individual maps, the Random Maps experiment provides a more objective evaluation by averaging the results of 100 maps. The results are summarized in the rightmost column of Table 3. On average, the proposed VNRRT* algorithm shows an acceleration of 76.82% over the baseline NRRT* algorithm when converging to optimal paths. The M-VNRRT* with \(\tau\)=0.5 demonstrates an impressive improvement of 422.37% when compared to NRRT*. In summary, our VNRRT* algorithm demonstrates a significant improvement of over 70% compared to NRRT* when converging to both the optimal and initial solutions. On the other hand, our M-VNRRT* \(\tau\)=0.5 exhibits superior performance in finding the optimal solution, being over 400% faster than NRRT*. However, it is 9.41% slower than NRRT* when finding initial solutions.
Table 1: Path Length Comparisons of the _initial solutions_ (Optimal: optimal path length)

| | Map 1 | Map 2 | Map 3 | Map 4 | Map 5 | Random Maps |
|---|---|---|---|---|---|---|
| RRT* | 250.93±46.98 | 322.02±24.06 | 284.38±19.93 | 222.33±26.05 | 373.03±28.89 | 153.07±83.77 |
| NRRT* | 214.49±47.46 | 316.9±26.02 | **282.62±18.39** | 210.23±18.85 | 364.38±30.6 | 142.71±80.65 |
| VNRRT* | 215.85±47.96 | 312.17±23.23 | 283.94±18.57 | 212.53±19.55 | 361.57±30.98 | 145.75±81.38 |
| M-VNRRT* τ=0.5 | 164.18±35.44 | 304.33±23.00 | 293.17±21.17 | 205.71±17.52 | 351.48±32.01 | 134.25±79.92 |
| M-VNRRT* τ=0.9 | **154.02±24.01** | 313.67±25.77 | 296.76±22.96 | 202.30±16.42 | **344.03±29.34** | **133.98±80.73** |
| M-VNRRT* τ=0.9 | 163.62±32.51 | **302.72±23.08** | 306.46±29.52 | **198.09±18.85** | 346.61±24.52 | 136.55±82.49 |
| Optimal | 135.14 | 255.68 | 264.71 | 190.20 | 319.58 | 119.35 |

Table 2: Time Cost of finding _initial solutions_ (time to find the initial solution, in seconds)

| | Map 1 | Map 2 | Map 3 | Map 4 | Map 5 | Random Maps |
|---|---|---|---|---|---|---|
| RRT* | 0.19±0.20 | 0.7±0.52 | 33.42±46.82 | 6.76±13.47 | 8.06±11.09 | 1.84±14.72 |
| NRRT* | 0.23±0.24 | **0.53±0.38** | 23.24±28.81 | 4.57±9.54 | 6.93±8.9 | 0.77±6.07 |
| VNRRT* | 0.18±0.19 | 0.64±0.44 | 11.15±14.83 | 2.29±3.57 | 5.69±7.23 | **0.45±2.17** |
| M-VNRRT* τ=0.5 | 0.15±0.17 | 0.62±0.43 | 5.56±5.83 | **1.20±1.63** | 6.22±8.36 | 0.85±5.09 |
| M-VNRRT* τ=0.9 | **0.10±0.08** | 0.76±0.53 | **5.22±6.82** | 1.83±2.49 | 8.81±11.89 | 0.62±3.60 |
| M-VNRRT* τ=0.9 | 0.20±0.19 | 0.88±0.83 | 10.36±32.27 | 3.56±4.77 | **4.58±4.38** | 1.44±12.27 |

Our VNRRT* algorithm demonstrates superior speed in finding initial solutions (Table 2), possibly because it does not incorporate a mask that restricts the sampling points to optimal vertices.
In contrast, the inclusion of a mask in the M-VNRRT* algorithm imposes more constraints on the sampling probability, thereby speeding up convergence to the optimal solution. This is evident from the performance of M-VNRRT* \(\tau\)=0.5 in the optimal solution experiments (Table 3). Points with a probability prediction below 50% are less likely to be vertex points but still have a chance of being sampled. The masking process may have excluded these points, resulting in faster convergence to the optimal solutions. ## 5 Conclusion In this work, we present a novel approach to improving the learned heuristic for path planning algorithms. Rather than using the entire optimal path line segment as the objective, we focus solely on the turning points of the optimal paths in our proposed VertexNet. This modification significantly enhances the speed of the path planning algorithms. We also introduce a mask to the sampler of VertexNet RRT*, which boosts the sampling probability of the actual vertices in the optimal path. This further accelerates the convergence speed to the optimal path. Overall, our approach provides a more efficient method for learning heuristics, which is important for future machine learning applications in path planning problems.
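For concreteness, the vertex definition of Sec. 3.1.1 can be implemented directly on a discrete path. The sketch below is our reading of the \(X_{\text{vertex}}\) definition (treating the path endpoints as vertices, an assumption not spelled out in the text), not the authors' code:

```python
def extract_vertices(path):
    """Turning points of a discrete path: x_i is kept only if the direction
    changes at all three step sizes k = 1, 2, 3, which filters out the
    staircase patterns of Fig. 2. `path` is a list of (row, col) pixels,
    e.g. an A* output."""
    def changes_direction(i, k):
        if i - k < 0 or i + k >= len(path):
            return True  # treat path endpoints (start/goal) as vertices
        (r0, c0), (r1, c1), (r2, c2) = path[i - k], path[i], path[i + k]
        # compare the incoming and outgoing step directions at scale k
        return (r1 - r0, c1 - c0) != (r2 - r1, c2 - c1)

    return [path[i] for i in range(len(path))
            if all(changes_direction(i, k) for k in (1, 2, 3))]

# L-shaped path: only the corner (2, 0) plus the two endpoints survive.
path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(extract_vertices(path))  # [(0, 0), (2, 0), (2, 2)]
```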
2310.15092
Dihedral Quantum Codes
We establish dihedral quantum codes of short block length, a class of CSS codes obtained by the lifted product construction. We present the code construction and give a formula for the code dimension, depending on the two classical codes that the CSS code is based on. We also give a lower bound on the code distance and construct an example of short dihedral quantum codes.
Nadja Willenborg, Martino Borello, Anna-Lena Horlemann, Habibul Islam
2023-10-23T16:55:34Z
http://arxiv.org/abs/2310.15092v2
# Dihedral Quantum Codes ###### Abstract We study dihedral quantum codes of short block length, a large class of quantum CSS codes obtained by the lifted product construction. We present the code construction and give a formula for the code dimension, depending on the two classical codes on which the CSS code is based. We also give a lower bound on the code distance. Finally, we construct an example of short dihedral quantum codes, improving the parameters of previously known quantum codes. ## 1 Introduction In recent years, significant progress has been made in the theory of quantum low-density parity-check (LDPC) codes [8, 9, 5, 18, 19, 24, 13]. In particular, it has been shown that there exists a family of asymptotically good quantum LDPC codes, i.e. quantum LDPC codes for which both the code distance and the dimension grow proportionally to the block length \(N\), see [19]. However, it is still important to consider short-length quantum codes and to improve their parameters, as these will be the first to be used for the implementation of practical quantum computers. The aim of this article is to explore the construction of short-length lifted product codes. In contrast to [23, 14], we do not limit ourselves to 2 blocks (represented by two group algebra elements) in the lifted product construction, but choose square matrices \(A,B\) over the dihedral group algebra \(F[D_{2n}]\), formed by a field \(F\) and the dihedral group \(D_{2n}\). Quantum codes over this group algebra have not yet been considered in this generality and still allow the formulation of useful distance bounds. When comparing two codes \(\mathscr{C}_{1},\mathscr{C}_{2}\), one usually fixes the code rate (or the minimum distance) and compares against the minimum distance (or the code rate). If the parameter of \(\mathscr{C}_{1}\) to be compared is greater than that of \(\mathscr{C}_{2}\), we say that \(\mathscr{C}_{1}\)_performs better_ than \(\mathscr{C}_{2}\). The construction of quantum codes can be transformed into a problem of constructing classical linear codes with certain self-orthogonality properties. A classical linear code \(\mathscr{C}\) with parameters \([n,k]_{F}\) is a \(k\)-dimensional vector space in \(F^{n}\). The Hamming distance \(d_{H}(v,v^{\prime})\) between \(v,v^{\prime}\in F^{n}\) is the number of positions where \(v,v^{\prime}\) differ. The parameter \[d(\mathscr{C})=\min\{d_{H}(v,v^{\prime}):v\neq v^{\prime},\,v,v^{\prime}\in\mathscr{C}\}\] is called the _minimum distance_ of \(\mathscr{C}\). A linear \([n,k]_{F}\) code \(\mathscr{C}\) with \(d(\mathscr{C})=d\) is called an \([n,k,d]_{F}\) code. A linear \([n,k]_{F}\) code can be defined as the kernel of a matrix \(H\), called the _parity-check matrix_ of the code. The rows of \(H\) are orthogonal to any vector in \(\mathscr{C}\) and \(\mathrm{rk}H=n-k\). The code defined by a parity-check matrix \(H\) is denoted by \(\mathscr{C}(H)\). From [3, 2] it is known that asymptotically good dihedral group codes exist (in the classical case). However, it is not possible to obtain asymptotically good dihedral quantum codes with the lifted product construction of [19], since finite solvable groups of bounded derived length never give a family of expanders [15]. In this paper we concentrate on general code constructions. It might be interesting to consider and analyze decoding algorithms for this particular class of codes.
For example, one could consider generalized discrete Fourier transforms, or one could develop a decoder using the Morita correspondence between \(F[x]/(x^{m}-1)\) submodules and left ideals in \(\operatorname{Mat}_{2}(F)[x]/(x^{m}-1)\) as discussed in [1, 3]. Using [7, Theorem 2] one can check other classical dihedral codes to build good quantum codes of distance \(3\). ## 2 Preliminaries ### Quantum CSS codes We consider the complex Hilbert space \(\mathbb{C}^{q}\) of dimension \(q\) and its \(N\)-fold tensor product \((\mathbb{C}^{q})^{\otimes N}=\mathbb{C}^{q}\otimes\cdots\otimes\mathbb{C}^{q}\), also known as the \(N\)_-qudit space_, where each component corresponds to one qudit. A quantum error-correcting code of length \(N\) and dimension \(K\) is a \(q^{K}\)-dimensional subspace of \((\mathbb{C}^{q})^{\otimes N}\). Quantum _Calderbank-Shor-Steane_ (CSS) codes form an important subclass of quantum error-correcting codes [6, 20]. A CSS code is defined by a pair of classical linear codes \(\mathscr{C}_{X},\mathscr{C}_{Z}\subseteq F^{N}\) such that the code is isomorphic to a direct sum of two quotient spaces \[Q(\mathscr{C}_{X},\mathscr{C}_{Z}):=\mathscr{C}_{Z}/\mathscr{C}_{X}^{\perp}\oplus\mathscr{C}_{X}/\mathscr{C}_{Z}^{\perp}. \tag{2.1}\] We use \(Q\) as a shorthand for \(Q(\mathscr{C}_{X},\mathscr{C}_{Z})\) and write \([[N,K,D]]_{F}\) to denote a \(K\)-dimensional quantum CSS code \(Q\subseteq(\mathbb{C}^{q})^{\otimes N}\) of length \(N\) with minimum distance \(D\). The dimension of \(Q\) is \(K=\dim\mathscr{C}_{X}/\mathscr{C}_{Z}^{\perp}\) and its minimum distance is defined as \(D:=\min\{D_{x},D_{z}\}\), where \[D_{z}:=\min_{c\in\mathscr{C}_{Z}\setminus\mathscr{C}_{X}^{\perp}}|c|,\quad D_{x}:=\min_{c\in\mathscr{C}_{X}\setminus\mathscr{C}_{Z}^{\perp}}|c|.\] To guarantee that the CSS code (2.1) is well-defined, we need \(\mathscr{C}_{X}^{\perp}\subseteq\mathscr{C}_{Z}\) (or equivalently \(\mathscr{C}_{Z}^{\perp}\subseteq\mathscr{C}_{X}\)). Let \(H_{X}\) be a parity-check matrix of \(\mathscr{C}_{Z}\) and \(H_{Z}\) be a parity-check matrix of \(\mathscr{C}_{X}\); then we can express this via the following orthogonality condition \[H_{X}H_{Z}^{\top}=0.\] Indeed, since the parity-check matrix \(H_{X}\) is the generator matrix of \(\mathscr{C}_{Z}^{\perp}\) and \(H_{Z}\) is a parity-check matrix of \(\mathscr{C}_{X}\), we have that the row space of \(H_{X}\) is contained in \(\mathscr{C}_{X}\), i.e., \(\mathscr{C}_{Z}^{\perp}\subseteq\mathscr{C}_{X}\). Note that the dimension of \(Q\) can be reformulated as follows: \[K=\dim\mathscr{C}_{X}-\dim\mathscr{C}_{Z}^{\perp}=\dim\ker H_{Z}-\mathrm{rk}H_{X}=N-\mathrm{rk}H_{Z}-\mathrm{rk}H_{X}. \tag{2.2}\] ### Group Codes Let \(G\) be a finite group and \(F\) be a field. The group algebra \(F[G]\) over \(F\) is the set \[F[G]:=\left\{\sum_{g\in G}a_{g}g\mid a_{g}\in F\right\},\] with the following operations: \[\sum_{g\in G}a_{g}g+\sum_{g\in G}b_{g}g:=\sum_{g\in G}(a_{g}+b_{g})g,\] \[c\cdot\left(\sum_{g\in G}a_{g}g\right):=\sum_{g\in G}ca_{g}g,\] \[\left(\sum_{g\in G}a_{g}g\right)\cdot\left(\sum_{g\in G}b_{g}g\right):=\sum_{g\in G}\left(\sum_{\mu\nu=g}a_{\mu}b_{\nu}\right)g.\] In fact, \(\varphi:F^{|G|}\xrightarrow{\sim}F[G]\) is an isomorphism of \(F\)-vector spaces via the canonical basis \(\{g\}_{g\in G}\). **Definition 2.1**.: An \(F\)-linear code \(\mathscr{C}\) is called a _quasi-group code_ of index \(\ell\) if \(\mathscr{C}\) is a left \(F[G]\)-submodule of \(F[G]^{\ell}=F[G]\oplus\cdots\oplus F[G]\) (\(\ell\) times) for some \(\ell\in\mathbb{N}\). If \(\ell=1\) we call \(\mathscr{C}\) a group code or a \(G\) code.
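To make the group-algebra arithmetic concrete, here is a minimal sketch (our illustration, not code from the paper) of \(F_{p}[D_{2n}]\): elements are stored as coefficient maps over the \(2n\) group elements \(\alpha^{i}\beta^{j}\), multiplied by the convolution rule above; the reciprocal of Definition 2.3 below sends each coefficient from \(g\) to \(g^{-1}\). We take \(p=3\) and \(n=5\) so that \(\operatorname{char}(F)\nmid|D_{2n}|\), matching the standing assumption of Section 3:

```python
n, p = 5, 3  # D_2n with 2n = 10 elements; coefficients in F_p with p not dividing 2n

def gmul(g, h):
    """Multiply group elements of D_2n written as (i, j) ~ alpha^i beta^j,
    using the relation beta * alpha = alpha^(n-1) * beta."""
    (i1, j1), (i2, j2) = g, h
    return ((i1 + (-1) ** j1 * i2) % n, (j1 + j2) % 2)

def ginv(g):
    """Inverse: rotations invert the exponent; reflections are involutions."""
    i, j = g
    return ((-i) % n, 0) if j == 0 else (i, j)

G = [(i, j) for j in (0, 1) for i in range(n)]

def amul(a, b):
    """Convolution product in F_p[D_2n]; a, b map group elements to F_p."""
    c = {g: 0 for g in G}
    for g, ag in a.items():
        for h, bh in b.items():
            c[gmul(g, h)] = (c[gmul(g, h)] + ag * bh) % p
    return c

def reciprocal(a):
    """h* of Definition 2.3 below: move the coefficient of g to g^{-1}."""
    return {ginv(g): ag for g, ag in a.items()}

alpha, beta = {(1, 0): 1}, {(0, 1): 1}
# Sanity check of the defining relation beta * alpha = alpha^(n-1) * beta:
assert amul(beta, alpha) == amul({(n - 1, 0): 1}, beta)
```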
**Remark 2.2**.: Via \(\varphi\), we may transfer the Hamming metric from \(F^{|G|}\) to \(F[G]\). For \(c\in F[G]\) we define \(w_{H}(c):=w_{H}(\varphi^{-1}(c))\). Moreover, there is a well-understood relation between the algebraic structure of a quasi-group code \(\mathscr{C}\) and the permutation automorphism group of the corresponding code \(\varphi^{-1}(\mathscr{C})\). See [10, Chapter 16] and [4] for a detailed introduction to group codes and quasi-group codes. It is well known (see for example [10, Chapter 16]) that if \(\text{char}(F)\nmid|G|\), then all \(G\) codes over \(F[G]\) are principal. ### Lifted Product Construction The lifted product was introduced in [18] and formalizes many known constructions of quantum LDPC codes. The idea is to lift the elements in matrices over \(F\) up to some ring \(R\) that is also a finite-dimensional \(F\)-algebra. **Definition 2.3**.: Let \(h\in F[G]\) be such that \(h=\sum_{g\in G}a_{g}g\). Then its _reciprocal_ \(h^{*}\) is defined as \[h^{*}:=\sum_{g\in G}a_{g^{-1}}g.\] If \(H=(h_{i,j})_{1\leq i\leq m,1\leq j\leq n}\) is a matrix over \(F[G]\) we define its _conjugated transpose_ as \(H^{*}:=(h^{*}_{j,i})_{1\leq j\leq n,1\leq i\leq m}\), where \(h^{*}_{j,i}\) is the reciprocal of \(h_{i,j}\in F[G]\). Note that the Hamming weight is invariant under taking the reciprocal, and for any \(u,v\in F[G]\) we have \((u+v)^{*}=u^{*}+v^{*}\) and \((uv)^{*}=v^{*}u^{*}\). **Definition 2.4**.: (Lifted Product Construction, see [18]) Let \(A\in\text{Mat}_{m_{A}\times n_{A}}(F[G])\) and \(B\in\text{Mat}_{m_{B}\times n_{B}}(F[G])\). An LP(A,B) code is defined by a pair of parity-check matrices: \[H_{X}=[A\otimes I_{m_{B}},-I_{m_{A}}\otimes B],\quad H_{Z}=[I_{n_{A}}\otimes B^{*},A^{*}\otimes I_{n_{B}}]. \tag{2.3}\] **Remark 2.5**.: The \(\otimes\)-operator has the mixed-product property, i.e., for matrices \(A,B,C,D\) such that \(AC\) and \(BD\) can be formed, we have \[(A\otimes B)(C\otimes D)=AC\otimes BD.\] From this property we easily obtain \[H_{X}H_{Z}^{\top}=0,\] hence the codes defined in (2.3) are well-defined. ## 3 Dihedral Lifted Product Codes Throughout this section we assume \(\text{char}(F)\nmid|D_{2n}|=2n\). We let \(n\geq 3\), \(D_{2n}:=\langle\alpha,\beta\mid\alpha^{n}=\beta^{2}=1,\beta\alpha=\alpha^{n-1}\beta\rangle\) and \(C_{n}:=\langle\alpha\mid\alpha^{n}=1\rangle\). In [22, 16] explicit decompositions of \(F[D_{2n}]\) were obtained. For our construction a more generic notation suffices. Let \[x^{n}-1=\prod_{i=1}^{r}f_{i}\prod_{i=r+1}^{r+s}f_{i}^{*}f_{i}\] be the factorization of \(x^{n}-1\in F[x]\) into irreducible factors, where \(r\) is the number of self-reciprocal factors, i.e., \(f_{i}^{*}=f_{i}\), and \(2s\) is the number of non-self-reciprocal factors. Let \[\theta(n)=\begin{cases}1&\text{if $n$ is odd}\\ 2&\text{if $n$ is even}\end{cases}\] and let \(F\subseteq F_{i}\) be extension fields of \(F\) such that \([F_{i}:F]=\deg f_{i}/2\) if \(\theta(n)+1\leq i\leq r\) and \([F_{i}:F]=\deg f_{i}\) in all other cases.
Then \(F[D_{2n}]\) can be decomposed as \[F[D_{2n}]\cong\bigoplus_{i=1}^{r+s}R_{i},\] where \[R_{i}=\begin{cases}F[C_{2}]&\text{if $1\leq i\leq\theta(n)$}\\ \text{Mat}_{2}(F_{i})&\text{if $\theta(n)+1\leq i\leq r+s$}\end{cases}.\] As presented in [22], an arbitrary code \(\mathscr{C}\subseteq F[D_{2n}]\) can be decomposed as \[\mathscr{C}\cong\bigoplus_{i=1}^{r+s}C_{i},\quad C_{i}=\begin{cases}R_{i}&\text{if $i\in J_{1}$}\\ I_{i}&\text{if $i\in J_{2}$}\\ 0&\text{if $i\notin J_{1}\cup J_{2}$}\end{cases},\] where \(I_{i}\subsetneq R_{i}\) is a proper ideal and \(J_{1},J_{2}\subseteq[r+s]\) are two index sets which we call the _corresponding sets_ of the code. ### Dimension Formula We start with some preliminary results before giving the main result of this section in Theorem 3.3. **Lemma 3.1**.: Let \(A,B\in\text{Mat}_{m}(F)\), \(k_{A}:=\dim\ker A\), \(k_{B}:=\dim\ker B\), and let \(H_{X},H_{Z}\) be defined as in (2.3). Then we have \[\text{rk}H_{X}=m^{2}-k_{A}k_{B},\quad\text{rk}H_{Z}=m^{2}-k_{A}k_{B}.\] Proof.: Let \(U,V\in\text{Mat}_{m^{2}}(F)\) be unitary matrices, i.e. \(U^{*}U=UU^{*}=I_{m^{2}}\), let \(\lambda_{1},\dots,\lambda_{m^{2}}\) be the eigenvalues of \((A\otimes I_{m})^{\top}(A\otimes I_{m})\), and let \(\Sigma=\text{diag}(\sqrt{\lambda_{1}},\dots,\sqrt{\lambda_{m^{2}}})\). Using the singular value decomposition (SVD) we can express \(A\otimes I_{m}\) as \[A\otimes I_{m}=U\Sigma V^{*}.\] We can now construct idempotent matrices \(E_{A},F_{A}\in\text{Mat}_{m^{2}}(F)\), both of the same rank as \(A\otimes I_{m}\), such that \[E_{A}^{2}=E_{A},\,F_{A}^{2}=F_{A},\,E_{A}(A\otimes I_{m})=(A\otimes I_{m})F_{A}=(A\otimes I_{m}),\] by \[E_{A}=U\Sigma_{0}U^{*},\quad F_{A}=V\Sigma_{0}V^{*},\] where \(\Sigma_{0}=\text{diag}(1,\dots,1,0,\dots,0)\in\text{Mat}_{m^{2}}(F)\) has exactly \(\text{rk}(A\otimes I_{m})\) many ones along the diagonal. Since \(E_{A}E_{A}E_{A}=E_{A}^{3}=E_{A}\) and \(F_{A}F_{A}F_{A}=F_{A}^{3}=F_{A}\), i.e., any idempotent matrix is a generalized inverse of itself, [17, Theorem 5] gives \[\mathrm{rk}H_{X}=\mathrm{rk}(A\otimes I_{m},-I_{m}\otimes B)=\mathrm{rk}(A\otimes I_{m})+\mathrm{rk}((I_{m^{2}}-E_{A})(-I_{m}\otimes B))=m\cdot\mathrm{rk}A+m\cdot\mathrm{rk}B-\mathrm{rk}(E_{A}(-I_{m}\otimes B)).\] The last equality follows from \(\mathrm{rk}(A\otimes B)=\mathrm{rk}A\cdot\mathrm{rk}B\) and the decomposition \[\mathrm{rk}(-I_{m}\otimes B)=\mathrm{rk}E_{A}(-I_{m}\otimes B)+\mathrm{rk}(I_{m^{2}}-E_{A})(-I_{m}\otimes B),\] where \(E_{A}(-I_{m}\otimes B)\) and \((I_{m^{2}}-E_{A})(-I_{m}\otimes B)\) represent projections of the columns of \(-I_{m}\otimes B\) onto two orthogonal subspaces. More precisely, \(E_{A}(-I_{m}\otimes B)\) is the projection of \(-I_{m}\otimes B\) onto the column space of \(A\otimes I_{m}\) and \((I_{m^{2}}-E_{A})(-I_{m}\otimes B)\) is the projection of \(-I_{m}\otimes B\) onto the orthogonal complement of the column space of \(A\otimes I_{m}\). We have \[\mathrm{rk}(E_{A}(-I_{m}\otimes B))=\mathrm{rk}(E_{A}(-I_{m}\otimes B)(A\otimes I_{m}))=\mathrm{rk}((A\otimes I_{m})(-I_{m}\otimes B))\] and hence \[\mathrm{rk}H_{X}=m\mathrm{rk}B+m\mathrm{rk}A-\mathrm{rk}A\mathrm{rk}B=(m-\mathrm{rk}A)\mathrm{rk}B+(m-\mathrm{rk}B)\mathrm{rk}A+\mathrm{rk}A\mathrm{rk}B=k_{A}\mathrm{rk}B+k_{B}\mathrm{rk}A+\mathrm{rk}A\mathrm{rk}B=(k_{A}+\mathrm{rk}A)(k_{B}+\mathrm{rk}B)-k_{A}k_{B}=m^{2}-k_{A}k_{B}.\] The equation for \(\mathrm{rk}H_{Z}\) follows similarly by using \(F_{A}\) instead of \(E_{A}\). **Proposition 3.2**.: Let \(A,B\in\mathrm{Mat}_{m}(F)\) and \(k_{A}:=\dim\mathscr{C}(A)\), \(k_{B}:=\dim\mathscr{C}(B)\).
Then \[\dim LP(A,B)=2k_{A}k_{B}.\] Proof.: From Lemma 3.1 we have \(\mathrm{rk}H_{X}=m^{2}-k_{A}k_{B}\) and \(\mathrm{rk}H_{Z}=m^{2}-k_{A}k_{B}\). Hence using formula (2.2) for the quantum dimension we obtain \[K=2m^{2}-\mathrm{rk}H_{X}-\mathrm{rk}H_{Z}=2k_{A}k_{B}.\] **Theorem 3.3**.: Let \(e_{A},e_{B}\in F[D_{2n}]\), such that \((e_{A}),(e_{B})\) have corresponding sets \(J_{i}^{A},J_{i}^{B},i=1,2\). Let \(\mathbf{a}=(a_{1}e_{A},\ldots,a_{m}e_{A}),\mathbf{b}=(b_{1}e_{B},\ldots,b_{m}e _{B})\) and \(a_{1},\ldots,a_{m},b_{1},\ldots,b_{m}\) such that \((a_{i})+(e_{A})=(1)\), respectively \((b_{i})+(e_{B})=(1).\) Let \(A,B\in\mathrm{Mat}_{m}(F[D_{2n}])\) such that all rows lie in \(\mathbf{a}F[D_{2n}]\), respectively \(\mathbf{b}F[D_{2n}]\). Then \[\dim LP(A,B)= \sum_{j=1}^{r}\deg f_{j}(\mathbf{1}_{I_{1}}\omega_{1}+\mathbf{1 }_{I_{2}}\omega_{2}+\mathbf{1}_{I_{3}}\omega_{3}+\mathbf{1}_{I_{4}}\omega_{4}+ \mathbf{1}_{I_{5}}\omega_{5}+\mathbf{1}_{I_{6}}\omega_{6})\] \[+\sum_{j=r+1}^{r+s}2\deg f_{j}(\mathbf{1}_{I_{1}}\omega_{1}+ \mathbf{1}_{I_{2}}\omega_{2}+\mathbf{1}_{I_{3}}\omega_{3}+\mathbf{1}_{I_{4}} \omega_{4}+\mathbf{1}_{I_{5}}\omega_{5}+\mathbf{1}_{I_{6}}\omega_{6}),\] where \[I_{1} :=J_{2}^{A}\cap J_{2}^{B}\] \[I_{2} :=(J_{2}^{A}\cap J_{1}^{B})\cup(J_{2}^{B}\cap J_{1}^{A})\] \[I_{3} :=J_{1}^{A}\cap J_{1}^{B}\] \[I_{4} :=[r+s]\setminus(J_{1}^{A}\cup J_{2}^{A})\cap[r+s]\setminus(J_{1}^{B }\cup J_{2}^{B})\] \[I_{5} :=\left(J_{1}^{A}\cap[r+s]\setminus(J_{1}^{B}\cup J_{2}^{B})\right) \cup\left(J_{1}^{B}\cap[r+s]\setminus(J_{1}^{A}\cup J_{2}^{A})\right)\] \[I_{6} :=([r+s]\setminus(J_{1}^{A}\cup J_{2}^{A})\cap(J_{2}^{B})\cup([r+s ]\setminus(J_{1}^{B}\cup J_{2}^{B})\cap(J_{2}^{A})\] \[\omega_{1} :=(2m-1)^{2}\] \[\omega_{2} :=2(m-1)(2m-1)\] \[\omega_{3} :=2^{2}(m-1)^{2}\] \[\omega_{4} :=2^{2}m^{2}\] \[\omega_{5} :=2^{2}m(m-1)\] \[\omega_{6} :=2m(2m-1).\] Proof.: Let \[(e_{A})=\bigoplus_{i=1}^{r+s}C_{i}^{A},\quad(e_{B})=\bigoplus_{i=1}^{r+s}C_{i}^ {B}\] be the decompositions of the codes \((e_{A}),(e_{B})\). Using the decomposition of \(F[D_{2n}]\) any matrix \(A\in\operatorname{Mat}_{m}(F[D_{2n}])\) can be uniquely represented by the collection of matrices \((A_{i})_{i\in[r+s]}\), where \(A_{i}\) is the corresponding matrix over \(R_{i}.\) This gives \[\dim LP(A,B)=\sum_{i=1}^{\theta(n)}\dim_{F_{i}}LP(A_{i},B_{i}) +\sum_{i=\theta(n)+1}^{r}\dim_{F_{i}}LP(A_{i},B_{i})\cdot\frac{ \deg f_{i}}{2}\] \[+\sum_{i=r+1}^{r+s}\dim_{F_{i}}LP(A_{i},B_{i})\cdot\deg f_{i}. \tag{3.1}\] We have \(\dim_{F_{i}}\mathscr{C}(A_{i})=2m-\operatorname{rk}_{F_{i}}A_{i}\) and \(\dim_{F_{i}}\mathscr{C}(B_{i})=2m-\operatorname{rk}_{F_{i}}B_{i}\). Let \(\Theta\in\{A,B\}\) then \[\operatorname{rk}_{F_{i}}\Theta_{i}=\begin{cases}0&\text{if }i\in[r+s]\setminus J_{1}^{ \Theta}\cup J_{2}^{\Theta}\\ 1&\text{if }i\in J_{2}^{\Theta}\\ 2&\text{if }i\in J_{1}^{\Theta}\end{cases}\] and hence by Proposition 3.2 \[\dim_{F_{i}}LP(A_{i},B_{i})=\begin{cases}2(2m-1)^{2}&\text{if }i\in I_{1}\\ 2^{2}(m-1)(2m-1)&\text{if }i\in I_{2}\\ 2^{3}(m-1)^{2}&\text{if }i\in I_{3}\\ 2^{3}m^{2}&\text{if }i\in I_{4}\\ 2^{3}m(m-1)&\text{if }i\in I_{5}\\ 2^{2}m(2m-1)&\text{if }i\in I_{6}.\end{cases} \tag{3.2}\] Now the result follows combining (3.2) and (3.1). ### Induced Codes Let \(\ell\) be an arbitrary divisor of \(n\) and \(t<\ell\). Consider the two proper normal subgroups \(D_{2(n/\ell)}=\langle a^{\ell},ba^{t}\rangle\) and \(C_{\ell}=\langle a^{n/\ell}\rangle\) of the dihedral group \(D_{2n}\). 
Let \(I\) be a left ideal of \(F[C_{\ell}]\), then the \(D_{2n}\) code \(\mathscr{C}:=(F[D_{2n}])I\) is called \(C_{\ell}\)-induced. In [25] it was shown that if \(I\) is a \([\ell,k,d]\) code, then \(\mathscr{C}\) is a \([2n,|\Gamma|k,d]\) code, where \(\Gamma\) is the right transversal for \(C_{\ell}\) in \(D_{2n}.\) To distinguish between the different decompositions and the corresponding auxiliary code constructions, when referring to the algebra \(F[C_{\ell}]\), we add \(\hat{}\) to the notation. **Theorem 3.4**.: (See [22, Theorem 6]) Let \(\hat{x}^{\ell}-1=(\prod_{i=1}^{\hat{r}}\hat{f}_{i}(\hat{x}))(\prod_{i=\hat{r}+1} ^{\hat{r}+\hat{s}}\hat{f}_{i}(\hat{x})\hat{f}_{i}^{*}(\hat{x}))\) be the factorization of \(\hat{x}^{\ell}-1\) into irreducible factors, \(\hat{g}\mid\hat{x}^{\ell}-1\) and \(\hat{\mathscr{C}}_{\hat{g}}:=(\hat{g})\). Let \(\Omega:F[C_{\ell}]\hookrightarrow F[D_{2n}]\) be the embedding into \(F[D_{2n}]\) and let \(\mathscr{C}=(F[D_{2n}])\Omega(\hat{\mathscr{C}}_{\hat{g}})\) be the induced code. Then \[\mathscr{C}\cong\bigoplus_{j=1}^{r+s}B_{j},\quad B_{j}=\begin{cases}R_{j}& \text{if }j\in J_{1}\\ I_{j}&\text{if }j\in J_{2}\\ 0&\text{if }j\notin J_{1}\cup J_{2}\end{cases},\] where \[J_{1} =\{j\in[r+s]:(f_{j}(x)\nmid\hat{g}(x^{n/\ell})\wedge f_{j}^{*}(x) \nmid\hat{g}(x^{n/\ell})\},\] \[J_{2} =\{j\in[r+s]\setminus[\theta(n)]:\neg(f_{j}(x)\mid\hat{g}(x^{n/ \ell})\wedge f_{j}^{*}(x)\mid\hat{g}(x^{n/\ell}))\}.\] **Theorem 3.5**.: (See [22, Theorem 7]) Let \(\hat{x}^{n/\ell}-1=(\prod_{i=1}^{\hat{r}}\hat{f}_{i}(\hat{x}))(\prod_{i=\hat{r }+1}^{\hat{r}+\hat{s}}\hat{f}_{i}(\hat{x})\hat{f}_{i}^{*}(\hat{x}))\) be the factorization of \(\hat{x}^{n/\ell}-1\) into irreducible factors, let \(\Omega:F[D_{2(n/\ell)}]\hookrightarrow F[D_{2n}]\) be the embedding into \(F[D_{2n}]\) and let \(\hat{\mathscr{C}}\subseteq F[D_{2(n/\ell)}]\) be a code such that \[\hat{\mathscr{C}}\cong\bigoplus_{i=1}^{\hat{r}+\hat{s}}\hat{B}_{i},\] where, for \(1\leq i\leq\hat{r},\hat{B}_{i}=0\) or \(\hat{B}_{i}=\hat{A}_{i}\). Let \(\mathscr{C}=(F[D_{2n}])\Omega(\hat{\mathscr{C}})\) be the induced code and suppose that \[\mathscr{C}\cong\bigoplus_{j=1}^{r+s}B_{j}.\] Then, for all \(1\leq i\leq\hat{r}+\hat{s}\), \[B_{j}=\begin{cases}R_{j}&\text{if }f_{j}(x)\mid\hat{f}_{i}(x^{\ell})\wedge \hat{B}_{i}=\hat{A}_{i},\\ I_{j}&\text{else},\\ 0&\text{if }f_{j}(x)\mid\hat{f}_{i}(x^{\ell})\wedge\hat{B}_{i}=0.\end{cases}\] ### Distance Bound Let \(a\in F[D_{2n}]\). To determine a distance bound we use the linear map \(\mathbb{B}(a):F[D_{2n}]\to F[D_{2n}],x\mapsto xa\) such that \(\mathbb{B}(ab)=\mathbb{B}(a)\mathbb{B}(b)\) and \(\mathbb{B}(1)=I_{|D_{2n}|}\). Since for \(a\neq b\) the two maps \(\mathbb{B}(a)\) and \(\mathbb{B}(b)\) are distinct, this representation by \(2n\times 2n\) matrices over \(F\) is _faithful_. **Notation 3.6**.: For \(a\in F[D_{2n}]\) we denote by \(\mathbb{B}(a)\) its corresponding matrix over \(F\). Moreover, if \(A\in\operatorname{Mat}_{m}(F[D_{2n}])\) we define \[\mathbb{B}(A):=[\mathbb{B}(a_{i,j})]_{m\times m}\in\operatorname{Mat}_{2nm}(F)\] and for \(c\in[F[D_{2n}]]^{m}\) we consider the block vector \[\mathbb{b}(c):=[\mathbb{b}(c_{1}),\dots,\mathbb{b}(c_{m})]\in F^{2nm},\] where \(\mathbb{b}(c_{i})\in F^{2n}\) contains the coefficients of \(c_{i}\in F[D_{2n}]\). **Lemma 3.7** (See [14], Lemma 16).: Let \(G_{A},G_{B}\subseteq D_{2n}\) be two proper normal subgroups such that \([G_{A}:N]\cdot[G_{B}:N]\cdot|N|=2n\) and \(N=G_{A}\cap G_{B}\) is abelian and normal in both \(G_{A},G_{B}\). 
Let \(A\in\operatorname{Mat}_{m_{A}}(F[N]),B\in\operatorname{Mat}_{m_{B}}(F[N])\) with \(\dim\mathscr{C}(A)=\dim\mathscr{C}(B)=0.\) Then the quasi-abelian code \(LP(A,B)\) has zero dimension, i.e. \[\dim LP(A,B)=0.\] The following theorem is a variant of [12, Theorem 5], respectively [14, Statement 12]. **Theorem 3.8**.: Let \(G_{A},G_{B}\subseteq D_{2n}\) be two proper normal subgroups such that \([G_{A}:N]\cdot[G_{B}:N]\cdot|N|=2n\) and \(N:=G_{A}\cap G_{B}\) is abelian and normal in both \(G_{A},G_{B}\). Let \(A\in\operatorname{Mat}_{m_{A}}(F[G_{A}]),B\in\operatorname{Mat}_{m_{B}}(F[G_{ B}])\) and define \[d_{0}:=\min\{d(\mathscr{C}(A)),d(\mathscr{C}(B)),d(\mathscr{C}(A^{\top})),d( \mathscr{C}(B^{\top}))\}.\] Then the minimum distance of the lifted product code \(LP(A,B)\) satisfies \[D\geq\left\lfloor\frac{d_{0}}{|N|}\right\rfloor.\] Proof.: Let \(\ell_{A}:=[G_{A}:N],\ell_{B}:=[G_{B}:N]\). Moreover we replace the elements of the matrices \(A,B\) by some square matrices \(a_{i,j}\in\operatorname{Mat}_{\ell_{A}}(F[N]),b_{i,j}\in\operatorname{Mat}_{ \ell_{B}}(F[N])\) such that \(\hat{A}\in\operatorname{Mat}_{\ell_{A}m_{A}}(F[N]),\hat{B}\in\operatorname{ Mat}_{\ell_{B}m_{B}}(F[N])\). The block matrices over \(F\) are then given by \[\mathbb{B}(A)=\mathbb{B}(\hat{A})\otimes I_{\ell_{B}},\quad\mathbb{B}(B)=I_{ \ell_{A}}\otimes\mathbb{B}(\hat{B})\] and \[\mathbb{B}(H_{X})=[(\mathbb{B}(\hat{A})\otimes I_{\ell_{B}})\otimes I_{m_{B}}, -I_{m_{A}}\otimes(I_{\ell_{A}}\otimes\mathbb{B}(\hat{B}))].\] Let \(c\in\mathscr{C}(H_{X})\) such that \(w_{H}(c)<\lfloor d_{0}/|N|\rfloor\). We define reduced matrices \[\hat{A}[I_{A}]:=(\hat{a}_{i,j})_{[m_{A}\ell_{A}]\times I_{A}},\quad\hat{B}[I_ {B}]:=(\hat{b}_{i,j})_{[m_{B}\ell_{B}]\times I_{B}},\] where \(I_{A}\subseteq[m_{A}\ell_{A}],I_{B}\subseteq[m_{B}\ell_{B}]\) label the columns of \(\hat{A},\hat{B}\) incident to nonzero elements of \(\mathbb{b}(c)\) in \(\mathbb{B}(H_{X})\mathbb{b}(c)=0\). Let \(\mathscr{I}_{A},\mathscr{I}_{B}\) be the index sets of all columns in the corresponding \(\mathbb{B}(\hat{A}),\mathbb{B}(\hat{B})\) and let \(\mathscr{I}=\mathscr{I}_{A}\times[\ell_{B}m_{B}]\bigsqcup\mathscr{I}_{B} \times[\ell_{A}m_{A}]\) be the labeling of all such columns in \(\mathbb{B}(H_{X}).\) Each element of \(F[N]\) corresponds to a block of size \(|N|\). Thus \[|\mathscr{I}_{\mu}|=|N||I_{\mu}|\leq|N|w_{H}(c),\quad\mu\in\{A,B\}.\] Hence \(\mathbb{B}(\hat{A}[I_{A}]),\mathbb{B}(\hat{B}[I_{B}])\) have at most \(d_{0}-1\) columns which implies that all columns in the parity-check matrices are linearly independent. This gives \[\dim\mathscr{C}(\hat{A}[I_{A}])=\dim\mathscr{C}(\hat{B}[I_{B}])=0.\] Considering \(LP(\hat{A}[I_{A}],\hat{B}[I_{B}])\) with \[H_{X}[\mathscr{I}] =[\hat{A}[I_{A}]\otimes I_{m_{B}},-I_{m_{A}}\otimes\hat{B}[I_{B}]]\] \[H_{Z}[\mathscr{I}] =[I_{|I_{A}|}\otimes\hat{B}^{*}[I_{B}],\hat{A}^{*}[I_{A}]\otimes I _{|I_{B}|}].\] Lemma 3.7 gives \(\dim LP(\hat{A}[I_{A}],\hat{B}[I_{B}])=0\). But this implies \(\mathscr{C}(H_{X}[\mathscr{I}])=\mathscr{C}(H_{Z}[\mathscr{I}])^{\perp}\). Clearly, the reduced vector \(c[\mathscr{I}]\) belongs to \(\mathscr{C}(H_{X}[\mathscr{I}])\) by construction. Hence \(c[\mathscr{I}]\in\mathscr{C}(H_{Z}[\mathscr{I}])^{\perp}\). Since \(c\) can be obtained from \(c[\mathscr{I}]\) by extending it with zeroes on the positions \([2m_{A}m_{B}]\setminus\mathscr{I}\) we have \(c\in\mathscr{C}(H_{Z})^{\perp}\). 
Similar arguments show that \(c\in\mathscr{C}(H_{Z})\) with \(w_{H}(c)<\lfloor d_{0}/|N|\rfloor\) belongs to \(\mathscr{C}(H_{X})^{\perp}\). ## 4 Example We consider \(\mathbb{F}_{11}[D_{2\cdot 90}]\) with the following factorization \[(x^{90}-1)=\prod_{i=1}^{6}f_{i}\prod_{i=7}^{18}f_{i}^{*}f_{i}\] where \[\begin{array}{ll}f_{1}=x+10,&f_{2}=x+1,\\ f_{3}=x^{2}+x+1,&f_{4}=x^{2}+10x+1,\\ f_{5}=x^{6}+x^{3}+1,&f_{6}=x^{6}+10x^{3}+1,\\ f_{7}=x+9,&f_{7}^{*}=x+5,\\ f_{8}=x+7,&f_{8}^{*}=x+8,\\ f_{9}=x+3,&f_{9}^{*}=x+4,\\ f_{10}=x+2,&f_{10}^{*}=x+6,\\ f_{11}=x^{2}+2x+4,&f_{11}^{*}=x^{2}+6x+3,\\ f_{12}=x^{2}+3x+9,&f_{12}^{*}=x^{2}+4x+5,\\ f_{13}=x^{2}+7x+5,&f_{13}^{*}=x^{2}+8x+9,\\ f_{14}=x^{2}+5x+3,&f_{14}^{*}=x^{2}+9x+4,\\ f_{15}=x^{6}+2x^{3}+4,&f_{15}^{*}=x^{6}+6x^{3}+3,\\ f_{16}=x^{6}+3x^{3}+9,&f_{16}^{*}=x^{6}+4x^{3}+5,\\ f_{17}=x^{6}+5x^{3}+3,&f_{17}^{*}=x^{6}+9x^{3}+4,\\ f_{18}=x^{6}+7x^{3}+5,&f_{18}^{*}=x^{6}+8x^{3}+9,\end{array}\] Note that \(r=6\) and \(s=12\). Moreover, we consider the code \(\hat{\mathscr{E}}_{\hat{g}}\) with generator polynomial \[\hat{g}=(x^{2}+x+1)(x^{6}+x^{3}+1).\] The minimum distance of \(\hat{\mathscr{E}}_{\hat{g}}\) is \(8\). We have \[\hat{g}(x^{10})=\prod_{i=1}^{20}\hat{g}_{i},\] where \[\begin{array}{ll}\hat{g}_{1}=x^{2}+x+1,&\hat{g}_{2}=x^{2}+2x+4,\\ \hat{g}_{3}=x^{2}+3x+9,&\hat{g}_{4}=x^{2}+4x+5,\\ \hat{g}_{5}=x^{2}+5x+3,&\hat{g}_{6}=x^{2}+6x+3,\\ \hat{g}_{7}=x^{2}+7x+5,&\hat{g}_{8}=x^{2}+8x+9,\\ \hat{g}_{9}=x^{2}+9x+4,&\hat{g}_{10}=x^{2}+10x+1,\\ \hat{g}_{11}=x^{6}+x^{3}+1,&\hat{g}_{12}=x^{6}+2x^{3}+4,\\ \hat{g}_{13}=x^{6}+3x^{3}+9,&\hat{g}_{14}=x^{6}+4x^{3}+5,\\ \hat{g}_{15}=x^{6}+5x^{3}+3,&\hat{g}_{16}=x^{6}+6x^{3}+3,\\ \hat{g}_{17}=x^{6}+7x^{3}+5,&\hat{g}_{18}=x^{6}+8x^{3}+9,\\ \hat{g}_{19}=x^{6}+9x^{3}+4,&\hat{g}_{20}=x^{6}+10x^{3}+1,\end{array}\] Using the notation of Theorem 3.4 we obtain \[J_{1}^{A}=\{1,2,7,8,9,10\},\;J_{2}^{A}=\emptyset.\] For the dihedral code we use the \([20,8,8]_{\mathbb{F}_{11}}\) code as obtained in [21, Section 3] and consider \(\hat{\mathscr{C}}\subseteq\mathbb{F}_{11}[D_{20}]\) with minimum distance \(8\) and decomposition \[\hat{\rho}(\hat{\mathscr{C}})=\bigoplus_{i=1}^{6}\hat{B}_{i}\] where \[\hat{B_{1}}=\mathbb{F}_{11}\oplus\mathbb{F}_{11},\hat{B_{2}}=0\oplus 0,\hat{B_{3 }}=I_{3}(1,-1),\hat{B_{4}}=M_{2}(\mathbb{F}_{11}[3]),\hat{B_{5}}=0,\hat{B_{6}}=0,\] and determine the sets \(J_{1}^{B},J_{2}^{B}\) of the corresponding induced code \(\mathscr{C}=\mathbb{F}_{11}[D_{2\cdot 90}]\Omega(\hat{\mathscr{C}}).\) Note that the decomposition of \(x^{10}-1\) is \[x^{10}-1=\hat{f}_{1}\hat{f}_{2}[\hat{f}_{3}\hat{f}_{3}^{*}\hat{f}_{4}\hat{f}_{ 4}^{*}\hat{f}_{5}\hat{f}_{5}\hat{f}_{6}\hat{f}_{6}^{*}],\] where \[\begin{array}{llll}\hat{f}_{1}=(x-1),&\hat{f}_{3}=(x-2),&\hat{f}_{4}=(x-3), &\hat{f}_{5}=(x-7),&\hat{f}_{6}=(x-9),\\ \hat{f}_{2}=(x+1),&\hat{f}_{3}^{*}=(x-6),&\hat{f}_{4}^{*}=(x-4),&\hat{f}_{5}^{ *}=(x-8),&\hat{f}_{6}^{*}=(x-5).\end{array}\] We have \[\begin{array}{llll}\hat{f}_{1}(x^{9})=(x+10)(x^{2}+x+1)(x^{6}+x^{3}+1)\\ \hat{f}_{3}(x^{9})=(x+5)(x^{2}+6x+3)(x^{6}+7x^{3}+5)\\ \hat{f}_{4}(x^{9})=(x+7)(x^{2}+4x+5)(x^{6}+9x^{3}+4).\end{array}\] Applying Theorem 3.5 we have for the induced code \(\mathscr{C}=(\mathbb{F}_{11}[D_{2\cdot 90}])\Omega(\hat{\mathscr{C}})\) that \[\rho(\mathscr{C})=\bigoplus_{i=1}^{18}B_{i},\] where \[B_{i}=\begin{cases}\mathbb{F}_{11}\oplus\mathbb{F}_{11}&\text{if $i\in\{1,3,5\}$},\\ I_{3}(1,-1)&\text{if $i\in\{7,11,18\}$},\\ \operatorname{Mat}_{2}(\mathbb{F}_{11}[3])&\text{if $i\in\{8,12,17\}$},\\ 
0&\text{else}.\end{cases}\] and hence \[J_{1}^{B}=\{1,3,5,8,12,17\},\;J_{2}^{B}=\{7,11,18\}.\] Theorem 3.3 gives \[\begin{array}{llll}I_{1}=\emptyset,\\ I_{2}=\{7\},\\ I_{3}=\{1,8\},\\ I_{4}=\{4,6,13,14,15,16\},\\ I_{5}=\{2,3,5,9,10,12,17\},\\ I_{6}=\{11,18\}.\end{array}\] We have \[\deg f_{i}=\begin{cases}1&\text{if }i\in\{1,2,7,8,9,10\},\\ 2&\text{if }i\in\{3,4,11,12,13,14\},\\ 6&\text{if }i\in\{5,6,15,16,17,18\}\end{cases}\] and since \(r=6,s=12\) and \(\theta(n)=2\) we obtain from Theorem 3.3 \[\dim_{\mathbb{F}_{11}}LP(A,B)=4(m-1)(2m-1)+12(m-1)^{2}+160m^{2}+116m(m-1)+32m(2 m-1).\] According to the online database [http://quantumcodes.info](http://quantumcodes.info) we can compare the dihedral lifted product codes in Table 1 to the quantum code \([[52,4,8]]_{\mathbb{F}_{11}}\) obtained in [11]. All these codes have minimum distance \(8\). However, our codes have a larger code rate. Hence, they perform better than this \([[52,4,8]]_{\mathbb{F}_{11}}\) code. ## 5 Acknowledgements We would like to thank Pavel Panteleev and Markus Grassl for fast correspondence and comments and Anthony Leverrier for helpful discussions and advice.
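As a computational appendix to the construction above, the defining identities of Section 2 can be checked mechanically. The sketch below is a minimal illustration of Definition 2.3, Definition 2.4, and Remark 2.5 over the abelian group algebra \(\mathbb{F}_{2}[C_{n}]\), where group-algebra elements lift to binary circulant blocks; the commutative case is chosen only to keep the code short, and the helper names `circulant` and `lift` as well as the random choice of \(A,B\) are our own, not part of the paper's construction.

```python
import numpy as np

def circulant(coeffs):
    # Multiplication by h = sum_i coeffs[i] x^i on F_2[x]/(x^n - 1):
    # column j holds the coefficients of x^j * h.
    n = len(coeffs)
    return np.stack([np.roll(coeffs, j) for j in range(n)], axis=1) % 2

def lift(M):
    # Replace each group-algebra entry (a length-n coefficient vector)
    # of an m x k array by its n x n binary circulant block.
    return np.block([[circulant(e) for e in row] for row in M]) % 2

rng = np.random.default_rng(1)
n, mA, nA, mB, nB = 7, 2, 3, 2, 3
A = rng.integers(0, 2, size=(mA, nA, n))
B = rng.integers(0, 2, size=(mB, nB, n))
LA, LB = lift(A), lift(B)

# Reciprocal (Definition 2.3): h*(k) = h(-k mod n), and circ(h*) = circ(h)^T.
h = rng.integers(0, 2, size=n)
h_star = np.roll(h[::-1], 1)
assert (circulant(h_star) == circulant(h).T).all()

# Lifted product (Definition 2.4); minus signs vanish in characteristic 2.
HX = np.hstack([np.kron(LA, np.eye(mB * n, dtype=int)),
                np.kron(np.eye(mA * n, dtype=int), LB)]) % 2
HZ = np.hstack([np.kron(np.eye(nA * n, dtype=int), LB.T),
                np.kron(LA.T, np.eye(nB * n, dtype=int))]) % 2
assert not ((HX @ HZ.T) % 2).any()   # Remark 2.5: H_X H_Z^T = 0
print("CSS orthogonality verified for", HX.shape, "and", HZ.shape)
```

The final assertion is exactly the well-definedness condition of Remark 2.5, and the transpose identity in the middle is the reason the reciprocal enters Definition 2.3: it keeps \(H_{Z}\) inside the class of matrices over the group algebra.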
2305.11042
A unified framework for information-theoretic generalization bounds
This paper presents a general methodology for deriving information-theoretic generalization bounds for learning algorithms. The main technical tool is a probabilistic decorrelation lemma based on a change of measure and a relaxation of Young's inequality in $L_{\psi_p}$ Orlicz spaces. Using the decorrelation lemma in combination with other techniques, such as symmetrization, couplings, and chaining in the space of probability measures, we obtain new upper bounds on the generalization error, both in expectation and in high probability, and recover as special cases many of the existing generalization bounds, including the ones based on mutual information, conditional mutual information, stochastic chaining, and PAC-Bayes inequalities. In addition, the Fernique-Talagrand upper bound on the expected supremum of a subgaussian process emerges as a special case.
Yifeng Chu, Maxim Raginsky
2023-05-18T15:36:20Z
http://arxiv.org/abs/2305.11042v2
# A Unified Framework for Information-Theoretic Generalization Bounds ###### Abstract This paper presents a general methodology for deriving information-theoretic generalization bounds for learning algorithms. The main technical tool is a probabilistic decorrelation lemma based on a change of measure and a relaxation of Young's inequality in \(L_{\psi_{p}}\) Orlicz spaces. Using the decorrelation lemma in combination with other techniques, such as symmetrization, couplings, and chaining in the space of probability measures, we obtain new upper bounds on the generalization error, both in expectation and in high probability, and recover as special cases many of the existing generalization bounds, including the ones based on mutual information, conditional mutual information, stochastic chaining, and PAC-Bayes inequalities. In addition, the Fernique-Talagrand upper bound on the expected supremum of a subgaussian process emerges as a special case. ## 1 Introduction The generalization error of a learning algorithm is a useful proxy for evaluating the performance of the learned model on previously unseen data. Formally, it is defined as the expected (absolute) difference between the population risk and the empirical risk of the hypothesis returned by the algorithm. One of the classical methods for estimating the generalization error is via uniform convergence of various empirical processes indexed by the hypothesis class [1, 2]. For example, in the analysis of Empirical Risk Minimization, one can estimate the expected generalization error via Rademacher averages, which can be bounded from above using chaining techniques [3]. However, the bounds based on uniform convergence are often too pessimistic and may even become vacuous when the hypothesis space is extremely large, a typical situation with deep neural net models. For this reason, it is preferable to obtain algorithm-dependent generalization bounds that take into account the joint distribution of the training samples and of the output hypothesis. In this context, one capitalizes on the intuition that the generalization ability of a learning algorithm should be related to the amount of information the output hypothesis reveals about the training data. This idea, which has origins in the work on PAC-Bayes methods [4, 5], is the basis of the growing literature on information-theoretic generalization bounds, first proposed in [6] and further developed in [7, 8, 9, 10, 11, 12, 13, 14] and many other works. In fact, it is possible to effectively combine the information-theoretic approach with the classical framework based on various measures of complexity of the hypothesis class: One can use chaining techniques to successively approximate the hypothesis class by simpler model classes, which can then be analyzed using information-theoretic tools. This methodology, again originating in the PAC-Bayes literature [15], has been developed recently in [16, 17, 18]. Our goal in this work is to develop these ideas further by giving a unified framework for information-theoretic generalization bounds, from which many of the existing results emerge as special cases. ### The main idea, informally The main idea behind our framework is surprisingly simple. We first give an abstract description and then show how it can be particularized to various settings of interest. Let \((X_{t})_{t\in T}\) be a centered (zero-mean) stochastic process defined on a probability space \((\Omega,\mathcal{A},\mathbb{P})\) and indexed by the elements of some set \(T\). 
Let \(Q\) be a _Markov kernel_ from \(\Omega\) to \(T\), i.e., a measurable mapping taking each \(\omega\in\Omega\) to a probability measure \(Q(\cdot|\omega)\) on \(T\). Together, \(\mathbb{P}\) and \(Q\) define a probability measure \(\mathbb{P}\otimes Q\) on the product space \(\Omega\times T\). The mathematical object we would like to study is the expected value \[\langle\mathbb{P}\otimes Q,X\rangle:=\int_{\Omega\times T}X_{t}(\omega)\,Q(\mathrm{d}t|\omega)\,\mathbb{P}(\mathrm{d}\omega).\] For example, assuming that there exists a measurable map \(\tau^{*}:\Omega\to T\), such that \[X_{\tau^{*}(\omega)}(\omega)=\sup_{t\in T}X_{t}(\omega),\qquad\mathbb{P}-\mathrm{a.s.} \tag{1}\] we can take \(Q(A|\omega):=\mathbf{1}_{\{\tau^{*}(\omega)\in A\}}\) for all measurable subsets \(A\) of \(T\). Then \[\langle\mathbb{P}\otimes Q,X\rangle=\mathbf{E}\Big{[}\sup_{t\in T}X_{t}\Big{]}\] is the expected supremum of \(X_{t}\), the central object of study in the theory of generic chaining, where \((T,d)\) is a metric space and increments \(X_{u}-X_{v}\) are "stochastically small" relative to \(d(u,v)\). Alternatively, consider a statistical learning problem with instance space \(\mathcal{Z}\), hypothesis space \(\mathcal{W}\), and loss function \(\ell:\mathcal{W}\times\mathcal{Z}\to\mathbb{R}_{+}\). Let \(P_{Z}\) be the (unknown) probability law of the problem instances in \(\mathcal{Z}\). Then we could take \(\Omega=\mathcal{Z}^{n}\), \(\mathbb{P}=P_{Z}^{\otimes n}\), \(T=\mathcal{W}\), and \[X_{w}=\frac{1}{n}\sum_{i=1}^{n}\big{(}L(w)-\ell(w,Z_{i})\big{)},\] where \(L(w):=\mathbf{E}_{Z\sim P_{Z}}[\ell(w,Z)]\) is the _population risk_ of \(w\). Let \(Q\) be a (randomized) learning algorithm that associates to each sample \(S=(Z_{1},\ldots,Z_{n})\sim\mathbb{P}\) a probability measure \(Q(\cdot|S)\) on the hypothesis space \(\mathcal{W}\). Then \[\langle\mathbb{P}\otimes Q,X\rangle=\mathbf{E}\Big{[}\frac{1}{n}\sum_{i=1}^{n}\big{(}L(W)-\ell(W,Z_{i})\big{)}\Big{]}\] is the expected generalization error of \(Q\). In either case, we can proceed to analyze \(\langle\mathbb{P}\otimes Q,X\rangle\) via a combination of the following two steps: * **Decorrelation** -- We can remove the correlations encoded in \(\mathbb{P}\otimes Q\) by choosing a convenient product measure \(\mathbb{P}\otimes\mu\) on \(\Omega\times T\), so that (roughly) \[\langle\mathbb{P}\otimes Q,X\rangle\lesssim\sqrt{D(\mathbb{P}\otimes Q\|\mathbb{P}\otimes\mu)}+\mathrm{Error}\] provided the process \((X_{t})_{t\in T}\) is regular enough for the error term to be small. Here, we use the relative entropy (or information divergence) \(D(\cdot\|\cdot)\) to illustrate the key idea with a minimum of detail; the precise description is given in Section 3. * **Chaining in the space of measures** -- Since the process \((X_{t})_{t\in T}\) is centered and \(\mathbb{P}\otimes\mu\) is a product measure, we automatically have \(\langle\mathbb{P}\otimes\mu,X\rangle=0\) even though \(\langle\mathbb{P}\otimes Q,X\rangle\neq 0\). We can therefore interpolate between \(\mathbb{P}\otimes Q\) and \(\mathbb{P}\otimes\mu\) along a (possibly infinite) sequence \(Q_{0},Q_{1},\ldots,Q_{K}\) of Markov kernels, such that \(\mathbb{P}\otimes Q_{K}=\mathbb{P}\otimes Q\), \(\mathbb{P}\otimes Q_{0}=\mathbb{P}\otimes\mu\), and the differences \(\langle\mathbb{P}\otimes Q_{k},X\rangle-\langle\mathbb{P}\otimes Q_{k-1},X\rangle\) are suitably small. 
Telescoping, we get \[\langle\mathbb{P}\otimes Q,X\rangle=\sum_{k=1}^{K}\big{(}\langle\mathbb{P}\otimes Q_{k},X\rangle-\langle\mathbb{P}\otimes Q_{k-1},X\rangle\big{)}.\] For each \(k\), we then apply the decorrelation procedure to the _increment process_ \((X_{u}-X_{v})_{u,v\in T}\), with \(\mathbb{P}\) as before and with a suitably chosen family of couplings of \(Q_{k}(\cdot|\omega)\) and \(Q_{k-1}(\cdot|\omega)\). This step can be combined effectively with other techniques, such as symmetrization. ## 2 Preliminaries Basic definitions. All measurable spaces in this paper are assumed to be standard Borel spaces. The set of all Borel probability measures on a space \(\mathcal{X}\) will be denoted by \(\mathcal{P}(\mathcal{X})\). A _Markov kernel_ from \((\mathcal{X},\mathcal{A})\) to \((\mathcal{Y},\mathcal{B})\) is a mapping \(P_{Y|X}:\mathcal{B}\times\mathcal{X}\to[0,1]\), such that \(P_{Y|X=x}(\cdot):=P_{Y|X}(\cdot|x)\) is an element of \(\mathcal{P}(\mathcal{Y})\) for every \(x\in\mathcal{X}\) and the map \(x\mapsto P_{Y|X=x}(B)\) is measurable for every \(B\in\mathcal{B}\). The set of all such Markov kernels will be denoted by \(\mathcal{M}(\mathcal{Y}|\mathcal{X})\). The product of \(P_{X}\in\mathcal{P}(\mathcal{X})\) and \(P_{Y|X}\in\mathcal{M}(\mathcal{Y}|\mathcal{X})\) is the probability measure \(P_{X}\otimes P_{Y|X}\in\mathcal{P}(\mathcal{X}\times\mathcal{Y})\) defined on product sets \(A\times B\subseteq\mathcal{X}\times\mathcal{Y}\) by \((P_{X}\otimes P_{Y|X})(A\times B):=\int_{A}P_{Y|X=x}(B)P_{X}(\mathrm{d}x)\) and then extended to all Borel subsets of \(\mathcal{X}\times\mathcal{Y}\) by countable additivity. This defines a joint probability law for a random element \((X,Y)\) of \(\mathcal{X}\times\mathcal{Y}\), so that \(P_{X}\) is the marginal law of \(X\), \(P_{Y|X}\) is the conditional law of \(Y\) given \(X\), and \(P_{Y}(\cdot)=\int_{\mathcal{X}}P_{Y|X=x}(\cdot)P_{X}(\mathrm{d}x)\) is the marginal law of \(Y\). The product measure \(P_{X}\otimes P_{Y}\), under which \(X\) and \(Y\) are independent, is a special case of this if we interpret \(P_{Y}\) as a trivial Markov kernel with \(P_{Y|X=x}=P_{Y}\) for all \(x\). A _coupling_ of \(\mu\in\mathcal{P}(\mathcal{X})\) and \(\nu\in\mathcal{P}(\mathcal{Y})\) is a probability measure \(P\in\mathcal{P}(\mathcal{X}\times\mathcal{Y})\), such that \(P(\cdot\times\mathcal{Y})=\mu(\cdot)\) and \(P(\mathcal{X}\times\cdot)=\nu(\cdot)\). We will denote the set of all couplings of \(\mu\) and \(\nu\) by \(\Pi(\mu,\nu)\). \(L^{p}\) and \(L_{\psi_{p}}\) spaces. The \(L^{p}(\mu)\) norms of \(f:\mathcal{X}\to\mathbb{R}\), for \(\mu\in\mathcal{P}(\mathcal{X})\) and \(p\geq 1\), are defined as \[\|f\|_{L^{p}(\mu)}:=\Big{(}\int_{\mathcal{X}}|f|^{p}\,\mathrm{d}\mu\Big{)}^{1/p}\] whenever the expectation on the right-hand side exists. We will often use the linear functional notation for expectations, i.e., \(\langle\mu,f\rangle=\int_{\mathcal{X}}f\,\mathrm{d}\mu\). For \(p\geq 1\), define the function \(\psi_{p}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) by \(\psi_{p}(x):=\exp(x^{p})-1\). Its inverse is given by \(\psi_{p}^{-1}(x)=\big{(}\log(x+1)\big{)}^{1/p}\), where \(\log\) will always denote natural logarithms. Some useful properties of these two functions are collected in Appendix A of the Supplementary Material. The function \(\psi_{p}\) arises in the context of controlling the tail behavior of random variables (see [19, 20, 1] for details). 
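Since \(\psi_{p}\) and \(\psi_{p}^{-1}\) drive everything that follows, a small numerical sketch may help; the standard-normal example and the constant \(c=\sqrt{6}\) below are our own illustrative choices (numpy assumed):

```python
import numpy as np

def psi(p, x):       # psi_p(x) = exp(x^p) - 1
    return np.expm1(x ** p)

def psi_inv(p, x):   # psi_p^{-1}(x) = (log(1 + x))^{1/p}
    return np.log1p(x) ** (1.0 / p)

assert np.isclose(psi_inv(2, psi(2, 1.3)), 1.3)

rng = np.random.default_rng(0)
X = rng.standard_normal(1_000_000)
c = np.sqrt(6.0)

# For X ~ N(0,1): E[exp(X^2/c^2)] = (1 - 2/c^2)^{-1/2}, so with c = sqrt(6)
# E[psi_2(|X|/c)] = sqrt(3/2) - 1 ~ 0.22 <= 1.
print(psi(2, np.abs(X) / c).mean())

# Such a moment bound gives subgaussian tails via Markov's inequality:
# P[|X| >= u] = P[psi_2(|X|/c) >= psi_2(u/c)] <= 1 / psi_2(u/c).
for u in (2.0, 3.0, 4.0):
    print(u, (np.abs(X) >= u).mean(), 1.0 / psi(2, u / c))
```

The moment bound \(\mathbf{E}[\psi_{2}(|X|/c)]\leq 1\) converting into a tail bound through Markov's inequality is the mechanism behind the tail estimates in Section 7.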
The _Orlicz \(\psi_{p}\)-norm_ of a real-valued random variable \(X\) is defined as \[\|X\|_{\psi_{p}}:=\inf\Big{\{}c>0:\mathbf{E}\Big{[}\psi_{p}\Big{(}\frac{|X|}{c}\Big{)}\Big{]}\leq 1\Big{\}},\] and the tails of \(X\) satisfy \(\mathbf{P}[|X|\geq u]\leq Ke^{-Cu^{p}}\) for all \(u\geq 0\) if and only if \(\|X\|_{\psi_{p}}<\infty\). The Orlicz space \(L_{\psi_{p}}\) is the space of all random variables \(X\) with \(\|X\|_{\psi_{p}}<\infty\). In particular, if \(X\) is \(\sigma\)-subgaussian, i.e., \(\mathbf{P}[|X|\geq u]\leq 2e^{-u^{2}/2\sigma^{2}}\) for all \(u\geq 0\), then \(\|X\|_{\psi_{2}}\leq\sqrt{6}\sigma\); conversely, every \(X\in L_{\psi_{2}}\) is \(\sigma\)-subgaussian with \(\sigma\leq c\|X\|_{\psi_{2}}\) for some absolute constant \(c>0\). Information-theoretic quantities. The relative entropy (or information divergence) \(D(\mu\|\nu)\) between two probability measures \(\mu,\nu\) on the same space \(\mathcal{X}\) is defined as \[D(\mu\|\nu):=\Big{\langle}\mu,\log\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\Big{\rangle}\] if \(\mu\ll\nu\) (i.e., \(\mu\) is absolutely continuous w.r.t. \(\nu\)), and \(D(\mu\|\nu):=+\infty\) otherwise. The following inequality will be useful (proofs of all results are in Appendix B of the Supplementary Material): **Proposition 1**.: _If \(\mu\ll\nu\), then for any \(p\geq 1\)_ \[\Big{\langle}\mu,\psi_{p}^{-1}\Big{(}\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\Big{)}\Big{\rangle}\leq\big{(}D(\mu\|\nu)+1\big{)}^{1/p}.\] Given \(P_{U}\in\mathcal{P}(\mathcal{U})\) and \(P_{V|U},Q_{V|U}\in\mathcal{M}(\mathcal{V}|\mathcal{U})\), we define the _conditional divergence_ \[D(P_{V|U}\|Q_{V|U}|P_{U}):=D(P_{U}\otimes P_{V|U}\|P_{U}\otimes Q_{V|U}).\] The mutual information \(I(X;Y):=D(P_{Y|X}\|P_{Y}|P_{X})\) and conditional mutual information \(I(X;Y|Z):=D(P_{Y|XZ}\|P_{Y|Z}|P_{XZ})\) are special cases of the above definition, and the identities \[D(P_{Y|X}\|Q_{Y}|P_{X}) =I(X;Y)+D(P_{Y}\|Q_{Y}) \tag{2}\] \[D(P_{Y|XZ}\|Q_{Y|Z}|P_{XZ}) =I(X;Y|Z)+D(P_{Y|Z}\|Q_{Y|Z}|P_{Z}) \tag{3}\] hold whenever all the quantities are finite. ## 3 The decorrelation lemma All of our subsequent developments make use of the following _decorrelation lemma_: **Lemma 1**.: _Let \(\mu,\nu\) be two probability measures on a space \(\mathcal{X}\) such that \(\mu\ll\nu\), and let \(f,g:\mathcal{X}\to\mathbb{R}_{+}\) be two nonnegative measurable functions. Then the following inequalities hold:_ \[\langle\mu,fg\rangle\leq 2^{1/p}\Big{\langle}\mu,f\psi_{p}^{-1}\Big{(}\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\Big{)}\Big{\rangle}+\langle\nu,f\psi_{p}(g)\rangle \tag{4}\] _and_ \[\langle\mu,fg\rangle\leq 2^{1/p}\|f\|_{L^{2}(\nu)}+4^{1/p}\Big{\langle}\mu,f\psi_{p}^{-1}\Big{(}\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\Big{)}\Big{\rangle}+4^{1/p}\|f\|_{L^{1}(\mu)}\big{(}\log\langle\nu,\exp(g^{p})\rangle\big{)}^{1/p}. \tag{5}\] The proof makes extensive use of various properties of \(\psi_{p}\) and \(\psi_{p}^{-1}\). In particular, Eq. (4) is a relaxation of the Young-type inequality \(xy\leq\psi_{p}^{*}(x)+\psi_{p}(y)\), where \(\psi_{p}^{*}(x):=\sup_{y\geq 0}(xy-\psi_{p}(y))\) is the (one-sided) Legendre-Fenchel conjugate of \(\psi_{p}\). Every use of Lemma 1 in the sequel will be an instance of the following scheme: Let \(P_{X}\in\mathcal{P}(\mathcal{X})\), \(Q_{Y}\in\mathcal{P}(\mathcal{Y})\), and \(P_{Y|X}\in\mathcal{M}(\mathcal{Y}|\mathcal{X})\) be given, such that \(P_{Y|X=x}\ll Q_{Y}\) for all \(x\in\mathcal{X}\). 
Let \((X,Y,\bar{Y})\) be a random element of \(\mathcal{X}\times\mathcal{Y}\times\mathcal{Y}\) with joint law \(P_{X}\otimes P_{Y|X}\otimes Q_{Y}\); in particular, \(\bar{Y}\) is independent of \((X,Y)\). Furthermore, let \(f:\mathcal{Y}\to\mathbb{R}_{+}\) and \(g:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}_{+}\) be given, such that \(\mathbf{E}[\psi_{p}(g(X,y))]\leq 1\) for all \(y\in\mathcal{Y}\). Then, applying Lemma 1 conditionally on \(X=x\) with \(\mu=P_{Y|X=x}\), \(\nu=Q_{Y}\), \(f\), and \(g(x,\cdot)\), and then taking expectations w.r.t. \(P_{X}\), we obtain \[\mathbf{E}[f(Y)g(X,Y)]\leq 2^{1/p}\mathbf{E}\Bigg{[}f(Y)\psi_{p}^{-1}\Bigg{(}\frac{\mathrm{d}P_{Y|X}}{\mathrm{d}Q_{Y}}(Y)\Bigg{)}\Bigg{]}+\mathbf{E}[f(\bar{Y})].\] The quantity on the right-hand side can be further upper-bounded in terms of the information divergences \(D(P_{Y|X}\|Q_{Y})\) using Proposition 1. ## 4 Some estimates for the absolute generalization error We adopt the usual set-up for the analysis of (possibly randomized) learning algorithms and their generalization error. Let an instance space \(\mathcal{Z}\), a hypothesis space \(\mathcal{W}\), and a nonnegative loss function \(\ell:\mathcal{W}\times\mathcal{Z}\to\mathbb{R}_{+}\) be given. A _learning algorithm_ is a Markov kernel \(P_{W|S}\) from the product space \(\mathcal{Z}^{n}\) into \(\mathcal{W}\), which takes as input an \(n\)-tuple \(S=(Z_{1},\ldots,Z_{n})\) of i.i.d. random elements of \(\mathcal{Z}\) with unknown marginal probability law \(P_{Z}\) and generates a random element \(W\) of \(\mathcal{W}\). We define the _empirical risk_ and the _expected_ (or _population_) _risk_ of each \(w\in\mathcal{W}\) by \[L_{n}(w):=\langle P_{n},\ell(w,\cdot)\rangle=\frac{1}{n}\sum_{i=1}^{n}\ell(w,Z_{i}),\qquad L(w):=\langle P_{Z},\ell(w,\cdot)\rangle=\mathbf{E}[\ell(w,Z)]\] where \(P_{n}\) is the empirical distribution of \(S\), and the _pointwise generalization error_ by \[\mathrm{gen}(w,S):=L(w)-L_{n}(w).\] It will also be convenient to introduce an auxiliary \(n\)-tuple \(S^{\prime}=(Z^{\prime}_{1},\ldots,Z^{\prime}_{n})\sim P_{Z}^{\otimes n}\), which is independent of \((S,W)\sim P_{Z}^{\otimes n}\otimes P_{W|S}\). We will use \(\tilde{S}\) to denote the pair \((S^{\prime},S)\) and write \(L^{\prime}_{n}(w)\) for the empirical risk of \(w\) on \(S^{\prime}\). As a first illustration of our general approach, we show that it can be used to recover some existing results on the generalization error, including the bounds of Xu and Raginsky [7] involving the mutual information and the bounds of Steinke and Zakynthinou [10] involving the conditional mutual information. We start with the following estimate on the expected value of \(|\mathrm{gen}(W,S)|\): **Theorem 1**.: _Assume the random variables \(\ell(w,Z)\), \(w\in\mathcal{W}\), are \(\sigma\)-subgaussian when \(Z\sim P_{Z}\). Let a learning algorithm \(P_{W|S}\) be given. Then, for any \(Q_{W}\in\mathcal{P}(\mathcal{W})\),_ \[\mathbf{E}[|\mathrm{gen}(W,S)|]\leq\sqrt{\frac{12\sigma^{2}}{n}}\Bigg{(}\mathbf{E}\Bigg{[}\psi_{2}^{-1}\Bigg{(}\frac{\mathrm{d}P_{W|S}}{\mathrm{d}Q_{W}}\Bigg{)}\Bigg{]}+1\Bigg{)}, \tag{6}\] _where the expectation on both sides is w.r.t. \(P_{S}\otimes P_{W|S}=P_{Z}^{\otimes n}\otimes P_{W|S}\)._ The key step in the proof is to apply the decorrelation lemma, conditionally on \(S\), to \(\mu=P_{W|S}\), \(\nu=Q_{W}\), \(f(w)=\sigma\sqrt{6/n}\), and \(g(w)=\frac{|\mathrm{gen}(w,S)|}{\sigma\sqrt{6/n}}\). The same subgaussianity assumption was also made by Xu and Raginsky [7]. 
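To see Theorem 1 in action, here is a small Monte Carlo sketch. The setup is our own toy example, not one from [7]: Gaussian data, a clipped squared loss bounded in \([0,1]\) (hence \(\sigma=1/2\)-subgaussian by Hoeffding's lemma), and a Gaussian-noise output \(W=\bar{S}+\tau\xi\), chosen so that with \(Q_{W}\) the exact Gaussian marginal of \(W\) the density ratio in (6) is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, sigma, trials = 50, 0.3, 0.5, 5_000

def loss(w, z):                       # bounded in [0,1] => (1/2)-subgaussian
    return np.minimum((w - z) ** 2, 1.0)

def normal_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

Z = rng.standard_normal((trials, n))                     # S, with P_Z = N(0,1)
W = Z.mean(axis=1) + tau * rng.standard_normal(trials)   # noisy mean estimate
Zfresh = rng.standard_normal((trials, 1000))             # to estimate L(W)

gen = loss(W[:, None], Zfresh).mean(axis=1) - loss(W[:, None], Z).mean(axis=1)
lhs = np.abs(gen).mean()

# Q_W = exact marginal of W, i.e. N(0, 1/n + tau^2), so dP_{W|S}/dQ_W is an
# explicit ratio of Gaussian densities; psi_2^{-1}(x) = sqrt(log(1 + x)).
ratio = normal_pdf(W, Z.mean(axis=1), tau ** 2) / normal_pdf(W, 0.0, 1 / n + tau ** 2)
rhs = np.sqrt(12 * sigma ** 2 / n) * (np.sqrt(np.log1p(ratio)).mean() + 1)

print(f"E|gen| ~ {lhs:.4f}  <=  right-hand side of (6) ~ {rhs:.4f}")
```

On this example the bound holds with plenty of room, which is expected: (6) is a general-purpose inequality and the toy problem is easy.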
Minimizing the right-hand side of (6) over \(Q_{W}\), we recover their generalization bound up to a multiplicative constant and an extra \(O(1/\sqrt{n})\) term (which is unavoidable since we are bounding the expected _absolute_ generalization error): **Corollary 1**.: _Under the assumptions of Theorem 1,_ \[\mathbf{E}[|\mathrm{gen}(W,S)|]\leq\sqrt{\frac{24\sigma^{2}}{n}\big{(}I(W;S)+4 \big{)}}. \tag{7}\] A notable shortcoming of Theorem 1 and Corollary 1 is that they yield vacuous bounds whenever the mutual information \(I(W;S)\) is infinite, which will be the case, e.g., when the marginal probability laws \(P_{Z}\) and \(P_{W}\) are nonatomic (i.e., assign zero mass to singletons) and the learning algorithm is deterministic. To remove this drawback, we will use an elegant auxiliary randomization device introduced by Steinke and Zakynthinou [10]. Let \(\varepsilon=(\varepsilon_{1},\ldots,\varepsilon_{n})\) be an \(n\)-tuple of i.i.d. Rademacher random variables, i.e., \(\mathbf{P}[\varepsilon_{i}=\pm 1]=1/2\), independent of \(\tilde{S}\). For each \(i\) let \(\tilde{Z}_{i}^{1}:=Z_{i}\) and \(\tilde{Z}_{i}^{-1}:=Z_{i}^{\prime}\) and let \(\bar{P}=\bar{P}_{\tilde{S}\varepsilon W}\) be the joint probability law of \((\tilde{S},\varepsilon,W)\), such that \(\bar{P}_{\tilde{S}\varepsilon}=P_{\tilde{S}}\otimes P_{\varepsilon}\) and \(\bar{P}_{W|\tilde{S}\varepsilon}:=P_{W|\tilde{S}^{\varepsilon}}\) where \(S^{\varepsilon}:=(\tilde{Z}_{1}^{\varepsilon_{1}},\ldots,\tilde{Z}_{n}^{ \varepsilon_{n}})\). In other words, under \(\bar{P}\), \(\tilde{S}\) and \(\varepsilon\) are independent and have their respective marginal distributions, while \(W\) is generated by feeding the learning algorithm \(P_{W|S}\) with the tuple \(\tilde{S}^{\varepsilon}\). Consequently, \(W\) is independent of \(\tilde{S}^{-\varepsilon}=(\tilde{Z}_{1}^{-\varepsilon_{1}},\ldots,\tilde{Z}_{ n}^{-\varepsilon_{n}})\). Then, letting \(P\) be the joint law of \((\tilde{S},W)\), we have \[\mathbf{E}_{P}[|\mathrm{gen}(W,S)|] =\mathbf{E}_{P}\big{|}\mathbf{E}_{P}[L_{n}^{\prime}(W)-L_{n}(W)|S,W]\big{|}\] \[\leq\mathbf{E}_{P}|L_{n}^{\prime}(W)-L_{n}(W)|\] \[=\mathbf{E}_{\bar{P}}\Big{|}\frac{1}{n}\sum_{i=1}^{n}\Big{(}\ell (W,\tilde{Z}_{i}^{-\varepsilon_{i}})-\ell(W,\tilde{Z}_{i}^{\varepsilon_{i}}) \Big{)}\Big{|}\] \[=\mathbf{E}_{\bar{P}}\Big{|}\frac{1}{n}\sum_{i=1}^{n}\varepsilon _{i}\Big{(}\ell(W,Z_{i}^{\prime})-\ell(W,Z_{i})\Big{)}\Big{|}.\] Thus, all the analysis can be carried out w.r.t. \(\bar{P}\), as in the following: **Theorem 2**.: _Assume there exists a function \(\Delta:\mathcal{Z}\times\mathcal{Z}\to\mathbb{R}_{+}\), such that \(|\ell(w,z)-\ell(w,z^{\prime})|\leq\Delta(z,z^{\prime})\) for all \(w\in\mathcal{W}\) and \(z,z^{\prime}\in\mathcal{Z}\). Then for any Markov kernel \(Q_{W|\tilde{S}}\) with access to \(\tilde{S}\) but not to \(\varepsilon\) we have_ \[\mathbf{E}_{P}[|\mathrm{gen}(W,S)|]\leq\frac{\sqrt{12}}{n}\mathbf{E}_{\bar{P}} \Bigg{[}\|\Delta(\tilde{S})\|_{\ell^{2}}\Bigg{(}\psi_{2}^{-1}\Bigg{(}\frac{ \mathrm{d}\bar{P}_{W|\tilde{S}\varepsilon}}{\mathrm{d}Q_{W|\tilde{S}}}\Bigg{)} +1\Bigg{)}\Bigg{]}, \tag{8}\] _where \(\|\Delta(\tilde{s})\|_{\ell^{2}}:=\big{(}\sum_{i=1}^{n}\Delta(z_{i},z_{i}^{ \prime})^{2}\big{)}^{1/2}\)._ The same assumption on \(\ell\) was also made in [10]. 
Optimizing over \(Q_{W|\tilde{S}}\), we can recover their Theorem 5.1 (again, up to a multiplicative constant and an \(O(1/\sqrt{n})\) fluctuation term): **Corollary 2**.: _Under the assumptions of Theorem 2,_ \[\mathbf{E}_{P}[|\mathrm{gen}(W,S)|]\leq\sqrt{\frac{24}{n}\mathbf{E}[\Delta^{2}(Z,Z^{\prime})]\big{(}I(W;\varepsilon|\tilde{S})+4\big{)}}, \tag{9}\] _where \(Z\) and \(Z^{\prime}\) are independent samples from \(P_{Z}\) and where the conditional mutual information is computed w.r.t. \(\bar{P}\)._ The main advantage of using conditional mutual information is that it never exceeds \(n\log 2\) (of course, the bound is only useful if \(I(W;\varepsilon|\tilde{S})=o(n)\)). ## 5 Estimates using couplings We now turn to the analysis of \(\mathbf{E}[\mathrm{gen}(W,S)]\) using couplings. The starting point is the following observation: With \((S^{\prime},S,W)\) constructed as before, consider the quantities \[\tilde{L}_{n}(w):=L^{\prime}_{n}(w)-L_{n}(w)\equiv\frac{1}{n}\sum_{i=1}^{n}\big{(}\ell(w,Z^{\prime}_{i})-\ell(w,Z_{i})\big{)}.\] Then, using the fact that \(\langle P_{\tilde{S}}\otimes Q_{W},\tilde{L}_{n}\rangle=0\) for any \(Q_{W}\in\mathcal{P}(\mathcal{W})\), we have \[\mathbf{E}[\mathrm{gen}(W,S)] =\langle P_{\tilde{S}}\otimes P_{W|S},\tilde{L}_{n}\rangle-\langle P_{\tilde{S}}\otimes Q_{W},\tilde{L}_{n}\rangle\] \[=\int_{\mathcal{Z}^{n}\times\mathcal{Z}^{n}}P_{\tilde{S}}(\mathrm{d}\tilde{s})\big{(}\langle P_{W|S=s},\tilde{L}_{n}\rangle-\langle Q_{W},\tilde{L}_{n}\rangle\big{)}. \tag{10}\] This suggests the idea of introducing, for each \(s\in\mathcal{Z}^{n}\), a coupling of \(P_{W|S=s}\) and \(Q_{W}\), i.e., a probability law \(P_{UV|S=s}\) for a random element \((U,V)\) of \(\mathcal{W}\times\mathcal{W}\) with marginals \(P_{U}=P_{W|S=s}\) and \(P_{V}=Q_{W}\). We then have the following: **Theorem 3**.: _For \(u,v\in\mathcal{W}\) and \(\tilde{s}=(s,s^{\prime})\in\mathcal{Z}^{n}\times\mathcal{Z}^{n}\), define_ \[\sigma^{2}(u,v,\tilde{s}):=\sum_{i=1}^{n}\left(\big{(}\ell(u,z^{\prime}_{i})-\ell(v,z^{\prime}_{i})\big{)}-\big{(}\ell(u,z_{i})-\ell(v,z_{i})\big{)}\right)^{2}. \tag{11}\] _Then, for any \(Q_{W}\in\mathcal{P}(\mathcal{W})\), any family of couplings \(P_{UV|S=s}\in\Pi(P_{W|S=s},Q_{W})\) depending measurably on \(s\in\mathcal{Z}^{n}\), and any \(\mu_{UV}\in\mathcal{P}(\mathcal{W}\times\mathcal{W})\),_ \[\mathbf{E}[\mathrm{gen}(W,S)]\leq\frac{\sqrt{24}}{n}\mathbf{E}\Bigg{[}\sigma(U,V,\tilde{S})\psi_{2}^{-1}\Bigg{(}\frac{\mathrm{d}P_{UV|S}}{\mathrm{d}\mu_{UV}}\Bigg{)}+\sqrt{\mathbf{E}[\sigma^{2}(\bar{U},\bar{V},\tilde{S})|\tilde{S}]}\Bigg{]}, \tag{12}\] _where the expectation on the right-hand side is w.r.t. the joint law of \((U,V,\bar{U},\bar{V},\tilde{S})\), under which \((S,U,V)\) are distributed according to \(P_{S}\otimes P_{UV|S}\), \((\bar{U},\bar{V})\) are distributed according to \(\mu_{UV}\) independently of \((U,V,S)\), and \(S^{\prime}\) is distributed according to \(P_{S}\) independently of everything else._ The proof makes essential use of symmetrization using an auxiliary \(n\)-tuple \(\varepsilon\) of i.i.d. Rademacher random variables, which allows us to apply Lemma 1 conditionally on \(\tilde{S}\). The coupling-based formulation looks rather complicated compared to the setting of Section 4. However, being able to choose not just the "prior" \(Q_{W}\), but also the couplings \(P_{UV|S}\) of \(P_{W|S}\) and \(Q_{W}\) and the reference measure \(\mu_{UV}\), allows us to overcome some of the shortcomings of the set-up of Section 4. 
Consider, for example, the case when the learning algorithm ignores the data, i.e., \(P_{W|S}=P_{W}\). Then we can choose \(Q_{W}=P_{W}\), \(P_{UV|S}(\mathrm{d}u,\mathrm{d}v)=P_{W}(\mathrm{d}u)\otimes\delta_{u}(\mathrm{d}v)\), where \(\delta_{u}\) is the Dirac measure concentrated on the point \(u\), and \(\mu_{UV}=P_{UV}\) (since the latter does not depend on \(S\)). With these choices, \(U=V\) and \(\bar{U}=\bar{V}\) almost surely, so the right-hand side of (12) is identically zero. By contrast, the bounds of Theorems 1 and 2 always include an additional \(O(1/\sqrt{n})\) term even when \(W\) and \(\tilde{S}\) are independent. Moreover, Theorem 3 can be used to recover the bounds of Theorems 1 and 2 up to multiplicative constants. For example, to recover Theorem 1, we apply Theorem 3 with \(P_{UV|S}=P_{W|S}\otimes Q_{W}\), \(\mu_{UV}=Q_{W}\otimes Q_{W}\), and with an estimate on \(\sigma(U,V,\tilde{S})\) based on the subgaussianity of \(\ell(w,Z)\). For a more manageable bound that will be useful later, let us define the following for \(u,v\in\mathcal{W}\): \[d_{S,\ell}(u,v):=\bigg{(}\frac{1}{n}\sum_{i=1}^{n}\big{(}\ell(u,Z_{i})-\ell(v,Z_{i})\big{)}^{2}\bigg{)}^{1/2}\equiv\|\ell(u,\cdot)-\ell(v,\cdot)\|_{L^{2}(P_{n})}\] \[d_{\ell}(u,v):=\bigg{(}\mathbf{E}\big{[}\big{(}\ell(u,Z)-\ell(v,Z)\big{)}^{2}\big{]}\bigg{)}^{1/2}\equiv\|\ell(u,\cdot)-\ell(v,\cdot)\|_{L^{2}(P_{Z})}.\] **Corollary 3**.: _Under the assumptions of Theorem 3,_ \[\mathbf{E}[\mathrm{gen}(W,S)]\leq\sqrt{\frac{48}{n}}\mathbf{E}\bigg{[}\big{(}d_{\ell}(U,V)+d_{S,\ell}(U,V)\big{)}\psi_{2}^{-1}\bigg{(}\frac{\mathrm{d}P_{UV|S}}{\mathrm{d}\mu_{UV}}\bigg{)}+d_{\ell}(\bar{U},\bar{V})\bigg{]}.\] ## 6 Refined estimates via chaining in the space of measures We now combine the use of couplings as in Section 5 with a chaining argument. The basic idea is as follows: Instead of coupling \(P_{W|S}\) with \(Q_{W}\) directly, we interpolate between them using a (possibly infinite) sequence of Markov kernels \(P^{0}_{W|S},P^{1}_{W|S},\ldots,P^{K}_{W|S}\), such that \(P^{0}_{W|S}=Q_{W}\) and \(P^{K}_{W|S}=P_{W|S}\) (or \(\lim_{k\to\infty}P^{k}_{W|S}=P_{W|S}\) in an appropriate sense, e.g., weakly for each \(S\), if the sequence is infinite). Given any such sequence, we telescope the terms in (10) as follows: \[\mathbf{E}[\mathrm{gen}(W,S)]=\int_{\mathcal{Z}^{n}\times\mathcal{Z}^{n}}P_{\tilde{S}}(\mathrm{d}\tilde{s})\sum_{k=1}^{K}\Big{(}\langle P^{k}_{W|S=s},\tilde{L}_{n}\rangle-\langle P^{k-1}_{W|S=s},\tilde{L}_{n}\rangle\Big{)}.\] For each \(k\), we can now choose a family of random couplings \(P_{W_{k}W_{k-1}|S}\in\Pi(P^{k}_{W|S},P^{k-1}_{W|S})\) and a deterministic probability measure \(\rho_{W_{k}W_{k-1}}\in\mathcal{P}(\mathcal{W}\times\mathcal{W})\). The following is an immediate consequence of applying Corollary 3 to each summand: **Theorem 4**.: _Let \(P_{W|S}\), \(Q_{W}\), \(P_{W_{k}W_{k-1}|S}\), and \(\rho_{W_{k}W_{k-1}}\) be given as above. 
Then_ \[\mathbf{E}[\mathrm{gen}(W,S)]\] \[\leq\sqrt{\frac{48}{n}}\sum_{k=1}^{K}\mathbf{E}\bigg{[}\big{(}d_{\ell}(W_{k},W_{k-1})+d_{S,\ell}(W_{k},W_{k-1})\big{)}\psi_{2}^{-1}\bigg{(}\frac{\mathrm{d}P_{W_{k}W_{k-1}|S}}{\mathrm{d}\rho_{W_{k}W_{k-1}}}\bigg{)}+d_{\ell}(\bar{W}_{k},\bar{W}_{k-1})\bigg{]},\] _where in the \(k\)th term on the right-hand side \((S,W_{k},W_{k-1})\) are jointly distributed according to \(P_{S}\otimes P_{W_{k}W_{k-1}|S}\) and \((\bar{W}_{k},\bar{W}_{k-1})\) are jointly distributed according to \(\rho_{W_{k}W_{k-1}}\)._ Apart from Theorem 1, we have been imposing only minimal assumptions on \(\ell\) and then using symmetrization to construct various subgaussian random variables conditionally on \(W\) and \(\tilde{S}\). For the next series of results, we will assume something more, namely that \((\mathcal{W},d)\) is a metric space and that the following holds for the centered loss \(\bar{\ell}(w,z):=\ell(w,z)-\mathbf{E}[\ell(w,Z)]\): \[\Bigg{\|}\sum_{i=1}^{n}(\bar{\ell}(u,Z_{i})-\bar{\ell}(v,Z_{i}))\Bigg{\|}_{\psi_{2}}\leq\sqrt{n}d(u,v),\quad\forall u,v\in\mathcal{W}. \tag{13}\] In other words, the centered empirical process \(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bar{\ell}(w,Z_{i})\) indexed by the elements of \((\mathcal{W},d)\) is a subgaussian process [1, 2, 3]. **Theorem 5**.: _Assume (13). Then_ \[\mathbf{E}[\mathrm{gen}(W,S)]\leq\sqrt{\frac{2}{n}}\sum_{k=1}^{K}\mathbf{E}\Bigg{[}d(W_{k},W_{k-1})\psi_{2}^{-1}\bigg{(}\frac{\mathrm{d}P_{W_{k}W_{k-1}|S}}{\mathrm{d}\rho_{W_{k}W_{k-1}}}\bigg{)}+d(\bar{W}_{k},\bar{W}_{k-1})\Bigg{]} \tag{14}\] As a byproduct, we recover the stochastic chaining bounds of Zhou et al. [17] (which, in turn, subsume the bounds of Asadi et al. [16]): **Corollary 4**.: _Let \(P_{Z}\) and \(P_{W|S}\) be given, and let \(P_{W}\) be the marginal law of \(W\). Let \(\big{(}P_{W_{k}|S}\big{)}_{k\geq 0}\) be a sequence of Markov kernels satisfying the following conditions: (i) \(P_{W_{0}|S}=P_{W}\); (ii) \(P_{W_{k}|S}\xrightarrow{k\to\infty}P_{W|S}\); (iii) for every \(k\geq 1\), \(S-W-W_{k}-W_{k-1}\) is a Markov chain. Then_ \[\mathbf{E}[\mathrm{gen}(W,S)] \leq\sqrt{\frac{2}{n}}\sum_{k=1}^{\infty}\mathbf{E}\Big{[}d(W_{k},W_{k-1})\Big{(}\sqrt{D(P_{S|W_{k}}\|P_{S})}+1\Big{)}\Big{]} \tag{15}\] \[\leq\sqrt{\frac{2}{n}}\sum_{k=1}^{\infty}\sqrt{\mathbf{E}[d^{2}(W_{k},W_{k-1})]}(\sqrt{I(W_{k};S)}+2). \tag{16}\] ## 7 Tail estimates Next, we turn to high-probability tail estimates on \(\mathrm{gen}(W,S)\). We start with the following simple observation: Assume \(\ell(w,Z)\) is \(\sigma\)-subgaussian for all \(w\in\mathcal{W}\) when \(Z\sim P_{Z}\). Then, for any \(Q_{W}\in\mathcal{P}(\mathcal{W})\) such that \(P_{W|S=s}\ll Q_{W}\) for all \(s\in\mathcal{Z}^{n}\), we have \[\mathbf{E}\Bigg{[}\exp\Bigg{(}\frac{\mathrm{gen}^{2}(W,S)}{6\sigma^{2}/n}-\log\Big{(}1+\frac{\mathrm{d}P_{W|S}}{\mathrm{d}Q_{W}}(W)\Big{)}\Bigg{)}\Bigg{]}\leq\mathbf{E}\Bigg{[}\exp\Bigg{(}\frac{\mathrm{gen}^{2}(\bar{W},S)}{6\sigma^{2}/n}\Bigg{)}\Bigg{]}\leq 1\] with \(\bar{W}\sim Q_{W}\) independent of \((S,W)\). Thus, by Markov's inequality, for any \(0<\delta<1\), \[\mathbf{P}\Bigg{[}|\mathrm{gen}(W,S)|>\sigma\sqrt{\frac{6}{n}}\Bigg{(}\psi_{2}^{-1}\Big{(}\frac{\mathrm{d}P_{W|S}}{\mathrm{d}Q_{W}}(W)\Big{)}+\sqrt{\log\frac{1}{\delta}}\Bigg{)}\Bigg{]}\leq\delta.\] In other words, \(|\mathrm{gen}(W,S)|\lesssim\frac{\sigma}{\sqrt{n}}\psi_{2}^{-1}\big{(}\frac{\mathrm{d}P_{W|S}}{\mathrm{d}Q_{W}}\big{)}\) with high \(P_{SW}\)-probability. 
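A quick numerical check of the last display, reusing the toy Gaussian model sketched in Section 4 (our own illustration: Gaussian data, clipped squared loss with \(\sigma=1/2\), output \(W=\bar{S}+\tau\xi\), and \(Q_{W}\) the exact Gaussian marginal of \(W\)):

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau, sigma, trials, delta = 50, 0.3, 0.5, 5_000, 0.05

Z = rng.standard_normal((trials, n))
W = Z.mean(axis=1) + tau * rng.standard_normal(trials)
Zfresh = rng.standard_normal((trials, 1000))
loss = lambda w, z: np.minimum((w - z) ** 2, 1.0)
gen = loss(W[:, None], Zfresh).mean(axis=1) - loss(W[:, None], Z).mean(axis=1)

# log of dP_{W|S}/dQ_W(W) for W | S ~ N(Sbar, tau^2), Q_W = N(0, 1/n + tau^2)
v = 1 / n + tau ** 2
log_ratio = (-(W - Z.mean(axis=1)) ** 2 / (2 * tau ** 2)
             + W ** 2 / (2 * v) + 0.5 * np.log(v / tau ** 2))
radius = sigma * np.sqrt(6 / n) * (np.sqrt(np.log1p(np.exp(log_ratio)))
                                   + np.sqrt(np.log(1 / delta)))
print("empirical violation rate:", (np.abs(gen) > radius).mean(), " delta:", delta)
```

The empirical violation rate stays well below \(\delta\), as the inequality predicts; the bound is loose on easy problems, which is the price of its generality.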
Similar observations are made by Hellstrom and Durisi [9] with \(Q_{W}=P_{W}\), giving high-probability bounds of the form \(|\mathrm{gen}(W,S)|\lesssim\sqrt{\frac{\sigma^{2}D(P_{W|S}\|P_{W})}{n}}\). Generalization bounds in terms of the divergence \(D(P_{W|S}\|P_{W})\) are also common in the PAC-Bayes literature [4, 5]. Moreover, using the inequality (5) in Lemma 1, we can give high \(P_{S}\)-probability bounds on the conditional expectation \[\langle P_{W|S},|\mathrm{gen}(W,S)|\rangle=\langle P_{W|S},|L(W)-L_{n}(W)|\rangle.\] **Theorem 6**.: _Assume \(\ell(w,Z)\) is \(\sigma\)-subgaussian for all \(w\) when \(Z\sim P_{Z}\). Then, for any \(Q_{W}\in\mathcal{P}(\mathcal{W})\), the following holds with \(P_{S}\)-probability of at least \(1-\delta\):_ \[\langle P_{W|S},|\mathrm{gen}(W,S)|\rangle\leq\sqrt{\frac{24\sigma^{2}}{n}} \Big{(}\Big{\langle}P_{W|S},\psi_{2}^{-1}\Big{(}\frac{\mathrm{d}P_{W|S}}{ \mathrm{d}Q_{W}}\Big{)}\Big{\rangle}+1+\sqrt{\log\frac{2}{\delta}}\Big{)}.\] Another type of result that appears frequently in the literature on PAC-Bayes methods pertains to so-called _transductive bounds_, i.e., inequalities for the difference between \[\langle P_{n}^{\prime}\otimes P_{W|S},\ell\rangle-\langle P_{n}^{\prime} \otimes Q_{W},\ell\rangle\equiv\frac{1}{n}\sum_{i=1}^{n}\mathbf{E}[\ell(W,Z_{ i}^{\prime})-\ell(\bar{W},Z_{i}^{\prime})|\tilde{S}],\] and \[\langle P_{n}\otimes P_{W|S},\ell\rangle-\langle P_{n}\otimes Q_{W},\ell \rangle\equiv\frac{1}{n}\sum_{i=1}^{n}\mathbf{E}[\ell(W,Z_{i})-\ell(\bar{W},Z_{ i})|\tilde{S}],\] where \(Q_{W}\) is some fixed "prior" and where \(\bar{W}\sim Q_{W}\) is independent of \((S^{\prime},S,W)\). Using our techniques, we can give the following general transductive bound: **Theorem 7**.: _Let \(P_{W|S}\) and \(Q_{W}\) be given and take any \((P_{W_{k}W_{k-1}|S})_{k=1}^{K}\) and \((\rho_{W_{k}W_{k-1}})_{k=1}^{K}\) as in Theorem 4. Also, let \(\mathbf{p}=(p_{1},p_{2},\dots)\) be a strictly positive probability distribution on \(\mathbb{N}\). Then the following holds with \(P_{\tilde{S}}\)-probability at least \(1-\delta\):_ \[\Big{(}\langle P_{n}^{\prime}\otimes P_{W|S},\ell\rangle-\langle P _{n}^{\prime}\otimes Q_{W},\ell\rangle\Big{)}-\Big{(}\langle P_{n}\otimes P_{W |S},\ell\rangle-\langle P_{n}\otimes Q_{W},\ell\rangle\Big{)}\] \[\leq\sqrt{\frac{96}{n}}\sum_{k=1}^{K}\Bigg{(}\sqrt{\langle\rho_{W_ {k}W_{k-1}},d_{\tilde{S},\ell}^{2}\rangle}+\Big{\langle}P_{W_{k}W_{k-1}|S},d_{ \tilde{S},\ell}\psi_{2}^{-1}\Big{(}\frac{\mathrm{d}P_{W_{k}W_{k-1}|S}}{ \mathrm{d}\rho_{W_{k}W_{k-1}}}\Big{)}\Big{\rangle}\] \[\qquad\qquad\qquad\qquad+\big{\langle}P_{W_{k}W_{k-1}|S},d_{ \tilde{S},\ell}\big{\rangle}\sqrt{\log\frac{2}{p_{k}\delta}}\Bigg{)},\] _where_ \[d_{\tilde{S},\ell}^{2}(u,v):=\frac{1}{2n}\sum_{i=1}^{n}\Big{(} \big{(}\ell(u,Z_{i})-\ell(v,Z_{i})\big{)}^{2}+\big{(}\ell(u,Z_{i}^{\prime})- \ell(v,Z_{i}^{\prime})\big{)}^{2}\Big{)}.\] This result subsumes some existing transductive PAC-Bayes estimates, such as Theorem 2 of Audibert and Bousquet [15]. Let us briefly explain how we can recover this result from Theorem 7. Assume that \(\mathcal{W}\) is countable and let \((\mathcal{A}_{k})\) be an increasing sequence of finite partitions of \(\mathcal{W}\) with \(\mathcal{A}_{0}=\{\mathcal{W}\}\). For each \(k\) and each \(w\in\mathcal{W}\), let \(A_{k}(w)\) be the unique set in \(\mathcal{A}_{k}\) containing \(w\). 
Choose a representative point in each \(A\in\mathcal{A}_{k}\) and let \(\mathcal{W}_{k}\) denote the set of all such representatives, with \(\mathcal{W}_{0}=\{w_{0}\}\). Take \(P_{W_{\infty}|S}=P_{W|S}\) and \(P_{W_{0}}=Q_{W}=\delta_{w_{0}}\). Now, for each \(k\geq 0\), we take \(P_{W_{k}|S}\) as the _projection_ of \(P_{W|S}\) onto \(\mathcal{W}_{k}\), i.e., the finite mixture \[P_{W_{k}|S}:=\sum_{w\in\mathcal{W}_{k}}P_{W|S}(A_{k}(w))\delta_{w}.\] Moreover, given some "prior" \(\pi\in\mathcal{P}(\mathcal{W})\), we can construct a sequence \((\pi_{k})_{k=0}^{\infty}\) of probability measures with \(\pi_{\infty}=\pi\) and \(\pi_{0}=\delta_{w_{0}}\), such that \(\pi_{k}\) is a projection of \(\pi\) onto \(\mathcal{W}_{k}\). Now observe that, for each \(k\), \(S-W_{\infty}-W_{k}-W_{k-1}\) is a Markov chain. Indeed, if we know \(P_{W_{k}|S}\), then we can construct \(P_{W_{\ell}|S}\) for any \(\ell<k\) without knowledge of \(S\). With these ingredients in place, let us choose \(P_{W_{k}W_{k-1}|S}=P_{W_{k-1}|W_{k}}\otimes P_{W_{k}|S}\) and \(\rho_{W_{k}W_{k-1}}=\pi_{k}\otimes P_{W_{k-1}|W_{k}}\). Then, using Cauchy-Schwarz and Jensen, we conclude that the following holds with \(P_{S}\)-probability at least \(1-\delta\): \[\Big{(}\langle P_{n}^{\prime}\otimes P_{W|S},\ell\rangle-\langle P _{n}^{\prime}\otimes\delta_{w_{0}},\ell\rangle\Big{)}-\Big{(}\langle P_{n} \otimes P_{W|S},\ell\rangle-\langle P_{n}\otimes\delta_{w_{0}},\ell\rangle \Big{)}\] \[\leq\sqrt{\frac{96}{n}}\sum_{k=1}^{\infty}\Bigg{(}\sqrt{\langle\pi _{k}\otimes P_{W_{k-1}|W_{k}},d_{S,\ell^{\prime}}^{2}\rangle}+\sqrt{2\langle P _{W_{k}|S}\otimes P_{W_{k-1}|W_{k}},d_{S,\ell^{\prime}}^{2}\rangle\Big{(}D(P_ {W_{k}|S}\|\pi_{k})+\log\frac{2e}{p_{k}\delta}\Big{)}}\Bigg{)}.\] This recovers [15, Thm. 2] up to an extra term that scales like \(\frac{1}{\sqrt{n}}\sum_{k}\sqrt{\langle\pi_{k}\otimes P_{W_{k-1}|W_{k}},d_{S, \ell^{\prime}}^{2}\rangle}\). ## 8 The Fernique-Talagrand bound As a bonus, we show that a combination of decorrelation and chaining in the space of measures can be used to recover the upper bounds of Fernique [21] and Talagrand [22] on the expected supremum of a stochastic process in terms of majorizing measures (see Eq. (18) below and also [23, 24]). For simplicity, let \((T,d)\) be a finite metric space with \(\operatorname{diam}(T)=\sup\{d(u,v):u,v\in T\}<\infty\). Let \(B(t,r)\) denote the ball of radius \(r\geq 0\) centered at \(t\in T\), i.e., \(B(t,r):=\{u\in T:d(u,t)\leq r\}\). Let \((X_{t})_{t\in T}\) be a centered stochastic process defined on some probability space \((\Omega,\mathcal{A},\mathbb{P})\) and satisfying \[\mathbf{E}\Bigg{[}\psi_{p}\Bigg{(}\frac{|X_{u}-X_{v}|}{d(u,v)}\Bigg{)}\Bigg{]} \leq 1,\qquad\forall u,v\in T \tag{17}\] for some \(p\geq 1\). Then we can obtain the following result using chaining in the space of measures and decorrelation estimates: **Theorem 8**.: _Let \(\tau\) be a random element of \(T\), i.e., a measurable map \(\tau:\Omega\to T\) with marginal probability law \(\nu\). 
Then for any \(\mu\in\mathcal{P}(T)\) we have_ \[\mathbf{E}[X_{\tau}]\lesssim\operatorname{diam}(T)+\int_{T}\int_{0}^{ \operatorname{diam}(T)}\bigg{(}\log\frac{1}{\mu(B(t,\varepsilon))}\bigg{)}^{1 /p}\mathrm{d}\varepsilon\,\nu(\mathrm{d}t).\] Applying Theorem 8 to \(\tau^{*}\) defined in (1) and then minimizing over \(\mu\), we recover a Fernique-Talagrand type bound on the expected supremum of \(X_{t}\): \[\mathbf{E}\Big{[}\sup_{t\in T}X_{t}\Big{]}=\mathbf{E}[X_{\tau^{*}}]\lesssim \operatorname{diam}(T)+\inf_{\mu\in\mathcal{P}(T)}\sup_{t\in T}\int_{0}^{ \operatorname{diam}(T)}\bigg{(}\log\frac{1}{\mu(B(t,\varepsilon))}\bigg{)}^{1 /p}\mathrm{d}\varepsilon. \tag{18}\] Conclusion and future work In this paper, we have presented a unified framework for information-theoretic generalization bounds based on a combination of two key ideas (decorrelation and chaining in the space of measures). However, our method has certain limitations, which we plan to address in future work. For example, it would be desirable to cover the case of processes satisfying Bernstein-type (mixed \(\psi_{1}\) and \(\psi_{2}\)) increment conditions. It would also be of interest to see whether there are any connections to the convex-analytic approach of Lugosi and Neu [25]. Finally, since our method seamlessly interpolates between Fernique-Talagrand type bounds and information-theoretic bounds, we plan to use it to further develop the ideas of Hodgkinson et al. [18], who were the first to combine these two complementary approaches to analyze the generalization capabilities of iterative learning algorithms. ## Acknowledgments This work was supported by the Illinois Institute for Data Science and Dynamical Systems (iDS\({}^{2}\)), an NSF HDR TRIPODS institute, under award CCF-1934986. The authors would like to thank Matus Telgarsky for some valuable suggestions.
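As a small numerical coda to Section 8, the two sides of (18) can be compared on a concrete finite set. The point set, the uniform choice of \(\mu\), and the use of the canonical Gaussian process \(X_{t}=\langle g,t\rangle\) (which satisfies (17) with \(p=2\) and \(d(u,v)=\sqrt{6}\,\|u-v\|_{2}\), since a \(\sigma\)-subgaussian variable has \(\psi_{2}\)-norm at most \(\sqrt{6}\sigma\)) are our own choices; because the constants in (18) are unspecified, the sketch only reports the two quantities.

```python
import numpy as np

rng = np.random.default_rng(2)
N, k, trials = 64, 2, 4000
T = rng.standard_normal((N, k))          # a finite index set T in R^k

# d(u, v) = sqrt(6) * ||u - v||_2 makes (17) hold for X_t = <g, t> with p = 2
D = np.sqrt(6) * np.linalg.norm(T[:, None, :] - T[None, :, :], axis=-1)
diam = D.max()

g = rng.standard_normal((trials, k))
lhs = (g @ T.T).max(axis=1).mean()       # E[sup_t X_t] by Monte Carlo

eps = np.linspace(diam / 400, diam, 400)
mass = (D[:, :, None] <= eps).mean(axis=1)   # mu(B(t, eps)) for uniform mu
integrand = np.sqrt(np.log(1.0 / mass))
integrals = (integrand * (eps[1] - eps[0])).sum(axis=1)  # Riemann sum in eps
rhs = diam + integrals.max()             # right-hand side of (18), constants dropped

print(f"E sup X_t ~ {lhs:.2f}   diam + sup_t integral ~ {rhs:.2f}")
```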
2304.06617
Exact and lower bounds for the quantum speed limit in finite dimensional systems
A fundamental problem in quantum engineering is determining the lowest time required to ensure that all possible unitaries can be generated with the tools available, which is one of a number of possible quantum speed limits. We examine this problem from the perspective of quantum control, where the system of interest is described by a drift Hamiltonian and set of control Hamiltonians. Our approach uses a combination of Lie algebra theory, Lie groups and differential geometry, and formulates the problem in terms of geodesics on a differentiable manifold. We provide explicit lower bounds on the quantum speed limit for the case of an arbitrary drift, requiring only that the control Hamiltonians generate a topologically closed subgroup of the full unitary group, and formulate criteria as to when our expression for the speed limit is exact and not merely a lower bound. These analytic results are then tested and confirmed using a numerical optimization scheme. Finally we extend the analysis to find a lower bound on the quantum speed limit in the common case where the system is described by a drift Hamiltonian and a single control Hamiltonian.
Mattias T. Johnsson, Lauritz van Luijk, Daniel Burgarth
2023-04-13T15:37:39Z
http://arxiv.org/abs/2304.06617v1
# Exact and lower bounds for the quantum speed limit in finite dimensional systems ###### Abstract A fundamental problem in quantum engineering is determining the lowest time required to ensure that all possible unitaries can be generated with the tools available, which is one of a number of possible _quantum speed limits_. We examine this problem from the perspective of quantum control, where the system of interest is described by a drift Hamiltonian and set of control Hamiltonians. Our approach uses a combination of Lie algebra theory, Lie groups and differential geometry, and formulates the problem in terms of geodesics on a differentiable manifold. We provide explicit lower bounds on the quantum speed limit for the case of an arbitrary drift, requiring only that the control Hamiltonians generate a topologically closed subgroup of the full unitary group, and formulate criteria as to when our expression for the speed limit is exact and not merely a lower bound. These analytic results are then tested and confirmed using a numerical optimization scheme. Finally we extend the analysis to find a lower bound on the quantum speed limit in the common case where the system is described by a drift Hamiltonian and a single control Hamiltonian. ## I Introduction The emergence of quantum technologies such as quantum information processing, quantum engineering and quantum sensing has relied on our increasing ability to manipulate quantum systems with high levels of precision. Such manipulation requires the ability to carry out quantum operations and state preparation with high fidelity, in the presence of noisy environments, as quickly as possible, and potentially subject to a number of real-world constraints. These requirements are the province of the field of quantum control, which is primarily concerned with methods of steering a quantum system using a set of classical control inputs to the system [1; 2]. Two major topics within this field are characterising the operations that can be carried out and the states that can be reached with a given set of controls, as well as determining the specific time dependence of those controls that will steer the system to the intended goal. The questions regarding the gates that can be implemented and state reachability are approached using the methods of bilinear control theory [3; 4] which usually involve a Lie theoretic framework [3; 5; 6]. The questions regarding the determination of the time-dependent control fields (pulses) on the other hand, have no good general strategy and are generally difficult. Analytic methods of optimal control theory can be employed [3; 7; 8], but usually numerical optimization is used, typically involving gradient-based search strategies with some fidelity cost functional [9; 10; 11; 12]. While these aspects of quantum control have been extensively studied, less attention has been given to the question of the speed at which specific unitaries can be generated or specific states can be reached. Given that decoherence is present in all quantum information processing, it is important to minimize the time taken to perform quantum operations. The time taken to reach specific targets given the set of controls available is known as the quantum speed limit [13; 14; 15]. Or, more precisely, there are a number of different speed limits, some for the transformation of states, some for unitary transformation, some for uncontrolled dynamics, and some for controlled dynamics [14]. 
We will be more precise later, but in general terms the quantum speed limit we will consider in this paper is the following: Assuming we have a set of controls that allow us to achieve all possible unitaries in a finite-dimensional system, what is the minimum time by which we can guarantee we can produce all possible unitaries? That is, how much time must we allow to be certain that we can accomplish everything that can be done with the system? The exact time for this type of quantum speed limit is generally very difficult to determine for a specific quantum system, unless that system is very low dimensional or possesses a very high degree of symmetry. Nonetheless, in some special cases the limit can be computed; see for example [16; 17; 18; 19; 20; 21; 22]. This difficulty means that work has concentrated on finding lower bounds for the speed limit rather than exact results. Various bounds have been obtained for closed, finite dimensional systems as well as for open systems [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. While these bounds are not tight, they can provide information on how the speed limit is likely to scale with regard to quantities of interest, such as system dimension or total energy. It is notable that many of these approaches make use of the energy uncertainty of the system, applying the original results of Mandelstam and Tamm [38], as well as the more modern interpretation of Margolus and Levitin [23]. Given this background, we can state a generic quantum control problem and investigate its speed limit as follows. We consider a Hamiltonian given by \[H=H_{d}+\sum_{j=1}^{m}f_{j}(t)H_{j} \tag{1}\] where \(H_{d}\) and \(H_{j}\) are time-independent Hamiltonians acting on a finite-dimensional Hilbert space, and the \(f_{j}(t)\) are a set of real, time-dependent, scalar functions. \(H_{d}\) is called the _drift Hamiltonian_, and is always present. The \(H_{j}\) are the _control Hamiltonians_, and we assume that we have arbitrary control over the \(f_{j}(t)\), as even in this case the quantum speed limits are very difficult to determine. The system evolves according to the Schrodinger equation \[i\frac{d}{dt}U(t)=\bigg{(}H_{d}+\sum_{j=1}^{m}f_{j}(t)H_{j}\bigg{)}U(t),\quad U(0)=\mathbb{1}, \tag{2}\] where \(U(t)\) is the unitary time-evolution operator. In an \(n\)-dimensional system \(U(t)\) can be represented as a unitary \(n\times n\) matrix. Further, as unitary operators are physically indistinguishable up to a phase, we can choose to remove this superfluous phase degree of freedom by demanding that \(U(t)\) have unit determinant, making it a special unitary matrix. This is accomplished by choosing the drift and control Hamiltonians to be traceless, and we will assume this is the case throughout this paper. The system is called _controllable_ if it is possible to find control functions \(f_{j}(t)\) such that, given enough time, we can achieve any possible unitary (up to a phase), or equivalently, if we can generate all possible members of the Lie group \(\mathrm{SU}(n)\). There is a beautiful Lie-algebraic result that states that this is the case if and only if [3; 5] the dynamical Lie algebra \(\{iH_{d},iH_{1},iH_{2},\ldots,iH_{m}\}_{LA}\) has dimension \(n^{2}-1\), i.e. the dynamical Lie algebra generated by the control Hamiltonians and drift Hamiltonian is the Lie algebra \(\mathfrak{su}(n)\). The next natural question is, if a quantum system is controllable, how long will it take to produce a specific unitary in the worst case? 
Or, equivalently [4; 6] in the case of compact groups such as \(\mathrm{SU}(n)\), since the system is controllable, what is the minimum time by which we can guarantee we can produce all possible unitaries? This is what we will refer to as the _quantum speed limit_ in this paper. We note that some authors make a distinction between quantum control systems which are fully controllable only in the presence of a drift term (i.e. removing the drift Hamiltonian would cause the system to no longer be fully controllable) from those systems for which this is not the case. Systems of the latter type are known as strongly controllable [39], and are fully controllable with control Hamiltonians alone regardless of the presence or absence of any drift term. Due to our assumption that the control strengths \(f_{j}(t)\) can be arbitrarily large, strongly controllable systems can reach any unitary in an arbitrarily short amount of time, rendering the concept of a speed limit irrelevant. For that reason we consider only systems of the first type, where the drift is required to ensure the system is controllable. In this paper, our goal is to derive lower bounds on the speed limits of controllable quantum systems that are as general as possible. We do not restrict the system to a specific number of dimensions, or demand it describes a set of qubits, or require the drift Hamiltonian to be of a specific form, as is common in other speed limit calculations. We require no knowledge of the quantum energy uncertainty of the system. We will require only that the control subgroup is topologically closed, where the control subgroup is the set of all unitaries that can be reached by application of the control Hamiltonians alone, which thus forms a subgroup of \(\mathrm{SU}(n)\). However the resulting speed limits can be hard to compute explicitly as they require determining the diameter of rather abstract manifolds, so we examine in more detail cases where the manifolds are symmetric spaces [40], which can arise, for example, if the Lie algebra associated with the control subgroup forms a Cartan decomposition [3] of the full dynamical Lie algebra. This will allow us to derive explicit analytic lower bounds for the quantum speed limit for a number of control schemes corresponding to cases where the control group is one of \(\mathrm{SO}(n)\), \(\mathrm{Sp}(n/2)\), or \(\mathrm{S}(\mathrm{U}(p)\times\mathrm{U}(q))\) with \(p+q=n\), and investigate when this bound will be tight. We also consider the case where the number of control Hamiltonians is not enough to span the full Lie algebra corresponding to these groups, and give the minimum number of control Hamiltonians required to generate the algebra. Due to the fact that many control problems will not have enough controls to generate these groups, we also derive a bound for the common general case where there is only a single control Hamiltonian. In all cases, our results are completely general and valid for arbitrary dimension. Finally, in order to test our analytic results, we carry out an exploration of quantum speed limits for a variety of low dimensional systems using numerical simulations. This not only provides a check on our results, but allows an investigation of the efficiency of numerical optimal control algorithms for bilinear systems. The structure of the paper is as follows. 
We begin in Section II by formulating the quantum speed limit problem in terms of Lie algebras and Lie groups, and introduce concepts we will require such as cosets, quotient spaces and adjoint orbits, as well as laying out our basic approach. We introduce the idea that the problem can be treated as movement on a manifold, with the movement direction and speed given by the drift Hamiltonian. Since the mathematical machinery will not be familiar to some readers, we provide illustrative examples. In Section III we explain how one can obtain a speed limit by determining the diameter of a manifold (i.e. the two points furthest apart) and dividing by the speed at which the system moves on the manifold. We describe the conditions on the manifold required for this to work, and give a way of computing the speed of movement from the system's drift Hamiltonian. We establish that symmetric spaces provide manifolds meeting the criteria, give their diameters, and use them to compute explicit expressions for the lower bound on the quantum speed limit. Section IV examines when the lower bound developed in the previous section is actually tight. It develops a criterion based on the dimension of the adjoint orbit and commutation relations between the drift Hamiltonian and the matrix representation of the Lie algebra corresponding to the controls. As this criterion is sufficient but not necessary, in Section V we investigate what else can be said about the tightness of the bound if the controls arise from a Cartan decomposition. This allows understanding the control problem in terms of root systems, and we illustrate the results by considering the case where the control group is \(\mathrm{SO}(n)\). In Section VI we treat the problem of finding the quantum speed limit numerically, and compare the simulations to our analytic results. This allows both a test of our bounds as well as an examination of how well standard optimization techniques used in quantum control work. Finally, in Section VII we consider the case where we have only a limited set of controls so that we do not have a symmetric space, and derive a bound on the speed limit for the common case where the system has only a single control Hamiltonian. ## II Problem formulation in terms of Lie groups and algebras The calculation of quantum speed limits is often approached using Lie group-theoretic techniques. We will also make use of these mathematical structures, so we briefly provide the relevant background here. Good explanations of this material can be found in, for example, [3; 40; 41]. In what follows, we will denote groups with a Roman upper case letter, e.g. \(G=\mathrm{SU}(n)\), and algebras with a Fraktur font (e.g. \(\mathfrak{g}=\mathfrak{su}(n)\)). Let \(\mathfrak{g}\) be the full Lie algebra generated by the drift Hamiltonian and the control Hamiltonians, i.e. \(\mathfrak{g}=\{iH_{d},iH_{1},iH_{2},\ldots,iH_{m}\}_{LA}\), and let the Lie algebra generated by the control Hamiltonians alone be given by \(\mathfrak{k}=\{iH_{1},iH_{2},\ldots,iH_{m}\}_{LA}\). Clearly \(\mathfrak{k}\) is a subalgebra of \(\mathfrak{g}\). The system is said to be controllable if \(\mathfrak{g}=\mathfrak{su}(n)\). We denote the control group, i.e. the group of unitaries generated by exponentiating \(\mathfrak{k}\), by \(K\), and the dynamical Lie group generated by \(\mathfrak{g}\) with \(G\). Clearly \(K\subseteq G\) is a subgroup, and \(G\subseteq\mathrm{SU}(n)\) with equality if the system is controllable. 
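The Lie-algebraic controllability criterion above is straightforward to check numerically for small systems. The following is a minimal sketch (the helper names are our own, not from the paper): it builds the dynamical Lie algebra by repeated commutators and compares its dimension against \(n^{2}-1\).

```python
import numpy as np

def lie_closure(generators, tol=1e-10):
    """Basis of the Lie algebra generated by the given skew-Hermitian
    matrices, built by repeatedly taking commutators."""
    basis = []

    def independent(mats):
        stacked = np.array([m.ravel() for m in mats])
        return np.linalg.matrix_rank(stacked, tol=tol) == len(mats)

    def try_add(m):
        if np.linalg.norm(m) < tol:
            return False
        if independent(basis + [m]):
            basis.append(m)
            return True
        return False

    for g in generators:
        try_add(np.asarray(g, dtype=complex))
    added = True
    while added:  # close the set under the Lie bracket
        added = False
        for a in list(basis):
            for b in list(basis):
                if try_add(a @ b - b @ a):
                    added = True
    return basis

# Example: a qubit with drift sigma_z and a single control sigma_x
# (the worked example treated later in this Section).
sz = np.diag([1.0 + 0j, -1.0])
sx = np.array([[0, 1], [1, 0]], dtype=complex)
n = 2
dim = len(lie_closure([1j * sz, 1j * sx]))
print(f"dynamical Lie algebra dimension: {dim}, controllable: {dim == n**2 - 1}")
```

For this qubit example the closure has dimension \(3=n^{2}-1\), confirming controllability.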
At any given time, the system evolves according to (2). Since the control amplitudes \(f_{j}(t)\) can be arbitrarily large, we can generate any unitary \(U\in K\) in an arbitrarily short time to arbitrarily good precision (see [3] for a rigorous justification of this point). Now, suppose our control problem is to produce a unitary \(U_{\mathrm{target}}\) that moves us between the two unitaries \(U_{1}\) and \(U_{2}\), i.e. \(U_{2}=U_{\mathrm{target}}U_{1}\). Since we can move between elements of \(K\) arbitrarily quickly, all elements of \(K\) are equivalent, meaning if we apply any controls after we have generated the specific unitary \(U_{\mathrm{target}}\), all resulting unitaries \(KU_{\mathrm{target}}\) are equivalent in terms of how quickly we can generate them. Because of this, we can view our control problem as actually asking how to move between the right cosets \(KU_{1}\) and \(KU_{2}\), where the right coset is \(KU=\{kU|k\in K\}\). Furthermore, as the system evolves in time, the unitary at any point in time, given by (2), is equivalent to any other element in its coset, because it can be moved within the coset arbitrarily quickly. Alternatively, one could define equivalence in terms of _left_ cosets, where now we consider how to move between the left cosets \(U_{1}K\) and \(U_{2}K\). Again, these cosets are equivalent in terms of the minimum time it takes to use controls to move between them, but now the controls are being applied before the unitary rather than after. From a quantum control perspective, the first description is more intuitive. That is, if we are given a unitary, another unitary is equivalent if we can move to it arbitrarily quickly. This corresponds to the right coset picture, since applying controls to a given unitary corresponds to left multiplication by the controls. Consequently this is the approach we will take for this paper. These arguments show that the relevant elements of the control problem are the cosets, and an effective way of formulating the problem is to "divide out" the degree of freedom associated with each coset. To make this rigorous one defines the quotient space \(G/K\) as the set of right cosets \(Kg,\ \forall g\in G\). We denote each coset by \([g]=Kg\) with \(g\in G\), since the element \(g\) indexes the coset. The cosets can also be seen as the orbits of the natural left action of \(K\) on \(G\), and the space of orbits is \(G/K\). If \(K\) is a normal subgroup of \(G\), then \(G/K\) is itself a Lie group [40; 41]. However, even if this is not the case, provided \(G\) is a Lie group, and \(K\) is a _closed_ subgroup (in the topological sense) then \(G/K\) is a differentiable manifold [41] that is also a (right) homogeneous space meaning that it carries a (right) transitive \(G\)-action which is given by \([g^{\prime}]\cdot g=[g^{\prime}g]\). Specifically, \(G/K\) can be given the structure of a smooth manifold with dimension \(\dim(G/K)=\dim G-\dim K\). Movement within a coset does not result in movement in \(G/K\), but movement between cosets does. Movement within a coset is produced by the control Hamiltonians, and movement between the cosets requires the drift Hamiltonian. As the system evolves via (2) it traces out a continuous path in \(G/K\) space, and the quantum speed limit is governed by how fast we can move between the two points corresponding to \(U_{1}\) and \(U_{2}\). Clearly we cannot move arbitrarily in \(G/K\). 
Our movement on \(G/K\) is determined by the drift Hamiltonian, with the direction of the movement determined by where we are within a coset at any given time, allowing us some degree of steering. In particular, we have the following: If \(G\) is a compact and connected Lie group (e.g. \(\mathrm{SU}(n)\)), and \(K\) is a closed Lie subgroup of \(G\), with associated Lie algebras \(\mathfrak{g}\) and \(\mathfrak{k}\), then we can decompose the Lie algebra \(\mathfrak{g}\) as \(\mathfrak{g}=\mathfrak{p}\oplus\mathfrak{k}\) with \[[\mathfrak{k},\mathfrak{p}]\subseteq\mathfrak{p} \tag{3}\] \[[\mathfrak{k},\mathfrak{k}]\subseteq\mathfrak{k} \tag{4}\] where \(\mathfrak{p}=\mathfrak{k}^{\perp}\) with respect to an \(\mathrm{Ad}\)-invariant inner product on \(\mathfrak{g}\)[40], e.g. the Hilbert-Schmidt inner product. Note that while \(\mathfrak{k}\) is a Lie algebra, \(\mathfrak{p}\) is in general not closed under the Lie bracket. The \(\mathrm{Ad}\)-invariant inner product on \(\mathfrak{g}\) induces a bi-invariant Riemannian geometry on \(G\) which in turn induces a \(G\)-invariant Riemannian geometry on \(G/K\) (see the next section for details). This equips the manifold \(G/K\) with the structure of a so-called _reductive space_, which is a more restricted variety of a homogeneous space. Any evolution purely under the action of the controls, without the drift, will produce motion only within a coset. Without loss of generality, we can assume \(iH_{d}\in\mathfrak{p}\), since any contribution that lies in \(\mathfrak{k}\) can be removed by application of the controls. Since \(\mathfrak{p}\) is orthogonal to \(\mathfrak{k}\), this means that any evolution under the drift alone moves purely in \(G/K\), with no movement within a coset. Specifically, for a reductive space, the inner product lets us identify the tangent space \(T_{o}(G/K)\) at the origin \(o=[\mathbb{1}]\) with \(\mathfrak{p}\). To show how the action of the controls steers the direction of motion in \(G/K\), we need the concept of the adjoint orbit. The adjoint orbit of \(A\in\mathfrak{g}\) is given by \[\mathcal{O}(A)=\{k^{-1}Ak\,|\,k\in K\}. \tag{5}\] By Eq. (3), we have \(\mathcal{O}(A)\subset\mathfrak{p}\) for \(A\in\mathfrak{p}\). We can see how this steers the evolution in \(G/K\) space as follows [16]: Take elements \(k_{1},k_{2}\in K\) that belong to the coset containing the identity, and consider where they move under the action of the drift after a short time \(\Delta t\). We obtain \[k_{1}\to e^{-iH_{d}\Delta t}k_{1}=k_{1}e^{-ik_{1}^{-1}H_{d}k_{1}\Delta t} \tag{6}\] showing that after the evolution it is now a member of the coset \([e^{-ik_{1}^{-1}H_{d}k_{1}\Delta t}]\). Similarly, \(k_{2}\) moves to a coset \([e^{-ik_{2}^{-1}H_{d}k_{2}\Delta t}]\). Since we can choose to be anywhere in a coset arbitrarily quickly due to the action of the controls, we see that the adjoint orbit represents the directions we are able to move from the origin of \(G/K\). This mathematical machinery can be somewhat opaque, so we present a simple example that illustrates these concepts. We consider computing the quantum speed limit of a controllable quantum system in a two dimensional Hilbert space, i.e. the group associated with the unitary evolution operator is SU(2). This is one of the few cases where the speed limit is explicitly known. 
We take our Hamiltonian to be \[H=\sigma_{z}+f(t)\sigma_{x} \tag{7}\] and the Schrodinger equation is given by \[i\frac{d}{dt}U(t)=(\sigma_{z}+f(t)\sigma_{x})U(t),\quad U(0)=\mathbb{1}. \tag{8}\] The Lie algebra associated with the single control is just \(\mathrm{span}\{i\sigma_{x}\}\), while the full dynamical Lie algebra associated with the drift and controls is \(\mathrm{span}\{i\sigma_{x},i\sigma_{y},i\sigma_{z}\}\). Since this algebra is three dimensional, and this matches \(n^{2}-1\) where \(n\) is the Hilbert space dimension, the system is controllable. Our Lie algebra decomposition is \(\mathfrak{g}=\mathfrak{p}\oplus\mathfrak{k}\) with \(\mathfrak{k}=\mathrm{span}\{i\sigma_{x}\}\) and \(\mathfrak{p}=\mathrm{span}\{i\sigma_{y},i\sigma_{z}\}\). We have \(\mathfrak{g}=\mathfrak{su}(2)\), \(\mathfrak{k}=\mathfrak{u}(1)\), \(G=\mathrm{SU}(2)\) and \(K=\mathrm{U}(1)\). The manifold corresponding to the quotient space \(G/K\) can in general be quite complicated, but in this case it is particularly simple; the manifold \(G/K=\mathrm{SU}(2)/\mathrm{U}(1)\) is isomorphic to the two-sphere \(S^{2}\). Since the control algebra is one-dimensional, the control subgroup \(K\) generated by \(\mathfrak{k}\) can be parameterized by a single parameter \(\alpha\) as \(e^{i\,\alpha\sigma_{x}},\ \alpha\in[0,2\pi]\), and the adjoint orbit is given by the set \[\mathcal{O}(iH_{d})=\{e^{-i\,\alpha\sigma_{x}}i\sigma_{z}e^{i\,\alpha\sigma_{x}}\,|\,\alpha\in[0,2\pi]\}=\{i\cos(2\alpha)\,\sigma_{z}+i\sin(2\alpha)\,\sigma_{y}\,|\,\alpha\in[0,2\pi]\}. \tag{9}\] \(S^{2}\) is two-dimensional, and the tangent space at the origin is defined by \(\mathrm{span}\{i\sigma_{y},i\sigma_{z}\}=\mathfrak{p}\). Since Eq. (9) allows any direction in the tangent space by suitable choice of \(\alpha\), we can move in any direction in \(G/K\) we wish. As we will show in later sections, the speed of movement in \(G/K\) is constant and determined purely by the drift Hamiltonian. This means that the speed limit is achieved by moving on a great circle geodesic between two antipodal points, as this yields the maximum possible evolution time between any two unitaries for the system. The concepts of speeds and distances on the \(G/K\) manifold are determined by the Riemannian metric on \(G/K\), which depends on the inner product chosen on \(\mathfrak{g}\). As will be shown later, if we choose the Killing form for the inner product, then for this particular example the speed of movement is \(2\sqrt{2}\), and the distance between two antipodal points is \(\sqrt{2}\pi\), giving the time for the quantum speed limit as \(t=\pi/2\), which agrees with the standard result [3]. We note that this is an unusual way to look at this problem. The normal approach is to apply the maxim "algebra is easier than geometry", and use Lie algebras, Lie groups and results such as the maximal tori theorem, rather than considering geodesics on a manifold. Nonetheless, the idea of obtaining a speed limit by dividing the diameter of the \(G/K\) manifold by the drift velocity will prove extremely useful. In the case where the adjoint orbit allows us to move on a geodesic connecting the two points furthest apart on the manifold, we can obtain an exact speed limit, and if it does not allow movement on such a geodesic, this method will still provide a lower bound. 
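The numbers quoted for this example are easy to verify once the drift speed formula \(v=\sqrt{2n\,\mathrm{Tr}(H_{d}^{2})}\) derived in the next section is taken as given; a minimal numerical check:

```python
import numpy as np

# Worked SU(2)/U(1) example: drift H_d = sigma_z, Killing-form metric.
n = 2
H_d = np.diag([1.0, -1.0])                       # sigma_z
speed = np.sqrt(2 * n * np.trace(H_d @ H_d).real)  # drift speed in G/K
diameter = np.sqrt(2) * np.pi                    # antipodal distance on S^2

print(f"speed = {speed:.6f}  (expected 2*sqrt(2) = {2*np.sqrt(2):.6f})")
print(f"T_QSL = {diameter/speed:.6f}  (expected pi/2 = {np.pi/2:.6f})")
```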
## III Quantum speed limits from manifold diameter and drift velocity As discussed in the previous Section, in order to obtain speed limits from the structure of the \(G/K\) manifold, we need some way of assigning distances to the space. This involves bridging the two descriptions of the problem: The control and drift Hamiltonians defining the system are described by the Lie algebra, while the unitaries corresponding to the system evolution are described by the Lie group and associated manifold. To see the issue, consider the group SU(2). The associated manifold is the three-sphere, which describes the topology, but there is no metric associated with it (yet) -- there is no concept of the size of its diameter, for example. The way the metric is imposed is to define an inner product on the Lie algebra, which is then pushed around the group to define an inner product on all tangent spaces. For the inner product on the Lie algebra \(\mathfrak{g}\) we will take \[\langle X,Y\rangle_{K}=-2n\,\mathrm{Tr}\,[XY],\quad X,Y\in\mathfrak{g}. \tag{10}\] This inner product is Ad-invariant since the group \(G\) consists of unitary operators. In the controllable case, i.e., \(\mathfrak{g}=\mathfrak{su}(n)\), this is the Killing form. We now obtain the inner product at the tangent space of a general element \(g\in G\) from \[\langle X,Y\rangle_{g}=\langle g^{-1}X,g^{-1}Y\rangle_{K}=\langle Xg^{-1},Yg^{-1}\rangle_{K} \tag{11}\] where \(X,Y\in T_{g}G\) are tangent vectors at \(g\). The second equality holds by Ad-invariance of the inner product on \(\mathfrak{g}\). This equips \(G\) with a bi-invariant Riemannian geometry (meaning that both left and right multiplication act isometrically). For such groups the geodesics through an element \(g\) are precisely the curves of the form \(t\mapsto ge^{vt}\), where \(v\in\mathfrak{g}\)[42] (Lemma 21.2). The quotient space \(G/K\) inherits a \(G\)-invariant Riemannian geometry from \(G\): At the origin \(o\) the inner product \(\langle X,Y\rangle_{o}\) is defined as \(\langle X,Y\rangle_{K}\) using that \(T_{o}G/K\cong\mathfrak{p}\subset\mathfrak{g}\). This is extended to arbitrary points \([g]\) by the (differential of the) \(G\)-action just as in (11): \(\langle X,Y\rangle_{[g]}=\langle X\cdot g^{-1},Y\cdot g^{-1}\rangle_{o}\) (this is indeed well-defined, i.e. independent of the choice of \(g\) within the coset). In particular, the resulting Riemannian metric is automatically \(G\)-invariant, meaning that \(G\) acts isometrically on \(G/K\). It now holds by construction that the natural projection \(\pi:G\to G/K\) is a Riemannian submersion [40], meaning it induces an isometry between \((\ker d\pi|_{g})^{\perp}\) and \(T_{\pi(g)}(G/K)\) for all \(g\). Since \(\ker(d\pi|_{\mathbb{1}})=\mathfrak{k}\), this just follows from \(T_{o}(G/K)\cong\mathfrak{p}\) and our definition of the metric (in general, we have \(\ker d\pi|_{g}=g^{-1}\mathfrak{k}g\) and hence \(T_{[g]}G/K\cong g^{-1}\mathfrak{p}g\)). The notation \(d\pi|_{g}\) means that we take the differential of \(\pi\) at the point \(g\), which is a linear map \(T_{g}G\to T_{\pi(g)}G/K\). The crucial point for us is the following: From \(\pi\) being a Riemannian submersion it follows that geodesics in \(G/K\) running through a coset \(x=[g]\) are precisely curves of the form \([g\exp(ut)]=x\cdot\exp(ut)\) with \(u\in g^{-1}\mathfrak{p}g\), and that they have the same length as their corresponding lifts in \(G\)[40] (Proposition 18.8). 
Let us summarize the relevant structure: We have a quantum control problem with dynamical Lie algebra \(\mathfrak{g}\) and control algebra \(\mathfrak{k}\), associated Lie groups \(G=e^{\mathfrak{g}}\) and \(K=e^{\mathfrak{k}}\), and \(K\) is a closed subgroup of \(G\). We use the Killing form as an inner product on \(\mathfrak{g}\) and take the decomposition \(\mathfrak{g}=\mathfrak{p}\oplus\mathfrak{k}\) with \(\mathfrak{p}=\mathfrak{k}^{\perp}\). We can always ensure \(iH_{d}\in\mathfrak{p}\) by removing any part not in \(\mathfrak{p}\) via the controls. We know \(G/K\) is a reductive space and we know precisely which form the geodesics on \(G/K\) have. We now compute the speed at which the system moves through \(G/K\) as it evolves. We know the possible directions of travel at the origin are given by the adjoint orbit of the drift, \(k^{\dagger}iH_{d}k\in\mathcal{O}(iH_{d})\) with \(k\in K\), so that in a time \(dt\) we move to a coset of \(\exp(ik^{\dagger}H_{d}k\,dt)\). To determine the distance \(ds\) this corresponds to in \(G/K\) we use the metric on \(G/K\), and because we have a Riemannian submersion we can employ (10) to obtain \[ds=\sqrt{\langle ik^{-1}H_{d}k\;dt,ik^{-1}H_{d}k\;dt\rangle_{K}}=dt\sqrt{2n\operatorname{Tr}(H_{d}^{2})} \tag{12}\] where we have used the fact that the Killing form is Ad-invariant. Using the \(G\)-invariance of the metric on \(G/K\), the same argument shows this result also holds at other points \(x\neq o\) in \(G/K\). This means the speed at which the system moves in \(G/K\) is constant and is given by \[v=\sqrt{2n\operatorname{Tr}\left(H_{d}^{2}\right)}. \tag{13}\] Now that we know the form a geodesic in \(G/K\) must take, and the speed with which a quantum system moves along it, the task is to find the diameter of the \(G/K\) space, that is, the largest distance that pairs of points can have. Given the fact that motion in \(G/K\) is at constant speed, this will give us a lower bound on the quantum speed limit, that is, the time taken to produce the most difficult unitary. This proves the following: **Theorem 1**.: _Let \(G\) be the dynamical Lie group of the control problem Eq. (1) and assume that the subgroup \(K\subset G\) generated by the controls alone is closed. Let \(T_{\mathrm{QSL}}\) be the minimum time in which all unitaries of \(G\) can be reached. Then_ \[T_{\mathrm{QSL}}\geq\frac{\mathrm{diam}(G/K)}{\sqrt{2n\operatorname{Tr}[H_{d}^{2}]}}. \tag{14}\] The practical usefulness of this result, of course, relies on an explicit computation of the diameter (or at least a lower bound). The diameter of the Riemannian manifold \(G/K\) is \[\mathrm{diam}(G/K)=\sup\{d(x,y):x,y\in G/K\} \tag{15}\] where \(d(x,y)\), the Riemannian distance between \(x\) and \(y\), is the infimum over the lengths of curves connecting these points as measured by the metric. Since \(G/K\) is homogeneous the definition is equivalent to \(\mathrm{diam}(G/K)=\sup_{x\in G/K}d(x,o)\). That Eq. (14) is only a lower bound in general is due to the restricted movement on \(G/K\): The possible directions are given by the adjoint orbit \(\mathcal{O}(iH_{d})\). If the adjoint orbit does not allow for the needed directions, the time taken to generate some unitaries will be longer than the lower bound given in Eqs. (16) - (20). Finding the diameter of the homogeneous space \(G/K\) is in general difficult. However, the diameter of all symmetric spaces arising from classical compact groups has been calculated by Yang [43]. 
(We note there appears to be an error in Yang's paper; the results given for the diameters of \(\mathrm{SU}(2n)/\mathrm{Sp}(n)\) should be divided by \(\sqrt{2}\).) If we consider only symmetric spaces arising from quotient groups of the form \(G/K\) where \(G=\mathrm{SU}(n)\), there are only three possibilities, which we list in Table 1. Note that the group \(\mathrm{Sp}(n)\) refers to the compact symplectic group, and we have chosen to use the Killing form as the inner product on the Lie algebra \(\mathfrak{g}\) to obtain a metric on \(G/K\). \begin{table} \begin{tabular}{c c} \(G/K\) & \(\mathrm{diam}(G/K)\) \\ \hline \(\mathrm{SU}(n)/\mathrm{SO}(n)\) & \(\frac{\sqrt{2}}{2}\pi n\) if \(n\) even; \(\frac{\sqrt{2}}{2}\pi(n^{2}-1)^{1/2}\) if \(n\) odd \\ \(\mathrm{SU}(2n)/\mathrm{Sp}(n)\) & \(\pi n\) if \(n\) even; \(\pi(n^{2}-1)^{1/2}\) if \(n\) odd \\ \(\mathrm{SU}(p+q)/\mathrm{S}(\mathrm{U}(p)\times\mathrm{U}(q))\) & \(\pi(p+q)^{1/2}p^{1/2}\), \(p\leq q\) \\ \hline \end{tabular} \end{table} Table 1: The diameter of various compact symmetric spaces arising from the quotient \(G/K\), when using the Killing inner product on \(\mathfrak{g}\) in order to obtain a Riemannian metric on \(G/K\). Consequently, if the Lie group \(K\) generated by the controls is one of \(\mathrm{SO}(n)\), \(\mathrm{Sp}(n)\), or \(\mathrm{S}(\mathrm{U}(p)\times\mathrm{U}(q))\) (the matrices of unit determinant in \(\mathrm{U}(p)\times\mathrm{U}(q)\)), we obtain the following quantum speed limits: \[\mathrm{SO}(n):\quad T_{\mathrm{QSL}}\geq\frac{\sqrt{n}\,\pi}{2\sqrt{\mathrm{Tr}(H_{d}^{2})}}\quad\text{if $n$ even} \tag{16}\] \[\mathrm{SO}(n):\quad T_{\mathrm{QSL}}\geq\frac{\pi(n^{2}-1)^{1/2}}{2\sqrt{n\,\mathrm{Tr}(H_{d}^{2})}}\quad\text{if $n$ odd} \tag{17}\] \[\mathrm{Sp}(n):\quad T_{\mathrm{QSL}}\geq\frac{\sqrt{n}\,\pi}{\sqrt{2\,\mathrm{Tr}(H_{d}^{2})}}\quad\text{if $n$ even} \tag{18}\] \[\mathrm{Sp}(n):\quad T_{\mathrm{QSL}}\geq\frac{\pi(n^{2}-1)^{1/2}}{\sqrt{2n\,\mathrm{Tr}(H_{d}^{2})}}\quad\text{if $n$ odd} \tag{19}\] \[\mathrm{S}(\mathrm{U}(p)\times\mathrm{U}(q)):\quad T_{\mathrm{QSL}}\geq\frac{\sqrt{p}\,\pi}{\sqrt{2\,\mathrm{Tr}(H_{d}^{2})}}\quad p\leq q \tag{20}\] The result for the case where the control group is \(\mathrm{Sp}(n)\) is particularly interesting. It is known that this control group provides complete _state_ controllability even in the absence of a drift Hamiltonian [3]. As we have assumed arbitrarily strong controls, this means that one can find controls that move from any state to any other state arbitrarily quickly. That is, the speed limit for state control in this case is zero. The emergence of a finite speed limit as given by (18) and (19) highlights the difference between unitary control and state control. It is also worth noting the appearance of an explicit dependence on the Hilbert space dimension in these bounds, as existing speed limits in the literature are usually not able to include this factor. ## IV Bound tightness in terms of dimension counting Let us discuss the tightness of our speed limit bounds from the perspective of the dimensions of the control group. Our bound was obtained from the observations that the speed of movement in \(G/K\) is constant and that the largest distance between two points (the diameter) is finite. 
While the existence of a length minimizing geodesic connecting the origin \(o\) with any other point \(x\in G/K\) is guaranteed (by the Hopf-Rinow theorem), it is not clear that such a geodesic is available by choice of suitable controls. Denote by \(D\) the set of points maximizing the distance from the origin, i.e. the points \(x\in G/K\) with \(d(o,x)=\mathrm{diam}(G/K)\) where \(d\) denotes the Riemannian distance on \(G/K\). As both inversion and the \(K\)-action are isometries that fix the origin, we know that they also leave \(D\) invariant, i.e. \(D^{-1}=D\) and \(D\cdot k=D\) for all \(k\in K\). For the bounds to be tight, it is necessary and sufficient that _for each_ \(x\in D\), there is a minimal geodesic connecting the origin \(o\) with \(x\) which is of the form \([\exp(vt)]\) with \(v\in\mathcal{O}(iH_{d})\). This trivially holds if \(\mathcal{O}(iH_{d})\) is equal to the sphere \(S=\partial B_{r}(0)\) in \(\mathfrak{p}\) of radius \(r=\sqrt{\langle iH_{d},iH_{d}\rangle_{K}}\) (note that all directions in the adjoint orbit have the same length by \(\mathrm{Ad}\)-invariance). The adjoint orbit itself is a closed manifold which is a subset of \(S\). In the case that the dimension of \(\mathcal{O}(iH_{d})\) is maximal (i.e. equal to \(\dim\mathfrak{p}-1\)), it follows that \(\mathcal{O}(iH_{d})\) is equal to \(S\) and thus contains every direction in \(\mathfrak{p}\). The dimension of the adjoint orbit is \[\dim\mathcal{O}(iH_{d})=\dim\mathfrak{k}-\dim(\{A\in\mathfrak{k}\,|\,[H_{d},A]=0\}) \tag{21}\] because \(T_{A}\mathcal{O}(A)\cong\mathfrak{k}/\ker[A,\,\cdot\,]\). This means that the bound is tight if we have equality in \[\dim(\{A\in\mathfrak{k}\,|\,[H_{d},A]=0\})\geq 1+\dim\mathfrak{k}-\dim\mathfrak{p}. \tag{22}\] This inequality always holds, and equality is equivalent to the ability to move into every possible direction in \(G/K\). We stress that this is a _sufficient_ condition, but not a necessary one. Even if the adjoint orbit does not have enough directions to access all dimensions of \(\mathfrak{p}\), that does not rule out the possibility that, for a specific drift Hamiltonian, a single-parameter geodesic from the origin to the locus corresponding to the diameter with an initial direction lying in the adjoint orbit exists. Table 2 lists the relevant dimensions of \(\mathfrak{k}\) and \(\mathfrak{p}\) for the symmetric spaces we are considering, as well as the quantity corresponding to the right hand side of (22). For the symmetric spaces \(\mathrm{SU}(n)/\mathrm{Sp}(\frac{n}{2})\) and \(\mathrm{SU}(p+q)/\mathrm{S}(\mathrm{U}(p)\times\mathrm{U}(q))\) the number of degrees of freedom in the control group exceeds that of the quotient space, so naive dimension counting arguments suggest the bound is likely to be tight, although one must test for equality in Eq. (22) to be sure. However, it is clear that for the case \(\mathrm{SU}(n)/\mathrm{SO}(n)\) with \(n>2\) it is never possible to achieve equality in (22), as the dimension of a space can never be less than zero. Nonetheless, as we will see in our numerical tests of the speed limit in Section VI, for some drift Hamiltonians the bounds (16) and (17) are still tight. To investigate this in more detail, we consider the case where the control algebra is \(\mathfrak{k}=\mathfrak{so}(n)\). We wish to determine the size of \(\dim(\{A\in\mathfrak{k}\,|\,[H_{d},A]=0\})\); a direct numerical check of this quantity is sketched below. 
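This commutant dimension, and hence the orbit dimension (21), can be computed numerically for \(\mathfrak{k}=\mathfrak{so}(n)\) from the kernel of \([H_{d},\,\cdot\,]\) restricted to \(\mathfrak{k}\); the following is a minimal sketch (the helper name is our own):

```python
import numpy as np

def commutant_dim_in_so(H_d, tol=1e-9):
    """dim{A in so(n) : [H_d, A] = 0}, from the kernel of ad_{H_d}
    restricted to the real antisymmetric matrices."""
    n = H_d.shape[0]
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            B = np.zeros((n, n))
            B[i, j], B[j, i] = 1.0, -1.0
            basis.append(B)
    # Columns of `ad` are the flattened commutators [H_d, B]
    ad = np.array([(H_d @ B - B @ H_d).ravel() for B in basis]).T
    return len(basis) - np.linalg.matrix_rank(ad, tol=tol)

d_k = 3  # dim so(3)
for evals in ([1.0, 0.0, -1.0], [1.0, -0.5, -0.5]):
    c = commutant_dim_in_so(np.diag(evals))
    print(evals, "commutant dim:", c, " orbit dim via (21):", d_k - c)
# Non-degenerate drift: orbit dimension 3; degenerate drift: 2.
```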
\begin{table} \begin{tabular}{c c c c} \(G/K\) & \(d_{p}\) & \(d_{k}\) & \(d_{k}-d_{p}+1\) \\ \hline \(\mathrm{SU}(n)/\mathrm{SO}(n)\) & \(\frac{1}{2}(n^{2}+n-2)\) & \(\frac{n}{2}(n-1)\) & \(2-n\) \\ \(\mathrm{SU}(n)/\mathrm{Sp}(\frac{n}{2})\) & \(\frac{1}{2}(n^{2}-n-2)\) & \(\frac{n}{2}(n+1)\) & \(2+n\) \\ \(\mathrm{SU}(p+q)/\mathrm{S}(\mathrm{U}(p)\times\mathrm{U}(q))\) & \(2pq\) & \(p^{2}+q^{2}-1\) & \((p-q)^{2}\) \\ \hline \end{tabular} \end{table} Table 2: The dimensions of the Lie algebras associated with the three symmetric spaces associated with \(\mathrm{SU}(n)\). \(d_{k}=\dim(\mathfrak{k})\) is the dimension of the control algebra and \(d_{p}=\dim(\mathfrak{p})\) is the dimension of the symmetric space \(G/K\). If \(\dim(\{A\in\mathfrak{k}\,|\,[H_{d},A]=0\})=1+\dim\mathfrak{k}-\dim\mathfrak{p}\), the adjoint orbit from the controls is guaranteed to have enough degrees of freedom to choose any single parameter geodesic from the origin to a point corresponding to the diameter of the space. To begin, we note that any drift \(H_{d}\) can be moved into the Cartan subalgebra by some controls. This subalgebra is diagonal with trace zero, meaning we need only consider the case where \(H_{d}\) is diagonal. Let \(H_{d}=\text{diag}\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\}\), where the \(\lambda_{i}\) are the eigenvalues of \(H_{d}\). We choose the basis of \(\mathfrak{k}\) to be the set of \(n\times n\) matrices given by \(B_{ij}=|e_{i}\rangle\langle e_{j}|-|e_{j}\rangle\langle e_{i}|\), \(i<j\), where \(|e_{i}\rangle\) is the column vector with a \(1\) in row \(i\) and zero everywhere else. The size of this basis is \(\dim\mathfrak{k}=n(n-1)/2\). The commutator of \(H_{d}\) with the basis elements of \(\mathfrak{k}\) is given by \[\left[H_{d},B_{ij}\right]=\left(\lambda_{i}-\lambda_{j}\right)\left(|e_{i}\rangle\langle e_{j}|+|e_{j}\rangle\langle e_{i}|\right), \tag{23}\] demonstrating that to ensure \(\left[H_{d},B_{ij}\right]=0\) we require \(\lambda_{i}=\lambda_{j}\). This means that \(\dim(\{A\in\mathfrak{k}\,|\,[H_{d},A]=0\})\) is given by the number of pairs \(M\) of eigenvalues of \(H_{d}\) that are degenerate, giving \(\dim\mathcal{O}(iH_{d})=\dim\mathfrak{k}-M\). So, for example, if all eigenvalues are distinct, \(M=0\) meaning \(\dim\mathcal{O}(iH_{d})=\dim\mathfrak{k}\). If all eigenvalues are identical, then \(M=\frac{1}{2}n(n-1)=\dim\mathfrak{k}\) meaning \(\dim\mathcal{O}(iH_{d})=0\). This shows that the more eigenvalues that are degenerate, the smaller the chance that the adjoint orbit allows us to choose a direction that makes the bound tight. As an example, consider the case \(\text{SU}(2)/\text{SO}(2)\) discussed in the previous Section. Here \(d_{k}=1\), \(d_{p}=2\), so equality in Eq. (22) is achieved when the two eigenvalues of \(H_{d}\) are not degenerate. Specifically, in this case the adjoint orbit is one-dimensional, and since \(G/K\) is the two-sphere, this single degree of freedom for the adjoint orbit suffices to choose arbitrary directions on the two-dimensional manifold, meaning achieving a minimal geodesic from the origin to the diameter is always possible. ## V Examination of the tightness of our bounds with Cartan controls In the previous Section we developed a criterion that was _sufficient_ to show our speed limit bounds were tight, based on determining the dimension of the adjoint orbit. As this criterion is not _necessary_, however, this Section examines what else can be said about the tightness of the bounds. 
We do this mostly for the controllable case \(\mathfrak{g}=\mathfrak{su}(n)\) by using the root system of (\(\mathfrak{g}\), \(\mathfrak{k}\)), and we illustrate the approach using \(\mathfrak{k}=\mathfrak{so}(n)\). We begin by considering the symmetric spaces described in the previous section as arising from the situation where the controls form a Cartan decomposition of the full Lie algebra. As before, the control algebra is denoted \(\mathfrak{k}\) and the associated control group denoted \(K=e^{\mathfrak{k}}\). This decomposition is often used in quantum control problems. The main point is that a Cartan decomposition provides a decomposition of the full Lie algebra of the form \(\mathfrak{g}=\mathfrak{p}\oplus\mathfrak{k}\) with \(\mathfrak{p}=\mathfrak{k}^{\perp}\), that satisfies the relations \[\left[\mathfrak{k},\mathfrak{k}\right]\subseteq\mathfrak{k}, \tag{24}\] \[\left[\mathfrak{k},\mathfrak{p}\right]\subseteq\mathfrak{p}, \tag{25}\] \[\left[\mathfrak{p},\mathfrak{p}\right]\subseteq\mathfrak{k}. \tag{26}\] These conditions include those required for a reductive space, plus the additional condition (26). Here the Lie algebra is again equipped with the inner product (10) in order to match the speeds and manifold diameters computed in the previous section. There are precisely three Cartan decompositions of \(\mathfrak{su}(n)\)[3]. They are \(\mathfrak{k}=\mathfrak{so}(n)\), \(\mathfrak{k}=\mathfrak{sp}(\frac{n}{2})\), and \(\mathfrak{k}=\mathfrak{s}(\mathfrak{u}(p)\oplus\mathfrak{u}(q))\) with \(p+q=n\), where \[\mathfrak{s}(\mathfrak{u}(p)\oplus\mathfrak{u}(q))=\left\{\left(\begin{array}{cc}A&0\\ 0&B\end{array}\right)\bigg{|}\,A\in\mathfrak{u}(p),B\in\mathfrak{u}(q),\text{Tr}A=-\text{Tr}B\,\right\}. \tag{27}\] These three decompositions are associated with the three possible symmetric spaces of \(\text{SU}(n)\) we met before. To proceed we need the following notion: A Cartan subalgebra of \(\mathfrak{g}\) (with respect to a Cartan decomposition \(\mathfrak{g}=\mathfrak{p}\oplus\mathfrak{k}\)) is a maximal abelian subalgebra \(\mathfrak{a}\) contained in \(\mathfrak{p}\)[3] (subalgebras contained in \(\mathfrak{p}\) are abelian because of Eq. (26)). All Cartan subalgebras are conjugate via an element \(k\in K\) and every element of \(\mathfrak{p}\) is contained in a Cartan subalgebra [3]. In particular, for every \(X\in\mathfrak{p}\) there are \(k\in K\) and \(A\in\mathfrak{a}\) so that \[X=kAk^{-1}. \tag{28}\] From now on we assume that \(\mathfrak{g}=\mathfrak{su}(n)\). It is possible to use the maximal tori theorem to show [16] that the fastest way to generate any target unitary \(U_{\text{targ}}\) is to find the smallest \(\tau\) such that it is possible to write \[U_{\text{targ}}=k_{1}\exp(v\tau)k_{2} \tag{29}\] with \(k_{1},k_{2}\in K\) and \(v\in\mathfrak{p}\) of the form \[v=\sum_{i=1}^{m}\beta_{i}X_{i},\quad\beta_{i}\geq 0,\,\sum_{i}\beta_{i}=1,\,X_{i}\in\mathcal{W}(iH_{d}) \tag{30}\] where \(\mathcal{W}(iH_{d})=\mathfrak{a}\cap\mathcal{O}(iH_{d})\) is the Weyl orbit of \(iH_{d}\). Note that Eq. (29) does not actually give a specific minimal time solution; it merely states the form it must take, and reduces the difficulty of the (usually numerical) optimization problem. 
Clearly \(v\in\mathfrak{p}\) and gives the direction of the geodesic connecting the identity and \(U_{\text{targ}}\) in \(G/K\), so (29) shows that the correct control strategy is to apply strong controls initially to pick the correct direction in \(G/K\) (provided the adjoint orbit allows that direction), then drift for a time with all controls at zero, then apply strong controls again to move to the final desired \(U_{\text{targ}}\) within the coset. If \(v\) lies in \(\mathcal{O}(iH_{d})\), we can generate it and will always be capable of moving on a geodesic between any two points in \(G/K\), including from the identity to the point furthest away, corresponding to the diameter of \(G/K\). Since all elements of the Weyl orbit commute, \(\exp(v\tau)\) can be written \[\exp(v\tau)=\exp(\beta_{1}X_{1}\tau)\exp(\beta_{2}X_{2}\tau)\cdots\exp(\beta_{m}X_{m}\tau) \tag{31}\] with \(\beta_{i}\) and \(X_{i}\) as in (30). Because the elements of the Weyl orbit \(\mathcal{W}(iH_{d})\) are a subset of the adjoint orbit \(\mathcal{O}(iH_{d})\), we are clearly capable of implementing \(\exp(v\tau)\) through the action of the drift and arbitrarily strong controls. It is important to note, however, that the fastest way of implementing a unitary by using the available controls, i.e. the path described by (31), is not necessarily a minimal geodesic between the initial and final points even though it is a piecewise geodesic. Only if the right hand side of (31) consists of a single exponential is it possible that the time this fastest path takes coincides with our lower bound given by Eqs. (16) - (20). We now examine the question as to when the \(v\) in Eq. (31) lies within the adjoint orbit, making our speed limit lower bounds tight. As noted in the previous section, the fact that \(G/K\) is homogeneous implies that \(\operatorname{diam}(G/K)=\sup_{x\in G/K}d(o,x)\), meaning we need only look for the point \(x\) corresponding to the target unitary that is furthest from the group identity along a single-parameter geodesic. This point has the property that a geodesic starting at the origin ceases to be length minimizing after passing through \(x\). The set of points where geodesics starting at the origin \(o\) cease to be length minimizing is known as the cut locus (of the origin). By the Hopf-Rinow theorem [40], there is for every \(x\in G/K\) a minimizing geodesic joining it with the origin. If \(G\) is simply connected (as is the case for \(\operatorname{SU}(n)\)), the symmetric space \(G/K\) is also simply connected [44] (Proposition 3.6), which implies that the cut locus coincides with what is called the first conjugate locus [45] (Theorem 3.5.4). The conjugate locus can be described in terms of the positive roots \(\Delta^{+}(\mathfrak{g},\mathfrak{a})\) of the Lie algebra \(\mathfrak{g}\) with respect to the Cartan subalgebra \(\mathfrak{a}\). Specifically, the exact form of the conjugate locus of a point \(x\in G/K\) is given by [43, 45, 46]:
\[C(x)=\left\{x\cdot e^{A}k\ \middle|\ k\in K,\ A\in\mathfrak{a}\text{ s.t. }\alpha(A)\in i\pi\mathbb{Z}\setminus\{0\}\text{ for some }\alpha\in\Delta^{+}(\mathfrak{g},\mathfrak{a})\right\}. \tag{32}\] For \(\mathfrak{g}=\mathfrak{su}(n)\), with \(\mathfrak{a}\) spanned by the diagonal traceless matrices, the roots evaluate on \(A=i\operatorname{diag}\{a_{1},\ldots,a_{n}\}\in\mathfrak{a}\) as \(\alpha_{jk}(A)=i(a_{j}-a_{k})\) for \(j\neq k\). Determining the diameter of \(G/K\) directly from this description is difficult: even in the \(\mathrm{SU}(3)/\mathrm{SO}(3)\) case, where each element of \(K=\mathrm{SO}(3)\) has three parameters, this is already an eight-parameter problem, and is analytically difficult. Higher dimensional groups will pose an even bigger problem. We can, however, gain some partial information by making use of the conjugate locus condition (32). We note that any drift \(iH_{d}\in\mathfrak{p}\) can be moved into a Cartan subalgebra \(\mathfrak{a}\) via conjugation by some controls, i.e. \(iH_{d}^{\mathfrak{a}}=k\,iH_{d}\,k^{\dagger}\). This is a unitary transformation which does not change the spectrum, and since the Cartan subalgebra is spanned by the real diagonal matrices with zero trace, we write \(H_{d}^{\mathfrak{a}}\) in terms of its eigenvalues: \(H_{d}^{\mathfrak{a}}=\operatorname{diag}\{\lambda_{1},\lambda_{2},\lambda_{3}\}\) where \(\lambda_{3}=-\lambda_{1}-\lambda_{2}\) since \(\operatorname{Tr}(H_{d}^{\mathfrak{a}})=0\). If we apply the condition in (32) to \(A=iH_{d}^{\mathfrak{a}}t=it\operatorname{diag}\{\lambda_{1},\lambda_{2},-\lambda_{1}-\lambda_{2}\}\) we see that to intersect the cut locus we need one of \[(\lambda_{1}+2\lambda_{2})t=m_{1}\pi \tag{41}\] \[(\lambda_{1}-\lambda_{2})t=m_{2}\pi \tag{42}\] to be satisfied for some nonzero integer \(m_{1}\) or \(m_{2}\). Since \(t\) is a continuous positive parameter, these conditions will almost always be satisfied for some \(t\), unless \(\lambda_{1}\) and \(\lambda_{2}\) are chosen to make the left hand side of one of (41), (42) equal to zero. This occurs if \(\lambda_{1}=\lambda_{2}\), or if \(\lambda_{1}=-2\lambda_{2}\). However these two conditions are equivalent, since if \(\lambda_{1}=-2\lambda_{2}\), then \(\lambda_{3}=-\lambda_{1}-\lambda_{2}=\lambda_{2}\) due to the zero trace condition, showing that \(\lambda_{2}\) and \(\lambda_{3}\) are degenerate. Consequently if any two eigenvalues are degenerate, one of (41), (42) cannot be met. Note that this condition does not guarantee there is no element of the Weyl orbit that produces a single-parameter geodesic from the identity to the point corresponding to the diameter, but it reduces the possibility since it ensures there is a portion of the cut locus that cannot be reached. This is because each root condition corresponds to a geodesic that intersects a different portion of the cut locus, so failing one of the conditions (41), (42) will only result in the bound not being tight if the diameter lies on that portion of the cut locus. However this is more powerful than might first be imagined, since if _any_ drift Hamiltonian with degenerate eigenvalues exceeds the lower bound on the speed limit, then _all_ drift Hamiltonians with degenerate eigenvalues will exceed the lower bound. 
This is because the ordering of the elements of a diagonal matrix can be arbitrarily switched by controls, and multiplying the drift by a scalar does not change whether the bound is tight; it merely stretches the timescale. This means all drifts with two degenerate eigenvalues have the same behaviour regarding whether the bound is tight. If this can be determined for a single case in \(\operatorname{SU}(3)/\operatorname{SO}(3)\), the behaviour of all drift Hamiltonians is known. This is one of the questions that will be investigated numerically in the next Section. ## VI Numerical tests of the analytic speed limits In Section III we derived lower bounds on the quantum speed limit for various types of controls, and in Section V we looked at evidence for when these bounds might be saturated, i.e. when the bound is actually exact. We now examine these systems to determine the speed limit via a numerical optimization procedure. The motivation is to provide checks on both the analytic bounds as well as to test our dimension counting and eigenvalue degeneracy arguments for bound tightness laid out in Sections IV and V. In addition, bilinear optimal control problems are seldom analytically tractable and are usually approached numerically, so our analytic results provide an ideal test for checking the performance of various optimization strategies. Our approach was to determine the quantum speed limit numerically for a variety of drifts and a variety of Hilbert space dimensions, assuming the control Hamiltonians generate one of the three Lie algebras \(\mathfrak{so}(n)\), \(\mathfrak{sp}(\frac{n}{2})\), or \(\mathfrak{s}(\mathfrak{u}(p)\oplus\mathfrak{u}(q))\) with \(p+q=n\). To do this we chose a series of Haar-random unitary targets, and attempted to numerically find optimal controls that, for a specific drift Hamiltonian \(H_{d}\), would achieve that unitary at a specific chosen time \(T\). That time was divided into \(N\) discrete intervals ("time slots"), with the width of each time slot given by \(T/N\), and the controls were assumed to have a constant amplitude over each interval, i.e. the controls were time-dependent but piecewise constant. In the limit of a large number of time slots, arbitrary control functions are well approximated. Specifically, we solved \[i\frac{d}{dt}U(t)=\bigg{(}H_{d}+\sum_{j=1}^{m}f_{j}(t)H_{j}\bigg{)}U(t) \tag{43}\] with an initial random guess at the amplitudes in each time slot for each independent control function \(f_{j}(t)\). We used QuTiP's optimal control package [47] with a gradient ascent algorithm to find the control functions that maximized the overlap between the final unitary resulting from the evolution of (43) and the desired target unitary, as given by the phase-insensitive fidelity measure \[F=\frac{1}{n}\big{|}\mathrm{Tr}\big{[}U_{\mathrm{target}}^{\dagger}U(T)\big{]}\big{|}. \tag{44}\] This process was then repeated many times with different random initial guesses to avoid the optimizer becoming stuck in local minima. For each target unitary, we gradually increased the time \(T\) until a solution could be found where the fidelity error \(1-F\) was less than a cutoff of \(10^{-7}\). This was repeated for a large number of random unitaries, and the quantum speed limit for that particular drift was taken to be the lowest time for which we could guarantee solutions for all the unitaries with a fidelity error less than the cutoff. 
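In outline, a single sweep over \(T\) for the \(\mathrm{SU}(3)/\mathrm{SO}(3)\) case can be set up as below. This is a minimal sketch based on QuTiP 4's `pulseoptim` module rather than the authors' actual script; the call signature reflects our reading of that API and may need adjustment for other QuTiP versions.

```python
import numpy as np
from qutip import Qobj, identity, rand_unitary_haar
import qutip.control.pulseoptim as cpo

# SU(3)/SO(3): drift diag{1,0,-1}, controls = Gell-Mann lambda_2, lambda_5, lambda_7
H_d = Qobj(np.diag([1.0, 0.0, -1.0]))
l2 = Qobj([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l5 = Qobj([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l7 = Qobj([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
H_c = [l2, l5, l7]

U_targ = rand_unitary_haar(3)            # one Haar-random target
for T in np.arange(0.2, 2.4, 0.2):       # sweep the allowed evolution time
    result = cpo.optimize_pulse_unitary(
        H_d, H_c, identity(3), U_targ,
        num_tslots=100, evo_time=T,      # 100 piecewise-constant time slots
        fid_err_targ=1e-7,               # the cutoff used in the text
        init_pulse_type='RND',           # random initial guess for the amplitudes
        phase_option='PSU')              # phase-insensitive fidelity, Eq. (44)
    print(f"T = {T:.1f}  fidelity error = {result.fid_err:.2e}")
```

Repeating this over many targets and random restarts, and recording the smallest \(T\) at which every target falls below the error cutoff, reproduces the procedure described above.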
This is illustrated in Figure 1 with a small sample of the results for the \(\operatorname{SU}(3)/\operatorname{SO}(3)\) case for a particular drift corresponding to a predicted analytic quantum speed limit of \(t_{\mathrm{qsl}}=1.81\). It shows how the fidelity error for any given target reduces as more time is allowed, until we reach a sudden drop in the error, which we interpret as the existence of a set of control functions that can achieve that unitary. Not all drift Hamiltonians \(H_{d}\) need to be examined. First, if \(iH_{d}\) has some overlap with \(\mathfrak{k}\) then this portion can be removed arbitrarily quickly by application of the controls, so we can assume \(iH_{d}\in\mathfrak{p}\). Second, since \(iH_{d}\in\mathfrak{p}\), due to (28) it can also be moved into a subspace of \(\mathfrak{p}\) corresponding to a Cartan subalgebra \(\mathfrak{a}\) by application of the controls. This means we need only consider drift Hamiltonians drawn from \(\mathfrak{a}\) (multiplied by \(i\)). There are a number of reasons that the numerical approach may provide a speed limit higher than the true one, making it difficult to determine if the lower bounds given by Eqs. (16) - (20) are truly tight. First, for a given time there may have been a better solution that the optimizer simply missed, even with many attempts with random initial conditions. Second, because we have divided the total time \(T\) into \(N\) time slots, elements of the control group cannot be performed arbitrarily fast; they take at least \(T/N\). Both of these serve to ensure the speed limit found numerically will be slightly higher than the true speed limit. Third, since the testing is done with a set of discrete choices of time \(T\), there may be a fast solution at a specific low \(T\) that we don't see because that value of \(T\) is not tested, giving the illusion that the speed limit for that unitary is higher than it actually is. Conversely, we draw the target unitaries from a Haar-random set. As the dimension of the Hilbert space increases, it becomes increasingly difficult to properly sample the set of possible unitaries, and this is exacerbated by the fact that higher dimensions take longer to simulate so fewer targets can be sampled; if the worst-case unitary is missed, the numerically determined speed limit will come out too low. With these caveats in mind, we now examine the results of the numerical optimization process. We first consider the case where the controls generate the \(\text{Sp}(n/2)\) subgroup. As discussed in Section IV, from dimension counting arguments we might expect the speed limit bounds given by (18) and (19) to be tight. The elements of the Lie algebra \(\mathfrak{sp}(\frac{n}{2})\) have the form \[\begin{pmatrix}L_{1}&L_{2}\\ -L_{2}^{*}&L_{1}^{*}\end{pmatrix}\] with \(L_{1}\) skew-Hermitian and \(L_{2}=L_{2}^{T}\), where \(L_{1}\), \(L_{2}\) are complex and \(\frac{n}{2}\times\frac{n}{2}\) in size. One chooses a basis for this space, and the control Hamiltonians will be given by this basis multiplied by \(i\). As discussed above, we need only consider drift Hamiltonians that lie within the Cartan subalgebra, which drastically reduces the possibilities. For \(\mathfrak{sp}(\frac{n}{2})\) this is given by matrices of the form [3] \[A=\begin{pmatrix}D&0\\ 0&D\end{pmatrix} \tag{45}\] with \(D\) diagonal and \(D\in\mathfrak{su}(\frac{n}{2})\). Figure 2 shows results for the \(\text{SU}(4)/\text{Sp}(2)\) case, with a drift Hamiltonian \(H_{d}=\text{diag}\{1,-1,1,-1\}\). Up to a constant factor, this is in fact the only drift Hamiltonian that lies within the Cartan subalgebra. 
As expected, all random target unitaries chosen can be reached in a time under the speed limit given by (18), and the maximal time falls on the speed limit, showing that the bound is tight.

Figure 1: Example of how the quantum speed limit is determined numerically, for the case with \(\text{SO}(3)\) controls. Each line corresponds to a random target unitary in \(\text{SU}(3)\). We attempt to find a solution for time-dependent controls for a given fixed total time \(T\) (horizontal axis), and a specified drift \(H_{d}\). As \(T\) is increased, better solutions can be found, giving a better fidelity overlap with the target unitary. When the fidelity error is lower than some cutoff, we take this to mean we have found a solution for the control pulse that can generate the unitary. If this is repeated many times for many random unitary targets, the speed limit is taken to be the time for which we can find a control pulse for all possible targets in this time or less. This plot shows 30 Haar-random unitary targets with \(H_{d}=\text{diag}\{1,0,-1\}\) and 100 time slots.

Figure 2: Speed limit for the \(\text{SU}(4)/\text{Sp}(2)\) case, with a drift \(H_{d}=\text{diag}\{1,-1,1,-1\}\). The histogram shows the fastest possible times to achieve 150 randomly chosen unitary targets when using \(\text{Sp}(\frac{n}{2})\) controls, with the analytic lower bound given by (18) represented by the vertical green line. As all targets can be met in a time no greater than the bound, and some targets are at the bound, the bound is tight.

Next we consider the case where the controls generate the \(\text{S}(\text{U}(p)\times\text{U}(q))\) subgroup of \(\text{SU}(n)\), with \(p+q=n\), \(p\leq q\). Its Lie algebra \(\mathfrak{s}(\mathfrak{u}(p)\oplus\mathfrak{u}(q))\) is given by Eq. (27). Again, from dimension counting arguments one would expect the bound given by (20) to be tight. The Cartan subalgebra is given by matrices of the form [3] \[A=\begin{pmatrix}0&B\\ -B^{T}&0\end{pmatrix} \tag{46}\] where \(B\) is a real \(p\times q\) matrix that is zero everywhere except for the first \(p\) columns, which form a \(p\times p\) diagonal matrix. We chose our drift Hamiltonian to be given by \[H_{d}=i\begin{pmatrix}0&0&1&0&0\\ 0&0&0&4&0\\ -1&0&0&0&0\\ 0&-4&0&0&0\\ 0&0&0&0&0\end{pmatrix}. \tag{47}\] Figure 3 shows results for the \(\mathrm{SU}(5)/\mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(3))\) case with the drift Hamiltonian given by (47). As expected, all random target unitaries chosen can be reached in a time equal to or less than the speed limit given by (20). Again, we conclude that in this case the bound is tight.

Figure 3: Speed limit for the \(\mathrm{SU}(5)/\mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(3))\) case, with a drift Hamiltonian given by (47). The histogram shows the fastest possible times to achieve 120 randomly chosen unitary targets when using \(\mathrm{S}(\mathrm{U}(2)\times\mathrm{U}(3))\) controls, with the analytic lower bound given by (20) represented by the vertical green line. As all targets can be met in a time no greater than the bound, and some targets are at the bound, the bound is tight.

We come now to the third and final case, \(\mathrm{SU}(n)/\mathrm{SO}(n)\). Dimension counting arguments suggest that we cannot always rely on the bound being tight, and at least in the \(\mathrm{SU}(3)/\mathrm{SO}(3)\) case we expect the bound to fail to be tight if the drift Hamiltonian has a degenerate eigenvalue. The Lie algebra \(\mathfrak{so}(n)\) associated with the \(\mathrm{SO}(n)\) control group is the set of \(n\times n\) real antisymmetric matrices, and the Cartan subalgebra consists of the real, diagonal, traceless matrices multiplied by \(i\). Our numerics were carried out for the \(\mathrm{SU}(3)/\mathrm{SO}(3)\) case, where the control Hamiltonians were given by the three Gell-Mann matrices \(\lambda_{2},\lambda_{5}\) and \(\lambda_{7}\), and the Cartan subalgebra is spanned by \(i\lambda_{3}\) and \(i\lambda_{8}\).
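A quick check (our own, not from the paper) that these three Gell-Mann matrices do close into \(\mathfrak{so}(3)\) under commutation:

```python
import numpy as np

# The control Hamiltonians lambda_2, lambda_5, lambda_7 for SU(3)/SO(3).
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])

comm = lambda a, b: a @ b - b @ a
# i*lambda_j is real antisymmetric, and the set closes under commutation,
# so span{i*l2, i*l5, i*l7} = so(3):
assert all(np.allclose((1j * l).imag, 0) for l in (l2, l5, l7))
assert np.allclose(comm(l2, l5), 1j * l7)
assert np.allclose(comm(l5, l7), 1j * l2)
assert np.allclose(comm(l7, l2), 1j * l5)
```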
We first consider a drift Hamiltonian \(H_{d}=\mathrm{diag}\{1,0,-1\}\), which clearly does not have degenerate eigenvalues. The results are shown in Figure 4. Interestingly, we see that the speed limit lower bound is still tight: no target unitary takes longer than this lower bound.

Figure 4: Speed limit for the \(\mathrm{SU}(3)/\mathrm{SO}(3)\) case, with a drift \(H_{d}=\mathrm{diag}\{1,0,-1\}\). The histogram shows the fastest possible times to achieve 160 randomly chosen unitary targets when using \(\mathrm{SO}(3)\) controls, with the analytic lower bound given by (17) represented by the vertical green line. As all targets can be met in a time no greater than the bound, and some targets are at the bound, the bound is tight.

Finally, we consider the case with a drift \(H_{d}=\mathrm{diag}\{1,-\frac{1}{2},-\frac{1}{2}\}\), which _does_ have a degenerate eigenvalue. The results are shown in Figure 5, and we see that while the analytic lower bound given by (17) is still respected, it is no longer tight, which is what we expect due to \(H_{d}\) possessing degenerate eigenvalues.

Figure 5: Speed limit for the \(\mathrm{SU}(3)/\mathrm{SO}(3)\) case, with a drift \(H_{d}=\mathrm{diag}\{1,-0.5,-0.5\}\). The histogram shows the fastest possible times to achieve 140 randomly chosen unitary targets when using \(\mathrm{SO}(3)\) controls, with the analytic lower bound given by (17) represented by the vertical green line. Some targets can only be met in a time greater than the bound; consequently the bound is not tight for this particular drift, as expected due to the fact that \(H_{d}\) has two degenerate eigenvalues.

Collectively these results provide a check on the analytic results for the lower bounds on the quantum speed limit. They confirm that the bounds (16) - (20) are accurate, showing that if we consider all possible unitaries, there will be at least one that takes at least this long to generate. These simulations also support our conjecture that for the \(\mathrm{Sp}(\frac{n}{2})\) and \(\mathrm{S}(\mathrm{U}(p)\times\mathrm{U}(q))\) control schemes, the bounds are tight, meaning that there is at least one unitary that takes exactly that long, and no unitaries will take longer. Furthermore, the results show that, for the \(\mathrm{SO}(3)\) control case where the drift has a pair of degenerate eigenvalues, the bound is respected but is not tight, as expected. Interestingly, the bound in the \(\mathrm{SO}(3)\) control case where the drift has distinct eigenvalues does appear to be tight, at least for the particular drift Hamiltonian we chose. Finally, we see that numerical optimization techniques to find optimal control pulses for quantum systems appear to work remarkably well. Optimal pulses are found that respect the analytic bounds exactly, providing evidence that such methods can be trusted for bilinear control problems.
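The overall search procedure can be summarized in a few lines of code. The sketch below is our own schematic of the loop described at the start of this Section, not the authors' script; `fid_err(U, T)` stands for a wrapper that runs a pulse optimization (such as the earlier QuTiP sketch) with several random restarts and returns the best fidelity error found:

```python
import numpy as np

def numerical_qsl(fid_err, targets, times, cutoff=1e-7):
    """Estimate the speed limit for a fixed drift: for each target, find
    the smallest tested T at which the optimizer beats the cutoff, then
    take the maximum over all targets."""
    def minimal_time(U):
        for T in times:          # scan the discrete grid of times upward
            if fid_err(U, T) < cutoff:
                return T
        return np.inf            # no solution found on this grid
    return max(minimal_time(U) for U in targets)

# Toy demonstration with a mock optimizer, where each "target" is just
# the time it secretly requires:
mock = lambda U, T: 0.0 if T >= U else 1.0
print(numerical_qsl(mock, [0.5, 1.2, 1.81], np.arange(0.1, 3.0, 0.1)))
```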
## VII Speed Limits without a full set of Lie algebra controls The previous sections have obtained lower bounds on the quantum speed limit for systems with arbitrary drifts and with controls that form a closed subgroup of \(\mathrm{SU}(n)\), as well as considering in more detail the case where the control Hamiltonians are one of the Lie algebras \(\mathfrak{so}(n)\), \(\mathfrak{sp}(\frac{n}{2})\), or \(\mathfrak{s}(\mathfrak{u}(p)\oplus\mathfrak{u}(q))\). The number of control Hamiltonians required to span these Lie algebras is given by \(d_{k}\) in Table 2, and can be seen to scale quadratically in \(n\). Such a situation might seem to be difficult to arrange in practice. However it is important to realise that the controls themselves need not provide a full basis for the algebra, but rather that the dynamical Lie algebra generated through repeated application of the commutators of the controls provides such a basis. Clearly, if we have a full set of controls that already provide a basis, that is enough. But the question is, how few control Hamiltonians do we actually need to generate these algebras? It is known that the simple compact classical Lie algebras \(\mathfrak{su}(n)\), \(\mathfrak{so}(n)\), and \(\mathfrak{sp}(\frac{n}{2})\) can be generated by "one and a half" elements [48]. This means that if we choose any element in the algebra, there exists a second element in the algebra that along with the first will generate the entire algebra, provided neither of the two is zero. Consequently one never needs more than two control Hamiltonians to generate the full \(\mathfrak{so}(n)\) or \(\mathfrak{sp}(\frac{n}{2})\) algebras, ensuring the results in previous Sections are applicable (a numerical illustration of this generation property is sketched below). Finally, so far we have only discussed systems where we have multiple control Hamiltonians, but the situation with a single control, i.e. where the system Hamiltonian is given by \[H=H_{d}+f(t)H_{c} \tag{48}\] is very common. It is therefore useful to derive a bound on the quantum speed limit in this case. Again, the full Lie algebra of the system is \(\mathfrak{g}=\mathfrak{su}(n)\), and the control subalgebra is one-dimensional and is given by \(\mathfrak{f}=\mathrm{span}\{iH_{c}\}\). This pair does not admit a Cartan decomposition unless \(H_{c}\in\mathfrak{so}(2)\). Indeed, the Lie group \(K=\exp(\mathfrak{f})\) is in some cases not even topologically closed. Consequently the quotient space \(G/K\) may not be a homogeneous space, let alone a symmetric space. We can, however, apply the results we derived in previous Sections to obtain a lower bound on the quantum speed limit in this case by "embedding" this control problem into another which does satisfy our criteria. To obtain a bound we note that since \(H_{c}\) is Hermitian it can be transformed into a diagonal, purely real matrix \(H^{\prime}_{c}=UH_{c}U^{\dagger}\) via a unitary transformation. In this new basis the drift is given by \(H^{\prime}_{d}=UH_{d}U^{\dagger}\). Changing the basis of the problem via unitary transformation cannot change the speed limit since a basis change is only a mathematical convenience. We also introduce an auxiliary control problem with the same drift \(H^{\prime}_{d}\) but with the control group given by \(\mathrm{S}(\mathrm{U}(p)\times\mathrm{U}(q))\) with \(p+q=n\) and an associated control algebra \(\mathfrak{f}\). This auxiliary problem _does_ admit a Cartan decomposition.
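Here is the promised numerical illustration of the "one and a half" generation property, a minimal sketch (our own) that computes the dimension of the dynamical Lie algebra generated by two generic elements of \(\mathfrak{so}(n)\) by repeatedly adjoining commutators:

```python
import numpy as np

def lie_closure_dim(gens, tol=1e-9):
    """Real dimension of the Lie algebra generated by the matrices `gens`,
    obtained by adjoining commutators until the span stops growing."""
    to_vec = lambda m: np.concatenate([m.real.ravel(), m.imag.ravel()])
    basis = []
    def try_add(m):
        stack = np.array([to_vec(b) for b in basis] + [to_vec(m)])
        if np.linalg.matrix_rank(stack, tol=tol) > len(basis):
            basis.append(m)
            return True
        return False
    queue = list(gens)
    while queue:
        m = queue.pop()
        if try_add(m):
            queue.extend(m @ b - b @ m for b in basis[:-1])
    return len(basis)

n = 5
rng = np.random.default_rng(0)
gens = [x - x.T for x in rng.standard_normal((2, n, n))]  # two elements of so(5)
print(lie_closure_dim(gens), n * (n - 1) // 2)  # generically 10 = dim so(5)
```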
Since \(iH^{\prime}_{c}\) is diagonal, purely imaginary and traceless, it can be written \[iH^{\prime}_{c}=\begin{pmatrix}D_{1}&0\\ 0&D_{2}\end{pmatrix},\ \ \ \mathrm{Tr}(D_{1})+\mathrm{Tr}(D_{2})=0, \tag{49}\] where \(D_{1}\) and \(D_{2}\) are diagonal, imaginary and \(p\times p\) and \(q\times q\) respectively. Consequently we have \(D_{1}\in\mathfrak{u}(p),D_{2}\in\mathfrak{u}(q)\) and thus \(iH^{\prime}_{c}\in\mathfrak{f}\). This means that the control problem \[H=H^{\prime}_{d}+f(t)H^{\prime}_{c} \tag{50}\] is the same as the auxiliary control problem, except with fewer controls. That is, it has a single control from \(\mathfrak{f}\), rather than the entire basis set of \(p^{2}+q^{2}-1\) controls. Hence whatever the lower bound on the quantum speed limit for the auxiliary control problem, the lower bound for the system described by (50) must be at least as large since it has a strict subset of the controls relative to the auxiliary problem. Since the system described by (50) is physically equivalent to (48), and since the trace is unchanged by a unitary transformation, we obtain a lower bound on the quantum speed limit of (48) given by \[T_{\mathrm{QSL}}\geq\frac{\sqrt{p}\;\pi}{\sqrt{2\,\mathrm{Tr}(H^{2}_{d})}} \tag{51}\] where we have assumed without loss of generality that \(p\leq q\). Since our split of the sizes of \(D_{1}\) and \(D_{2}\) in (49) is only constrained by \(p\leq q\), we are free to choose the size of \(p\) and \(q\) to make the lower bound (51) as large as possible. This clearly occurs when \(p=\lfloor n/2\rfloor\), yielding \[T_{\mathrm{QSL}}\geq\frac{\sqrt{\lfloor n/2\rfloor}\,\pi}{\sqrt{2\,\mathrm{Tr} (H^{2}_{d})}} \tag{52}\] for the case where we have a single control. In the general case one would not expect (52) to be tight, but it does provide a rigorous lower bound and demonstrates how the quantum speed limit scales with dimension and how it depends on the form of the drift. ## VIII Conclusion The purpose of this paper has been to develop a lower bound for the quantum speed limit of a controllable, finite-dimensional system, given the assumption that the controls can be arbitrarily strong. We have also investigated the circumstances under which this lower bound is not merely a bound, but is actually exact. We have used the techniques of Lie algebras, Lie groups and differential geometry. Mindful that these areas may not be entirely familiar to many physicists, we have provided a pedagogical development of this material, making it clear why it is relevant, and constantly tying it back to the physics. We have also provided a number of examples to aid this process. Our approach has been completely general, and the basic result given by Theorem 1 holds for Hilbert spaces of arbitrary dimension, arbitrary drift Hamiltonians, and does not require specific symmetries. The only requirement is that the control group is topologically closed. This basic result, however, does require some knowledge of the diameter of the homogeneous space corresponding to the quotient of \(\mathrm{SU}(n)\) by the control group. While this is generally difficult to determine, exact diameters are available for symmetric spaces, allowing us to give explicit bounds in this case. It is important to note, however, that even if the exact diameter of the quotient space is not known analytically, any ability to bound the diameter, analytically or numerically, can immediately be used in our expression for the quantum speed limit, and merely results in a looser bound.
We have also examined the question of when our formula for the quantum speed limit is not merely a lower bound, but is actually exact. In the fully general case we developed a sufficiency criterion based on the dimension of the adjoint orbit and commutation relations between the drift Hamiltonian and the matrix representation of the Lie algebra corresponding to the controls. As an illustration we showed how this can be done for the case where the control group is \(\mathrm{SO}(n)\). As this criterion for bound tightness is sufficient but not necessary, we also examined what could further be said in the case where the controls are not arbitrary, but form a Cartan decomposition of the quantum control problem. In this case bound tightness depends on the cut locus of the quotient space, which can be described in terms of the positive roots of the Lie algebras. We were not able to provide a complete statement as to when the bounds were tight, but did show how conditions on the roots would decrease the probability that the bound was tight. Since the development of our results is somewhat abstract and mathematical, we have also examined our speed limit bounds using a numerical optimization procedure for a number of specific Hamiltonians. The purpose of this is twofold. First, it provides numerical confirmation of our explicit analytic bounds, as well as supporting our results on the link between the degree of degeneracy of the drift Hamiltonian and the tightness of the bounds. Second, it provides a general way to use numerical optimization to determine speed limits, and demonstrates that gradient descent-based techniques work well. Finally, we have considered the quantum speed limit in the very common quantum control case where one has a drift Hamiltonian and a single control Hamiltonian. Such a system need not meet the assumptions for our main speed limit theorem; for example the control group may not be closed, nor need the quotient space be a homogeneous space. Nonetheless, we showed it is possible to embed such a problem into a group that does meet our criteria, allowing us to use our previous results and thereby provide an explicit lower bound for this case.
2307.10525
Stability of klt singularities
We survey some recent development in the stability theory of klt singularities. The main focus is on the solution of the stable degeneration conjecture.
Ziquan Zhuang
2023-07-20T01:57:07Z
http://arxiv.org/abs/2307.10525v1
# Stability of klt singularities ###### Abstract. We survey some recent development in the stability theory of klt singularities. The main focus is on the solution of the stable degeneration conjecture. ## 1. Introduction As Goresky and MacPherson put it in their famous monograph [1, p.26], "Philosophically, any statement about the projective variety or its embedding really comes from a statement about the singularity at the point of the cone. Theorems about projective varieties should be consequences of more general theorems about singularities which are no longer required to be conical". In this expository article, we discuss this local and global correspondence in the context of K-stability, and survey some recent development in the local aspect of the stability theory. ### Motivation The local stability theory originates from questions in complex geometry. Recall that a Kahler-Einstein metric on a complex manifold is a Kahler metric \(\omega\) with constant Ricci curvature. After appropriate rescaling, this means \(\operatorname{Ric}(\omega)=\lambda\omega\) where \(\lambda\in\{0,-1,1\}\). On a Fano manifold, we have \(\lambda=1\). Consider a sequence of Kahler-Einstein Fano manifolds \((X_{k},\omega_{k})\)\((k=1,2,\dots)\). By the convergence theory of Riemannian manifolds, specifically Gromov's compactness theorem, one can pass to a subsequence and extract a Gromov-Hausdorff limit \(X_{\infty}\). In this context, Donaldson and Sun [1, 2] prove that the limit space \(X_{\infty}\) is also a Kahler-Einstein Fano variety. In particular, it is algebraic (but may be singular). To analyze the singularities of \(X_{\infty}\), they inspect the metric tangent cones of \(X_{\infty}\), which are pointed Gromov-Hausdorff limits of \(x\in(X_{\infty},r_{k}\omega_{\infty})\) for some fixed \(x\in X_{\infty}\) and some increasing sequence of scaling factors \(r_{k}\to\infty\). They find that the metric tangent cone again inherits some algebraic structure: it is a normal affine algebraic variety endowed with an effective torus action and a singular Ricci-flat Kahler cone metric. They also give a two-step degeneration description of the metric tangent cone, where the intermediate step (the K-semistable degeneration) is algebraic as well. There should be an algebro-geometric explanation for the ubiquity of algebraic structures in these constructions, and this is achieved by the algebraic K-stability theory. The recent development in the (global) K-stability theory of Fano varieties culminates in the K-moduli theorem, which (among other things) provides an algebro-geometric construction of the Gromov-Hausdorff limit \(X_{\infty}\). For those interested in this part of the theory, we recommend the survey [16] and the upcoming book [16] for a comprehensive and up-to-date account1. The local K-stability theory, which is the main topic of this survey article, will address Donaldson and Sun's conjecture that the two-step degeneration of \(x\in X_{\infty}\) to its metric tangent cone should only depend on the algebraic structure of the singularity (rather than the metric). More generally, as we will explain in subsequent sections, every Kawamata log terminal (klt) singularity has a two-step degeneration to a uniquely determined K-polystable Fano cone singularity, and it seems likely that there is a K-moduli of klt singularities. 
### History Apart from Donaldson-Sun's conjecture mentioned above, another source of inspiration for the development of the local stability theory is the question of the existence of Sasaki-Einstein metrics. In [16, 17], Martelli, Sparks and Yau set up a variational problem on Sasaki-Einstein manifolds whose critical point determines the Reeb vector field of the Sasaki-Einstein metric. The volume functional they considered and the minimization phenomenon they discovered may be seen as the first prototype of the local stability theory. Later, Collins and Szekelyhidi [18, 19] proved a Yau-Tian-Donaldson type criterion for the existence of a Ricci flat Kahler cone metric on an isolated cone singularity, or equivalently, a Sasaki-Einstein metric on the link of the singularity. In particular, they defined K-semi/polystability for Fano cones (by mimicking the definitions in the global case [14, 15]), and related the existence of a Ricci flat Kahler cone metric to the algebro-geometric condition that the singularity is a K-polystable Fano cone. The algebraic theory of local K-stability starts with Chi Li's introduction of the normalized volumes of valuations [10]. Li's insight (partly inspired by Martelli-Sparks-Yau's work) is that valuations on the singularity represent algebraic "rescalings" of the singularity, and that the valuation with the smallest normalized volume represents an "optimal rescaling" that should be closely related to the metric tangent cone degeneration. Based on this philosophy, he proposed to attack Donaldson-Sun's conjecture by solving a series of conjectures regarding the minimizer of the normalized volume function. The theory is further investigated in [11, 12]. In particular, Li and Xu [12] show that the K-semistable degeneration step in Donaldson-Sun's construction only depends on the algebraic structure of the singularity, and is indeed induced by a minimizer of the normalized volume function. Later [10] completes the proof of Donaldson-Sun's conjecture by proving the algebraicity of the other step (i.e. the K-polystable degeneration) of the metric tangent cone construction. The proof [12] of the algebraicity of the K-semistable degenerations assumes the existence of such degenerations, which in turn relies on deep analytic results [11, 12, 13] and as such is restricted to singularities on Gromov-Hausdorff limits of Kahler-Einstein Fano manifolds. To give a purely algebraic construction of the two-step degeneration, and to extend the theory to arbitrary klt singularities, [12] refines Li's original proposal, and puts forth what is now called the _Stable Degeneration Conjecture_ (see Section 2.4). It highlights a number of conjectural properties of the normalized volume minimizer, which, when put together, ensure that every klt singularity has a canonical stable degeneration induced by the said minimizer. The Stable Degeneration Conjecture is subsequently proved by a series of works: the existence of the normalized volume minimizer is proved by Blum [13], the uniqueness is established in [17] (later [1] gives another proof), Xu [18] proves that the minimizer is quasi-monomial (in _loc. cit._ he also gives another proof that the minimizer exists), while the finite generation property (known by itself as the local higher rank finite generation conjecture) is confirmed in [22]. It is proved in [1] that the induced degeneration is a K-semistable degeneration, and [17] further gives a recipe for constructing the K-polystable degeneration.
These complete the algebro-geometric construction of the two-step degeneration. The development of the local stability theory intertwines with the study of the K-stability of Fano varieties. The local and the global theory often draw inspiration from each other. The uniqueness of the normalized volume minimizer implies (through the cone construction) that equivariant K-semistability of a Fano variety is equivalent to K-semistability, and the proof of the uniqueness [22] is in turn inspired by the earlier work on equivariant K-stability of Fano varieties [23]. The idea behind Xu's proof [24] of the quasi-monomial property of the minimizer led to the proof of the openness of K-semistability in families of Fano varieties [24, 25]. The finite generation part of the Stable Degeneration Conjecture is a local analog of the higher rank finite generation conjecture for Fano varieties, proved in [1]. In the setting of Fano varieties, there is also an algebro-geometric construction of canonical two-step degenerations to Fano varieties with Kahler-Ricci solitons [25]. Inspired by the K-moduli theory of Fano varieties, the focus of the local stability theory has recently shifted towards the boundedness of singularities, an important missing ingredient for the local K-moduli theory. This topic has been intensively studied in [26, 27, 28, 29], yet the general case remains wide open. ### Outline Here is a roadmap for this survey. In Section 2, we define some basic objects in the local stability theory and state the Stable Degeneration Conjecture. In Section 3, we introduce the notion of Kollar components, which plays an important role in the study of klt singularities. The entire Section 4 is devoted to explaining some key ingredients in the proof of the Stable Degeneration Conjecture. Section 5 surveys our current understanding of the boundedness of klt singularities. Finally we discuss some conjectures and open questions in Section 6. Since our primary focus is on the stable degeneration and the boundedness of klt singularities, we have to leave out several other interesting topics such as the analytic aspect of the theory and further applications of the normalized volume. Some of these topics have been covered by the survey [1], which we recommend to the interested readers. ### Notation and conventions We always work over an algebraically closed field \(\Bbbk\) of characteristic \(0\). A singularity \(x\in X\) consists of a normal variety \(X\) and a closed point \(x\in X\). We will often assume that \(X\) is affine and will freely shrink \(X\) around \(x\) as needed. ### Acknowledgement The author is partially supported by the NSF Grants DMS-2240926, DMS-2234736 and a Clay research fellowship. He would like to thank Harold Blum, Chi Li, Yuchen Liu, Xiaowei Wang and Chenyang Xu for many helpful comments and conversations. ## 2. Stable Degeneration The main result of the local stability theory is that every klt singularity has a canonical two-step stable degeneration induced by the valuation that minimizes the normalized volume. In this section, we elaborate the content of this statement. ### Valuation We start with the notion of valuations. **Definition 2.1**.: A (real) valuation on a variety \(X\) is a map \(v\colon\Bbbk(X)^{*}\to\mathbb{R}\) (where \(\Bbbk(X)\) denotes the function field of \(X\)), satisfying: * \(v(fg)=v(f)+v(g)\); * \(v(f+g)\geq\min\{v(f),v(g)\}\); * \(v(\Bbbk^{*})=0\). By convention, we set \(v(0)=+\infty\).
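To make the definition concrete, here is a small sketch (the weights are our own illustrative choice) that checks these axioms for a monomial valuation on the rational function field in two variables, computed via sympy; valuations of exactly this type reappear as the quasi-monomial valuations of Example 2.6 below.

```python
import sympy as sp

x, y = sp.symbols('x y')

def v(f, weights=(1, 2)):
    """Monomial valuation: on a polynomial, v is the minimal weighted
    degree of its monomials; on a quotient f/g, v(f/g) = v(f) - v(g)."""
    num, den = sp.fraction(sp.together(f))
    val = lambda p: min(sum(a * w for a, w in zip(mono, weights))
                        for mono in sp.Poly(p, x, y).monoms())
    return val(num) - val(den)

f, g = x**2 + y, x + y**3
assert v(f * g) == v(f) + v(g)          # v(fg) = v(f) + v(g)
assert v(f + g) >= min(v(f), v(g))      # v(f+g) >= min(v(f), v(g))
assert v(sp.Integer(7)) == 0            # v vanishes on nonzero constants
print(v(f), v(g), v(f / g))             # 2, 1, 1
```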
Let us explain how valuations naturally arise in our context, at least in hindsight. For singularities appearing on Gromov-Hausdorff limits of Kahler-Einstein Fano manifolds, the stable degenerations are supposed to algebro-geometrically recover Donaldson-Sun's two-step degeneration description of the metric tangent cones. Now there does exist a tangent cone construction in algebraic geometry: for any singularity \(x\in X=\mathbf{Spec}(R)\), its tangent cone is defined as \(X_{0}=\mathbf{Spec}(R_{0})\) where \[R_{0}:=\bigoplus_{k\in\mathbb{N}}\mathfrak{m}_{x}^{k}/\mathfrak{m}_{x}^{k+1}. \tag{2.1}\] As a typical example, if \(X=(f=0)\subseteq\mathbb{A}^{n+1}\) is a hypersurface singularity of multiplicity \(k\) at the origin, and we write \[f=f_{k}+(\text{terms of multiplicity}\geq k+1)\] where \(f_{k}\) is homogeneous of degree \(k\), then the tangent cone to \(0\in X\) is the hypersurface singularity \((f_{k}=0)\subseteq\mathbb{A}^{n+1}\). On the other hand, it is not hard to see that this is not the desired metric tangent cone in general. One reason is that the tangent cone can be reducible, while the metric tangent cone is always irreducible (it is an affine _variety_). In fact, the tangent cone of a (klt) hypersurface singularity \((f=0)\subseteq\mathbb{A}^{n+1}\) coincides with its metric tangent cone if and only if \((f_{k}=0)\subseteq\mathbb{P}^{n}\) is a K-polystable Fano variety (see Proposition 3.8). By the Yau-Tian-Donaldson correspondence, the latter condition is equivalent to the existence of a Kahler-Einstein metric on the Fano variety. What are some variations of the "naive" tangent cone construction? The first observation is that the same construction can be applied to any decreasing graded sequence of \(\mathfrak{m}_{x}\)-primary ideals. **Definition 2.2** ([12]).: A graded sequence of ideals in a ring \(R\) is a sequence of ideals \(\mathfrak{a}_{\bullet}=(\mathfrak{a}_{k})_{k\in\mathbb{N}}\) such that \(\mathfrak{a}_{0}=R\) and \(\mathfrak{a}_{m}\mathfrak{a}_{n}\subseteq\mathfrak{a}_{m+n}\). We say it is decreasing if \(\mathfrak{a}_{k+1}\subseteq\mathfrak{a}_{k}\) for all \(k\in\mathbb{N}\). Given a decreasing graded sequence \(\mathfrak{a}_{\bullet}\) of \(\mathfrak{m}_{x}\)-primary ideals on \(X=\mathbf{Spec}(R)\), we can form the associated graded algebra \[\operatorname{gr}_{\mathfrak{a}_{\bullet}}R:=\bigoplus_{k\in\mathbb{N}} \mathfrak{a}_{k}/\mathfrak{a}_{k+1}.\] When \(\mathfrak{a}_{k}=\mathfrak{m}_{x}^{k}\), this recovers the graded algebra (2.1) that defines the tangent cone. In general, if the algebra \(\bigoplus_{k\in\mathbb{N}}\mathfrak{a}_{k}\) is finitely generated, then so is \(\operatorname{gr}_{\mathfrak{a}_{\bullet}}R\), and we get an isotrivial degeneration of \(X\) to \(\mathbf{Spec}(\operatorname{gr}_{\mathfrak{a}_{\bullet}}R)\) through the Rees construction. To see this, set \(\mathfrak{a}_{k}=R\) for all \(k<0\) and let \[\mathcal{R}:=\bigoplus_{k\in\mathbb{Z}}\mathfrak{a}_{k}t^{-k}\subseteq R[t].\] Then one can check that \(\mathcal{X}=\mathbf{Spec}(\mathcal{R})\to\mathbb{A}_{t}^{1}\) is a flat family with general fiber \(X\) and special fiber \(\mathbf{Spec}(\operatorname{gr}_{\mathfrak{a}_{\bullet}}R)\). The \(\mathbb{Z}\)-grading of \(\mathcal{R}\) also induces a \(\mathbb{G}_{m}\)-action on the total space \(\mathcal{X}\) that commutes with the usual \(\mathbb{G}_{m}\)-action on \(\mathbb{A}^{1}\). Such a family is also called a _test configuration_ of the singularity \(x\in X\). 
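As a quick illustration of the Rees construction, the following sketch (the \(A_{2}\) hypersurface is our own illustrative choice) exhibits the induced test configuration degenerating a klt hypersurface singularity to its tangent cone, for the graded sequence \(\mathfrak{a}_{k}=\mathfrak{m}_{x}^{k}\):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# The A_2 hypersurface singularity f = x^2 + y^2 + z^3 = 0 has
# multiplicity k = 2 at the origin, with lowest-degree part f_2 = x^2 + y^2.
f = x**2 + y**2 + z**3
k = 2

# Rees-type degeneration induced by a_k = m^k: rescale the coordinates by
# t and divide by t^k.  At t = 1 we recover X; at t = 0, the tangent cone.
family = sp.expand(f.subs({x: t*x, y: t*y, z: t*z}, simultaneous=True) / t**k)
print(family)              # x**2 + y**2 + t*z**3
print(family.subs(t, 0))   # x**2 + y**2: the tangent cone, here reducible
```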
The \(\mathfrak{m}_{x}\)-primary condition further ensures that the closed point \(x\in X\) on the general fiber specializes to a closed point in the central fiber (the fixed point of the \(\mathbb{G}_{m}\)-action). For us, such isotrivial degenerations serve as the algebraic analog of Gromov-Hausdorff limits. As a slight generalization, we also allow graded sequences of ideals that are indexed by \(\mathbb{R}_{\geq 0}\). These are called filtrations. A formal definition is as follows. **Definition 2.3**.: A filtration of a ring \(R\) is a collection of ideals \(\mathfrak{a}_{\bullet}=(\mathfrak{a}_{\lambda})_{\lambda\in\mathbb{R}_{\geq 0}}\) that is 1. decreasing: \(\mathfrak{a}_{\lambda}\subseteq\mathfrak{a}_{\mu}\) when \(\lambda\geq\mu\), 2. multiplicative: \(\mathfrak{a}_{\lambda}\mathfrak{a}_{\mu}\subseteq\mathfrak{a}_{\lambda+\mu}\), 3. left-continuous: \(\mathfrak{a}_{\lambda-\varepsilon}=\mathfrak{a}_{\lambda}\) when \(\lambda>0\) is fixed and \(0<\varepsilon\ll 1\), and 4. exhaustive: \(\mathfrak{a}_{0}=R,\bigcap_{\lambda\geq 0}\mathfrak{a}_{\lambda}=\{0\}\). Denote by \(\operatorname{Val}_{X}\) the set of valuations that have a center on \(X\), and by \(\operatorname{Val}_{X,x}\) the set of valuations centered at \(x\in X=\mathbf{Spec}(R)\). Every valuation \(v\in\operatorname{Val}_{X,x}\) induces an \(\mathfrak{m}_{x}\)-primary filtration \(\mathfrak{a}_{\bullet}(v)\) by setting \[\mathfrak{a}_{\lambda}(v):=\{f\in R\mid v(f)\geq\lambda\}.\] Similar to the case of graded sequences of ideals, for any filtration as defined above we can form the associated graded algebra \(\operatorname{gr}_{\mathfrak{a}_{\bullet}}R:=\bigoplus_{\lambda\in\mathbb{R }_{\geq 0}}\mathfrak{a}_{\lambda}/\mathfrak{a}_{>\lambda}\), where \(\mathfrak{a}_{>\lambda}=\bigcup_{\mu>\lambda}\mathfrak{a}_{\mu}\). If \(\mathfrak{a}_{\bullet}=\mathfrak{a}_{\bullet}(v)\) for some valuation \(v\in\operatorname{Val}_{X,x}\), we further denote \(\operatorname{gr}_{\mathfrak{a}_{\bullet}}R\) as \(\operatorname{gr}_{v}R\). With this level of generality, Donaldson and Sun [10] show that the two-step degenerations to the metric tangent cones are both induced by some \(\mathfrak{m}_{x}\)-primary filtration. Usually the first step is called the K-semistable degeneration, while the second step is the K-polystable degeneration. Philosophically, the two steps can be seen as analogous to the Harder-Narasimhan and Jordan-Holder filtrations of vector bundles, where the graded pieces are (slope) semistable and polystable bundles, respectively. For now, we focus on the K-semistable degeneration, deferring the discussion of the K-polystable degeneration to Section 2.4. Since the filtration in [10] that induces the K-semistable degeneration is constructed using the (singular) Kahler-Einstein metric on the Gromov-Hausdorff limit, an immediate question, especially if we want to generalize the construction to arbitrary klt singularities, is how to identify the filtration using algebraic geometry. A simple but crucial observation is that the filtration is necessarily induced by some valuation, as the central fiber of the K-semistable degeneration is irreducible. **Lemma 2.4**.: _Let \(\mathfrak{a}_{\bullet}\) be an \(\mathfrak{m}_{x}\)-primary filtration of \(R\) such that \(\operatorname{gr}_{\mathfrak{a}_{\bullet}}R\) is an integral domain.
Then \(\mathfrak{a}_{\bullet}=\mathfrak{a}_{\bullet}(v)\) for some valuation \(v\in\operatorname{Val}_{X,x}\)._ Proof.: For any \(0\neq f\in R\) we let \(v(f):=\sup\{\lambda\geq 0\mid f\in\mathfrak{a}_{\lambda}\}\). Note that \(v(f)<+\infty\) since \(\mathfrak{a}_{\bullet}\) is exhaustive. Left-continuity of the filtration implies that \(f\in\mathfrak{a}_{v(f)}\), while by definition \(f\not\in\mathfrak{a}_{>v(f)}\). The condition that \(\operatorname{gr}_{\mathfrak{a}_{\bullet}}R\) is an integral domain translates into the equality \(v(fg)=v(f)+v(g)\). From here it is clear that \(v\) defines a valuation and \(\mathfrak{a}_{\bullet}=\mathfrak{a}_{\bullet}(v)\). Because of this fact, we may restrict our search to valuations. Before we discuss further constraints, there are two important classes of valuations we shall keep in mind. **Example 2.5** (divisorial valuations).: Consider a proper birational morphism (such as a log resolution) \(\pi\colon Y\to X\) where \(Y\) is normal and let \(E\subseteq Y\) be a prime divisor. We call such a divisor \(E\) a prime divisor over \(X\). Then we get a valuation \(\operatorname{ord}_{E}\) which assigns to each \(f\in\Bbbk(X)^{*}=\Bbbk(Y)^{*}\) its order of zero (or pole) along \(E\). Rescalings of such valuations (i.e. \(\lambda\cdot\operatorname{ord}_{E}\) for some \(\lambda>0\)) are called divisorial valuations. Note that the center of \(\operatorname{ord}_{E}\) is the generic point of \(\pi(E)\); in particular, the valuation \(\operatorname{ord}_{E}\) is centered at \(x\in X\) if and only if \(E\subseteq\pi^{-1}(x)\). **Example 2.6** (quasi-monomial valuations).: A generalization of the above example is the class of quasi-monomial valuations. Consider a proper birational morphism \(\pi\colon Y\to X\) as above and a reduced divisor \(E\) with irreducible components \(E_{1},\ldots,E_{r}\). Assume that \(Y\) is smooth and \(E\) is simple normal crossing (SNC) at a generic point \(\eta\) of \(\cap_{i=1}^{r}E_{i}\). Then we have local coordinates \(y_{1},\ldots,y_{r}\) such that \(E_{i}=(y_{i}=0)\) around \(\eta\in Y\). Any \(f\in\mathcal{O}_{Y,\eta}\) has a Taylor expansion \[f=\sum c_{\beta}y^{\beta}\in\widehat{\mathcal{O}}_{Y,\eta}\cong\Bbbk(\eta)[\![y_{1},\ldots,y_{r}]\!].\] For any \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\in\mathbb{R}_{\geq 0}^{r}\setminus\{0\}\), we can thus define a valuation \(v_{\eta,\alpha}\) (or simply denoted as \(v_{\alpha}\)) by setting \[v_{\eta,\alpha}(f)=\min\left\{\left\langle\alpha,\beta\right\rangle\,\middle|\,c_{\beta}\neq 0\right\}.\] In other words, it calculates the \(\alpha\)-weighted multiplicity of \(f\). Such valuations are called quasi-monomial valuations3. Footnote 3: The name stems from the fact that the valuation \(v_{\alpha}\) is monomial with respect to the local coordinates \(y_{1},\ldots,y_{r}\) on the birational model \(Y\). The rational rank of a quasi-monomial valuation \(v_{\alpha}\) is defined as the dimension of the \(\mathbb{Q}\)-vector space \[\operatorname{span}_{\mathbb{Q}}\{\alpha_{1},\ldots,\alpha_{r}\}\subseteq \mathbb{R}.\] Equivalently, it is the rank of the value group \(\Gamma=v_{\alpha}(\Bbbk(X)^{*})\subseteq\mathbb{R}\). Note that \(v_{\alpha}\) is divisorial if and only if its rational rank is one. For a fixed pair \((Y,E)\) and a generic point \(\eta\) of some stratum, we denote the corresponding set of quasi-monomial valuations by \(\operatorname{QM}_{\eta}(Y,E)\).
We also set \(\operatorname{QM}(Y,E)=\cup_{\eta}\operatorname{QM}_{\eta}(Y,E)\), where \(\eta\) varies over the smooth points of \(Y\) at which \(E\) is SNC. A general valuation can be thought of as a limit of quasi-monomial valuations, c.f. [12, Section 4]. In fact, for any log resolution \(\pi\colon Y\to X\) and any SNC divisor \(E\subseteq Y\) (we will henceforth call such a pair \((Y,E)\) a log smooth model of \(X\)), there is a natural retraction map \(r_{Y,E}\colon\operatorname{Val}_{X}\to\operatorname{QM}(Y,E)\). As we vary the model \((Y,E)\), the images under the retraction maps give approximations of a given valuation. ### Fano cone singularities The singularities that appear on Gromov-Hausdorff limits of Kahler-Einstein Fano manifolds are examples of Kawamata log terminal (klt) singularities. From an algebro-geometric perspective, it is more natural to set up a local stability theory for klt singularities. This class of singularities is particularly important in birational geometry, as they are also singularities of minimal models of algebraic varieties. The output of the stable degeneration belongs to a special class of klt singularities called Fano cone singularities. We next review some basics on this class of singularities. For the readers' convenience, we first recall some definitions for singularities of pairs. More details can be found in [10]. **Definition 2.7**.: A pair \((X,D)\) consisting of a normal variety \(X\) and an effective \(\mathbb{Q}\)-divisor \(D\) is said to be klt (resp. log canonical, or lc for short) if \(K_{X}+D\) is \(\mathbb{Q}\)-Cartier and for any log resolution \(\pi\colon Y\to X\) we have \[K_{Y}=\pi^{*}(K_{X}+D)+\sum a_{i}E_{i}\] where the \(E_{i}\)'s are the components of \(\pi_{*}^{-1}D+\operatorname{Ex}(\pi)\) and \(a_{i}>-1\) (resp. \(a_{i}\geq-1\)). A singularity \(x\in X\) is klt (resp. lc) if \(X\) is klt (resp. lc) around \(x\). In a very rough sense, the klt (resp. lc) condition says that the singularities of holomorphic \(n\)-forms (\(n=\dim X\)) on \(X\) are better (resp. not worse) than poles. It will be convenient to reformulate the above definition using log discrepancies. For any pair \((X,D)\) and any prime divisor \(E\) on some log resolution \(\pi\colon Y\to X\), the log discrepancy \(A_{X,D}(E)\) is defined to be \[A_{X,D}(E):=1+\operatorname{ord}_{E}(K_{Y}-\pi^{*}(K_{X}+D)).\] Then the pair \((X,D)\) is klt (resp. lc) if and only if \(A_{X,D}(E)>0\) (resp. \(\geq 0\)) for all prime divisors \(E\) over \(X\). _Remark 2.8_.: For simplicity, we only state results in the context of klt singularities in this survey, but it is worth pointing out that the entire local stability theory also works for klt pairs \(x\in(X,D)\). Heuristically, klt singularities are the local analog of Fano varieties: **Example 2.9** (orbifold cones, [10, Section 3.1]).: Cones over Fano manifolds are typical examples of klt singularities. More generally, for any projective variety \(V\) and any ample \(\mathbb{Q}\)-Cartier Weil divisor \(L\) such that \(L\sim_{\mathbb{Q}}-rK_{V}\) for some \(r>0\), the orbifold cone singularity \[o\in C_{a}(V,L):=\operatorname{\mathbf{Spec}}\bigoplus_{m\in\mathbb{N}}H^{0}( V,mL)\] is klt if and only if \(V\) is a Fano variety with only klt singularities. A more general construction that will play a key role in the local stability theory is given by Fano cone singularities. By definition, these are klt singularities with a nontrivial good torus action, together with the choice of a Reeb vector (also called a polarization).
Let us explain the terminology. We say an action of a torus \(\mathbb{T}=\mathbb{G}_{m}^{r}\) on a singularity \(x\in X=\operatorname{\mathbf{Spec}}(R)\) is _good_ if it is effective and \(x\) is in the orbit closure of any \(\mathbb{T}\)-orbit. Let \(N:=N(\mathbb{T})=\operatorname{Hom}(\mathbb{G}_{m},\mathbb{T})\) be the co-weight lattice and \(M=N^{*}\) the weight lattice. We have a weight decomposition \[R=\oplus_{\alpha\in M}R_{\alpha},\] and the action being good implies that \(R_{0}=\Bbbk\) and every \(R_{\alpha}\) is finite dimensional. For \(f\in R\), we denote by \(f_{\alpha}\) the corresponding component in the above weight decomposition. **Definition 2.10**.: A _Reeb vector_ on \(X\) is a vector \(\xi\in N_{\mathbb{R}}\) such that \(\langle\xi,\alpha\rangle>0\) for all \(0\neq\alpha\in M\) with \(R_{\alpha}\neq 0\). The set \(\mathfrak{t}_{\mathbb{R}}^{+}\) of Reeb vectors is called the Reeb cone4. Footnote 4: The terminologies are borrowed from contact geometry: suppose that \(x\in X\subseteq\mathbb{C}^{N}\) is an isolated singularity, then the link \(L(x,X)=X\cap\{|z|=\varepsilon\}\subseteq\mathbb{C}^{N}\) (\(0<\varepsilon\ll 1\)) is a contact manifold, and Reeb vectors on \(X\) (in our definition) induce Reeb vector fields on the link (in the sense of contact geometry). For later use, we also define the notion of toric valuations. For any singularity \(x\in X=\mathbf{Spec}(R)\) with a good torus action as above and any \(\xi\in\mathfrak{t}_{\mathbb{R}}^{+}\), we define a valuation \(\mathrm{wt}_{\xi}\) (called a toric valuation) by setting \[\mathrm{wt}_{\xi}(f):=\min\{\langle\xi,\alpha\rangle\mid\alpha\in M,f_{\alpha }\neq 0\}\] where \(f\in R\). It is not hard to verify that \(v:=\mathrm{wt}_{\xi}\in\mathrm{Val}_{X,x}\). We also see that \(\mathrm{gr}_{v}R\cong R\), as both sides have the same weight decomposition. In other words, the toric valuation \(v\) induces a degeneration of the singularity to itself. A Fano cone singularity will be denoted as \(x\in(X;\xi)\) where \(\xi\) is the Reeb vector field. Through the inclusion \(\mathbb{T}\subseteq\mathrm{Aut}(x,X)\), we often view the Reeb vector \(\xi\) as an element of the Lie algebra of \(\mathrm{Aut}(x,X)\). The subtorus in \(\mathbb{T}\) generated by \(\xi\) is independent of \(\mathbb{T}\), and can be characterized as the smallest torus in \(\mathrm{Aut}(x,X)\) whose Lie algebra contains \(\xi\). If we assume that the torus \(\mathbb{T}\) is generated by the Reeb vector \(\xi\) (as we will often do), then we may recover \(\mathbb{T}\) from the data \(x\in(X;\xi)\). This justifies the absence of \(\mathbb{T}\) in the notation. We will denote the torus generated by \(\xi\) as \(\langle\xi\rangle\). Let us describe two extreme cases of Fano cone singularities in more detail. **Example 2.11** (toric singularities).: Every toric singularity is given by a strongly convex rational polyhedral cone \(\sigma\subseteq N_{\mathbb{R}}\) (see e.g. [10]), and the Reeb cone \(\mathfrak{t}_{\mathbb{R}}^{+}\) is the interior of \(\sigma\). The singularity is klt if and only if it is \(\mathbb{Q}\)-Gorenstein. If this is the case, we get a Fano cone singularity after fixing a Reeb vector \(\xi\in\mathrm{Int}(\sigma)\). **Example 2.12** (quasi-regular Fano cones).: A Fano cone singularity \(x\in(X;\xi)\) is quasi-regular if \(\langle\xi\rangle\cong\mathbb{G}_{m}\), i.e. \(\xi\) generates a one parameter subgroup. In this case the weight decomposition becomes \(R=\oplus_{m\in\mathbb{N}}R_{m}\).
We may form the Proj and get \(V:=\mathbf{Proj}(R)\). The natural projection \(X\setminus\{x\}\to V\) is a Seifert \(\mathbb{G}_{m}\)-bundle in the sense of [11]; in particular, for every closed point of \(V\), the \(\mathbb{G}_{m}\)-action on the corresponding reduced fiber is isomorphic to the left \(\mathbb{G}_{m}\)-action on \(\mathbb{G}_{m}/\mu_{r}\) for some positive integer \(r\). This gives rise to an orbifold boundary \(\Delta_{V}=\sum_{r}(1-\frac{1}{r})\Delta_{r}\) where \(\Delta_{r}\subseteq V\) is the divisorial part of the locus where the \(\mathbb{G}_{m}\)-action on the reduced fiber has stabilizer \(\mu_{r}\). By the local calculation in [11, Section 4] (which generalizes [11, Section 3.1]), we know that the pair \((V,\Delta_{V})\) is klt and log Fano (i.e., \(-(K_{V}+\Delta_{V})\) is ample). As we will see in Proposition 3.4, every klt singularity has a degeneration by test configuration to some Fano cone singularity (the proof relies on the notion of Kollar components). The local stability theory will allow us to find the "optimal" degeneration. ### Normalized volume Chi Li observes that the K-semistable degeneration from Donaldson-Sun's construction is induced by a valuation that minimizes what he calls the _normalized volume_. The definition involves two more classical invariants of valuations: the log discrepancy and the volume. **Definition 2.13** (log discrepancy).: For any klt singularity \(x\in X\), the _log discrepancy_ function \[A_{X}\colon\operatorname{Val}_{X}\to(0,+\infty],\] is defined as follows (c.f. [11] and [1, Theorem 3.1]). 1. For divisorial valuations \(\lambda\cdot\operatorname{ord}_{E}\) where \(E\) is a divisor over \(X\), we set \[A_{X}(\lambda\cdot\operatorname{ord}_{E}):=\lambda\cdot A_{X}(E).\] 2. For quasi-monomial valuations \(v_{\alpha}\in\operatorname{QM}(Y,E)\) where \((Y,E=\sum_{i=1}^{r}E_{i})\) is a log smooth model and \(\alpha\in\mathbb{R}_{\geq 0}^{r}\setminus\{0\}\), we set \[A_{X}(v_{\alpha}):=\sum_{i=1}^{r}\alpha_{i}A_{X}(E_{i}).\] When \(v_{\alpha}\) is divisorial, this recovers the previous definition. 3. For general valuations \(v\in\operatorname{Val}_{X,x}\), we set \[A_{X}(v):=\sup_{(Y,E)}A_{X}\left(r_{Y,E}(v)\right)\] where the supremum runs over all log smooth models of \(X\), and \(r_{Y,E}\colon\operatorname{Val}_{X}\to\operatorname{QM}(Y,E)\) is the retraction map discussed at the end of Section 2.1. It can happen that \(A_{X}(v)=+\infty\) for some valuation \(v\). We denote by \(\operatorname{Val}_{X,x}^{*}\) the set of valuations \(v\in\operatorname{Val}_{X,x}\) with \(A_{X}(v)<+\infty\). **Definition 2.14** (volume).: For any graded sequence \(\mathfrak{a}_{\bullet}\) of \(\mathfrak{m}_{x}\)-primary ideals, the volume of \(\mathfrak{a}_{\bullet}\) is defined as \[\operatorname{vol}(\mathfrak{a}_{\bullet}):=\limsup_{m\to\infty}\frac{ \operatorname{length}(\mathcal{O}_{X,x}/\mathfrak{a}_{m})}{m^{n}/n!}\] where \(n=\dim X\). A similar invariant is the multiplicity of \(\mathfrak{a}_{\bullet}\), which is defined as \[\operatorname{mult}(\mathfrak{a}_{\bullet})=\lim_{m\to\infty}\frac{ \operatorname{mult}(\mathfrak{a}_{m})}{m^{n}}.\] In the geometric setting we consider, we have \[\operatorname{vol}(\mathfrak{a}_{\bullet})=\operatorname{mult}(\mathfrak{a}_ {\bullet})\] by [1, 1, 10, 11]. 
The _volume_ of a valuation \(v\in\operatorname{Val}_{X,x}\) is defined as \[\operatorname{vol}(v)=\operatorname{vol}_{X,x}(v):=\operatorname{vol}(\mathfrak{a}_{\bullet}(v))=\operatorname{mult}(\mathfrak{a}_{\bullet}(v)).\] A basic observation is that both log discrepancy and volume are homogeneous in the variable: if we rescale the valuation \(v\) to \(\lambda v\), we find \[\operatorname{vol}(\lambda v)=\lambda^{-n}\operatorname{vol}(v),\text{ and }A_{X}(\lambda v)=\lambda\cdot A_{X}(v).\] It follows that \(A_{X}(v)^{n}\cdot\operatorname{vol}(v)\) is invariant under rescaling. **Definition 2.15** (normalized volume [11]).: Let \(x\in X\) be an \(n\)-dimensional klt singularity. For any \(v\in\operatorname{Val}_{X,x}^{*}\), we define the _normalized volume_ of \(v\) as \[\widehat{\operatorname{vol}}(v)=\widehat{\operatorname{vol}}_{X}(v):=A_{X}(v)^{n}\cdot\operatorname{vol}(v).\] By convention, we also set \(\widehat{\operatorname{vol}}(v)=+\infty\) when \(A_{X}(v)=+\infty\). The _local volume_ of the singularity \(x\in X\) is defined as \[\widehat{\operatorname{vol}}(x,X):=\inf_{v\in\operatorname{Val}_{X,x}^{*}}\widehat{\operatorname{vol}}_{X}(v).\] _Remark 2.16_.: In some literature, the valuation space \(\operatorname{Val}_{X,x}\) is called the non-archimedean link of \(x\in X\), since it can be thought of as a punctured neighbourhood of \(x^{\operatorname{an}}\) in the Berkovich analytification \(X^{\operatorname{an}}\) of \(X\). We can form the normalized non-archimedean link \(NL(x,X)\) as the quotient \(\operatorname{Val}_{X,x}/\mathbb{R}_{+}\) where \(\mathbb{R}_{+}\) acts by rescaling. Since the normalized volume function is rescaling invariant, it descends to a function on the normalized non-archimedean link. The local volume of a klt singularity can also be computed using normalized multiplicities of ideals, as observed in [11]. This alternative approach offers a great deal of flexibility in the study of this invariant. **Theorem 2.17** ([11, Theorem 27]).: _For any klt singularity \(x\in X\) of dimension \(n\), we have_ \[\widehat{\operatorname{vol}}(x,X)=\inf_{\mathfrak{a}}\operatorname{lct}(\mathfrak{a})^{n}\cdot\operatorname{mult}(\mathfrak{a})=\inf_{\mathfrak{a}_{\bullet}}\operatorname{lct}(\mathfrak{a}_{\bullet})^{n}\cdot\operatorname{mult}(\mathfrak{a}_{\bullet}) \tag{2.2}\] _where the first_ (_resp. second_) _infimum runs over all \(\mathfrak{m}_{x}\)-primary ideals_ (_resp. graded sequences of ideals_)_._ Here \(\operatorname{lct}(\mathfrak{a})\) is the log canonical threshold (lct) of the ideal \(\mathfrak{a}\), defined as \[\operatorname{lct}(\mathfrak{a}):=\inf_{v\in\operatorname{Val}_{X,x}^{*}}\frac{A_{X}(v)}{v(\mathfrak{a})}.\] It is also the largest number \(\lambda>0\) such that \((X,\mathfrak{a}^{\lambda})\) is log canonical. The log canonical threshold of a graded sequence \(\mathfrak{a}_{\bullet}\) of ideals is defined in a similar manner, replacing \(\mathfrak{a}\) by \(\mathfrak{a}_{\bullet}\) in the above formula. By [12], the infimum is in fact a minimum. The proof of the formula (2.2) is quite straightforward.
On one hand, we have \(A_{X}(v)\geq\operatorname{lct}(\mathfrak{a}_{\bullet}(v))\) and \(\operatorname{vol}(v)=\operatorname{mult}(\mathfrak{a}_{\bullet}(v))\) for any valuation \(v\in\operatorname{Val}_{X,x}^{*}\), hence \[\widehat{\operatorname{vol}}(v)\geq\operatorname{lct}(\mathfrak{a}_{\bullet }(v))^{n}\cdot\operatorname{mult}(\mathfrak{a}_{\bullet}(v)).\] On the other hand, for any valuation \(v\in\operatorname{Val}_{X,x}^{*}\) that computes \(\operatorname{lct}(\mathfrak{a}_{\bullet})\) we may rescale it so that \(v(\mathfrak{a}_{\bullet})=1\). By definition this gives \(\operatorname{lct}(\mathfrak{a}_{\bullet})=A_{X}(v)\) and \(\mathfrak{a}_{\bullet}\subseteq\mathfrak{a}_{\bullet}(v)\), hence \(\operatorname{mult}(\mathfrak{a}_{\bullet})\geq\operatorname{vol}(v)\) and \[\operatorname{lct}(\mathfrak{a}_{\bullet})^{n}\cdot\operatorname{mult}( \mathfrak{a}_{\bullet})\geq\widehat{\operatorname{vol}}(v).\] It turns out that the local volume of a klt singularity is always positive [11] (we will sketch a proof in Section 3) and thus becomes an interesting invariant of the singularity. If \(x\in X\) lives on some Gromov-Hausdorff limit of Kahler-Einstein Fano manifolds, then the local volume \(\widehat{\operatorname{vol}}(x,X)\) has the following differential geometric interpretation (see [10, Theorem 5.6]): \[\frac{\widehat{\operatorname{vol}}(x,X)}{\widehat{\operatorname{vol}}(0,\mathbb{ A}^{n})}=\lim_{r\to 0}\frac{\operatorname{Vol}(B_{r}(x,X))}{r^{2n}\operatorname{ Vol}(B_{1}(0,\mathbb{A}^{n}))},\] where the right hand side is the volume density (in the sense of geometric measure theory) of the Kahler-Einstein limit metric. An interesting question is the distribution of the possible values of local volumes, see Conjecture 6.5. A guiding principle of the local stability theory, put forward by Li [10], is that the K-semistable degeneration of a klt singularity is induced by the valuation with the smallest normalized volume. For singularities on Gromov-Hausdorff limits of Kahler-Einstein Fano manifolds, this is confirmed in [10, Section 3.1]. Here we illustrate this connection through a few examples. **Example 2.18** (smooth point).: Consider \(0\in X=\mathbb{A}^{n}\) and let \(v_{\alpha}\) (\(\alpha\in\mathbb{R}^{n}_{+}\)) be a monomial valuation with respect to the coordinates \(x_{1},\ldots,x_{n}\). We have \(A_{X}(v_{\alpha})=\alpha_{1}+\cdots+\alpha_{n}\) and \(\operatorname{vol}(v_{\alpha})=(\alpha_{1}\ldots\alpha_{n})^{-1}\) by direct calculations, thus \[\widehat{\operatorname{vol}}(v_{\alpha})=\frac{(\alpha_{1}+\cdots+\alpha_{n} )^{n}}{\alpha_{1}\ldots\alpha_{n}}.\] In particular, we see that \(\widehat{\operatorname{vol}}(v_{\alpha})\geq n^{n}\), with equality if and only if all the weights \(\alpha_{i}\) are equal, i.e. \(v_{\alpha}=c\cdot\operatorname{mult}_{0}\) for some \(c>0\). It is slightly harder to compute the local volume of a smooth point using the valuative definition. Instead we resort to normalized multiplicities (2.2). Using toric degeneration, it is shown in [1] that \[\operatorname{lct}(\mathfrak{a})^{n}\cdot\operatorname{mult}(\mathfrak{a}) \geq n^{n}\] for any \(\mathfrak{m}_{x}\)-primary ideal \(\mathfrak{a}\) when \(x\in X\) is smooth. This implies that \(\widehat{\operatorname{vol}}(0,\mathbb{A}^{n})=n^{n}\) and that \(\operatorname{mult}_{0}\) is a minimizer of the normalized volume function. 
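The minimization in this example is simple enough to verify numerically. A minimal sketch (our own; the choice of scipy's bounded optimizer is incidental) recovers the equal-weight minimizer and the value \(n^{n}\):

```python
import numpy as np
from scipy.optimize import minimize

# Normalized volume of the monomial valuation v_alpha on A^n:
# (alpha_1 + ... + alpha_n)^n / (alpha_1 * ... * alpha_n).
def nvol(alpha):
    alpha = np.asarray(alpha)
    return alpha.sum() ** len(alpha) / alpha.prod()

n = 3
res = minimize(nvol, x0=[1.0, 2.0, 3.0], bounds=[(1e-3, None)] * n)
print(res.x / res.x[0])   # ~ (1, 1, 1): equal weights, i.e. c * mult_0
print(res.fun, n ** n)    # ~ 27 = n^n
```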
**Example 2.19** (toric singularities).: The argument in the above example can be generalized to show that on any klt toric singularity \(x\in X\) the normalized volume function is minimized by some toric valuation \(v_{\xi}\), where \(\xi\in N_{\mathbb{R}}\); we leave the details to the reader. From the discussions in Section 2.2, we know that the toric minimizer induces a degeneration of the toric singularity to itself. This is compatible with the differential geometric picture: the toric singularity admits a Ricci-flat Kahler cone metric, and the metric tangent cone is the toric singularity itself. Moreover, the vector field on \(X\) that gives the homothetic scaling along the rays of the Kahler cone is naturally identified with \(\xi\in N_{\mathbb{R}}\), see [12, 13]. **Example 2.20** (cone singularities).: Consider an orbifold cone singularity \(o\in X:=C_{a}(V,L)\) as in Example 2.9. The exceptional divisor of the orbifold blowup at \(o\) gives a divisorial valuation \(v=\operatorname{ord}_{o}\) on \(X\). It is also characterized by the condition that it is invariant under the natural \(\mathbb{G}_{m}\)-action and that \(v(s)=m\) for all \(s\in H^{0}(V,mL)\). If \(L\) is Cartier and sufficiently ample, then we also have \(\mathfrak{a}_{k}(v)=\mathfrak{m}_{o}^{k}\) and hence \(v\) induces the degeneration to the tangent cone (which in this case is isomorphic to \(o\in X\) itself). However, it is not always the case that \(v\) minimizes the normalized volume function: it is proved in [10, 10, 10] that this happens if and only if the Fano variety \(V\) is K-semistable. The latter is a necessary condition for \(X\) to admit a Ricci-flat Kahler cone metric [10]. This gives further strong evidence that the minimizing valuations of the normalized volume function contain rich information about the local stability of the singularities. The local volumes of klt singularities also enjoy some nice properties. We only list some of them here, referring to [10] for a more thorough discussion. **Theorem 2.21** (lower semi-continuity, [1]).: _For any \(\mathbb{Q}\)-Gorenstein family \(B\subseteq\mathcal{X}\to B\) of klt singularities, the function_ \[b\in B\mapsto\widehat{\operatorname{vol}}(b,\mathcal{X}_{b})\] _on \(B\) is lower semi-continuous with respect to the Zariski topology._ Here we call \(B\subseteq\mathcal{X}\to B\) a \(\mathbb{Q}\)-Gorenstein family of klt singularities if \(\mathcal{X}\) is flat over \(B\), \(B\subseteq\mathcal{X}\) is a section of the projection, \(K_{\mathcal{X}/B}\) is \(\mathbb{Q}\)-Cartier and \(b\in\mathcal{X}_{b}\) is klt for any \(b\in B\). **Theorem 2.22** (largest volume, [10, Appendix]).: _For any klt singularity \(x\in X\) of dimension \(n\), we have \(\widehat{\operatorname{vol}}(x,X)\leq n^{n}\), with equality if and only if \(x\in X\) is smooth._ Note that the inequality part is also a consequence of the lower semi-continuity of local volumes, but the equality case requires more work. **Proposition 2.23** (behavior under birational morphism, [10, Lemma 2.9]).: _Let \(\pi\colon Y\to X\) be a proper birational morphism between klt varieties. Assume that \(K_{Y}\leq\pi^{*}K_{X}\). Then \(\widehat{\operatorname{vol}}(y,Y)\geq\widehat{\operatorname{vol}}(x,X)\) for any \(x\in X\) and any \(y\in\pi^{-1}(x)\)._ In particular, local volumes are non-increasing under small birational morphisms. On the other hand, it is less clear how they behave under flips.
### Stable Degeneration Conjecture

We now introduce the Stable Degeneration Conjecture, which gives a recipe for constructing the K-semistable degenerations of klt singularities using the minimizers of the normalized volume function.

**Conjecture 2.24** ([10, 10]).: _Let \(x\in X=\mathbf{Spec}(R)\) be a klt singularity. Then:_

1. (Existence of minimizer). _There exists a valuation_ \(v_{0}\in\operatorname{Val}_{X,x}\) _such that_ \[\widehat{\operatorname{vol}}(v_{0})=\widehat{\operatorname{vol}}(x,X).\]
2. (Uniqueness). _The normalized volume minimizer_ \(v_{0}\) _is unique up to rescaling._
3. (Quasi-monomial). _The minimizer_ \(v_{0}\) _is a quasi-monomial valuation._
4. (Finite generation). _The associated graded algebra_ \(\operatorname{gr}_{v_{0}}R\) _is finitely generated._
5. (Stability). _The quasi-monomial minimizer_ \(v_{0}\) _induces a natural Reeb vector_ \(\xi_{0}\) _on_ \(X_{0}:=\mathbf{Spec}(\operatorname{gr}_{v_{0}}R)\)_, and_ \(x_{0}\in(X_{0};\xi_{0})\) _is a K-semistable Fano cone singularity._

Let us elaborate on the various parts of the above conjecture. Since the K-semistable degeneration of a klt singularity eventually comes from the minimizing valuation of the normalized volume function, the existence of the minimizer is a necessary condition to begin with. The uniqueness part can be reformulated as saying that the normalized volume function has a unique minimizer on the normalized non-archimedean link. It implies the uniqueness of the K-semistable degeneration, since rescaling the valuation does not change the isomorphism class of the associated graded algebra.

Assuming that there exists a unique minimizer \(v_{0}\), the natural candidate for the K-semistable degeneration (as we have discussed in Section 2.1) is \(\mathbf{Spec}(\operatorname{gr}_{v_{0}}R)\). But there is a serious issue here, since a priori the algebra \(\operatorname{gr}_{v_{0}}R\) need not be finitely generated. An obvious necessary condition is that the value semigroup \(v_{0}(R\setminus\{0\})\) is finitely generated. With a bit more work, one can show that it is also necessary that the minimizer \(v_{0}\) is a quasi-monomial valuation. This justifies the third item of the conjecture. Unfortunately, there are still many quasi-monomial valuations whose associated graded algebras are not finitely generated, see Example 4.17. The finite generation part (also called the local higher rank finite generation conjecture) of the Stable Degeneration Conjecture turns out to be quite subtle.

Taking (1)-(4) for granted, let us elaborate on the precise content of item (5). First we need to explain where the Reeb vector comes from. Denote by \(r\) the rational rank of the quasi-monomial minimizer \(v_{0}\). By choosing a (non-canonical) isomorphism \(\Gamma\cong\mathbb{Z}^{r}\), we may replace the \(\Gamma\)-grading on \(\operatorname{gr}_{v_{0}}R\) by a \(\mathbb{Z}^{r}\)-grading. In particular, we get a \(\mathbb{T}=\mathbb{G}_{m}^{r}\)-action on \(\operatorname{gr}_{v_{0}}R\). Since \(v_{0}\) takes the same positive value on each \(\operatorname{gr}_{v_{0}}^{\lambda}R\), it induces a toric valuation on \(\operatorname{gr}_{v_{0}}R\) and hence a Reeb vector \(\xi_{0}\) on \(X_{0}=\mathbf{Spec}(\operatorname{gr}_{v_{0}}R)\). The grading also determines a closed point \(x_{0}\) that is the unique closed orbit of the torus action, thus we get a Fano cone singularity \(x_{0}\in(X_{0};\xi_{0})\). Next we shall define K-semistability for Fano cone singularities.
The original definition from [18, 19] is via the non-negativity of generalized Futaki invariants. We choose the following definition, which is more convenient for our purpose.

**Definition 2.25**.: We say a Fano cone singularity \(x\in(X;\xi)\) is _K-semistable_ if \[\widehat{\operatorname{vol}}(x,X)=\widehat{\operatorname{vol}}_{X}(\operatorname{wt}_{\xi}),\] i.e. the toric valuation \(\operatorname{wt}_{\xi}\) minimizes the normalized volume.

Its equivalence with the original definition is shown in [18, Theorem 2.34]. Intuitively, the generalized Futaki invariants of the Fano cone singularity are "directional derivatives" of the normalized volume function at \(\operatorname{wt}_{\xi}\), hence they are non-negative if \(\operatorname{wt}_{\xi}\) is a minimizer. There is also a local-to-global correspondence: by [18, 19, 19] (see the discussions in Example 2.20), a cone singularity \(o\in C_{a}(V,L)\) is K-semistable if and only if the Fano base \(V\) is K-semistable.

The stable degeneration of a klt singularity is a two-step process. Conjecture 2.24 takes care of the first step, the K-semistable degeneration. The other step, the K-polystable degeneration, can be done using the following theorem.

**Theorem 2.26** ([19, Theorem 1.2]).: _Given a K-semistable Fano cone singularity \(x_{0}\in(X_{0};\xi_{0})\), there always exists a special test configuration that degenerates \(x_{0}\in(X_{0};\xi_{0})\) to a K-polystable Fano cone singularity \(y\in(Y;\xi_{Y})\). Moreover, such a K-polystable degeneration \(y\in(Y;\xi_{Y})\) is uniquely determined by \(x_{0}\in(X_{0};\xi_{0})\) up to isomorphism._

Let us clarify some of the terminology in the above statement.

**Definition 2.27**.: A special test configuration of a klt singularity \(x\in X\) is a test configuration with klt central fiber. A special test configuration of a Fano cone singularity \(x\in(X;\xi)\) is a \(\mathbb{T}=\langle\xi\rangle\)-equivariant special test configuration of the klt singularity \(x\in X\). The central fiber is also a Fano cone singularity \(y\in(Y;\xi_{Y})\)5. If it is K-semistable, we call it a K-semistable degeneration of \(x\in(X;\xi)\)6.

Footnote 5: In fact using the fiberwise \(\mathbb{T}\)-action we can identify \(\xi_{Y}\) with \(\xi\) in \(N(\mathbb{T})_{\mathbb{R}}\).

Footnote 6: This should not be confused with the K-semistable degeneration of the klt singularity \(x\in X\) in the Stable Degeneration Conjecture.

Next we define K-polystability. Again, the original definition involves generalized Futaki invariants, but we choose the following more convenient definition. They are equivalent by [10]. By [10, 10, 11, 12], we also know that a Fano cone singularity is K-polystable if and only if it admits a Ricci-flat Kahler cone metric.

**Definition 2.28**.: We say a Fano cone singularity \(x\in(X;\xi)\) is _K-polystable_ if it is K-semistable, and any K-semistable degeneration is isomorphic to \(x\in(X;\xi)\).

The intuition behind this definition is a notion of \(S\)-equivalence: two semistable objects are considered \(S\)-equivalent if one of them isotrivially degenerates to the other, and polystable objects are the ones without any further \(S\)-equivalent degenerations7.
Footnote 7: The definition of polystable vector bundle and GIT-polystable point can both be formulated this way: two semistable vector bundles are \(S\)-equivalent if they have the same Jordan-Hölder factors, and a vector bundle is polystable if it is a direct sum of its Jordan-Hölder factors; similarly in GIT (geometric invariant theory), two GIT-semistable points are \(S\)-equivalent if their orbit closures intersect, and the GIT-polystable point represents the unique closed orbit in this \(S\)-equivalence class.

The proofs of Conjecture 2.24 and Theorem 2.26 will be sketched in Section 4.

## 3. Kollar components

In this section, we highlight an important tool in the study of klt singularities: Kollar components. This notion was originally introduced in [13] to study the local fundamental groups of klt singularities (see also [14, 15] for some precursors), and has since found many other applications. While the cone construction provides one direction of the local-to-global correspondence, Kollar components work in the opposite direction: they often help reduce questions about klt singularities to questions about Fano varieties. In the K-stability context, Kollar components also serve as the local analog of special test configurations [10], which play a key role in the K-stability theory of Fano varieties.

**Definition 3.1** (Kollar component).: Let \(x\in X\) be a klt singularity and let \(E\) be a prime divisor over \(X\). If there exists a proper birational morphism \(\pi\colon Y\to X\) such that \(\pi\) is an isomorphism away from \(x\), \(E=\pi^{-1}(x)\), \((Y,E)\) is plt and \(-(K_{Y}+E)\) is \(\pi\)-ample, we call \(E\) a _Kollar component_ over \(x\in X\) and call \(\pi\colon Y\to X\) the plt blowup of \(E\).

Intuitively, a Kollar component is the exceptional divisor of a partial resolution that is also a Fano variety. In fact, by adjunction (c.f. [15, Section 4.1]), we may write \[(K_{Y}+E)|_{E}=K_{E}+\Delta_{E} \tag{3.1}\] for some effective divisor \(\Delta_{E}\) (called the different) on \(E\), and the condition that \(E\) is a Kollar component implies that \((E,\Delta_{E})\) is a klt log Fano pair. Since \(K_{Y}+E=\pi^{*}K_{X}+A_{X}(E)\cdot E\) and \(A_{X}(E)>0\), we also see that \(-E\) is \(\pi\)-ample and this implies that the plt blowup is uniquely determined by the Kollar component \(E\).

**Example 3.2**.: If \(x\in X\) is the orbifold cone over a klt Fano variety as in Example 2.9, then the exceptional divisor of the orbifold blowup at the vertex \(x\) is a Kollar component. More generally, for any quasi-regular Fano cone singularity \(x\in(X;\xi)\), the zero section of the corresponding Seifert \(\mathbb{G}_{m}\)-bundle \(X\setminus\{x\}\to V\) (see Example 2.12) is a Kollar component over \(x\in X\).

By [20], every klt singularity has at least one Kollar component. In fact, the proof in _loc. cit._ shows that every log canonical threshold is computed by some Kollar component.

**Theorem 3.3** ([20, Lemma 1]).: _Let \(x\in X\) be a klt singularity and let \(\mathfrak{a}\) be an \(\mathfrak{m}_{x}\)-primary ideal. Then there exists a Kollar component \(E\) over \(x\in X\) such that_ \[\operatorname{lct}(\mathfrak{a})=\frac{A_{X}(E)}{\operatorname{ord}_{E}(\mathfrak{a})}.\]

The existence of Kollar components already has the following consequence.
**Proposition 3.4**.: _Every klt singularity has a degeneration by test configuration to some Fano cone singularity._

Proof.: Take a Kollar component \(E\) over \(x\in X=\mathbf{Spec}(R)\) and consider the induced degeneration to \(X_{0}:=\mathbf{Spec}(\operatorname{gr}_{E}R)\)8. It suffices to show that \(X_{0}\) is a Fano cone singularity. For simplicity, assume that \(E\) is Cartier on the associated plt blowup \(\pi\colon Y\to X\). Then \(\Delta_{E}=0\) in the adjunction formula (3.1) and \[\operatorname{gr}_{E}^{m}R\cong\pi_{*}\mathcal{O}_{Y}(-mE)/\pi_{*}\mathcal{O}_{Y}(-(m+1)E)\] can be identified with \(H^{0}(E,-mE|_{E})\), as the next term in the long exact sequence is \[R^{1}\pi_{*}\mathcal{O}_{Y}(-(m+1)E)=0\] by Kawamata-Viehweg vanishing. Hence \(X_{0}\) is a cone over the klt Fano variety \(E\). In the general case, the central fiber \(X_{0}\) is only an orbifold cone over \((E,\Delta_{E})\) polarized by the \(\mathbb{Q}\)-line bundle \(-E|_{E}\), but the basic idea is the same, see e.g. [10, Proposition 2.10].

Footnote 8: This is just a shorthand notation for \(\operatorname{gr}_{\operatorname{ord}_{E}}R\).

We can also use Kollar components to show that the local volume of a klt singularity is always positive. This is originally proved in [11]. The argument we present here is slightly different, but the main ideas are the same. The key ingredient is an Izumi type inequality:

**Lemma 3.5**.: _Let \(x\in X\) be a klt singularity. Then there exists some constant \(C>0\) such that_ \[v(\mathfrak{m}_{x})\mathrm{ord}_{x}\leq v\leq C\cdot A_{X}(v)\mathrm{ord}_{x}\] _for any valuation \(v\in\mathrm{Val}_{X,x}^{*}\)._

Here \(\mathrm{ord}_{x}(f):=\max\{k\in\mathbb{N}\,|\,f\in\mathfrak{m}_{x}^{k}\}\) for \(f\in\mathcal{O}_{X}\). To see why this lemma implies the positivity of the local volume, note that \[\widehat{\operatorname{vol}}(x,X)=\inf_{v:\,A_{X}(v)=1}\operatorname{vol}(v)\] by the rescaling invariance of \(\widehat{\operatorname{vol}}\). For such valuations \(v\), the Izumi inequality above implies \(v\leq C\cdot\mathrm{ord}_{x}\) and hence \(\operatorname{vol}(v)\geq C^{-n}\cdot\mathrm{mult}_{x}X\). This gives \(\widehat{\operatorname{vol}}(x,X)\geq C^{-n}\cdot\mathrm{mult}_{x}X>0\).

Proof of Lemma 3.5.: The first inequality is definitional. For the second inequality, it is enough to prove \[v\leq C\cdot A_{X}(v)\mathrm{ord}_{E}\] for some Kollar component \(E\) over \(x\in X\). We can reformulate this statement as \(A_{X}(v)\geq v(D)\) for all \(\mathbb{Q}\)-Cartier divisors \(D\) on \(X\) with \(\mathrm{ord}_{E}(D)\leq C^{-1}\). Thus the question is equivalent to finding some constant \(\varepsilon>0\) such that \((X,D)\) is lc whenever \(\mathrm{ord}_{E}(D)\leq\varepsilon\). On the plt blowup \(\pi\colon Y\to X\) of \(E\) we have \[\pi^{*}(K_{X}+D)\leq K_{Y}+\pi_{*}^{-1}D+E\] as long as \(\varepsilon\leq A_{X}(E)\). By inversion of adjunction (see e.g. [13, Theorem 4.9]), the pair \((X,D)\) is lc if and only if \((E,\Delta_{E}+\pi_{*}^{-1}D|_{E})\) is lc. Since \((E,\Delta_{E})\) is klt, we essentially reduce to a similar question in lower dimension. By induction on the dimension, we may assume there exists some \(0<\varepsilon\ll 1\)9 such that \((E,\Delta_{E}+\Gamma)\) is lc for all effective \(\mathbb{Q}\)-divisors \(\Gamma\sim_{\mathbb{Q}}-\varepsilon E|_{E}\). This gives the desired constant as \(\pi_{*}^{-1}D\sim_{\mathbb{Q}}-\varepsilon E\).

Footnote 9: In fact, we can choose \(\varepsilon\) to be the alpha invariant [14, 15] of the log Fano pair \((E,\Delta_{E})\).
The proof we sketch here is essentially the proof that alpha invariants are positive.

### Divisorial minimizer

Using Kollar components, we now discuss a special case of the Stable Degeneration Conjecture, namely when the minimizer is a divisorial valuation. It is worth noting that, in general, the minimizer can be a valuation of higher rank; one such example is the cone over \(\mathrm{Bl}_{p}\mathbb{P}^{2}\), see [11, Section 8.3]. Nevertheless, the divisorial case will provide some intuition for our understanding of the higher rank case. The key observation is that divisorial minimizers are necessarily Kollar components.

**Theorem 3.6**.: _Any divisorial minimizer of the normalized volume is of the form \(v=\lambda\cdot\mathrm{ord}_{E}\) for some Kollar component \(E\)._

This is proved in [11, Proposition 4.9] and [13, Theorem C] using somewhat different arguments. The first step is to show that the divisorial minimizer \(v=\mathrm{ord}_{E}\) satisfies the finite generation property (Conjecture 2.24(4)), which follows from two basic observations:

1. If \(v\) is a minimizer of \(\widehat{\mathrm{vol}}\), then \(v\) is the unique valuation that computes \(\mathrm{lct}(\mathfrak{a}_{\bullet}(v))\). This can be derived from the equality conditions in the proof of (2.2), see [11, Lemma 4.7] for more details.
2. If \(v\) is a divisorial valuation that computes the log canonical threshold \(\mathrm{lct}(\mathfrak{a}_{\bullet})\) of some graded sequence of ideals, then it satisfies the finite generation property. This is essentially a consequence of [1, Corollary 1.4.3]. See the proof of [13, Lemma 3.11].

The finite generation property ensures that there exists some sufficiently divisible integer \(m\) such that \(\mathfrak{a}_{mr}(v)=\mathfrak{a}_{m}(v)^{r}\) for all \(r\in\mathbb{N}\). From this we deduce that the divisor \(E\) also computes \(\mathrm{lct}(\mathfrak{a}_{m})\) and it is the unique such divisor (by item (1) above). Since every log canonical threshold is computed by some Kollar component (Theorem 3.3), we see that \(E\) is a Kollar component.

Once we know that the divisorial minimizer comes from a Kollar component \(E\), we can study the minimizer in terms of the geometry of the associated log Fano pair \((E,\Delta_{E})\). The results can be summarized as follows:

**Theorem 3.7** ([13, Theorem 1.2]).: _A Kollar component \(E\) over a klt singularity \(x\in X\) minimizes the normalized volume if and only if the log Fano pair \((E,\Delta_{E})\) is K-semistable. Moreover, such a K-semistable Kollar component, if it exists, is unique._

Using this theorem, we can now verify one of the facts mentioned in Section 2.1.

**Proposition 3.8**.: _Let \(0\in(f=0)\subseteq\mathbb{A}^{n+1}\) be a klt hypersurface singularity with tangent cone \((f_{k}=0)\subseteq\mathbb{A}^{n+1}\). Then \(\operatorname{mult}_{0}\) is a valuation that minimizes the normalized volume if and only if \((f_{k}=0)\subseteq\mathbb{P}^{n}\) is a K-semistable Fano variety._

Proof.: Note that \(\operatorname{mult}_{0}\) is a valuation if and only if \(f_{k}\) is irreducible. In this case, the ordinary blowup at the origin gives an exceptional divisor \(E\cong(f_{k}=0)\subseteq\mathbb{P}^{n}\) with \(\Delta_{E}=0\). The result then follows from the previous theorem.

## 4. Geometry of minimizers

The Stable Degeneration Conjecture has been proved by the works [1, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] (see also [13, 14, 21]).
In this section, we explain some main ideas of its proof; we will also sketch the proof of Theorem 2.26 (the existence of the K-polystable degeneration). Throughout, we fix a klt singularity \(x\in X\).

### Existence

We first explain why volume minimizers exist, following [1].

**Theorem 4.1** ([1]).: _For any klt singularity \(x\in X\), there exists a valuation \(v_{0}\in\operatorname{Val}_{X,x}^{*}\) that minimizes the normalized volume function._

Take a sequence of valuations \(v_{k}\in\operatorname{Val}_{X,x}^{*}\) \((k=1,2,\dots)\) such that \[\lim_{k\to\infty}\widehat{\operatorname{vol}}(v_{k})=\widehat{\operatorname{vol}}(x,X).\] We may also rescale the valuations so that \(A_{X}(v_{k})=1\) (this is necessary to force the valuations \(v_{k}\) to lie in a compact subset of \(\operatorname{Val}_{X,x}\)). Ideally, we want to construct a minimizer \(v_{0}\) as a limit of the sequence \(v_{1},v_{2},\dots\). For such an argument to work, one would need to know that the normalized volume function is lower semi-continuous on the valuation space. Unfortunately, it is still an open question whether this is the case or not.

Instead, we consider the graded sequences of valuation ideals \(\mathfrak{a}_{\bullet}(v_{k})\). We have already seen in (2.2) that the normalized volume can also be computed using normalized multiplicities of graded ideal sequences. Moreover, by the proof of (2.2), if we can find a graded sequence of ideals \(\mathfrak{a}_{\bullet}\) such that \[\widehat{\operatorname{vol}}(x,X)=\operatorname{lct}(\mathfrak{a}_{\bullet})^{n}\cdot\operatorname{mult}(\mathfrak{a}_{\bullet}), \tag{4.1}\] then any valuation that computes \(\operatorname{lct}(\mathfrak{a}_{\bullet})\) would be a minimizer of \(\widehat{\operatorname{vol}}\).

Such a graded sequence \(\mathfrak{a}_{\bullet}\) is constructed in [1] as a "generic limit" of the sequences \(\mathfrak{a}_{\bullet}(v_{k})\) \((k=1,2,\dots)\). The idea is to consider, for each \(m\in\mathbb{N}\), the locus \(H_{m}\) in the Hilbert scheme that contains a Zariski dense subset which parametrizes the "truncated" graded sequences of ideals \[\mathfrak{a}_{m}(v_{k})\subseteq\dots\subseteq\mathfrak{a}_{1}(v_{k}).\] There are natural truncation maps \[\pi_{m+1}\colon H_{m+1}\to H_{m}.\] One can show (see [11, Section 5]) that there exists a compatible sequence of closed points \(x_{m}\in H_{m}\), where each \(x_{m}\) is a very general point of \(H_{m}\), such that \(\pi_{m+1}(x_{m+1})=x_{m}\). They parametrize a graded sequence \(\mathfrak{a}_{\bullet}\) of ideals10, and the goal is to verify the identity (4.1) for this \(\mathfrak{a}_{\bullet}\).

Footnote 10: We view it as a “generic limit” of the sequences \(\mathfrak{a}_{\bullet}(v_{k})\) (\(k=1,2,\dots\)), since the limit point is obtained as a very general point of their Zariski closure in the Hilbert scheme.

From the generic limit construction, we have \[\operatorname{lct}(\mathfrak{a}_{m})^{n}\cdot\operatorname{mult}(\mathfrak{a}_{m})=\limsup_{k\to\infty}\big{(}\operatorname{lct}(\mathfrak{a}_{m}(v_{k}))^{n}\cdot\operatorname{mult}(\mathfrak{a}_{m}(v_{k}))\big{)},\] since both functions \(x_{m}\mapsto\operatorname{lct}(\mathfrak{a}_{m})\) and \(x_{m}\mapsto\operatorname{mult}(\mathfrak{a}_{m})\) are constructible on the Hilbert scheme (in particular on \(H_{m}\)).
By our choice of \(v_{k}\), we also have \[\limsup_{k\to\infty}\big{(}\operatorname{lct}(\mathfrak{a}_{\bullet}(v_{k})) ^{n}\cdot\operatorname{mult}(\mathfrak{a}_{\bullet}(v_{k}))\big{)}=\widehat{ \operatorname{vol}}(x,X).\] A moment of thought reveals that the missing ingredient is the following uniform convergence statement. **Proposition 4.2**.: _For any \(\varepsilon>0\), there exists some positive integer \(M\) such that_ \[\operatorname{lct}(\mathfrak{a}_{m}(v_{k}))^{n}\cdot\operatorname{mult}( \mathfrak{a}_{m}(v_{k}))\leq(1+\varepsilon)\cdot\operatorname{lct}(\mathfrak{ a}_{\bullet}(v_{k}))^{n}\cdot\operatorname{mult}(\mathfrak{a}_{\bullet}(v_{k}))\] _for all \(m\geq M\) and all \(k=1,2,\dots\)._ Proof.: We always have \(m\cdot\operatorname{lct}(\mathfrak{a}_{m})\leq\operatorname{lct}(\mathfrak{a}_ {\bullet})\), so the main question is to show \[\frac{\operatorname{mult}(\mathfrak{a}_{m}(v_{k}))}{m^{n}}\leq(1+\varepsilon) \cdot\operatorname{mult}(\mathfrak{a}_{\bullet}(v_{k})) \tag{4.2}\] for large \(m\). The proof of this uses asymptotic multiplier ideals. Recall that for any graded sequence of ideals \(\mathfrak{a}_{\bullet}\) on \(X\) and any rational number \(c>0\), the asymptotic multiplier ideal \(\mathcal{J}(c\cdot\mathfrak{a}_{\bullet})\) (see [1, Section 11.1] and [1, Theorem 1.2]) is the ideal on \(X\) consisting of local sections \(f\in\mathcal{O}_{X}\) such that \[v(f)>c\cdot v(\mathfrak{a}_{\bullet})-A_{X}(v)\] for all valuations \(v\in\operatorname{Val}_{X,x}^{*}\). To illustrate the ideas, let us first assume that \(x\in X\) is smooth for simplicity. For any valuation \(v\in\operatorname{Val}_{X,x}^{*}\) and any \(m\in\mathbb{N}\), the asymptotic multiplier ideals \(\mathcal{J}(m\cdot\mathfrak{a}_{\bullet}(v))\) of the corresponding sequence of valuation ideals satisfy \[\mathfrak{a}_{m}(v)\subseteq\mathcal{J}(m\cdot\mathfrak{a}_{\bullet}(v)) \subseteq\mathfrak{a}_{m-A_{X}(v)}(v),\] where both inclusions follow from the definition of multiplier ideals. When \(x\in X\) is smooth, the asymptotic multiplier ideals also satisfy subadditivity [1, Theorem 11.2.3], in particular, \[\mathcal{J}(m\ell\cdot\mathfrak{a}_{\bullet})\subseteq\mathcal{J}(m\cdot \mathfrak{a}_{\bullet})^{\ell}\] for any \(m,\ell\in\mathbb{N}\). A formal consequence of these two properties, when applied to the valuations \(v_{k}\) (rescaled so that \(A_{X}(v_{k})=1\)), is that11 Footnote 11: This is also the argument behind [1, Theorem A]. \[\mathfrak{a}_{m\ell}(v_{k})\subseteq\mathcal{J}(m\ell\cdot\mathfrak{a}_{ \bullet}(v_{k}))\subseteq\mathcal{J}(m\cdot\mathfrak{a}_{\bullet}(v_{k}))^{ \ell}\subseteq\mathfrak{a}_{m-1}(v_{k})^{\ell}.\] From here it is not hard to deduce (4.2). When \(x\in X\) is singular, we only have a weaker subadditivity result (see [14, Theorem 0.1] or [11, Theorem 7.3.4]): \[\operatorname{Jac}_{X}^{\ell}\cdot\mathcal{J}(m\ell\cdot\mathfrak{a}_{\bullet}) \subseteq\mathcal{J}(m\cdot\mathfrak{a}_{\bullet})^{\ell},\] where \(\operatorname{Jac}_{X}\) is the Jacobian ideal of \(X\). As before this gives \[\operatorname{Jac}_{X}^{\ell}\cdot\mathfrak{a}_{m\ell}(v_{k})\subseteq \mathfrak{a}_{m-1}(v_{k})^{\ell}.\] What is important to us is that the "correction term" \(\operatorname{Jac}_{X}^{\ell}\) is independent of the valuation \(v_{k}\), and its effect on the multiplicity is negligible when \(m\to\infty\) (the precise proof uses Teissier's Minkowski Inequality and Li's properness estimate). We may then conclude as in the smooth case. See [1, Proposition 3.7] for the technical details. 
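To spell out how (4.2) follows from the inclusion \(\mathfrak{a}_{m\ell}(v_{k})\subseteq\mathfrak{a}_{m-1}(v_{k})^{\ell}\) in the smooth case (a routine step, made explicit here for the reader's convenience): since the multiplicity satisfies \(\operatorname{mult}(\mathfrak{a}^{\ell})=\ell^{n}\cdot\operatorname{mult}(\mathfrak{a})\) and smaller ideals have larger multiplicities, we get

\[\operatorname{mult}(\mathfrak{a}_{\bullet}(v_{k}))=\lim_{\ell\to\infty}\frac{\operatorname{mult}(\mathfrak{a}_{m\ell}(v_{k}))}{(m\ell)^{n}}\geq\lim_{\ell\to\infty}\frac{\ell^{n}\cdot\operatorname{mult}(\mathfrak{a}_{m-1}(v_{k}))}{(m\ell)^{n}}=\frac{\operatorname{mult}(\mathfrak{a}_{m-1}(v_{k}))}{m^{n}},\]

i.e. \(\operatorname{mult}(\mathfrak{a}_{m}(v_{k}))/m^{n}\leq\big(\tfrac{m+1}{m}\big)^{n}\cdot\operatorname{mult}(\mathfrak{a}_{\bullet}(v_{k}))\), which is at most \((1+\varepsilon)\cdot\operatorname{mult}(\mathfrak{a}_{\bullet}(v_{k}))\) once \(m\) is large, uniformly in \(k\); combined with \(m\cdot\operatorname{lct}(\mathfrak{a}_{m}(v_{k}))\leq\operatorname{lct}(\mathfrak{a}_{\bullet}(v_{k}))\) this gives the statement of Proposition 4.2.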
_Remark 4.3_.: The generic limit argument we sketch above requires the base field to be uncountable, since we need to choose very general points of the locus \(H_{m}\). Using boundedness of complements, [13] gives another proof for the existence of a minimizer that works over general fields. Alternatively, once we know that the minimizer is unique up to rescaling (Theorem 4.6) and in particular invariant under the Galois action, we can first base change to an uncountable field to find a minimizer and then Galois descend to the original base field.

_Remark 4.4_.: The proof of the lower semi-continuity of local volumes ([1], see Theorem 2.21) follows a similar circle of ideas, but carried out in families. Roughly speaking, since the local volumes can be approximated by normalized multiplicities of the form \(\operatorname{lct}(\mathfrak{a})^{n}\cdot\operatorname{mult}(\mathfrak{a})\) and the log canonical threshold function is lower semi-continuous in families (a consequence of the inversion of adjunction), the main obstruction comes from the multiplicity term, which is not lower semi-continuous in families. In fact, it is the opposite: multiplicities usually increase under specialization. Nonetheless, one can extend the argument proving Theorem 4.1 to show that the local volume can be uniformly approximated by the normalized colengths \[\operatorname{lct}(\mathfrak{a})^{n}\cdot\ell(\mathcal{O}_{X}/\mathfrak{a})\] of ideals that are bounded below by some fixed power of \(\mathfrak{m}_{x}\). Here in order to ensure that the approximation is uniform in families, one needs to show that the constants in the Izumi type inequality (Lemma 3.5) are uniformly bounded in families. As these constants ultimately rely on the Kollar components, this can be done by extracting a family version of Kollar components. Since the colength function \(\mathfrak{a}\mapsto\ell(\mathcal{O}_{X}/\mathfrak{a})\) is locally constant on the Hilbert scheme, and the lct part is lower semi-continuous, the lower semi-continuity of local volumes is then a direct consequence of this uniform approximation result.

### Uniqueness and K-semistability

In [1], it is shown that the quasi-monomial minimizer of the normalized volume function is unique, under the assumption that the minimizer has a finitely generated associated graded algebra. The first proof of the uniqueness that is independent of the other parts of the Stable Degeneration Conjecture appears in [15], and later [1] finds another argument. Both proofs rely on a notion of geodesics between valuations, and ultimately the uniqueness of the minimizer can be seen as a consequence of the "geodesic (strong) convexity" of the volume function. Ideally, convexity means \[\operatorname{vol}((1-t)v_{0}+tv_{1})\leq(1-t)\cdot\operatorname{vol}(v_{0})+t\cdot\operatorname{vol}(v_{1})\] for any valuations \(v_{0},v_{1}\in\operatorname{Val}_{X,x}^{*}\), except that it is not clear how to make sense of the "valuation" \((1-t)v_{0}+tv_{1}\). On the other hand, there is a natural way to interpret \((1-t)v_{0}+tv_{1}\) as a _filtration_: for any \(\lambda\in\mathbb{R}_{+}\), we take \(\mathfrak{a}_{\lambda,t}\) to be the \(\mathfrak{m}_{x}\)-primary ideal generated by those \(f\in\mathcal{O}_{X,x}\) such that \[(1-t)\cdot v_{0}(f)+t\cdot v_{1}(f)\geq\lambda.\] The reader may easily verify that this defines a filtration \(\mathfrak{a}_{\bullet,t}\) for each \(t\in[0,1]\), and that \(\mathfrak{a}_{\bullet,0}\) (resp.
\(\mathfrak{a}_{\bullet,1}\)) is the filtration \(\mathfrak{a}_{\bullet}(v_{0})\) (resp. \(\mathfrak{a}_{\bullet}(v_{1})\)) induced by \(v_{0}\) (resp. \(v_{1}\)). We view the family \((\mathfrak{a}_{\bullet,t})_{t\in[0,1]}\) of filtrations as the _geodesic_ between \(v_{0}\) and \(v_{1}\). More generally, given two filtrations \(\mathfrak{a}_{\bullet,0}\) and \(\mathfrak{a}_{\bullet,1}\), we can define [13, 14]13 the geodesic between them as the following family \((\mathfrak{a}_{\bullet,t})_{t\in[0,1]}\) of filtrations: \[\mathfrak{a}_{\lambda,t}=\sum_{(1-t)\lambda_{0}+t\lambda_{1}=\lambda}\mathfrak{a}_{\lambda_{0},0}\cap\mathfrak{a}_{\lambda_{1},1}.\]

Footnote 13: See also [14, 15] for the global version of this construction.

In some sense, the space of filtrations is the "geodesic completion" of the valuation space \(\operatorname{Val}_{X,x}^{*}\). We already have an extension of the normalized volume function to the space of filtrations using normalized multiplicities (2.2), and the more natural question is whether the individual terms in (2.2) are convex along geodesics. This is confirmed by the following statement.

**Theorem 4.5**.: _For any \(t\in[0,1]\), we have_

1. ([13, Theorem 3.11]) \[\operatorname{lct}(\mathfrak{a}_{\bullet,t})\leq(1-t)\cdot\operatorname{lct}(\mathfrak{a}_{\bullet,0})+t\cdot\operatorname{lct}(\mathfrak{a}_{\bullet,1}).\]
2. ([14, Theorem 1.1]) \[\operatorname{mult}(\mathfrak{a}_{\bullet,t})^{-1/n}\geq(1-t)\cdot\operatorname{mult}(\mathfrak{a}_{\bullet,0})^{-1/n}+t\cdot\operatorname{mult}(\mathfrak{a}_{\bullet,1})^{-1/n}.\]

_Moreover, equality holds if and only if there exists_ \(c>0\) _such that_ \(v(\mathfrak{a}_{\bullet,0})=c\cdot v(\mathfrak{a}_{\bullet,1})\) _for all valuations_ \(v\in\operatorname{Val}_{X,x}^{*}\)_._

The statement (1) is deduced from a summation formula of asymptotic multiplier ideals, while statement (2) relies on the construction of a two-dimensional Duistermaat-Heckman measure using a compatible basis with respect to the two filtrations \(\mathfrak{a}_{\bullet,0}\) and \(\mathfrak{a}_{\bullet,1}\). We refer to the original articles for the relevant details; here we just explain why this theorem implies the uniqueness of the \(\widehat{\operatorname{vol}}\)-minimizer.

**Theorem 4.6** ([13, 14]).: _Up to rescaling, there is a unique valuation \(v_{0}\) that minimizes the normalized volume function \(\widehat{\operatorname{vol}}\)._

Before we discuss the proof, let us mention one interesting consequence of this theorem: the finite degree formula for local volumes.

**Theorem 4.7** ([15, Theorem 1.3]).: _Let \(f\colon(y\in Y)\to(x\in X)\) be a finite quasi-etale morphism between klt singularities. Then_ \[\widehat{\operatorname{vol}}(y,Y)=\deg(f)\cdot\widehat{\operatorname{vol}}(x,X).\]

More generally, the finite degree formula holds for crepant Galois morphisms, i.e., Galois morphisms \(f\colon\big{(}y\in(Y,\Delta_{Y})\big{)}\to\big{(}x\in(X,\Delta)\big{)}\) such that \(f^{*}(K_{X}+\Delta)=K_{Y}+\Delta_{Y}\). Roughly, the reason for the finite degree formula is that the unique minimizer is necessarily invariant under the Galois action, hence descends to the quotient. Moreover, the normalized volume gets divided by \(\deg(f)\) as the valuation descends. We now return to the proof of Theorem 4.6.

Proof of Theorem 4.6.: Suppose we have two minimizers \(v_{0},v_{1}\). Consider the filtrations \(\mathfrak{a}_{\bullet,i}=\mathfrak{a}_{\bullet}(v_{i})\) (\(i=0,1\)) and the geodesic \(\mathfrak{a}_{\bullet,t}\) connecting them.
By (2.2), we have \[\widehat{\operatorname{vol}}(x,X)^{1/n}\leq\frac{\operatorname{lct}(\mathfrak{a}_{\bullet,t})}{\operatorname{mult}(\mathfrak{a}_{\bullet,t})^{-1/n}},\] hence using Theorem 4.5 we obtain \[\widehat{\operatorname{vol}}(x,X)^{1/n}\leq\frac{(1-t)\cdot\operatorname{lct}(\mathfrak{a}_{\bullet,0})+t\cdot\operatorname{lct}(\mathfrak{a}_{\bullet,1})}{(1-t)\cdot\operatorname{mult}(\mathfrak{a}_{\bullet,0})^{-1/n}+t\cdot\operatorname{mult}(\mathfrak{a}_{\bullet,1})^{-1/n}}\leq\max\left\{\frac{\operatorname{lct}(\mathfrak{a}_{\bullet,0})}{\operatorname{mult}(\mathfrak{a}_{\bullet,0})^{-1/n}},\frac{\operatorname{lct}(\mathfrak{a}_{\bullet,1})}{\operatorname{mult}(\mathfrak{a}_{\bullet,1})^{-1/n}}\right\}\leq\max\{\widehat{\operatorname{vol}}(v_{0})^{1/n},\widehat{\operatorname{vol}}(v_{1})^{1/n}\}=\widehat{\operatorname{vol}}(x,X)^{1/n}.\] Thus equality holds everywhere. In particular, one can show that the equality condition in Theorem 4.5(2) implies \(v_{1}=cv_{0}\) for some \(c>0\).

We remark that our presentation so far draws heavily from [1]. The original proof of Theorem 4.6 in [15] exploits the K-semistability of the minimizing valuation rather than the full convexity of the volume function. This approach has some other interesting consequences; most notably, it gives the following generalization of Theorem 3.7.

**Theorem 4.8** ([15, Theorems 3.7 and 3.10]).: _A valuation \(v_{0}\in\operatorname{Val}_{X,x}^{*}\) minimizes the normalized volume function if and only if it is K-semistable._

Let us provide a brief definition of the K-semistability of valuations, which mimics the characterization of K-semistability of Fano varieties using basis type divisors. Recall from [11, 1] that an (\(m\)-)basis type divisor on a Fano variety \(V\) is a divisor of the form \[D=\frac{1}{mN_{m}}\sum_{i=1}^{N_{m}}\{s_{i}=0\}\] where \(N_{m}=h^{0}(V,-mK_{V})\) and \(s_{1},\dots,s_{N_{m}}\) is a basis of \(H^{0}(V,-mK_{V})\) (typically we choose \(m\gg 0\)), and that a Fano variety \(V\) is K-semistable if and only if its basis type divisors are "asymptotically log canonical", i.e., \[A_{V}(v)\geq S_{V}(v):=\lim_{m\to\infty}\sup\{v(D)\,|\,D\text{ is of $m$-basis type}\} \tag{4.3}\] for all valuations \(v\) on \(V\).

Suppose next that we have a valuation \(v\in\operatorname{Val}_{X,x}^{*}\) over the klt singularity \(x\in X\). To define its K-semistability, we rescale it so that \(A_{X}(v)=1\) and consider \(m\)-basis type divisors (with respect to \(v\)) of the form \[D=\frac{1}{mN_{m}}\sum_{i=1}^{N_{m}}\{s_{i}=0\},\] where this time \[N_{m}=\dim(\mathfrak{a}_{m}(v)/\mathfrak{a}_{m+1}(v))\] and \(s_{1},\dots,s_{N_{m}}\in\mathfrak{a}_{m}(v)\) restrict to a basis of \(\mathfrak{a}_{m}(v)/\mathfrak{a}_{m+1}(v)\). We say the valuation \(v\) is K-semistable if its basis type divisors are "asymptotically log canonical", i.e., for any \(w\in\operatorname{Val}_{X,x}^{*}\) we have \[A_{X}(w)\geq S(v;w):=\lim_{m\to\infty}\sup\{w(D)\,|\,D\text{ is of $m$-basis type}\}.\] Note that we have \(A_{X}(v)=S(v;v)=1\) by definition, hence if the valuation \(v\) is K-semistable, then it is automatically an lc place of its own basis type divisors (in the asymptotic sense). If \(E\) is a Kollar component over \(x\in X\), then it follows from inversion of adjunction (see [21, Theorem 3.6]) that the divisorial valuation \(\operatorname{ord}_{E}\) is K-semistable if and only if the induced log Fano pair \((E,\Delta_{E})\) is K-semistable.
Thus Theorem 3.7 can be viewed as a special case of Theorem 4.8.

The proof of Theorem 4.8 naturally divides into two steps. First we need to show that the minimizers of the normalized volume function are K-semistable. This is done by analyzing the derivatives of the normalized volume function along the geodesic connecting the minimizer \(v_{0}\) to an arbitrary valuation \(w\in\operatorname{Val}_{X,x}^{*}\). The non-negativity of the derivative at the minimizer \(v_{0}\) is (almost) exactly the condition \(A_{X}(w)\geq S(v_{0};w)\) that defines K-semistability. To show the other direction, i.e. K-semistable valuations are \(\widehat{\operatorname{vol}}\)-minimizers, one interprets the normalized volume as a "log canonical threshold" via the identity \[\widehat{\operatorname{vol}}_{X}(v)^{1/n}=\frac{A_{X}(v)}{\operatorname{vol}(v)^{-1/n}}.\] A key step is to realize the denominator \(\operatorname{vol}(v)^{-1/n}\) as the asymptotic vanishing order along the valuation \(v\) of certain basis type divisors. From this perspective, the \(\widehat{\operatorname{vol}}\)-minimizers are just the valuations that asymptotically compute the log canonical thresholds of basis type divisors. Since K-semistable valuations are exactly of this kind, they minimize the normalized volume. For more details, see [21].

Suppose for the moment that the minimizing valuation \(v_{0}\) is quasi-monomial and has a finitely generated associated graded algebra \(\operatorname{gr}_{v_{0}}R\) (these will be verified in the next two subsections). Then as we see in Section 2.4, we have a degeneration of \(x\in X=\operatorname{\mathbf{Spec}}(R)\) to \(x_{0}\in X_{0}:=\operatorname{\mathbf{Spec}}(\operatorname{gr}_{v_{0}}R)\), and there is an induced Reeb vector \(\xi_{0}\) on \(X_{0}\). Li and Xu [21] show that what we get is a K-semistable Fano cone singularity.

**Theorem 4.9** ([21]).: _The Fano cone singularity \(x_{0}\in(X_{0};\xi_{0})\) is K-semistable, and the degeneration is volume preserving, i.e. \(\widehat{\operatorname{vol}}(x,X)=\widehat{\operatorname{vol}}(x_{0},X_{0})\)._

These can also be explained using the K-semistability of the minimizer \(v_{0}\). All we need to show is that the toric valuation \(\operatorname{wt}_{\xi_{0}}\) on \(X_{0}\) minimizes the normalized volume. By Theorem 4.8, this is equivalent to showing that \(\operatorname{wt}_{\xi_{0}}\) is a K-semistable valuation. We know that \(v_{0}\) is K-semistable since it is the normalized volume minimizer on \(X\), hence its basis type divisors are asymptotically log canonical, and always have the valuation \(v_{0}\) as an lc place. In general, given a log canonical pair, the degenerations induced by its lc places have semi log canonical central fibers, since the latter are orbifold cones over pairs coming from adjunction along the lc places, c.f. [1, Appendix A.1]. This essentially implies that the degenerations of the basis type divisors to \(X_{0}\) remain asymptotically log canonical. It turns out that what we get from these degenerations are exactly the basis type divisors on \(X_{0}\) (with respect to \(\operatorname{wt}_{\xi_{0}}\)). Therefore, the toric valuation \(\operatorname{wt}_{\xi_{0}}\) is K-semistable by definition.

### Quasi-monomial property

We have seen in Section 4.1 that the normalized volume minimizer computes the log canonical threshold of some graded sequence of ideals. Regarding such valuations, Jonsson and Mustata have made the following conjecture.
**Conjecture 4.10** ([16]).: _Let \(X\) be klt and let \(\mathfrak{a}_{\bullet}\) be a graded sequence of ideals on \(X\) such that \(\operatorname{lct}(\mathfrak{a}_{\bullet})<\infty\)._

1. (Weak version). _There exists a quasi-monomial valuation that computes_ \(\operatorname{lct}(\mathfrak{a}_{\bullet})\)_._
2. (Strong version). _Every valuation that computes_ \(\operatorname{lct}(\mathfrak{a}_{\bullet})\) _is quasi-monomial._

The strong version of this conjecture is still open. An important breakthrough in the development of the K-stability theory is Xu's proof [14] of the weak version of Jonsson-Mustata's Conjecture. An immediate corollary is the quasi-monomial property of the \(\widehat{\operatorname{vol}}\)-minimizer. In this subsection, we sketch the main ideas of this proof.

**Theorem 4.11** ([14, Theorem 1.1]).: _Let \(\mathfrak{a}_{\bullet}\) be a graded sequence of ideals on a klt variety \(X\) such that \(\operatorname{lct}(\mathfrak{a}_{\bullet})<\infty\). Then there exists a quasi-monomial valuation \(v\) that computes \(\operatorname{lct}(\mathfrak{a}_{\bullet})\)._

One way to get a valuation that computes \(\operatorname{lct}(\mathfrak{a}_{\bullet})\) is to take an \(m\to\infty\) limit of valuations \(v_{m}\) that compute \(\operatorname{lct}(\mathfrak{a}_{m})\). The latter are always quasi-monomial. By itself this is not very helpful, since every valuation is a limit of quasi-monomial valuations. Therefore, the main difficulty is to control the limit process.

A crucial ingredient is the theory of complements. Recall that a (\(\mathbb{Q}\)-)complement of an lc pair \((X,\Delta)\) is an effective \(\mathbb{Q}\)-divisor \(D\sim_{\mathbb{Q}}-(K_{X}+\Delta)\) such that \((X,\Delta+D)\) is lc. It is called an \(N\)-complement (for some positive integer \(N\)) if \(N(K_{X}+\Delta+D)\sim 0\). A valuation \(v\) on \(X\) is called an lc place of the complement \(D\) if \(A_{X,\Delta+D}(v)=0\); such valuations are always quasi-monomial. We use \(\operatorname{LC}(X,\Delta+D)\subseteq\operatorname{Val}_{X}\) to denote the corresponding set of lc places.

A difficult theorem of Birkar [10, Theorem 1.7], known as the boundedness of complements, states that if \(X\) is of Fano type and \((X,\Delta)\) admits a complement, then it also has an \(N\)-complement for some integer \(N\) that only depends on the dimension of \(X\) and the coefficients of \(\Delta\). This has the following consequence.

**Proposition 4.12**.: _Let \(x\in X\) be a klt singularity. Then there exists some positive integer \(N\) depending only on the dimension of \(X\) such that for any \(\mathfrak{m}_{x}\)-primary ideal \(\mathfrak{a}\), any valuation computing \(\operatorname{lct}(\mathfrak{a})\) is an lc place of some \(N\)-complement._

Roughly speaking, this is because any valuation computing \(\operatorname{lct}(\mathfrak{a})\) is automatically an lc place of some complement (an obvious choice is a general member of the \(\mathbb{Q}\)-ideal \(\mathfrak{a}^{\operatorname{lct}(\mathfrak{a})}\)), hence by Birkar's theorem we can upgrade the complement to a bounded complement. The proposition in particular applies to the valuations \(v_{m}\) that compute \(\operatorname{lct}(\mathfrak{a}_{m})\). Because the integer \(N\) does not depend on the ideal \(\mathfrak{a}_{m}\), modulo some sufficiently large power of the maximal ideal \(\mathfrak{m}_{x}\) we can further arrange that the valuations \(v_{m}\) are lc places of a _bounded_ family of \(N\)-complements.
It follows that the limit \(\lim_{m\to\infty}v_{m}\) is not arbitrary: it is a generic limit in a bounded family of simplices of quasi-monomial valuations. From here, we conclude that the limit valuation stays in the same family of simplices; in particular, it is quasi-monomial. In fact, the proof naturally implies a stronger statement:

**Theorem 4.13** ([20]).: _Let \(\mathfrak{a}_{\bullet}\) be a graded sequence of ideals such that \(\operatorname{lct}(\mathfrak{a}_{\bullet})<\infty\). Then \(\operatorname{lct}(\mathfrak{a}_{\bullet})\) is computed by some lc place of \(N\)-complement, where the integer \(N\) only depends on the dimension._

_In particular, the minimizing valuation of the normalized volume is an lc place of \(N\)-complement._

By applying a similar technique in families, one can also show that local volumes of klt singularities are constructible in families. This is indeed an important ingredient in the proof of the openness of K-semistability in families of Fano varieties.

**Theorem 4.14** ([20, Theorem 1.3]).: _For any \(\mathbb{Q}\)-Gorenstein family \(B\subseteq\mathcal{X}\to B\) of klt singularities, the function_ \[b\in B\mapsto\widehat{\operatorname{vol}}(b,\mathcal{X}_{b})\] _on \(B\) is constructible with respect to the Zariski topology._

Since every \(\widehat{\operatorname{vol}}\)-minimizer is an lc place of \(N\)-complement, the key point is to analyze how the volume changes as the \(N\)-complement varies. The constructibility statement in Theorem 4.14 is ultimately a consequence of a local version of the deformation invariance of log plurigenera [11] (see also [21]).

### Finite generation

We now come to the finite generation part of the Stable Degeneration Conjecture, which is the main result of [20].

**Theorem 4.15** ([20, Theorem 1.1]).: _Let \(x\in X=\operatorname{\mathbf{Spec}}(R)\) be a klt singularity and let \(v_{0}\) be the minimizer of the normalized volume function \(\widehat{\operatorname{vol}}\) on \(\operatorname{Val}_{X,x}^{*}\). Then the associated graded algebra \(\operatorname{gr}_{v_{0}}R\) is finitely generated._

Instead of proving finite generation for this particular valuation, we will describe a finite generation criterion for more general valuations. To motivate such a criterion, we first revisit the argument in the divisorial case. We have seen in Section 3.1 that if the minimizer \(v_{0}\) is divisorial, then the associated graded algebra \(\operatorname{gr}_{v_{0}}R\) is finitely generated. This can also be deduced from Theorem 4.13, as _divisorial_ lc places of complements satisfy the finite generation property by [1]. Since the minimizer is still an lc place of complement in the higher (rational) rank situation (Theorem 4.13), one may ask:

**Question 4.16**.: Is it true that \(\operatorname{gr}_{v}R\) is finitely generated for any valuation \(v\in\operatorname{Val}_{X,x}^{*}\) that is an lc place of complement?

Unfortunately the answer is no. Indeed, the global version of this question already has a negative answer [1, 1].

**Example 4.17**.: Any valuation \(v\) on a projective variety \(V\) induces a filtration of the section ring of an ample line bundle \(L\). It is proved in [1, Theorem 4.5] that when \(V\) is Fano, \(L\) is proportional to \(-K_{V}\) and \(v\) is an lc place of complement, the induced filtration is finitely generated if and only if the \(S\)-invariant function defined in (4.3) is locally linear on the rational envelope of \(v\), i.e. a simplex \(\operatorname{QM}(Y,E)\) of smallest dimension that contains \(v\).
The latter condition is automatic for any divisorial valuation (since the rational envelope is just a single point), but gets highly non-trivial for higher rank valuations. It already fails for some lc places of a nodal cubic curve \(C\subseteq\mathbb{P}^{2}\), see [1, Section 6]. By the cone construction, this provides plenty of valuations over \(0\in\mathbb{A}^{3}\) that are lc places of complements but the associated graded algebras are not finitely generated. In fact, one can even write down a simplex \(\operatorname{QM}(Y,E)\) of lc places of complements such that \(v\in\operatorname{QM}(Y,E)\) satisfies the finite generation property if and only if \(v\) is divisorial. Recall that any two valuations \(v_{0},v_{1}\) are connected by a geodesic \((\mathfrak{a}_{\bullet,t})_{0\leq t\leq 1}\) in the space of filtrations. If both \(v_{0}\) and \(v_{1}\) are divisorial lc places of the same complement, then using [1] it is not too hard to show that the filtrations \(\mathfrak{a}_{\bullet,t}\) along the geodesic all have finitely generated associated algebras. On the other hand, we can draw lines in any simplex \(\operatorname{QM}(Y,E)\). The failure of the finite generation property of higher rank valuations essentially comes from the fact that these lines are not necessarily geodesics in the valuation space. We need to find additional properties of the \(\widehat{\operatorname{vol}}\)-minimizer that turn lines in its rational envelope into geodesics. We do know more about the divisorial minimizers: they are also induced by Kollar components (Theorem 3.6). The following higher rank analog turns out to be a key to the proof of Theorem 4.15. **Definition 4.18** ([1, Definition 3.7]).: Let \(x\in X\) be a klt singularity. A _Kollar model_ of \(x\in X\) is a birational model \(\pi\colon(Y,E)\to X\) such that \(\pi\) is an isomorphism away from \(\{x\}\), \(E=\pi^{-1}(x)\), \((Y,E)\) is \(\operatorname{dlt}\)16 and \(-(K_{Y}+E)\) is ample. Footnote 16: The readers may notice that the definition in [1] uses the qdlt (shorthand for quotient of \(\operatorname{dlt}\)) condition rather than the dlt one. The difference is not essential, except that the qdlt version would make some technical steps easier. Since we will only focus on the more conceptual part of the proof, we ignore the difference and work with the dlt version. The only difference with the definition of Kollar components (Definition 3.1) is that we allow the exceptional divisor \(E\) to have more than one components, i.e., we drop the rank one condition. **Definition 4.19**.: Let \(x\in X\) be a klt singularity. We say a quasi-monomial valuation \(v\in\operatorname{Val}_{X,x}^{*}\) is a _Kollar valuation_ if there exists a Kollar model \(\pi\colon(Y,E)\to X\) such that \(v\in\operatorname{QM}(Y,E)\). Using Kollar models and the notion of Kollar valuations, we can finally formulate the finite generation criterion. **Theorem 4.20** ([22, Theorem 4.1]).: _Let \(x\in X=\mathbf{Spec}(R)\) be a klt singularity, and let \(v\in\operatorname{Val}_{X,x}^{*}\) be a quasi-monomial valuation. Then the following are equivalent._ 1. _The graded algebra_ \(\operatorname{gr}_{v}R\) _is finitely generated and_ \(X_{v}:=\mathbf{Spec}(\operatorname{gr}_{v}R)\) _is klt._ 2. _The valuation_ \(v\) _is a Kollar valuation._ We only sketch the proof of the implication \((2)\Rightarrow(1)\), since this is what we need for the finite generation part of the Stable Degeneration Conjecture (Theorem 4.15). 
Let \(\pi\colon(Y,E)\to X\) be a Kollar model such that \(v\in\operatorname{QM}(Y,E)\). For simplicity, let us assume that \(E\) only has two components \(E_{0}\) and \(E_{1}\). According to the discussion above, the finite generation of \(\operatorname{gr}_{v}R\) would follow if we can show that the geodesic \((\mathfrak{a}_{\bullet,t})_{t\in[0,1]}\) joining \(v_{0}=\operatorname{ord}_{E_{0}}\) and \(v_{1}=\operatorname{ord}_{E_{1}}\) matches the obvious line in \(\operatorname{QM}(Y,E)\). We divide this into two parts.

First, we shall prove that the filtration \(\mathfrak{a}_{\bullet,t}\) comes from a valuation; in other words, the induced degeneration has irreducible central fiber (see Lemma 2.4). For this part, we observe that the induced degeneration can be decomposed into a two step degeneration by test configurations \[X\rightsquigarrow X_{0}\rightsquigarrow X_{1},\] where the first degeneration is induced by \(E_{0}\), while the second degeneration is induced by the specialization of \(E_{1}\) on the degeneration \(Y_{0}\to X_{0}\) of the Kollar model \(Y\to X\). Note that a priori \(E_{1}\) may break into several components on \(Y_{0}\), in which case \(X_{1}\) will no longer be irreducible. Thus the key to this part of the proof is the following specialization result for Kollar models ([22, Section 4.2]).

**Proposition 4.21**.: _Let \((Y,E)\to X\) be a Kollar model. For any component \(E^{\prime}\) of \(E\), let \((Y_{0},E_{0})\to X_{0}\) be the central fiber of the induced test configuration. Then:_

1. \(X_{0}\) _is klt._
2. _Each irreducible component of_ \(E\) _specializes to an irreducible component of_ \(E_{0}\)_._
3. \((Y_{0},E_{0})\to X_{0}\) _is also a Kollar model._

In particular, using this proposition we may conclude that \(X_{1}\) is irreducible and even klt. The proof of the proposition itself is a delicate application of the tie-breaking method in birational geometry. In general, if \(E\) has \(r\) components, then the degeneration induced by \(\mathfrak{a}_{\bullet,t}\) would be decomposed into \(r\) steps and we need to apply the above proposition inductively, but overall the main idea stays the same. Summing up, we conclude that the geodesic joining Kollar valuations (with respect to the same Kollar model) lies in the valuation space.

The next step is to show that the geodesic is the obvious line. This actually holds in a more general setting (see [22, Lemma 4.8]):

**Proposition 4.22**.: _Let \(v_{0},v_{1}\in\operatorname{QM}(Y,E)\) be quasi-monomial valuations in the same simplex, and let \((\mathfrak{a}_{\bullet,t})_{t\in[0,1]}\) be the geodesic connecting \(v_{0}\) and \(v_{1}\). Suppose that for some \(t\in[0,1]\) the filtration \(\mathfrak{a}_{\bullet,t}\) is induced by a valuation \(w\). Then \((\)under some mild assumptions\()\) we have \(w=(1-t)\cdot v_{0}+t\cdot v_{1}\in\operatorname{QM}(Y,E)\)._

In other words, whenever the geodesic intersects the valuation space, the intersection point is the obvious one in the corresponding simplex \(\operatorname{QM}(Y,E)\). We remark that a priori it is not even clear why the intersection point lies in \(\operatorname{QM}(Y,E)\).

To apply the finite generation criterion from Theorem 4.20 to the \(\widehat{\operatorname{vol}}\)-minimizer, we still need the next result.

**Theorem 4.23**.: _Let \(x\in X\) be a klt singularity and let \(v_{0}\in\operatorname{Val}_{X,x}^{*}\) be the minimizer of the normalized volume function \(\widehat{\operatorname{vol}}\).
Then \(v_{0}\) is a Kollar valuation._

If we assume that \(\operatorname{gr}_{v}R\) is finitely generated, Theorem 4.23 can be deduced from the fact that the minimizer \(v\) is the unique valuation that computes \(\operatorname{lct}(\mathfrak{a}_{\bullet}(v))\). In the divisorial case, this is exactly the argument we use in Section 3.1. Since we don't know finite generation yet, we need to find a different argument, even in the divisorial case. The idea is that many properties of Kollar components are stable under perturbation. Conversely, we can detect whether a given divisor is a Kollar component "by perturbation". As a typical example, we have the following characterization of Kollar components.

**Lemma 4.24**.: _A prime divisor \(E\) over a klt singularity \(x\in X\) is a Kollar component if and only if for any effective Cartier divisor \(D\) on \(X\), there exists some \(\varepsilon\in\mathbb{Q}_{>0}\) such that \(E\) is an lc place of some complement of \((X,\varepsilon D)\)._

Essentially, we try to perturb the property that every Kollar component is an lc place of some complement. The global version of this statement is [23, Theorem 4.12].

Proof.: If \(E\) is a Kollar component and \(Y\to X\) is the plt blowup, then \((Y,E)\) has a complement since the pair is plt and \(-(K_{Y}+E)\) is ample. Taking pushforward we get a complement on \(X\) with \(E\) as an lc place. Both the plt and the ampleness conditions are preserved if we add a small multiple of the strict transform of \(D\) to the pair \((Y,E)\), thus the same statement holds for \((X,\varepsilon D)\). Conversely, if \(E\) is an lc place of complement, then by [1] one can find a birational model \(Y\to X\) with exceptional divisor \(E\) such that \((Y,E)\) is lc and \(-(K_{Y}+E)\) is ample over \(X\). If these conditions are preserved after adding a small boundary, then \((Y,E)\) is in fact plt and thus \(E\) becomes a Kollar component.

Similarly, we can formulate a higher rank analog that characterizes Kollar valuations "by perturbation".

**Proposition 4.25**.: _Let \(x\in X\) be a klt singularity and let \(v\in\operatorname{Val}_{X,x}^{*}\) be a quasi-monomial valuation. Then the following are equivalent._

1. _The valuation_ \(v\) _is a Kollar valuation._
2. _For any effective Cartier divisor_ \(D\) _on_ \(X\)_, there exists some_ \(\varepsilon\in\mathbb{Q}_{>0}\) _such that_ \(v\) _is an lc place of some complement of_ \((X,\varepsilon D)\)_._

Unfortunately, the proof of the higher rank version is much harder. One reason is that we no longer have a canonical blowup that "extracts" the higher rank valuation. The Kollar models only serve as approximations of this extraction, and they are no longer Kollar models if we add a small boundary divisor (e.g. the dlt condition may fail). This should not be surprising. After all, if we think of \(\operatorname{QM}(Y,E)\) (where \((Y,E)\) is a Kollar model) as a "neighbourhood" of the valuation \(v\), then after perturbation, we should only expect properties of \(v\) to hold after "shrinking the neighbourhood", which in practice means we should switch to a different Kollar model.

To overcome these difficulties, we rely on the idea of _special complements_, which was originally introduced in [10] to attack the Fano version of the Higher Rank Finite Generation Conjecture.

**Definition 4.26** (special complements).: Let \(x\in X\) be a klt singularity and let \(\pi\colon(Y,E)\to X\) be a log smooth model.
A _special complement_ of \(x\in X\) (with respect to \((Y,E)\)) is a complement \(\Gamma\) such that \(\pi_{*}^{-1}\Gamma\geq G\) for some effective ample \(\mathbb{Q}\)-divisor \(G\) on \(Y\) whose support does not contain any stratum of \((Y,E)\). Any valuation \(v\in\operatorname{QM}(Y,E)\cap\operatorname{LC}(X,\Gamma)\) is called a _monomial lc place_ of the special complement \(\Gamma\) (with respect to \((Y,E)\)).

Intuitively, the log smooth model is a log resolution of the Kollar model, while conversely, the Kollar model is the ample model of the log smooth model. The special complement condition can be regarded as a birational version of the log Fano condition in the definition of Kollar models, while monomial lc places of special complements are the birational analog of monomial valuations on Kollar models. The conditions are specifically designed so that the definition is not sensitive to the particular choice of the log smooth model or the special complement. This offers room for perturbation. The proof of Proposition 4.25 now proceeds by showing that the two conditions in the statement are both equivalent to a third one:

1. The valuation \(v\) is a monomial lc place of some special complement \(\Gamma\) (with respect to some log smooth model \((Y,E)\)).

For details, see [22, Section 3.3].

Finally, we need to verify the equivalent conditions in Proposition 4.25 in order to finish the proof of Theorem 4.23. This is accomplished by the following result.

**Proposition 4.27**.: _Let \(x\in X\) be a klt singularity and let \(v_{0}\in\operatorname{Val}^{*}_{X,x}\) be the minimizer of the normalized volume function. Then for any effective Cartier divisor \(D\) on \(X\), there exists some \(\varepsilon\in\mathbb{Q}_{>0}\) such that \(v_{0}\) is an lc place of some complement of \((X,\varepsilon D)\)._

This is proved in [22, Lemma 3.2], and the argument in _loc. cit._ also naturally gives an explicit value of \(\varepsilon\). The proof essentially exploits the K-semistability property of the minimizer (Theorem 4.8). The idea is that asymptotically a K-semistable valuation is an lc place of its basis type divisors (which are asymptotically log canonical). If we choose a basis type divisor that maximizes the coefficient of \(D\), and write the basis type divisor as \(\varepsilon D+\Gamma\), then at least in the limit we get the desired coefficient \(\varepsilon\) (some calculations are needed to show that it is positive) and the complement \(\Gamma\). It remains to replace the limit by a finite level approximation, and this can be done using the ACC of log canonical thresholds [14].

### K-polystable degeneration

At this point, we have finished the proof of the Stable Degeneration Conjecture (Conjecture 2.24). In particular, any klt singularity \(x\in X=\mathbf{Spec}(R)\) has a degeneration to a K-semistable Fano cone singularity \(x_{0}\in(X_{0};\xi_{0})\) induced by the minimizer \(v_{0}\in\operatorname{Val}^{*}_{X,x}\) of the normalized volume function. To complete the two-step degeneration, it remains to construct the K-polystable degeneration of \(x_{0}\in(X_{0};\xi_{0})\).

Before we sketch the ideas, it would be helpful to first review the argument in the case of vector bundles, where the analog of a polystable degeneration is the Jordan-Hölder filtration of a slope semistable vector bundle. The key to the existence of this filtration is the Schreier refinement theorem, stating that any two filtrations with semistable graded pieces of the same slopes have a common refinement.
The Jordan-Holder filtration is then obtained as the finest filtration of this kind. The construction of the K-polystable degeneration, see Theorem 2.26, is a generalization of this basic strategy. Its proof heavily relies on the following analog of the Schreier refinement theorem.

**Theorem 4.28** ([15, Theorem 4.1]).: _Suppose that \(x_{i}\in\left(X_{i};\xi_{i}\right)(i=1,2)\) are two K-semistable degenerations of the Fano cone singularity \(x_{0}\in\left(X_{0};\xi_{0}\right)\). Then they have a common K-semistable degeneration \(y\in\left(Y;\xi_{Y}\right)\)._

Assuming this result, let us explain how to construct the K-polystable degeneration. First note that Theorem 4.28 immediately implies the uniqueness of the K-polystable degeneration. Indeed, if \(x_{i}\in\left(X_{i};\xi_{i}\right)(i=1,2)\) are two K-polystable degenerations of the Fano cone singularity \(x_{0}\in\left(X_{0};\xi_{0}\right)\), then they have a common K-semistable degeneration \(y\in\left(Y;\xi_{Y}\right)\). However, because both \(x_{i}\in\left(X_{i};\xi_{i}\right)\) are K-polystable, their K-semistable degenerations are isomorphic to themselves. Thus we get \[\big{(}x_{1}\in\left(X_{1};\xi_{1}\right)\big{)}\cong\big{(}y\in\left(Y;\xi_{Y}\right)\big{)}\cong\big{(}x_{2}\in\left(X_{2};\xi_{2}\right)\big{)},\] which gives the uniqueness.

Next we prove the existence of the K-polystable degeneration. Suppose that \(x_{0}\in\left(X_{0};\xi_{0}\right)\) is not K-polystable. By definition, this means that there exists a K-semistable degeneration \(x_{1}\in\left(X_{1};\xi_{1}\right)\) that is not isomorphic to \(x_{0}\in\left(X_{0};\xi_{0}\right)\). If \(x_{1}\in\left(X_{1};\xi_{1}\right)\) is still not K-polystable, we can find a further degeneration \(x_{2}\in\left(X_{2};\xi_{2}\right)\) and continue. The key is to make this process stop after finitely many steps. A discrete invariant that grows under this procedure is the dimension of the maximal torus. Note that the automorphism group \(\operatorname{Aut}(x\in\left(X;\xi\right))\) of a Fano cone singularity (i.e. the group of \(\langle\xi\rangle\)-equivariant automorphisms of the singularity \(x\in X\)) is an algebraic group. We denote by \(\mathbb{T}_{i}\) the maximal torus of \(\operatorname{Aut}(x_{i}\in\left(X_{i};\xi_{i}\right))\), which is well-defined up to conjugation.

**Claim**.: If \(x_{i}\in\left(X_{i};\xi_{i}\right)\) is not K-polystable, then it has a \(\mathbb{T}_{i}\)-equivariant K-semistable degeneration \(x_{i+1}\in\left(X_{i+1};\xi_{i+1}\right)\) that is not isomorphic to \(x_{i}\in\left(X_{i};\xi_{i}\right)\). Moreover, \(\dim\mathbb{T}_{i+1}>\dim\mathbb{T}_{i}\).

The second part of the claim actually follows from the first, since we clearly have \(\dim\mathbb{T}_{i+1}\geq\dim\mathbb{T}_{i}\), and through the graded algebra description of \(X_{i+1}\) we get an additional \(\mathbb{G}_{m}\)-action on \(x_{i+1}\in\left(X_{i+1};\xi_{i+1}\right)\) from the grading. Since the dimension of the maximal torus is at most the dimension of the singularity (otherwise the torus action is not effective), the claim implies that the K-semistable degeneration process necessarily stops after finitely many steps. It remains to construct the equivariant K-semistable degeneration in the above claim. We start with any test configuration that degenerates \(x_{i}\in\left(X_{i};\xi_{i}\right)\) to some non-isomorphic K-semistable Fano cone singularity \(x_{i+1}\in\left(X_{i+1};\xi_{i+1}\right)\).
The idea is to use Theorem 4.28 to find a "toric degeneration" of this test configuration. Note that for any one parameter subgroup \(\rho\colon\mathbb{G}_{m}\to\mathbb{T}_{i}\), we also have a product test configuration of \(x_{i}\in\left(X_{i};\xi_{i}\right)\) induced by the weight filtration of the \(\mathbb{G}_{m}\)-action. The central fiber of the product test configuration is just \(x_{i}\in\left(X_{i};\xi_{i}\right)\) itself. By Theorem 4.28, we have a common degeneration as illustrated by the following diagram, in which all arrows denote degenerations:

\[\begin{array}{ccc}x_{i}\in(X_{i};\xi_{i})&\overset{\mathbb{G}_{m}}{\rightsquigarrow}&x_{i}\in(X_{i};\xi_{i})\\ \downarrow&&\downarrow\\ x_{i+1}\in(X_{i+1};\xi_{i+1})&\overset{\mathbb{G}_{m}}{\rightsquigarrow}&y\in(Y;\xi_{Y}).\end{array}\]

In some sense we view the right column as the "toric degeneration" of the left column. In fact, in the proof of Theorem 4.28, this is what happens at the level of filtrations. By construction, the degeneration \[x_{i}\in(X_{i};\xi_{i})\rightsquigarrow y\in(Y;\xi_{Y})\] is equivariant with respect to the chosen one parameter subgroup \(\rho(\mathbb{G}_{m})\). It also inherits the torus action on the original degeneration \[x_{i}\in(X_{i};\xi_{i})\rightsquigarrow x_{i+1}\in(X_{i+1};\xi_{i+1}).\] If we replace \(x_{i+1}\in(X_{i+1};\xi_{i+1})\) by \(y\in(Y;\xi_{Y})\) and repeat this construction for a finite collection of one parameter subgroups that generate \(\mathbb{T}_{i}\), we will eventually get the desired \(\mathbb{T}_{i}\)-equivariant K-semistable degeneration. This proves the claim and we have finished the construction of the K-polystable degeneration assuming Theorem 4.28.

We now return to sketch a proof of Theorem 4.28. Recall that test configurations of \(x_{0}\in X_{0}=\mathbf{Spec}(R_{0})\) are given by filtrations of \(R_{0}\). In particular, there are filtrations \(\mathfrak{a}_{\bullet,i}\) (\(i=1,2\)) whose associated graded algebras give \(X_{i}\), i.e., \[X_{i}=\mathbf{Spec}(\operatorname{gr}_{\mathfrak{a}_{\bullet,i}}R_{0}).\] These filtrations have equivalent refinements, namely, each filtration \(\mathfrak{a}_{\bullet,i}\) induces a filtration on the associated graded algebra \(\operatorname{gr}_{\mathfrak{a}_{\bullet,j}}R_{0}\) of the other, and the induced filtrations satisfy \[\operatorname{gr}_{\mathfrak{a}_{\bullet,2}}\operatorname{gr}_{\mathfrak{a}_{\bullet,1}}R_{0}\cong\operatorname{gr}_{\mathfrak{a}_{\bullet,1}}\operatorname{gr}_{\mathfrak{a}_{\bullet,2}}R_{0}.\] Denote this (doubly graded) algebra by \(R^{\prime}\). Then \(\mathbf{Spec}(R^{\prime})\) is the obvious candidate for the common degeneration. To make this strategy work, we need to show that \(R^{\prime}\) is finitely generated. Note that both filtrations \(\mathfrak{a}_{\bullet,i}\) are induced by some _divisorial_ valuations \(v_{i}=\operatorname{ord}_{E_{i}}\), and we may realize \(R^{\prime}\) as a quotient of the Cox ring \[\bigoplus_{m_{1},m_{2}\in\mathbb{N}}\pi_{*}\mathcal{O}_{Y}(-m_{1}E_{1}-m_{2}E_{2}),\] where \(\pi\colon Y\to X_{0}\) is a birational model that extracts both divisors \(E_{i}\). The general results from [1] tell us that this Cox ring is finitely generated if we can find an effective \(\mathbb{Q}\)-divisor \(D\) on \(X_{0}\) such that \((X_{0},D)\) is lc and \(A_{X_{0},D}(E_{i})<1\) for both \(i\), because these conditions imply that the two divisors \(E_{i}\) can be simultaneously extracted on a model \(Y\) that is of Fano type over \(X_{0}\), and Cox rings on Fano type varieties are finitely generated by [1].
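As a toy illustration of these equivalent refinements (our own example, not taken from the text), let \(R_{0}=\Bbbk[x,y]\) with the filtrations \(\mathfrak{a}_{\bullet,1}\) and \(\mathfrak{a}_{\bullet,2}\) induced by the divisorial valuations \(v_{1}=\operatorname{ord}_{(x=0)}\) and \(v_{2}=\operatorname{ord}_{(y=0)}\). Then

\[\operatorname{gr}_{\mathfrak{a}_{\bullet,1}}R_{0}\cong\Bbbk[x,y],\qquad\operatorname{gr}_{\mathfrak{a}_{\bullet,2}}\operatorname{gr}_{\mathfrak{a}_{\bullet,1}}R_{0}\cong\Bbbk[x,y]\cong\operatorname{gr}_{\mathfrak{a}_{\bullet,1}}\operatorname{gr}_{\mathfrak{a}_{\bullet,2}}R_{0},\]

where the last two isomorphisms are of bigraded algebras, the two gradings being \(\deg_{x}\) and \(\deg_{y}\). Here everything is already toric and the common degeneration is trivial, but in general the finite generation of \(R^{\prime}\) is exactly what makes \(\mathbf{Spec}(R^{\prime})\) a genuine common degeneration.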
We haven't used the assumption that both valuations \(v_{i}\) induce K-semistable degenerations of \(x_{0}\in(X_{0};\xi_{0})\). It turns out that this condition is equivalent to the vanishing of the generalized Futaki invariant, or \[A_{X_{0}}(v_{i})=S(\operatorname{wt}_{\xi_{0}};v_{i}), \tag{4.4}\] see [16, Lemma 3.1 and proof of Theorem 4.1]. In other words, if we consider \(\mathbb{T}=\langle\xi_{0}\rangle\)-invariant basis type divisors that are compatible with \(v_{i}\)17, then asymptotically they are log canonical (because the Fano cone singularity \(x_{0}\in(X_{0};\xi_{0})\) is K-semistable) and have the valuation \(v_{i}\) as an lc place (because of the identity (4.4)). By choosing basis type divisors that are simultaneously compatible with both \(v_{i}\)18, in the limit we would get the desired auxiliary divisor \(D\).

Footnote 17: Namely, the corresponding basis is \(\mathbb{T}=\langle\xi_{0}\rangle\)-invariant and is compatible with the filtration induced by \(v_{i}\). Such basis type divisors maximize the vanishing order along \(v_{i}\) and therefore asymptotically compute \(S(\operatorname{wt}_{\xi_{0}};v_{i})\).

Footnote 18: Given two filtrations on a vector space, there is always a simultaneously compatible basis. See e.g. [1, Lemma 3.1] or [1, Proposition 1.14].

It then follows from the previous discussion that \(R^{\prime}\) is finitely generated, and we get a common degeneration to \(y\in(Y:=\mathbf{Spec}(R^{\prime});\xi_{Y})\), where \(\xi_{Y}\) is the induced Reeb vector. It remains to check that the Fano cone singularity \(y\in(Y;\xi_{Y})\) is K-semistable. Roughly speaking, this is because the degenerations are induced by lc places of basis type divisors, hence the degenerations of basis type divisors remain log canonical. Alternatively, it follows from the vanishing of the generalized Futaki invariants, a property that passes on to the induced degenerations \(x_{i}\in(X_{i};\xi_{i})\rightsquigarrow y\in(Y;\xi_{Y})\).

## 5. Boundedness of singularities

One of the recent achievements in K-stability of Fano varieties is the construction of the K-moduli space, a proper moduli space that parametrizes K-polystable Fano varieties. A detailed account on this topic is [14]. Among other things, the content of the K-moduli theorem can be summarized as follows.

**Theorem 5.1**.: _For any positive integer \(n\) and any positive real number \(\varepsilon\), there exists a projective moduli space parametrizing K-polystable Fano varieties of dimension \(n\) and anti-canonical volume at least \(\varepsilon\)._

There should be a local analog of the K-moduli for klt singularities. In general, klt singularities may have infinite dimensional deformation spaces, so we certainly need to restrict the class of singularities we consider in order to have a reasonably behaved moduli space. The Stable Degeneration Conjecture and the surrounding stability theory of klt singularities suggest the following refinement of the local-to-global correspondence.
\begin{tabular}{c|c} global & local \\ \hline K-semi/polystable Fano varieties \(V\) & K-semi/polystable Fano cone singularities \(x\in(X;\xi)\) \\ \hline anti-canonical volume \((-K_{V})^{\dim V}\) & local volume \(\widehat{\operatorname{vol}}(x,X)\) \\ \end{tabular}

In particular, it seems reasonable to expect that for any positive integer \(n\) and any real number \(\varepsilon>0\), there exists a projective moduli space parametrizing K-polystable Fano cone singularities of dimension \(n\) and local volume at least \(\varepsilon\). While many parts of K-moduli theory should carry over to the local setting, the boundedness part remains quite mysterious. In this section, we discuss what is known so far about boundedness of klt singularities and what are the challenges.

We say that a given set \(\mathcal{S}\) of klt singularities is _bounded_ if there exists a \(\mathbb{Q}\)-Gorenstein family \(B\subseteq\mathcal{X}\to B\) of klt singularities such that every singularity \(x\in X\) in the set \(\mathcal{S}\) is isomorphic to some fiber of \(B\subseteq\mathcal{X}\to B\)19. For boundedness of Fano cone singularities, we will also require that there is a fiberwise torus action on the family \(B\subseteq\mathcal{X}\to B\) such that the Reeb vectors lie in the Lie algebra of the acting torus20. The following is a more precise formulation of the Boundedness Conjecture for Fano cone singularities.

Footnote 19: We do not require that all the fibers belong to the set \(\mathcal{S}\).

Footnote 20: It is quite likely that this additional condition is automatic once the underlying singularities are bounded. This is related to the following question: suppose \(\mathcal{X}\to B\) is a flat (but not necessarily projective) family of algebraic varieties and assume that there is a Zariski dense subset \(B_{0}\) of \(B\) such that the fibers over \(B_{0}\) admit a \(\mathbb{T}\)-action for some fixed torus \(\mathbb{T}\); then does the family \(\mathcal{X}\to B\) have a fiberwise torus action, possibly after replacing \(B\) by a dense open subset?

**Conjecture 5.2** ([22, Conjecture 1.7]).: _Fix a positive integer \(n\) and some real number \(\varepsilon>0\). Then the set of \(n\)-dimensional K-semistable Fano cone singularities \(x\in(X;\xi)\) with local volume \(\widehat{\operatorname{vol}}(x,X)\geq\varepsilon\) is bounded._

A variant of this conjecture is the special boundedness conjecture [10], whose weaker version predicts that \(n\)-dimensional klt singularities \(x\in X\) with \(\widehat{\operatorname{vol}}(x,X)\geq\varepsilon\) are bounded up to special degenerations. There is also a related ACC conjecture for local volumes, stating that in any fixed dimension, the set of possible local volumes is discrete away from zero. They both follow from the Stable Degeneration Conjecture and the boundedness conjecture above, as the stable degeneration of a klt singularity preserves the local volume (Theorem 4.9). For the ACC conjecture, we also need the constructibility of local volumes in \(\mathbb{Q}\)-Gorenstein families (Theorem 4.14).

It is not hard to verify the Boundedness Conjecture in dimension two. In fact, klt singularities in dimension two are the same as quotient singularities, and K-semistable Fano cone singularities are the linear quotients, i.e. they are isomorphic to \(\mathbb{C}^{2}/G\) for some finite group \(G\subseteq GL(2,\mathbb{C})\) that does not contain any pseudoreflections.
By the finite degree formula (Theorem 4.7), we see that their local volume is \(\frac{4}{|G|}\), hence there are only finitely many isomorphism classes if the local volume is bounded away from zero. When the dimension is at least three, a full classification of klt singularities is no longer available, and the Boundedness Conjecture becomes much harder.

Let us also draw some comparison with the corresponding boundedness result for Fano varieties, which is also part of the K-moduli theorem (Theorem 5.1).

**Theorem 5.3** ([10, 20, 21]).: _Fix a positive integer \(n\) and some real number \(\varepsilon>0\). Then the set of \(n\)-dimensional K-semistable Fano varieties \(V\) with volume \((-K_{V})^{n}\geq\varepsilon\) forms a bounded family._

Consider the special case of the Boundedness Conjecture concerning orbifold cones \(o\in C_{a}(V,L)\). From Example 2.20, we know that the orbifold cone singularity is K-semistable if and only if the base \(V\) is a K-semistable Fano variety. While it is very tempting to relate the Boundedness Conjecture in this case to the boundedness of Fano varieties, a direct computation shows that the local volume \(\widehat{\operatorname{vol}}(o,C_{a}(V,L))\) is only a multiple of the anti-canonical volume of \(V\). Namely, we have \[\widehat{\operatorname{vol}}(o,C_{a}(V,L))=r\cdot\operatorname{vol}(-K_{V})\] where \(r>0\) is the rational number satisfying \(-K_{V}\sim_{\mathbb{Q}}rL\). The largest possible value of \(r\) we can get as we vary the Weil divisor \(L\) is called the Weil index of the Fano variety \(V\). Note that the Weil index of a Fano variety can be arbitrarily big in a fixed dimension; for example, the Weil index of the weighted projective space \(\mathbb{P}(a_{0},a_{1},\dots,a_{n})\) is \(a_{0}+a_{1}+\dots+a_{n}\). Even if we assume that the Fano variety is K-semistable, there does not seem to be any particular reason for the Weil index to be bounded. Thus already in this special case, it is not clear how to deduce Conjecture 5.2 from Theorem 5.3. In some sense, the presence of the Weil index is one of the major difficulties in the study of the Boundedness Conjecture for klt singularities.

Some partial progress on the Boundedness Conjecture has been made in [11, 13, 14, 15, 16]. In particular, the conjecture is known for hypersurface singularities, for threefold singularities, and for singularities of complexity at most one. The works [15, 16] also introduce an approach to the Boundedness Conjecture through the minimal log discrepancies of Kollar components.

**Definition 5.4** ([16]).: Let \(x\in X\) be a klt singularity. The minimal log discrepancy of Kollar components, denoted \(\operatorname{mld}^{\mathrm{K}}(x,X)\), is the smallest log discrepancy \(A_{X}(E)\) as \(E\) varies among all Kollar components over \(x\in X\).

One of the main results of [15, 16] is the following boundedness criterion.

**Theorem 5.5**.: _Fix a positive integer \(n\) and consider a set \(\mathcal{S}\) of \(n\)-dimensional K-semistable Fano cone singularities. Then \(\mathcal{S}\) is bounded if and only if there exist some \(\varepsilon,A>0\) such that_ \[\widehat{\operatorname{vol}}(x,X)\geq\varepsilon\quad\text{and}\quad\operatorname{mld}^{\mathrm{K}}(x,X)\leq A\] _for all \(x\in(X;\xi)\) in \(\mathcal{S}\)._

The idea of the proof comes from the following observation. Given a K-semistable Fano cone singularity \(x\in(X;\xi)\), each rational Reeb vector on \(X\) induces a projective orbifold cone compactification \(\overline{X}\) of \(X\).
As the Reeb vector approximates the K-semistable polarization \(\xi\), the volumes \(\operatorname{vol}(-(K_{\overline{X}}+D))\) of the log Fano pairs \((\overline{X},D)\) (where \(D=\overline{X}\setminus X\) is the divisor at infinity) approximate the local volume \(\widehat{\operatorname{vol}}(x,X)\) of the singularity. In particular, the anti-canonical volume of \(\overline{X}\) is bounded. One should note that the compactification \(\overline{X}\) is not unique, and in general not bounded, as illustrated by the following example.

**Example 5.6**.: Let \(a_{1},\dots,a_{n}\in\mathbb{N}^{*}\) be pairwise coprime integers. Then \(\xi=(a_{1},\dots,a_{n})\in\mathbb{N}^{n}\) gives a polarization of the Fano cone singularity \(0\in\mathbb{A}^{n}\); it generates the \(\mathbb{G}_{m}\)-action with weights \(a_{1},\dots,a_{n}\) on the coordinates. This endows \(\mathbb{A}^{n}\) with an affine orbifold cone structure \(C_{a}(V,L)\) where \(V=\mathbb{P}(a_{1},\dots,a_{n})\) and \(L=\mathcal{O}_{V}(1)\). The associated projective orbifold cone is \(\overline{X}=\mathbb{P}(1,a_{1},\dots,a_{n})\), and these do not form a bounded family as the weights \(a_{i}\) vary.

Nonetheless, one can still extract some weaker boundedness in the above example: at least the linear system \(|-K_{\overline{X}}|\) always defines a birational map that is an embedding at the vertex \([1:0:\cdots:0]\). In fact, if \([s:x_{1}:\cdots:x_{n}]\) are the weighted homogeneous coordinates of \(\overline{X}\), then for every \(i\in\{1,\dots,n\}\) there exists some \(k_{i}\in\mathbb{N}\) such that \(s^{k_{i}}x_{i}\in H^{0}(-K_{\overline{X}})\) (this is possible because \(s\) has weight \(1\)); it is not hard to see that the sub linear system spanned by the \(s^{k_{i}}x_{i}\) (\(i=0,\dots,n\), where we set \(x_{0}:=1\)) is base point free and restricts to an embedding on the affine chart \(\mathbb{A}^{n}=\overline{X}\setminus(s=0)\). In general, we have an effective birationality result ([23, Proposition 3.8]): there exists a positive integer \(m\) depending only on \(\widehat{\operatorname{vol}}(x,X)\) and \(\operatorname{mld}^{\operatorname{K}}(x,X)\) such that \(|-mK_{\overline{X}}|\) induces a birational map that restricts to an embedding on \(X\). This implies the boundedness of the Fano cone singularity \(x\in(X;\xi)\).

In some situations, one can use classification results to verify the boundedness of \(\operatorname{mld}^{\operatorname{K}}\) and hence prove the Boundedness Conjecture 5.2 using Theorem 5.5. This is the case for singularities of complexity at most one [10], and for threefold singularities whose local volumes are bounded away from zero [23]. However, in general it is not yet clear what to expect about the behaviour of \(\operatorname{mld}^{\operatorname{K}}\).

## 6. Questions and future directions

In this last section, we collect some conjectures and open questions about the stability and boundedness of klt singularities, hoping that the readers will become motivated to work on some of them.

### Boundedness

One of the major challenges in this topic is the Boundedness Conjecture (Conjecture 5.2). We restate it here for the readers' convenience.

**Conjecture 6.1** (Boundedness).: _Fix a positive integer \(n\) and some real number \(\varepsilon>0\). Then the set of \(n\)-dimensional K-semistable Fano cone singularities \(x\in(X;\xi)\) with local volume \(\widehat{\operatorname{vol}}(x,X)\geq\varepsilon\) is bounded._

As discussed in Section 5, the Boundedness Conjecture has several interesting consequences. Some of these might be easier to study.
The first one is the Special Boundedness Conjecture.

**Conjecture 6.2** (Special Boundedness).: _Fix a positive integer \(n\) and some real number \(\varepsilon>0\). Then the set of \(n\)-dimensional klt singularities \(x\in X\) with local volume \(\widehat{\operatorname{vol}}(x,X)\geq\varepsilon\) is bounded up to special degenerations._

Since the multiplicity and embedded dimension of a singularity are bounded in a given family and are non-decreasing under specialization, another consequence of the Boundedness Conjecture is the boundedness of these invariants.

**Conjecture 6.3**.: _Fix a positive integer \(n\) and some real number \(\varepsilon>0\). Then there exists some constant \(M\) depending only on \(n,\varepsilon\) such that for all \(n\)-dimensional klt singularities \(x\in X\) with local volume \(\widehat{\operatorname{vol}}(x,X)\geq\varepsilon\), we have \(\operatorname{mult}_{x}X\leq M\) and the embedded dimension of \(x\in X\) is also at most \(M\)._

Apart from the stable degenerations, there are other ways to produce special degenerations of klt singularities. Essentially, special test configurations are in one-to-one correspondence with Kollar components. By the Borisov-Alexeev-Borisov Conjecture (now a theorem of Birkar [1, 2]), the Kollar components (viewed as log Fano varieties) belong to a bounded family if and only if their minimal log discrepancies (mld) are bounded away from zero. This motivates a stronger version of the Special Boundedness Conjecture (see [10, Conjecture 1.6]).

**Conjecture 6.4**.: _Fix a positive integer \(n\) and some real number \(\varepsilon>0\). Then there exists some constant \(\delta>0\) depending only on \(n,\varepsilon\) such that any \(n\)-dimensional klt singularity \(x\in X\) with local volume \(\widehat{\operatorname{vol}}(x,X)\geq\varepsilon\) admits a \(\delta\)-plt blowup._

Here we say the klt singularity \(x\in X\) admits a \(\delta\)-plt blowup if there exists a plt blowup \(Y\to X\) of a Kollar component \(E\) such that the pair \((Y,E)\) is \(\delta\)-plt, i.e., \(A_{Y,E}(F)>\delta\) for all prime divisors \(F\) that are exceptional over \(Y\). Conjecture 6.4 has been verified up to dimension three, see [10, 2]. We remark that while Conjectures 5.2 and 6.4 both imply the Special Boundedness Conjecture, it is not clear whether either of the two implies the other.

Another consequence of the Boundedness Conjecture is the discreteness of the local volume away from zero. Sometimes this is also referred to as the ACC Conjecture for local volumes21. See [11, Question 4.3] and [10, Question 6.12].

Footnote 21: If we consider klt pairs \((X,D)\) with DCC coefficients, then their local volumes are expected to form an ACC set, hence the name of the conjecture.

**Conjecture 6.5** (ACC).: _Fix a positive integer \(n\). Then the set of all possible local volumes of \(n\)-dimensional klt singularities is discrete away from zero._

By Theorem 2.22, the largest local volume is achieved by a smooth point. Assuming the above ACC conjecture, a natural question is what should be the second largest local volume. A natural prediction is given by the ODP volume gap conjecture, see [14, Conjecture 5.5] and [11, Conjecture 4.5].

**Conjecture 6.6** (ODP volume gap).: _The second largest volume of an \(n\)-dimensional klt singularity is \(2(n-1)^{n}\), and it is achieved only by the ordinary double point._

By _loc. cit._, this conjecture implies that the K-moduli space of cubic hypersurfaces coincides with their GIT moduli space.
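To see where the value \(2(n-1)^{n}\) comes from, here is the standard candidate computation (recorded for convenience; it is not spelled out in the text). Let \(x\in X=\{z_{1}^{2}+\dots+z_{n+1}^{2}=0\}\subseteq\mathbb{A}^{n+1}\) be the ordinary double point and let \(v\) be the divisorial valuation given by the exceptional divisor of the blowup of \(x\). Then

\[A_{X}(v)=n-1,\qquad\operatorname{vol}(v)=\operatorname{mult}_{x}X=2,\qquad\text{hence}\quad\widehat{\operatorname{vol}}(x,X)\leq A_{X}(v)^{n}\cdot\operatorname{vol}(v)=2(n-1)^{n},\]

and equality indeed holds since the quadric cone is a K-semistable Fano cone singularity (cf. Example 2.20), so that \(v\) is the minimizer.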
On the other hand, the existence of a volume gap already seems nontrivial.

**Conjecture 6.7**.: _There exists some constant \(\varepsilon>0\) such that the only \(n\)-dimensional klt singularity with local volume at least \((1-\varepsilon)n^{n}\) is the smooth point._

The ODP volume gap conjecture also has a global analog. Recall that by Theorem 5.3, the volumes of K-semistable Fano varieties are known to be discrete away from zero. A theorem of Fujita [19] says that the projective space has the largest volume among them in any fixed dimension.

**Conjecture 6.8** (Second largest volume).: _The second largest anti-canonical volume of an \(n\)-dimensional K-semistable Fano variety is \(2n^{n}\), and it is achieved only by \(\mathbb{P}^{1}\times\mathbb{P}^{n-1}\) and the smooth quadric hypersurface in \(\mathbb{P}^{n+1}\)._

An interesting (but also mysterious) feature of the global version is that there are two Fano varieties with the second largest volume. On the other hand, because one of them, \(\mathbb{P}^{1}\times\mathbb{P}^{n-1}\), is toric, the toric case of Conjecture 6.8 is also interesting in its own right. It might be approachable using combinatorial arguments and would provide further evidence for Conjecture 6.8.

**Conjecture 6.9**.: _Among \(n\)-dimensional K-semistable toric Fano varieties, \(\mathbb{P}^{1}\times\mathbb{P}^{n-1}\) has the second largest anti-canonical volume._

Going back to the Boundedness Conjecture, if we compare it with Theorem 5.5, we are naturally led to the following speculation, see [10, Conjecture 1.7].

**Conjecture 6.10**.: _Fix a positive integer \(n\) and some real number \(\varepsilon>0\). Then there exists some constant \(A\) depending only on \(n,\varepsilon\) such that_ \[\mathrm{mld}^{\mathrm{K}}(x,X)\leq A\] _for any \(n\)-dimensional klt singularity \(x\in X\) with \(\widehat{\mathrm{vol}}(x,X)\geq\varepsilon\)._

Shokurov has conjectured that the set of minimal log discrepancies (mld) satisfies the ACC [11]. In particular, there should be an upper bound on the mlds that only depends on the dimension. This is known as the boundedness (BDD) conjecture for mld. By analogy, we are tempted to ask whether the same holds for \(\mathrm{mld}^{\mathrm{K}}\), and in particular, whether the lower bound on the local volume is really necessary in Conjecture 6.10.

**Question 6.11** (ACC and BDD for \(\mathrm{mld}^{\mathrm{K}}\)).: Fix a dimension \(n\). Does the set of \(\mathrm{mld}^{\mathrm{K}}\) of \(n\)-dimensional klt singularities satisfy the ACC? Is there a constant \(A\) depending only on \(n\) such that \[\mathrm{mld}^{\mathrm{K}}(x,X)\leq A\] for any \(n\)-dimensional klt singularity \(x\in X\)?

Perhaps what makes this question hard to study is the lack of understanding of the Kollar components that minimize the log discrepancy function.

**Question 6.12**.: Is there an intrinsic way to tell whether a given Kollar component computes \(\mathrm{mld}^{\mathrm{K}}\)?

There are some klt singularities with a unique Kollar component. They are characterized by the property that the induced log Fano pair \((E,\Delta_{E})\) on the Kollar component \(E\) (see Section 3) is weakly special, i.e. \((E,\Delta_{E}+D)\) is log canonical for any effective \(\mathbb{Q}\)-divisor \(D\sim_{\mathbb{Q}}-(K_{E}+\Delta_{E})\). Consider orbifold cones over weakly special Fano varieties as a special case. Their \(\mathrm{mld}^{\mathrm{K}}\) are closely related to the Weil index22 of the Fano varieties.
We may ask:

Footnote 22: Recall that the Weil index of a Fano variety \(V\) is the largest integer \(q\) such that \(-K_{V}\sim_{\mathbb{Q}}qL\) for some Weil divisor \(L\) on \(V\).

**Question 6.13**.: Fix a dimension \(n\). Can the Weil index of \(n\)-dimensional weakly special Fano varieties be arbitrarily big?

### Local volumes

The local volume is a delicate invariant of a klt singularity, and it is still quite mysterious how it behaves under the steps of the minimal model program, especially flips.

**Question 6.14**.: Does the local volume satisfy some type of monotonicity under flips?

It is not clear what kind of monotonicity should hold. On one hand, since the flip improves the singularity in general, we may hope that the local volume increases under flips. On the other hand, one can also find toric flips \(X_{1}\dashrightarrow X_{2}\) such that \[\min_{x_{1}\in X_{1}}\widehat{\operatorname{vol}}(x_{1},X_{1})>\min_{x_{2}\in X_{2}}\widehat{\operatorname{vol}}(x_{2},X_{2}).\] It is possible that the correct formulation of the monotonicity should involve some motivic version of the local volumes.

The local volumes are also expected to relate to singularity invariants in positive characteristics. Given a klt singularity \(x\in X\) in characteristic \(0\), we may consider its reduction \(x_{p}\in X_{p}\) modulo a prime \(p\). From [11, 12], we know that the mod \(p\) reduction \(x_{p}\in X_{p}\) is strongly \(F\)-regular when \(p\gg 0\). An interesting invariant of a strongly \(F\)-regular singularity \(x\in X\) in positive characteristic is its \(F\)-signature \(s(x,X)\) (see [10, 13]), and a folklore question in commutative algebra is to find geometric interpretations of \(\lim_{p\to\infty}s(x_{p},X_{p})\). Partly motivated by this question, a comparison result between the local volume and the \(F\)-signature is conjectured in [10, Section 6.3.1]. Here we state a modified version.

**Conjecture 6.15**.: _For any \(n\)-dimensional klt singularity \(x\in X\) in characteristic \(0\), let \(x_{p}\in X_{p}\) be its reduction mod \(p\gg 0\). Then_ \[\liminf_{p\to\infty}s(x_{p},X_{p})\geq\frac{\widehat{\operatorname{vol}}(x,X)}{n^{n}}.\]

The right hand side is also known as the volume density of the singularity. It is not hard to see that the inequality becomes an equality when \(x\in X\) is smooth. A weaker conjecture would replace the constant \(n^{n}\) by some positive constant depending only on the dimension. If the (weaker) conjecture is true, it will give a positive answer to [14, Question 5.9], which asks whether the \(F\)-signatures \(s(x_{p},X_{p})\) have uniform lower bounds as \(p\to\infty\). One motivation for Conjecture 6.15 is the finite degree formula for the \(F\)-signature, which is reminiscent of the finite degree formula for local volumes (Theorem 4.7).

**Theorem 6.16** ([14, Theorem B]).: _Let \(f\colon(y\in Y)\to(x\in X)\) be a finite quasi-etale morphism between strongly \(F\)-regular singularities. Then_ \[s(y,Y)=\deg(f)\cdot s(x,X).\]

Note that [14, Theorem 4.4] proves a much more general finite degree formula for crepant morphisms between strongly \(F\)-regular pairs. In contrast, the finite degree formula for local volumes is currently restricted to _Galois_ morphisms. It would be desirable to resolve this discrepancy.

**Conjecture 6.17**.: _Let \(f\colon\big{(}y\in(Y,\Delta_{Y})\big{)}\to\big{(}x\in(X,\Delta)\big{)}\) be a finite surjective morphism between klt pairs such that \(f^{*}(K_{X}+\Delta)=K_{Y}+\Delta_{Y}\).
Then_ \[\widehat{\operatorname{vol}}(y,Y,\Delta_{Y})=\deg(f)\cdot\widehat{\operatorname{vol}}(x,X,\Delta).\]

One obvious subtlety is that if we pass to the Galois closure, the boundary divisor \(\Delta_{Y}\) may have negative coefficients. Perhaps there is some possibility of developing a stability theory for sub-pairs. Guided by Conjecture 6.15, it seems reasonable to believe that many nice properties of the local volume (and even the stability theory itself) carry over to positive characteristics. For example, one can ask:

**Question 6.18**.: Fix a dimension \(n\) and some real number \(\varepsilon>0\). Is the set of strongly \(F\)-regular singularities \(x\in X\) (in characteristic \(p\)) with \(F\)-signature \(s(x,X)\geq\varepsilon\) bounded up to special degenerations?

**Question 6.19**.: In a fixed dimension \(n\) and characteristic \(p\), are the possible values of \(F\)-signatures discrete away from zero? What is the second largest \(F\)-signature?

### Miscellaneous

There are some basic properties of the normalized volume function that are still not fully understood. The following question is taken from [11].

**Question 6.20**.: Is the normalized volume function lower semi-continuous on the valuation space?

A formal arc through a singularity \(x\in X\) is a morphism \(\phi\colon\mathbf{Spec}(\Bbbk[\![t]\!])\to X\) such that \(\phi(0)=x\). The arc space of the singularity, which parameterizes the formal arcs, is an essential tool in the theory of motivic integration and is also quite useful in the study of invariants in birational geometry, see e.g. [10, 11, 12, 13] for some applications of this kind. A natural question (communicated to us by Chenyang Xu) is whether the local volumes of singularities have interpretations through the arc space. Note that [11] have defined volumes for subsets of the arc space.

**Question 6.21**.: Can the local volume be defined using invariants of the arc space?

_Remark 6.22_.: In a somewhat related direction, one can also ask whether the local volume only depends on the contact geometry of the link of the klt singularity. The answer is no in general. The reason is that there are smooth families of Fano manifolds whose general fibers are K-semistable while some special fibers are K-unstable (one explicit example is the family 2.26 of Fano threefolds, see [1, Section 5.10]). By Example 2.20, this implies that the local volume is not constant on the corresponding family of cones. On the other hand, since the original family is smooth, the fibers are symplectomorphic and therefore the links of the cone singularities have isomorphic contact structures.

On Fano varieties, [13, Theorem 4.5] relates the finite generation property of lc places of complements to the linearity of the \(S\)-invariants. In the applications to explicit examples, this is the easiest way to check finite generation. There might be a local analog.

**Question 6.23**.: Find a finite generation criterion (possibly in terms of the \(S\)-invariant or other geometric conditions) for general lc places of complements of a klt singularity.

Since Kollar valuations are the higher rank versions of Kollar components, we may ask whether some of the known properties of Kollar components have higher rank analogs. For example:

**Question 6.24**.: For any graded sequence \(\mathfrak{a}_{\bullet}\) of \(\mathfrak{m}_{x}\)-primary ideals on a klt singularity \(x\in X\), is the log canonical threshold \(\operatorname{lct}(\mathfrak{a}_{\bullet})\) always computed by some Kollar valuation?
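As a quick sanity check on this last question (a toy rank-one example of our own), take the smooth point \(0\in\mathbb{A}^{n}\) and the graded sequence \(\mathfrak{a}_{\bullet}=\{\mathfrak{m}_{0}^{m}\}_{m\geq 1}\). Then

\[\operatorname{lct}(\mathfrak{a}_{\bullet})=\lim_{m\to\infty}m\cdot\operatorname{lct}(\mathfrak{m}_{0}^{m})=\operatorname{lct}(\mathfrak{m}_{0})=n=\frac{A_{\mathbb{A}^{n}}(\operatorname{ord}_{0})}{\operatorname{ord}_{0}(\mathfrak{a}_{\bullet})},\]

so the log canonical threshold is computed by \(\operatorname{ord}_{0}\), which is a Kollar component (the exceptional divisor of the ordinary blowup). The content of the question is whether such a computing valuation, now allowed to have higher rank, always exists among Kollar valuations.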
2307.04550
Gradient Surgery for One-shot Unlearning on Generative Model
Recent regulations on the right to be forgotten have sparked considerable interest in unlearning pre-trained machine learning models. While approximating the straightforward yet expensive approach of retraining from scratch, recent machine unlearning methods unlearn a sample by updating the weights to remove its influence on the weight parameters. In this paper, we introduce a simple yet effective approach to remove the influence of given data on a deep generative model. Inspired by work in multi-task learning, we propose to manipulate gradients to regularize the interplay of influence among samples by projecting gradients onto the normal plane of the gradients to be retained. Our method is agnostic to the statistics of the removal samples, outperforming existing baselines while providing, for the first time, a theoretical analysis for unlearning a generative model.
Seohui Bae, Seoyoon Kim, Hyemin Jung, Woohyung Lim
2023-07-10T13:29:23Z
http://arxiv.org/abs/2307.04550v2
# Gradient Surgery for One-shot Unlearning on Generative Model

###### Abstract

Recent regulations on the right to be forgotten have sparked considerable interest in unlearning pre-trained machine learning models. While approximating the straightforward yet expensive approach of retraining from scratch, recent machine unlearning methods unlearn a sample by updating the weights to remove its influence on the weight parameters. In this paper, we introduce a simple yet effective approach to remove the influence of given data on a deep generative model. Inspired by work in multi-task learning, we propose to manipulate gradients to regularize the interplay of influence among samples by projecting gradients onto the normal plane of the gradients to be retained. Our method is agnostic to the statistics of the removal samples, outperforming existing baselines while providing, for the first time, a theoretical analysis for unlearning a generative model.

## 1 Introduction

Suppose a user wants to get rid of his/her face image anywhere in your facial image generation application - including the database and the generative model on which it is trained. Is the expensive retrain-from-scratch the only solution for this kind of request? As the use of personal data has increased in training machine learning models for online services, meeting individual demands for privacy, as well as the rapid changes in legislation such as the General Data Protection Regulation (GDPR), is inevitable for ML service providers nowadays. A request for the 'Right-To-Be-Forgotten (RTBF)' might be one-time or come in series, scaling from a single feature to a number of tasks, and querying a single instance or multiple ones.

A straightforward solution for unlearning a single data point might be to retrain the generative model from scratch without the data of interest. This approach, however, is intractable in practice considering the grand size and complexity of the latest generative models (Rombach et al., 2022; Child, 2020) and the continual requests for removal. Unlearning, therefore, aims to approximate this straightforward-yet-expensive solution of retrain-from-scratch in a time- and computation-efficient manner. First-order data-influence-based approximate unlearning is currently considered the state-of-the-art approach to unlearning machine learning models in general. Grounded in the notion of data influence (Koh and Liang, 2017), a simple one-step Newton update certifies a sufficiently small bound relative to retrain-from-scratch (Guo et al., 2020). Nonetheless, those relaxations are infeasible for non-convex deep neural networks (_e.g._, generative models), where the gap is not certifiably bounded and the process of computing the inverse of the Hessian is intractable. Several recent works have also affirmed that these relaxed alternatives perform poorly on deep neural networks (Golatkar et al., 2021; Liu et al., 2022), and their counterparts on generative models have not been explored yet.

**Contribution** In this work, we propose a novel one-shot unlearning method for unlearning samples from a pre-trained deep generative model. Relaxing the definition of the influence function on parameters in machine unlearning (Koh and Liang, 2017; Basu et al., 2020), we focus on the influence of a single data point on the _test loss_ of the others and propose a simple and cost-effective method to minimize this inter-dependent influence in order to approximate retrain-from-scratch. We summarize our contributions as follows:

* We propose to annul the influence of samples on generations with simple gradient manipulation.
* Agnostic to the statistics of the removals and thus applicable to any removal target, such as a single data point, a class, or a data feature.
* Grounded by a theoretical analysis bridging standard machine unlearning to generative models.

## 2 Gradient Surgery for One-shot Data Removals on Generative Model

**Notations** Let \(D=\{\textbf{x}_{i}\}_{i=1}^{N}\subseteq\mathcal{X}\) be the training data where \(\textbf{x}_{i}\in\mathcal{X}\) is an input. Let \(D_{f}\subseteq D\) be the subset of training data that is to be forgotten (_i.e._, the forget set) and \(D_{r}=D\setminus D_{f}\) be the remaining training data whose information we want to retain. Recall that the goal of unlearning is to approximate the deep generative model retrained from scratch with only \(D_{r}\), which we denote as \(f_{\theta^{*}}\) parameterized by \(\theta^{*}\). Then, our goal is to unlearn \(D_{f}\subseteq D\) from a converged pre-trained generator \(f_{\hat{\theta}}\) by updating the parameter \(\hat{\theta}\rightarrow\theta^{-}\), where \(\theta^{-}\) represents the updated parameters obtained after unlearning.

**Proposed method** Given a generative model that models the distribution of the training data \(p(D)\), a successfully unlearned model that unlearns \(D_{f}\) would be one that approximates \(p(D_{r})\), the distribution of \(D_{r}\), as if it had never seen \(D_{f}\). The only case where the unlearned model generates samples similar to \(x\in D_{f}\) is when \(p(D_{f})\) and \(p(D_{r})\) happen to be very close from the beginning. Under this goal, a straightforward objective given the pre-trained model approximating \(p(D)\) is to make the generated output _deviate from \(p(D_{f})\)_, which can simply be formulated as the following:

\[\max_{\theta}\mathbb{E}_{(x,y)\sim D_{f}}\mathcal{L}(\theta,x,y) \tag{1}\]

where \(\mathcal{L}\) denotes the training loss (_e.g._, reconstruction loss). Meanwhile, assume we could _define_ the influence of a single data point on the weight parameters and the generation result. Then, unlearning this data point would simply amount to updating the weight parameters in a direction that removes the data influence. Toward this, we start by defining the data influence on the weight parameters and approximating it in a feasible form as introduced in Koh and Liang (2017):

**Definition 2.1**.: Given upweighting \(z\) by some small \(\epsilon\) and the new parameters \(\hat{\theta}_{\epsilon,z}\stackrel{\mathrm{def}}{=}\operatorname{argmin}_{\theta\in\Theta}\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}(z_{i},\theta)+\epsilon\mathcal{L}(z,\theta)\), the influence of upweighting \(z\) on the parameter \(\hat{\theta}\) is given by

\[I_{up,param}(z)\stackrel{\mathrm{def}}{=}\frac{d\hat{\theta}_{\epsilon,z}}{d\epsilon}\Big{|}_{\epsilon=0}=-H_{\hat{\theta}}^{-1}\nabla_{\theta}L(z,\hat{\theta}) \tag{2}\]

where \(H_{\hat{\theta}}=\frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}^{2}L(z_{i},\hat{\theta})\) is the Hessian and is positive definite (PD) by assumption.

By forming a quadratic approximation to the empirical risk around \(\hat{\theta}\), the influence of a data point on the weight parameters is formulated as a single Newton step (see details in the appendix of Koh and Liang (2017)), which is consistent with the objective mentioned in Equation 1. Although numerous works have verified that this data-influence-based approach works well on shallow, discriminative models (Guo et al., 2020; Golatkar et al., 2020), we cannot apply it directly to our generative model due to the intractable computation and the lack of guarantees on the bounds.
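To make this obstacle concrete, below is a minimal sketch (our own illustration, not from the paper) of the one-step Newton removal behind Definition 2.1, on a tiny ridge-regression problem where the Hessian is available in closed form; for a deep generative model, forming and inverting \(H_{\hat{\theta}}\) is precisely the step that becomes intractable.

```python
import numpy as np

# Toy convex setup (illustrative only): ridge regression, where the empirical
# risk (1/n)||X theta - y||^2 + lam ||theta||^2 has a closed-form Hessian.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
n, lam = len(X), 1e-2

H = 2 * (X.T @ X) / n + 2 * lam * np.eye(5)      # Hessian of the empirical risk
theta_hat = np.linalg.solve(H, 2 * X.T @ y / n)  # "pre-trained" parameters

# Influence-style removal of sample z = (X[0], y[0]): one Newton step,
# theta_minus ~= theta_hat + (1/n) H^{-1} grad_theta L(z, theta_hat).
g = 2 * X[0] * (X[0] @ theta_hat - y[0])
theta_minus = theta_hat + np.linalg.solve(H, g) / n

# Reference: exact retraining without sample 0. The gap is small here because
# the problem is convex, which is exactly what fails for deep generative models.
Xr, yr = X[1:], y[1:]
Hr = 2 * (Xr.T @ Xr) / len(Xr) + 2 * lam * np.eye(5)
theta_star = np.linalg.solve(Hr, 2 * Xr.T @ yr / len(Xr))
print(np.linalg.norm(theta_minus - theta_star))
```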
To address this problem, we re-purpose our objective to minimize the **data influence on generation**. Grounded by recent works (Basu et al., 2020; Sun et al., 2023), we find that we can achieve this on a generative model simply by diminishing the gradient conflict, as follows:

**Theorem 2.2**.: _Reducing the influence of samples \(z\in D_{f}\) in the training data with regard to the test loss is formulated as:_

\[I^{\prime}_{up,loss}(D_{f},z^{\prime})\to 0, \tag{3}\]

_which is equivalent to_

\[\nabla_{\theta}\mathcal{L}(z^{\prime},\hat{\theta})^{T}\sum_{z\in D_{f}}\nabla_{\theta}\mathcal{L}(z,\hat{\theta})\to 0 \tag{4}\]

_where \(z^{\prime}\in D_{r}\) in our scenario._

Informally, we can achieve this by alleviating the conflict between the two gradients \(\nabla_{\theta}\mathcal{L}(z^{\prime},\hat{\theta})\) and \(\nabla_{\theta}\mathcal{L}(z,\hat{\theta})\), thereby driving the inner product of the two gradients to zero. This reminds us of the classic gradient manipulation techniques for conflicting gradients in the multi-task learning scenario (Yu et al., 2020; Liu et al., 2021; Guangyuan et al.). Specifically, we project the gradient of a forget sample \(x_{f}\in D_{f}\) onto the normal plane of the gradient of the retain samples \(x_{r}\in D_{r}\) to meet \(\mathcal{I}_{up,loss}(x_{f},x_{r})=0\). This orthogonal projection manipulates the original gradient \(\mathbf{g}_{f}=\nabla\mathcal{L}_{f}\) of the forget sample with respect to the weight parameters, which suffices to unlearn a sample \(x_{f}\in D_{f}\): \(\mathbf{g}_{f}=\mathbf{g}_{f}-\frac{\mathbf{g}_{f}\cdot\mathbf{g}_{r}}{\|\mathbf{g}_{r}\|^{2}}\mathbf{g}_{r}\). Then, the unlearned model \(\theta^{-}\) is obtained after the following gradient update: \(\theta^{-}=\hat{\theta}-\eta\mathbf{g}_{f}\).

## 3 Experiments

We verify our idea under numerous data removal requests. Note that measuring and evaluating a generative model's unlearning of _a single data point_ is non-trivial. Even comparing a pre-trained generative model trained _with_ a particular data point against one trained _without_ it, simply by looking at the outputs of training (_e.g._, generated images or weights), is intractable in the case of a deep generative model to the best of our knowledge (van den Burg and Williams, 2021). To make the problem verifiable, in this work, we experiment with unlearning a group of samples sharing similar statistics in the training data - either belonging to a particular class or having a distinctive semantic feature. In this case, one can evaluate the generated output by measuring the number of samples exhibiting that class or semantic feature; a successfully unlearned model would generate nearly zero samples having these features. Although we are not able to cover unlearning a single data point in this work, note that, in essence, our method could seamlessly approximate the generative model trained without a single data point, and we look forward to exploring and adjusting a feasible evaluation of this scenario in the near future.

### Experimental Setup

**Scenarios** We unlearn either a whole class or a notable feature shared by a group of samples. In the experiments, we use a subset of MNIST (Alsaafin and Elnagar, 2017) with samples of classes 1, 3, and 8, and 64x64 CelebA (Liu et al., 2015) to train and unlearn a vanilla VAE (Kingma and Welling, 2013).

**Evaluation** We evaluate our method under the following three criteria: privacy guarantee, utility guarantee, and cost. The privacy guarantee includes the feature ratio (_fratio_), the ratio of generated images that include the target feature (see details in Appendix A).
The utility guarantee includes the Frechet Inception Distance (_FID_), a widely used measure of generation quality. The cost criterion includes the total execution time (_Time_), which should be shorter than that of retrain-from-scratch. A successfully unlearned model would show a near-zero feature ratio, the same IS and FID scores as the initial pre-trained model (BEFORE), and the lowest possible execution time. Given the legal impact and the goal of unlearning, note that guaranteeing privacy has the highest priority.

### Result on Pre-trained Generative Model

**Quantitative Result** We run the proposed method on the pre-trained VAE to remove the unlearning group \(D_{f}\) (_e.g._, class 1 or male, respectively) and evaluate it as follows (Table 1). Starting from the pre-trained model (BEFORE), our method unlearns the target \(D_{f}\) with a large decrease in _fratio_ of 65% to 70%, while keeping the time cost of unlearning \(\leq\) 5% of that of retrain-from-scratch. All the while, our method still maintains decent utility performance. Compared with the baselines, our method performs best on privacy - the prioritized metric - across all experiments. Note that the feature ratio of gradient ascent in the CelebA experiment (feature ratio-CelebA-Grad.Ascnt) is omitted because the generated samples turned out to be noisy images, so the evaluation result of the pre-trained classifier cannot be accepted. Also, note that although the baselines show better performance in terms of utility and cost, they do not achieve a near-best score on the privacy guarantee.

**Qualitative Result** We further validate our method by comparing the generated images before and after the proposed unlearning algorithm. As shown in Figure 1, no class 1 samples are observed after unlearning class 1, meaning that our method successfully meets the request of unlearning class 1; this aligns with the quantitative result, where the ratio of samples of class 1 is reduced from 34.3% to \(\leq\) 15%, as in Table 1. The quality of the image generation is fair: 3 and 8 are decently distinguishable to the eye, although some examples show minor damaged features, which is in line with the decrease in IS and the increase in FID score. Note that the ultimate goal of unlearning is to meet the privacy guarantee while preserving the utility of pre-training, which remains our future work.

## 4 Conclusion

In this work, we introduce a novel, theoretically grounded unlearning method for generative models. Inspired by the influence of a sample on the others, we suggest a simple and effective gradient surgery to unlearn a given set of samples from a pre-trained generative model, and we outperform the existing baselines. Although we do not experiment with unlearning a single data point due to the lack of a grounded evaluation of the uniqueness of particular data, we leave it as future work, emphasizing that our method can also be applied to this scenario. Furthermore, it would be interesting to verify our ideas on various privacy-sensitive datasets. Nonetheless, our work implies the possibility of unlearning a pre-trained generative model, laying the groundwork for privacy handling in generative AI.
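For concreteness, the following is a minimal PyTorch-style sketch of the gradient-surgery step described in Section 2 (our own illustration under stated assumptions, not the authors' released code; `model`, `loss_fn`, and the two batches are placeholders assumed to be given, and the update sign follows the paper's \(\theta^{-}=\hat{\theta}-\eta\mathbf{g}_{f}\)).

```python
import torch

def unlearn_step(model, loss_fn, forget_batch, retain_batch, lr=1e-3):
    """One gradient-surgery update: project the forget gradient g_f onto the
    normal plane of the retain gradient g_r, then apply theta - lr * g_f."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Flattened gradient of the (scalar) loss on the forget batch.
    g_f = torch.autograd.grad(loss_fn(model, forget_batch), params)
    g_f = torch.cat([g.reshape(-1) for g in g_f])

    # Flattened gradient of the loss on the retain batch.
    g_r = torch.autograd.grad(loss_fn(model, retain_batch), params)
    g_r = torch.cat([g.reshape(-1) for g in g_r])

    # g_f <- g_f - (g_f . g_r / ||g_r||^2) g_r, so that g_f . g_r = 0
    # afterwards (the projection formula from Section 2).
    g_f = g_f - (g_f @ g_r) / (g_r @ g_r + 1e-12) * g_r

    # Parameter update theta^- = theta_hat - lr * g_f.
    with torch.no_grad():
        offset = 0
        for p in params:
            k = p.numel()
            p -= lr * g_f[offset:offset + k].view_as(p)
            offset += k
```

In a full run, this step would be repeated over batches of \(D_{f}\) (with \(\mathbf{g}_{r}\) recomputed on fresh retain batches) until the privacy criterion is met.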
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{MNIST138 (Class: 1)} & \multicolumn{4}{c}{CelebA (Feature: Male)} \\ \cline{2-9} Metric & Privacy & \multicolumn{2}{c}{Utility} & Cost & Privacy & \multicolumn{2}{c}{Utility} & Cost \\ \cline{2-9} & _fratio_(\(\downarrow\)) & _IS_(\(\uparrow\)) & _FID_(\(\downarrow\)) & _Time_(s)(\(\downarrow\)) & _fratio_(\(\downarrow\)) & _IS_(\(\uparrow\)) & _FID_(\(\downarrow\)) & _Time_(s)(\(\downarrow\)) \\ \hline Before & 0.343(0.027) & 2.053(0.029) & 0.030(0.003) & 218.6 & 0.394(0.119) & 1.812(0.044) & 29.81(0.341) & \(3\times 10^{4}\) \\ \hline Grad.Ascnt. & 0.264(0.141) & 2.029(0.018) & 0.127(0.059) & **1.010** & - (*) & **1.311**(0.076) & 30.93(1.215) & **97.31** \\ Moon et al. (2023) & 0.344(0.019) & 2.048(0.021) & **0.031**(0.002) & 166.2 & 1.000(0.000) & 1.000(0.000) & **15.81**(9.831) & \(8\times 10^{4}\) \\ \hline Ours & **0.153**(0.057) & **2.192**(0.076) & 0.092(0.030) & 13.12 & **0.150**(0.098) & 1.254(0.013) & 34.24(0.698) & 613.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of Class/Feature Unlearning of VAE on MNIST138 (_left columns_) and CelebA (_right columns_). Each experiment is repeated three times. (*) indicates an erroneous evaluation by the pre-trained feature classifier. **Bold** indicates the best score.

Figure 1: Unlearning groups of class 1 samples from VAE pretrained on MNIST138 (_left_: original, _right_: unlearned). Note that images of class 1 do not appear in the generation result.
2305.15224
Global Solutions of the Two-Dimensional Riemann Problem with Four-Shock Interactions for the Euler Equations for Potential Flow
We present a rigorous approach and related techniques to construct global solutions of the 2-D Riemann problem with four-shock interactions for the Euler equations for potential flow. With the introduction of three critical angles: the vacuum critical angle from the compatibility conditions, the detachment angle, and the sonic angle, we clarify all configurations of the Riemann solutions for the interactions of two-forward and two-backward shocks, including the subsonic-subsonic reflection configuration that has not emerged in previous results. To achieve this, we first identify the three critical angles that determine the configurations, whose existence and uniqueness follow from our rigorous proof of the strict monotonicity of the steady detachment and sonic angles for 2-D steady potential flow with respect to the Mach number of the upstream state. Then we reformulate the 2-D Riemann problem into the shock reflection-diffraction problem with respect to a symmetric line, along with two independent incident angles and two sonic boundaries varying with the choice of incident angles. With these, the problem can be further reformulated as a free boundary problem for a second-order quasilinear equation of mixed elliptic-hyperbolic type. The difficulties arise from the degenerate ellipticity of the nonlinear equation near the sonic boundaries, the nonlinearity of the free boundary condition, the singularity of the solution near the corners of the domain, and the geometric properties of the free boundary. To the best of our knowledge, this is the first rigorous result for the 2-D Riemann problem with four-shock interactions for the Euler equations. The approach and techniques developed for the Riemann problem for four-wave interactions should be useful for solving other 2-D Riemann problems for more general Euler equations and related nonlinear hyperbolic systems of conservation laws.
Gui-Qiang G. Chen, Alexander Cliffe, Feimin Huang, Song Liu, Qin Wang
2023-05-24T15:06:53Z
http://arxiv.org/abs/2305.15224v1
# Global solutions of the two-dimensional Riemann problem with four-shock interactions for the Euler equations for potential flow

###### Abstract.

We present a rigorous approach and related techniques to construct global solutions of the two-dimensional (2-D) Riemann problem with four-shock interactions for the Euler equations for potential flow. With the introduction of three critical angles: the vacuum critical angle from the compatibility conditions, the detachment angle, and the sonic angle, we clarify all configurations of the Riemann solutions for the interactions of two-forward and two-backward shocks, including the subsonic-subsonic reflection configuration that has not emerged in previous results. To achieve this, we first identify the three critical angles that determine the configurations, whose existence and uniqueness follow from our rigorous proof of the strict monotonicity of the steady detachment and sonic angles for 2-D steady potential flow with respect to the Mach number of the upstream state. Then we reformulate the 2-D Riemann problem into the shock reflection-diffraction problem with respect to a symmetric line, along with two independent incident angles and two sonic boundaries varying with the choice of incident angles. With these, the problem can be further reformulated as a free boundary problem for a second-order quasilinear equation of mixed elliptic-hyperbolic type. The difficulties arise from the degenerate ellipticity of the nonlinear equation near the sonic boundaries, the nonlinearity of the free boundary condition, the singularity of the solution near the corners of the domain, and the geometric properties of the free boundary. To solve the problem, we need to analyze the solutions for a quasilinear degenerate elliptic equation by the maximum principle of the mixed-boundary value problem, the theory of the oblique derivative boundary value problem, the uniform _a priori_ estimates, and the iteration method. To the best of our knowledge, this is the first rigorous result for the 2-D Riemann problem with four-shock interactions for the Euler equations. The approach and techniques developed for the Riemann problem for four-wave interactions should be useful for solving other 2-D Riemann problems for more general Euler equations and related nonlinear hyperbolic systems of conservation laws.

Key words and phrases: Riemann problem, shock interaction, Euler equations, potential flow, free boundary, mixed elliptic-hyperbolic type, _a priori_ estimates, degenerate elliptic equation, regularity, convexity.

5 Optimal Regularity of Solutions and Convexity of Free Boundaries
* 5.1 Proof of Theorem 2.2: Optimal regularity of solutions
* 5.2 Proof of Theorem 2.3: Convexity of free boundaries and transonic shocks

A Proof of Lemma 2.5 and Related Properties of Solutions
* A.1 Proof of Lemma 2.5: Monotonicity of critical angles for 2-D steady potential flow
* A.2 Monotonicity properties with respect to the incident angles

B Some Known Results Needed for the Proofs
* B.1 Well-posedness of the iteration boundary value problem
* B.2 Regularized distance function
* B.3 Leray-Schauder degree theorem
* B.4 Regularity theorem
* B.5 General framework for the convexity of transonic shocks

## 1. Introduction
We are concerned with the two-dimensional (2-D) Riemann problem for the Euler equations for potential flow, consisting of the conservation law of mass and the Bernoulli law: \[\left\{\begin{aligned} &\partial_{t}\rho+\operatorname{div}_{\mathbf{x}}(\rho\nabla_{\mathbf{x}}\Phi)=0\,,\\ &\partial_{t}\Phi+\frac{1}{2}|\nabla_{\mathbf{x}}\Phi|^{2}+h(\rho)=B\,,\end{aligned}\right. \tag{1.1}\] where \((t,\mathbf{x})\in(0,\infty)\times\mathbb{R}^{2}\), \(\rho=\rho(t,\mathbf{x})\) is the density, \(\Phi=\Phi(t,\mathbf{x})\) is the velocity potential (_i.e._, \(\nabla_{\mathbf{x}}\Phi=(u,v)\) is the velocity), \(h(\rho)\coloneqq\int_{1}^{\rho}\frac{p^{\prime}(s)}{s}\,\mathrm{d}s\) is the enthalpy with the polytropic pressure law \(p(\rho)=A\rho^{\gamma}\) for \(\gamma\geq 1\) and \(A>0\), and the Bernoulli constant \(B\) is determined by the far field flow. Without loss of generality, we may fix \(A=\gamma^{-1}\) by scaling and denote the sonic speed by \(c(\rho)\coloneqq\sqrt{p^{\prime}(\rho)}=\rho^{\frac{\gamma-1}{2}}\).

In this paper, we focus on the 2-D Riemann problem with four-shock interactions and the corresponding Riemann initial data given by \[(\rho,\Phi)(0,\mathbf{x})=(\rho_{i},(u_{i},v_{i})\cdot\mathbf{x})\qquad\text{ for }\mathbf{x}\in\Lambda_{i}\,, \tag{1.2}\] for suitable constant states \((i)\) with values \(U_{i}\coloneqq(\rho_{i},u_{i},v_{i})\), and corresponding scale-invariant domains \(\Lambda_{i}\subseteq\mathbb{R}^{2}\) for \(i=1,2,3,4\). We choose these domains to be four sectors of \(\mathbb{R}^{2}\), defined by \[\begin{split}\Lambda_{1}&\coloneqq\left\{\mathbf{x}\in\mathbb{R}^{2}\,:\,-\theta_{14}<\theta<\theta_{12}\right\},\qquad\qquad\Lambda_{2}\coloneqq\left\{\mathbf{x}\in\mathbb{R}^{2}\,:\,\theta_{12}<\theta<\pi-\theta_{32}\right\},\\ \Lambda_{3}&\coloneqq\left\{\mathbf{x}\in\mathbb{R}^{2}\,:\,\pi-\theta_{32}<\theta<\pi+\theta_{34}\right\},\quad\Lambda_{4}\coloneqq\left\{\mathbf{x}\in\mathbb{R}^{2}\,:\,\pi+\theta_{34}<\theta<2\pi-\theta_{14}\right\},\end{split} \tag{1.3}\] where \(\theta\) is the polar angle of point \(\mathbf{x}\in\mathbb{R}^{2}\), and the four parameters \(\theta_{12},\theta_{32},\theta_{34},\theta_{14}\in(0,\frac{\pi}{2})\). The initial data and shock discontinuities are depicted in Fig. 1.

The Euler equations for potential flow (1.1) are the oldest, yet still prominent, paradigm of fluid dynamic equations for inviscid fluids, which have been widely used in aerodynamics. System (1.1) has further been extended to the full Euler equations for more general inviscid compressible fluids when the effect of vortex sheets and the generation of vorticity become significant. One of the main features of the Euler equations is the formation of shocks in finite time, no matter how smooth the initial data are. The shock is one of the most fundamental nonlinear waves in compressible fluid flow, and the analysis of shocks dates back to the 19th century; see _e.g._[20, 27, 43, 44, 47]. In 1860, Riemann [44] first considered a special initial value problem for the one-dimensional (1-D) Euler system, with two constant states separated at the origin--now known as the Riemann problem. For general 1-D strictly hyperbolic systems of conservation laws, Lax [29] showed that the Riemann solutions are combinations of three fundamental waves: the rarefaction waves, the shocks, and the contact discontinuities.
In 1965, Glimm [24] employed the Riemann solutions as building blocks of the Glimm scheme to establish the global existence of BV (bounded variation) solutions of 1-D strictly hyperbolic systems of conservation laws for initial data of small BV, for which the Riemann solutions play a fundamental role. For the uniqueness and stability of BV solutions, we refer to [3, 4, 5, 40, 41] and the references cited therein. For a systematic theory of 1-D hyperbolic systems of conservation laws, we refer to [3, 19, 26, 30, 42] and the references cited therein. Multi-dimensional (M-D) Riemann problems for the Euler system are vastly more complicated; see [8, 19, 26, 30, 31] and the references cited therein. For the 2-D four-quadrant Riemann problem with four initial constant states given in the four quadrants so that each jump between two neighbouring quadrants projects exactly one planar fundamental wave, Zhang-Zheng [49] predicted that there are a total of 16 genuinely different configurations of the Riemann solutions for polytropic gases. In Chang-Chen-Yang [6, 7], it was first observed that, when two initially parallel slip lines are present, it makes a distinguished difference whether the vorticity waves generated have the same or opposite sign, which, along with Lax-Liu [32], leads to the classification with a total of 19 genuinely different configurations of the Riemann solutions for the compressible Euler equations for polytropic gases, via characteristic analysis; see also [25, 28, 33, 45] and the references cited therein. However, there have been few rigorous global results for the 2-D four-quadrant Riemann problem for the Euler equations, except for the result obtained by Li-Zheng in [36], in which the four-rarefaction wave case was considered. On the other hand, the 2-D Riemann problem has been solved for simplified models such as the pressure gradient system [14], and the local structure of wave interactions in the Riemann problem has been analyzed in [34, 35, 37]. For the 2-D Riemann problem for Chaplygin gases, we refer to [17, 46] and the references cited therein. The purpose of this paper is to solve rigorously the 2-D Riemann problem with four-shock interactions globally for the Euler equations for potential flow (1.1). The 2-D Riemann initial data (1.2) under consideration in this paper satisfy that \[\left\{\begin{aligned} &\text{one forward shock }\overrightarrow{S}_{1j}\text{ is generated between states }(1)\text{ and }(j),\\ &\text{one backward shock }\overleftarrow{S}_{3j}\text{ is generated between states }(3)\text{ and }(j),\end{aligned}\right.\] for \(j=2,4\), and that the angular bisectors at the vertices of \(\Lambda_{1}\) and \(\Lambda_{3}\) in Fig. 1.1 coincide. Without loss of generality, we choose \(\theta_{12}=\theta_{14}\) and \(\theta_{32}=\theta_{34}\), and then the \(x_{1}\)-axis may be considered as a rigid wall. The above Riemann problem corresponds to one of the conjectures for the Euler equations proposed in Zhang-Zheng [49]. The problem considered has a close relation to the shock reflection-diffraction problem, the study of which dates back to 1878, when Ernst Mach revealed the complexity of the shock reflection-diffraction configurations that include two fundamental modes: the regular reflection-diffraction and the Mach reflection-diffraction. The shock reflection phenomena occur when a moving planar shock wave impinges on a rigid wall; see Courant-Friedrichs [18] and Chen-Feldman [10].
Figure 1.1. The four sectors of the Riemann initial data.

In the recent three decades, there have been many theoretical results on shock reflection/diffraction problems. Indeed, Elling-Liu [22] considered the self-similar solutions of the Prandtl-Meyer problem for supersonic potential flow onto a solid wedge for a class of physical parameters. Chen-Feldman [9] proved the first global existence of solutions of the shock reflection-diffraction problem for potential flow when the incident angles are small; later, they succeeded in establishing the global existence of solutions for incident angles all the way up to the detachment angle in [10]. Furthermore, Chen-Feldman-Xiang [12] proved the convexity of transonic shocks, which leads to the uniqueness and stability of admissible solutions [13]. More recently, Bae-Chen-Feldman [2] obtained the existence and the optimal regularity of admissible solutions of the Prandtl-Meyer reflection problem for the general case up to the detachment angle. For supersonic flow past a wing or curved wedge, see also [15, 16, 11]. For the 2-D Riemann problem with four-shock interactions under consideration, we can reformulate the problem into a shock reflection-diffraction problem with respect to a symmetric line. Compared with the previous results, such as the shock reflection-diffraction problem [10] and the Prandtl-Meyer reflection problem [2], we now have two independent incident angles in the four-shock reflection problem. Correspondingly, we have two sonic boundaries varying with the choice of incident angles, which leads to more complicated configurations. After introducing the self-similar coordinates, the original problem can be reformulated as a free boundary problem for a second-order quasilinear partial differential equation (PDE) of mixed elliptic-hyperbolic type. The difficulties come from the degenerate ellipticity of the nonlinear PDE near the sonic boundaries, the nonlinearity of the free boundary conditions, the singularity of the solution near the corners of the domain, and the geometric properties of the free boundary. To solve our problem, we need to analyze the solutions of a quasilinear degenerate elliptic equation by the maximum principle for mixed boundary value problems and the theory of oblique derivative boundary value problems [23, 38, 39], as well as the uniform estimates and the iteration method from [2, 10, 21]. An important part of our analysis is the determination of the three critical angles: the vacuum critical angle, the detachment angle, and the sonic angle. Their existence and uniqueness follow from Lemma 2.5, in which we prove the strict monotonicity of the steady detachment and sonic angles for 2-D steady potential flow with respect to the Mach number of the upstream state. Using these critical angles, we classify all configurations of the solutions of the Riemann problem. In particular, the subsonic-subsonic reflection configuration had not emerged in previous works. Lemma 2.5 should be useful in the analysis of other shock reflection-diffraction problems. To the best of our knowledge, this is the first result on the shock reflection-diffraction of the Riemann problem for potential flow with piecewise constant initial data in four sectors.
Furthermore, the mathematical methods developed in solving the free boundary problem for nonlinear PDEs, such as the uniform _a priori_ estimate on the ellipticity of the equation, the anisotropic scaling method, and the sophisticated construction of the iteration set and iteration map, will provide insights for the analysis of more general nonlinear problems. In particular, since the Euler equations for potential flow constitute the core part of the full Euler equations, to which they are coupled with the incompressible-type Euler equations in a local domain, the techniques and approaches developed for the Riemann problem with four-wave interactions will be useful in guiding the analysis of the structure of Riemann solutions, shock reflection/diffraction problems, and numerical simulations of the full Euler equations. The organisation of this paper is as follows: In Section 2, we present the mathematical formulation of the Riemann problem with four-shock interactions and the definition of admissible solutions of the free boundary problem, and state three main results. In Section 3, we obtain the directional monotonicity, the strict ellipticity, and the uniform weighted Hölder estimates of admissible solutions. In Section 4, we prove the existence of admissible solutions by constructing a suitable iteration set and iteration map via applying the Leray-Schauder degree theory. Finally, in Section 5, we give the proof of the optimal regularity of admissible solutions, as well as the proof of the convexity of free boundaries and transonic shocks. In Appendix A, we give the details for some calculations involving the potential flow equation and, in Appendix B, we present some known results that are needed throughout the proofs.

## 2. The Riemann Problem and Main Theorems for Four-Shock Interactions

In this section, we first present the mathematical formulation of the Riemann problem with four-shock interactions as a free boundary problem, along with the definition of admissible solutions of the free boundary problem, and then state three main theorems.

### 2.1. Mathematical formulation

The Riemann problem (1.1)-(1.2) is invariant under the self-similar scaling: \[(t,\mathbf{x})\mapsto(\lambda t,\lambda\mathbf{x})\,,\quad(\rho,\Phi)(t,\mathbf{x})\mapsto(\rho,\frac{1}{\lambda}\Phi)(\lambda t,\lambda\mathbf{x})\qquad\quad\text{for any }\lambda>0\,.\] This leads us to consider the self-similar solution \((\rho,\phi)(\boldsymbol{\xi})\coloneqq(\rho,\frac{\Phi}{t})(t,\mathbf{x})\) in the self-similar coordinates \(\boldsymbol{\xi}=(\xi,\eta)\coloneqq\frac{\mathbf{x}}{t}\in\mathbb{R}^{2}\). Moreover, by introducing the pseudo-potential function \(\varphi(\boldsymbol{\xi})\coloneqq\phi(\boldsymbol{\xi})-\frac{1}{2}|\boldsymbol{\xi}|^{2}\), we obtain the pseudo-steady potential flow equation for \(\varphi(\boldsymbol{\xi})\) in the form: \[\operatorname{div}(\rho(|D\varphi|,\varphi)D\varphi)+2\rho(|D\varphi|,\varphi)=0\,, \tag{2.1}\] where \(\operatorname{div}\) and \(D\) represent the divergence and gradient operators with respect to the self-similar coordinates \(\boldsymbol{\xi}\in\mathbb{R}^{2}\), and the density is determined from the Bernoulli law as \[\rho(|D\varphi|,\varphi)=\begin{cases}\left(1+(\gamma-1)(B-\frac{1}{2}|D\varphi|^{2}-\varphi)\right)^{\frac{1}{\gamma-1}}&\quad\text{if }\gamma>1\,,\\ \exp(B-\frac{1}{2}|D\varphi|^{2}-\varphi)&\quad\text{if }\gamma=1\,.\end{cases} \tag{2.2}\] Equation (2.1) with (2.2) is a second-order nonlinear equation of mixed elliptic-hyperbolic type. It is elliptic if and only if \(|D\varphi|<c(|D\varphi|,\varphi)\) (pseudo-subsonic), while it is hyperbolic if and only if \(|D\varphi|>c(|D\varphi|,\varphi)\) (pseudo-supersonic).
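The type of equation (2.1) is thus checked pointwise via (2.2). The following minimal numerical sketch (in Python; the values \(\gamma=1.4\) and \(B=1\) are illustrative assumptions, not fixed by the paper) evaluates the density from the Bernoulli relation and tests the pseudo-subsonic condition. It is an aid for intuition, not part of the proofs.

```python
import numpy as np

def density(dphi, phi, gamma=1.4, B=1.0):
    # Bernoulli relation (2.2): rho as a function of |D(varphi)| and varphi
    q2 = float(np.dot(dphi, dphi))
    if gamma > 1.0:
        arg = 1.0 + (gamma - 1.0) * (B - 0.5 * q2 - phi)
        return max(arg, 0.0) ** (1.0 / (gamma - 1.0))
    return float(np.exp(B - 0.5 * q2 - phi))

def is_elliptic(dphi, phi, gamma=1.4, B=1.0):
    # (2.1) is elliptic (pseudo-subsonic) iff |D(varphi)| < c(rho) = rho^((gamma-1)/2)
    rho = density(dphi, phi, gamma, B)
    return float(np.hypot(*dphi)) < rho ** ((gamma - 1.0) / 2.0)

print(is_elliptic((0.1, 0.0), phi=0.0))  # True: small gradient, pseudo-subsonic
print(is_elliptic((2.0, 0.0), phi=0.0))  # False: large gradient, pseudo-supersonic
```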
We seek a weak solution for the pseudo-steady potential flow equation (2.1)-(2.2).

**Definition 2.1**.: _A function \(\varphi\in W^{1,\infty}_{\rm loc}(\mathbb{R}^{2})\) is called a weak solution of equation (2.1)-(2.2) in \(\mathbb{R}^{2}\) if \(\varphi\) satisfies the following properties:_

1. \(B-\frac{1}{2}|D\varphi|^{2}-\varphi\geq h(0^{+})\quad\text{a.e. in }\mathbb{R}^{2};\)
2. \(\int_{\mathbb{R}^{2}}\left(\rho(|D\varphi|,\varphi)D\varphi\cdot D\zeta-2\rho(|D\varphi|,\varphi)\zeta\right)\mathrm{d}\boldsymbol{\xi}=0\,\) _for every_ \(\zeta\in C^{\infty}_{\rm c}(\mathbb{R}^{2})\)_._

Let \(S\) be a \(C^{1}\)-curve across which \(|D\varphi|\) is discontinuous. Then \(\varphi\) is a weak solution if and only if \(\varphi\) satisfies the Rankine-Hugoniot conditions on \(S\): \[[\varphi]_{S}=\left[\rho(|D\varphi|,\varphi)D\varphi\cdot\boldsymbol{\nu}_{S}\right]_{S}=0\,, \tag{2.3}\] where \([F]_{S}\) represents the jump of a quantity \(F\) across \(S\), and \(\boldsymbol{\nu}_{S}\) is any unit normal vector on \(S\). By Definition 2.1(i) and (2.3), \(D\varphi\cdot\boldsymbol{\nu}_{S}\) is either positive or negative across the discontinuity \(S\). We say that \(\boldsymbol{\nu}_{S}\) points from upstream to downstream if \(D\varphi\cdot\boldsymbol{\nu}_{S}>0\), and from downstream to upstream if \(D\varphi\cdot\boldsymbol{\nu}_{S}<0\). The discontinuity \(S\) is called a _shock_ if it further satisfies the physical entropy condition: _the corresponding density \(\rho(|D\varphi|,\varphi)\) increases across \(S\) from upstream to downstream or, equivalently, \(|D\varphi\cdot\boldsymbol{\nu}_{S}|\) decreases across \(S\) from upstream to downstream._ In the self-similar coordinates \(\boldsymbol{\xi}=(\xi,\eta)\), the initial condition (1.2) becomes the following asymptotic boundary condition at infinity: For any \(\theta\in[0,2\pi)\) and \(i=1,2,3,4\), \[\lim_{r\to\infty}\|\varphi-\varphi_{i}\|_{C^{0,1}(R_{\theta}\setminus B_{r}(\boldsymbol{0}))}=0\,,\] whenever ray \(R_{\theta}\coloneqq\{r(\cos\theta,\sin\theta)\,:\,r>0\}\) is contained in sector \(\Lambda_{i}\) as defined in (1.3), where the pseudo-potential function \(\varphi_{i}\) corresponding to the uniform state \((i)\) is given by \[\varphi_{i}(\boldsymbol{\xi})\coloneqq-\frac{1}{2}|\boldsymbol{\xi}|^{2}+(u_{i},v_{i})\cdot\boldsymbol{\xi}+k_{i}\,, \tag{2.4}\] and constant \(k_{i}\) is uniquely determined to satisfy (2.2) with \((\rho,\varphi)=(\rho_{i},\varphi_{i})\) for \(i=1,2,3,4\). We may simply fix \(k_{2}=0\) in (2.4) since the Bernoulli constant can be shifted to \(B-k_{2}\). Since system (1.1) is Galilean invariant, it is convenient to normalize the initial data to satisfy \(u_{2}=v_{1}=0\). From (2.2) and (2.4) with \(k_{2}=0\), the Bernoulli constant \(B\) is then fixed as \[B=\frac{1}{2}v_{2}^{2}+h(\rho_{2})\,. \tag{2.5}\] In this paper, we assume that the initial data (1.2) are chosen such that, in the far-field region, any two neighboring states are connected by exactly one planar shock discontinuity.
Furthermore, we restrict our attention to the case: \[\max\{\rho_{1},\rho_{3}\}<\min\{\rho_{2},\rho_{4}\}\,, \tag{2.6}\] so that the initial data consist precisely of two forward shocks \(\{\overrightarrow{S}_{12},\overrightarrow{S}_{14}\}\) and two backward shocks \(\{\overleftarrow{S}_{32},\overleftarrow{S}_{34}\}\), which travel away from the origin in pairs; see Case 5.2 in [6]. Moreover, to facilitate our analysis, we focus on the symmetric case: \[\theta_{12}=\theta_{14}\eqqcolon\theta_{1},\quad\theta_{32}=\theta_{34}\eqqcolon\theta_{2}\qquad\text{ with }\theta_{1},\theta_{2}\in(0,\tfrac{\pi}{2})\,. \tag{2.7}\] We refer to parameters \(\boldsymbol{\theta}\coloneqq(\theta_{1},\theta_{2})\) as the incident angles of the shock discontinuities; see Fig. 2.1.

Figure 2.1. Four-shock interaction with symmetric incident shocks.

#### 2.1.1. Compatibility conditions

It is clear that the initial data and parameters \(\boldsymbol{\theta}\) cannot be specified arbitrarily. In the following lemma, we give the necessary and sufficient conditions for the initial data \(U_{i}=(\rho_{i},u_{i},v_{i})\) and parameters \(\boldsymbol{\theta}\) to generate two forward shocks \(\{\overrightarrow{S}_{12},\overrightarrow{S}_{14}\}\) and two backward shocks \(\{\overleftarrow{S}_{32},\overleftarrow{S}_{34}\}\).

**Lemma 2.2**.: _Fix \(\gamma\geq 1\) and \(\rho_{2}>0\). There exist both a constant \(v_{\min}\in[-\infty,0)\) depending on \((\gamma,\rho_{2})\) and a critical angle \(\theta^{\mathrm{cr}}\in(0,\tfrac{\pi}{2}]\) depending on \((\gamma,\rho_{2},v_{2})\) such that, whenever \(v_{2}\in(v_{\min},0)\) and \(\boldsymbol{\theta}\in(0,\theta^{\mathrm{cr}})^{2}\), there exists a unique choice of constant states \(U_{i}=(\rho_{i},u_{i},v_{i})\), for \(i=1,3,4\), satisfying the Rankine-Hugoniot conditions (2.3) between any two neighbouring states and the entropy condition (2.6) so that two forward shocks \(\{\overrightarrow{S}_{12},\overrightarrow{S}_{14}\}\) and two backward shocks \(\{\overleftarrow{S}_{32},\overleftarrow{S}_{34}\}\) are generated._

Proof.: Throughout the proof, we take \(i=1,3\), and \(j=2,4\). In the self-similar coordinates, we denote the shock discontinuity line between state \((i)\) and state \((j)\) by \[S_{ij}\coloneqq\{\boldsymbol{\xi}\in\mathbb{R}^{2}\,:\,\varphi_{i}(\boldsymbol{\xi})=\varphi_{j}(\boldsymbol{\xi})\}=\big\{\boldsymbol{\xi}\in\mathbb{R}^{2}\,:\,\eta=(-1)^{\frac{i+j+1}{2}}\xi\tan\theta_{ij}+a_{ij}\big\}\,, \tag{2.8}\] where the expressions of \(\varphi_{i}\) and \(\varphi_{j}\) are given in (2.4), \(\theta_{ij}\in(0,\tfrac{\pi}{2})\) is the angle between the \(\xi\)-axis and the shock discontinuity line \(S_{ij}\), and \(a_{ij}\) is a constant determined below. We fix the unit normal vector on \(S_{ij}\) to be \[\boldsymbol{\nu}_{ij}=((-1)^{\frac{i+1}{2}}\sin\theta_{ij},(-1)^{\frac{j+2}{2}}\cos\theta_{ij})\,,\] which points towards the downstream state according to the entropy condition (2.6), so that \[D\varphi_{i}(\boldsymbol{\xi})\cdot\boldsymbol{\nu}_{ij}>D\varphi_{j}(\boldsymbol{\xi})\cdot\boldsymbol{\nu}_{ij}>0\qquad\text{ for }\boldsymbol{\xi}\in S_{ij}\,. \tag{2.9}\] A straightforward calculation by using (2.2)-(2.4) and (2.8)-(2.9) gives \[(u_{i},v_{i})-(u_{j},v_{j})=\ell_{ij}\boldsymbol{\nu}_{ij}\,,\quad a_{ij}=(-1)^{\frac{i+j+1}{2}}\Big{(}v_{i}-u_{i}\tan\theta_{ij}+\frac{(-1)^{\frac{i-1}{2}}\rho_{j}\ell_{ij}}{(\rho_{i}-\rho_{j})\cos\theta_{ij}}\Big{)}\,, \tag{2.10}\]
where \(\ell_{ij}\coloneqq\ell(\rho_{i},\rho_{j})\) is defined as \[\ell(\rho_{i},\rho_{j})\coloneqq\sqrt{\frac{2(\rho_{i}-\rho_{j})(h(\rho_{i})-h(\rho_{j}))}{\rho_{i}+\rho_{j}}}\,. \tag{2.11}\] It is direct to check that \(\ell\) is symmetric and satisfies the strict monotonicity: For any \(\bar{\rho}>0\), \[\rho\mapsto\ell(\bar{\rho},\rho)\,\,\,\,\text{is strictly increasing on }\rho\in(\bar{\rho},\infty)\text{ and strictly decreasing on }\rho\in(0,\bar{\rho})\,. \tag{2.12}\] From (2.7) and (2.10), we deduce that \[(\ell_{12}-\ell_{14})\sin\theta_{1}=(\ell_{34}-\ell_{32})\sin\theta_{2}\,,\] from which \(\rho_{4}=\rho_{2}\) must hold by virtue of the entropy condition (2.6) and (2.12). Thus, we obtain the following relations: \[\ell_{12}\cos\theta_{1}=\ell_{32}\cos\theta_{2}\,,\qquad\begin{cases}u_{2}=u_{4}=u_{1}+\ell_{12}\sin\theta_{1}=u_{3}-\ell_{32}\sin\theta_{2}\,,\\ v_{1}=v_{3}=v_{2}+\ell_{12}\cos\theta_{1}=v_{4}-\ell_{32}\cos\theta_{2}\,.\end{cases}\] Recall that \(u_{2}=v_{1}=0\) are fixed, so that \[U_{1}=(\rho_{1},-\ell_{12}\sin\theta_{1},0)\,,\,\,\,U_{2}=(\rho_{2},0,v_{2})\,,\,\,\,U_{3}=(\rho_{3},\ell_{32}\sin\theta_{2},0)\,,\,\,\,U_{4}=(\rho_{2},0,-v_{2})\,, \tag{2.13}\] whenever \(\rho_{1},\rho_{3}\in(0,\rho_{2})\) satisfy \[\ell_{12}\cos\theta_{1}=\ell_{32}\cos\theta_{2}=-v_{2}\,. \tag{2.14}\] We refer to (2.14) as the compatibility conditions for the initial data. We define \[v_{\min}\coloneqq-\ell(0^{+},\rho_{2})\in[-\infty,0)\,,\qquad\theta^{\rm cr}\coloneqq\arccos\big{(}\frac{-v_{2}}{\ell(0^{+},\rho_{2})}\big{)}\in(0,\tfrac{\pi}{2}]\,. \tag{2.15}\] Then, by (2.12), the necessary and sufficient conditions for the existence of \(\rho_{1}\) and \(\rho_{3}\) uniquely solving (2.14) subject to (2.6) are \[v_{2}\in(v_{\min},0)\,,\qquad\boldsymbol{\theta}\in(0,\theta^{\rm cr})^{2}\,.\] The constant states \(U_{i}\), \(i=1,3,4\), are then uniquely determined by (2.13)-(2.14), depending only on \((\gamma,\rho_{2},v_{2},\boldsymbol{\theta})\). We call \(\theta^{\rm cr}\) the vacuum critical angle, because \(\rho_{1}\to 0^{+}\) as \(\theta_{1}\to\theta^{\rm cr-}\) and \(\rho_{3}\to 0^{+}\) as \(\theta_{2}\to\theta^{\rm cr-}\), according to (2.14). Note that, in potential flow, the incident shock with vacuum upstream state is mathematically consistent, which is in contrast to the non-potential Euler flow, for which the maximal density ratio across a shock is bounded. It follows from Lemma 2.2 that \(\rho_{2}=\rho_{4}\). Without loss of generality, we fix \(\rho_{2}=\rho_{4}=1\) via the scale-invariance of equation (2.1) as follows: \[\boldsymbol{\xi}\mapsto c_{2}\boldsymbol{\xi}\,,\qquad(\rho,\varphi,\rho_{2},v_{2})\mapsto\big{(}\frac{\rho}{\rho_{2}},\frac{\varphi}{c_{2}^{2}},1,\frac{v_{2}}{c_{2}}\big{)}\,.\] Then the entropy condition (2.6) becomes \(\max\{\rho_{1},\rho_{3}\}<1\), and \(v_{\min}=-\ell(0^{+},1)\) depends only on \(\gamma\geq 1\). For simplicity, we occasionally use the abbreviation \(\ell(\cdot)\coloneqq\ell(\cdot,1)\).
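The compatibility conditions (2.14) can also be explored numerically. The sketch below (a non-rigorous illustration, assuming \(\gamma=1.4\) and the normalization \(\rho_{2}=\rho_{4}=1\)) implements \(h\), \(\ell(\cdot)=\ell(\cdot,1)\), and a bisection solver for \(\rho_{1}\in(0,1)\), using the strict monotonicity (2.12); the same routine yields \(\rho_{3}\) upon replacing \(\theta_{1}\) by \(\theta_{2}\).

```python
import numpy as np

def h(rho, gamma=1.4):
    # enthalpy normalized with h(1) = 0, for p(rho) = rho^gamma / gamma
    return np.log(rho) if gamma == 1.0 else (rho ** (gamma - 1.0) - 1.0) / (gamma - 1.0)

def ell(rho, gamma=1.4):
    # ell(rho, 1) from (2.11), with rho_2 = rho_4 = 1
    return np.sqrt(2.0 * (rho - 1.0) * h(rho, gamma) / (rho + 1.0))

def rho_upstream(theta, v2, gamma=1.4, tol=1e-12):
    # solve the compatibility condition (2.14), ell(rho) cos(theta) = -v2,
    # for the unique rho in (0, 1); ell is strictly decreasing there by (2.12)
    target = -v2 / np.cos(theta)
    lo, hi = 1e-9, 1.0 - 1e-9
    assert ell(lo, gamma) > target, "incident angle beyond the vacuum critical angle"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ell(mid, gamma) > target else (lo, mid)
    return 0.5 * (lo + hi)

# example with gamma = 1.4 and v2 = -0.5 (illustrative values only):
theta_cr = np.arccos(0.5 / ell(1e-9))       # vacuum critical angle from (2.15)
rho1 = rho_upstream(0.5, -0.5)              # density of state (1) for theta_1 = 0.5
print(theta_cr, rho1, ell(rho1) * np.cos(0.5))  # last value recovers -v2 = 0.5
```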
#### 2.1.2. The asymptotic boundary value problem in the upper half-plane

Equation (1.1) and the initial data in (1.2) given by (2.13) are invariant under reflection with respect to the \(x_{1}\)-axis. Thus, we look for solutions of (1.1)-(1.2) in the symmetric form: \[\Phi(t,x_{1},x_{2})=\Phi(t,x_{1},-x_{2})\qquad\text{ for all }(t,x_{1},x_{2})\in(0,\infty)\times\mathbb{R}^{2}\,,\] which is equivalent to \[\varphi(\xi,\eta)=\varphi(\xi,-\eta)\qquad\text{for all }(\xi,\eta)\in\mathbb{R}^{2}\,.\] For this reason, it suffices to consider the restriction of solutions to the upper half-plane \[\mathbb{R}^{2}_{+}\coloneqq\big\{\boldsymbol{\xi}\in\mathbb{R}^{2}\,:\,\eta>0\big\}\,,\] so long as we impose the slip boundary condition \[D\varphi\cdot(0,1)=0\qquad\text{ on }L_{\rm sym}\coloneqq\big\{\boldsymbol{\xi}\in\mathbb{R}^{2}\,:\,\eta=0\big\}\,.\] This condition means that \(L_{\rm sym}\) can be regarded as a solid wall. Hereafter, we use symbols \(S_{12}\) and \(S_{32}\) to represent only the closure of the rays of the incident shocks \(S_{12}\) and \(S_{32}\) lying in the upper half-plane. We denote by \(P_{0}^{1}=(\xi^{P_{0}^{1}},\eta^{P_{0}^{1}})\) the point of intersection between \(S_{12}\) and \(L_{\rm sym}\), and by \(P_{0}^{2}=(\xi^{P_{0}^{2}},\eta^{P_{0}^{2}})\) the point of intersection between \(S_{32}\) and \(L_{\rm sym}\). We expect the incident shocks \(S_{12}\) and \(S_{32}\) to undergo shock reflection-diffraction at points \(P_{0}^{1}\) and \(P_{0}^{2}\) respectively; see Fig. 2.1. From (2.8), (2.10), and (2.13), we have \[P_{0}^{1}=(-\ell(\rho_{1})\sin\theta_{1}+\frac{1}{1-\rho_{1}}\frac{\ell(\rho_{1})}{\sin\theta_{1}},\,0),\qquad P_{0}^{2}=(\ell(\rho_{3})\sin\theta_{2}-\frac{1}{1-\rho_{3}}\frac{\ell(\rho_{3})}{\sin\theta_{2}},\,0)\,. \tag{2.16}\] Using (2.4), (2.13)-(2.14), and (2.16), the pseudo-potentials of states (1)-(3) are given by \[\varphi_{1}=-\frac{1}{2}|\boldsymbol{\xi}|^{2}+v_{2}(\xi-\xi^{P_{0}^{1}})\tan\theta_{1},\ \varphi_{2}=-\frac{1}{2}|\boldsymbol{\xi}|^{2}+v_{2}\eta,\ \varphi_{3}=-\frac{1}{2}|\boldsymbol{\xi}|^{2}-v_{2}(\xi-\xi^{P_{0}^{2}})\tan\theta_{2}. \tag{2.17}\] In particular, \(\varphi_{1}\) is independent of \(\theta_{2}\), while \(\varphi_{3}\) is independent of \(\theta_{1}\). The following lemma concerning the pseudo-Mach number of state (2) at the reflection points \(P_{0}^{1}\) and \(P_{0}^{2}\) is fundamental to our later analysis of the critical angles in §2.2.1.

**Lemma 2.3**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\rm min},0)\). For any incident angle \(\theta_{1}\in(0,\theta^{\rm cr}),\) the pseudo-Mach number of state (2) at the reflection point \(P_{0}^{1}\) is given by_ \[M_{2}^{(\theta_{1})}\coloneqq|D\varphi_{2}(P_{0}^{1})|=\frac{\ell(\rho_{1})}{1-\rho_{1}}\big{(}\rho_{1}^{2}+\cot^{2}\theta_{1}\big{)}^{\frac{1}{2}}\,, \tag{2.18}\] _with \(\rho_{1}=\rho_{1}(\theta_{1};v_{2})\in(0,1)\) given by (2.14). Moreover, the pseudo-Mach number \(M_{2}^{(\theta_{1})}\) is a strictly decreasing function of the incident angle \(\theta_{1}\) and satisfies that \(M_{2}^{(\theta_{1})}\to\infty\) as \(\theta_{1}\to 0^{+}\)._

Proof.: Formula (2.18) follows directly from (2.14) and (2.16)-(2.17). For the second property, the limit is clear since (2.14) implies that \(\ell(\rho_{1})\to-v_{2}>0\) as \(\theta_{1}\to 0^{+}\), whilst \(\cot^{2}\theta_{1}\to\infty\) as \(\theta_{1}\to 0^{+}\).
To determine the strict monotonicity of \(M_{2}^{(\theta_{1})}\), we differentiate (2.14) and (2.18) with respect to \(\theta_{1}\) and re-arrange to obtain \[\frac{(1-\rho_{1})^{2}\cot\theta_{1}}{2(\rho_{1}+\cot^{2}\theta_{1})(\ell(\rho_{1}))^{2}}\frac{\mathrm{d}(M_{2}^{(\theta_{1})})^{2}}{\mathrm{d}\theta_{1}}=\rho_{1}+\frac{1}{1-\rho_{1}}\frac{\ell(\rho_{1})}{\ell^{\prime}(\rho_{1})}-\cot^{2}\theta_{1}\,. \tag{2.19}\] Furthermore, from the definition of \(\ell(\rho)\) in (2.11), we directly compute \[(1-\rho)\frac{\ell^{\prime}(\rho)}{\ell(\rho)}=-\frac{(\ell(\rho))^{2}+(1-\rho)^{2}h^{\prime}(\rho)}{(\ell(\rho))^{2}(1+\rho)}\,.\] Substituting the above expression into the right-hand side of (2.19) gives \[\text{RHS of (2.19)}=\frac{\rho_{1}(1-\rho_{1})^{2}h^{\prime}(\rho_{1})-(\ell(\rho_{1}))^{2}}{(\ell(\rho_{1}))^{2}+(1-\rho_{1})^{2}h^{\prime}(\rho_{1})}-\cot^{2}\theta_{1}\,.\] Denote the numerator of the first term above by \[f(\rho)\coloneqq\rho(1-\rho)^{2}h^{\prime}(\rho)-(\ell(\rho))^{2}\equiv\frac{1-\rho}{1+\rho}\,\tilde{f}(\rho)\,,\] where \(\tilde{f}(\rho)\coloneqq(\gamma+1)h(\rho)-(\gamma-1)\rho^{2}h(\rho)+(1-\rho^{2})\). Observe that \(\tilde{f}(1)=0\) and \[\tilde{f}^{\prime}(\rho)=(\gamma+1)(1-\rho^{2})h^{\prime}(\rho)>0\qquad\text{ for any }\rho\in(0,1)\,.\] Thus, \(\tilde{f}(\rho)<0\) for all \(\rho\in(0,1)\). It directly follows that the RHS of (2.19) is negative. We conclude that \(M_{2}^{(\theta_{1})}\) is a strictly decreasing function of \(\theta_{1}\in(0,\theta^{\rm cr})\). It follows immediately from Lemma 2.3 that \(\xi^{P_{0}^{1}}>0\) is a strictly decreasing function of \(\theta_{1}\in(0,\theta^{\rm cr})\) with \(\xi^{P_{0}^{1}}\to\infty\) as \(\theta_{1}\to 0^{+}\), since \(M_{2}^{(\theta_{1})}=|(-\xi^{P_{0}^{1}},v_{2})|\). Similarly, \(\xi^{P_{0}^{2}}<0\) is a strictly increasing function of \(\theta_{2}\in(0,\theta^{\rm cr})\) with \(\xi^{P_{0}^{2}}\to-\infty\) as \(\theta_{2}\to 0^{+}\).
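The monotonicity in Lemma 2.3 can also be checked numerically, reusing `ell` and `rho_upstream` from the sketch in §2.1.1 (again with the illustrative, non-prescribed values \(\gamma=1.4\) and \(v_{2}=-0.5\)):

```python
# numerical check of Lemma 2.3: M_2 from (2.18) is strictly decreasing in theta_1
gamma, v2 = 1.4, -0.5
theta_cr = float(np.arccos(-v2 / ell(1e-9, gamma)))
thetas = np.linspace(0.05, theta_cr - 0.05, 300)
M2 = []
for t in thetas:
    r1 = rho_upstream(t, v2, gamma)
    M2.append(ell(r1, gamma) / (1.0 - r1) * np.sqrt(r1 ** 2 + 1.0 / np.tan(t) ** 2))
assert all(a > b for a, b in zip(M2, M2[1:]))  # strict monotonicity on the grid
```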
We define the background solution \(\bar{\varphi}\in C^{0,1}_{\rm loc}(\mathbb{R}_{+}^{2})\) as \[\bar{\varphi}(\boldsymbol{\xi})\coloneqq\min\left\{\varphi_{1}(\boldsymbol{\xi}),\varphi_{2}(\boldsymbol{\xi}),\varphi_{3}(\boldsymbol{\xi})\right\}, \tag{2.20}\] and define the three open domains \[\Omega_{i}\coloneqq\operatorname{int}\left\{\boldsymbol{\xi}\in\mathbb{R}_{+}^{2}\,:\,\varphi_{i}(\boldsymbol{\xi})=\bar{\varphi}(\boldsymbol{\xi})\right\}\qquad\text{for }i=1,2,3\,, \tag{2.21}\] where \(\operatorname{int}G\) denotes the interior of a set \(G\subseteq\mathbb{R}^{2}\). Note that \(\Omega_{1}\) and \(\Omega_{3}\) are the wedge-shaped domains with wedge angles \(\theta_{1}\) and \(\theta_{2}\) respectively, and they become empty sets in the limits: \(\theta_{1}\to 0^{+}\) and \(\theta_{2}\to 0^{+}\), respectively. In light of the above discussion, we seek solutions of the following asymptotic boundary value problem in the self-similar coordinates \(\boldsymbol{\xi}=(\xi,\eta)\) in the upper half-plane \(\mathbb{R}_{+}^{2}\).

**Problem 2.4** (Asymptotic boundary value problem).: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For any \(\boldsymbol{\theta}\in(0,\theta^{\mathrm{cr}})^{2},\) determine the existence of a weak solution \(\varphi\in W^{1,\infty}_{\mathrm{loc}}(\mathbb{R}_{+}^{2})\) of equations (2.1)-(2.2) in \(\mathbb{R}_{+}^{2}\) satisfying the following conditions_:

1. _The asymptotic boundary condition at infinity_: \[\lim_{r\to\infty}\|\varphi-\bar{\varphi}\|_{C^{0,1}(R_{\theta}\setminus B_{r}(\boldsymbol{0}))}=0\qquad\text{ for any }\theta\in(0,\pi)\,;\]
2. _The slip boundary condition on the symmetric boundary_: \[D\varphi\cdot(0,1)=0\qquad\text{on }L_{\mathrm{sym}}=\left\{\boldsymbol{\xi}\in\mathbb{R}^{2}\,:\,\eta=0\right\}.\]

#### 2.1.3. Normal reflection configurations

It is meaningful to extend the range of parameters in Problem 2.4 to allow \(\boldsymbol{\theta}\in[0,\theta^{\mathrm{cr}})^{2}\). We study the reflection for the case: \(\boldsymbol{\theta}=\boldsymbol{0}\), which we call the normal reflection configuration. It is clear that setting \(\varphi=\bar{\varphi}\equiv\varphi_{2}\) would satisfy Problem 2.4(i) at infinity. However, since \(\varphi_{2}\) does not satisfy Problem 2.4(ii), a boundary layer must be present. For this reason, we introduce a uniform downstream state (0), determined by a pseudo-potential \(\varphi_{0}\) satisfying Problem 2.4(ii), such that a straight reflected shock \(S_{0}\) is formed between states (2) and (0). In fact, the only possible straight reflected shock that satisfies Problem 2.4(i) is a normal shock \(S_{0}\coloneqq\{\boldsymbol{\xi}\in\mathbb{R}_{+}^{2}\,:\,\eta=\eta_{0}\}\) for some \(\eta_{0}>0\), which is parallel to \(L_{\mathrm{sym}}\). The constant density of state (0) given by \(\rho_{0}\coloneqq\rho(|D\varphi_{0}|,\varphi_{0})\) should satisfy the entropy condition \(\rho_{0}>1\). We demonstrate that state (0) described above is uniquely determined by \((\gamma,v_{2})\). The pseudo-potential \(\varphi_{0}\) has form (2.4) with \(i=0\), for suitable constants \((u_{0},v_{0},k_{0})\) to be determined. It follows from Problem 2.4(ii) that \(v_{0}=0\), whilst \(u_{0}=0\) follows from the Rankine-Hugoniot conditions (2.3) between states (2) and (0) because \(S_{0}\) is parallel to \(L_{\mathrm{sym}}\). The value of \(k_{0}=v_{2}\eta_{0}\) is then obtained by using the continuity of \(\varphi=\varphi_{0}=\varphi_{2}\) on \(S_{0}\). It remains to determine constant \(\eta_{0}>0\), which fixes the location of \(S_{0}\). Combining the Bernoulli law (2.2) with the Rankine-Hugoniot conditions (2.3), we find the necessary condition \[\ell(\rho_{0})=-v_{2}\,. \tag{2.22}\] It is clear that (2.22) admits a unique solution \(\rho_{0}\in(1,\infty)\) depending only on \((\gamma,v_{2})\). Indeed, \(\ell(\cdot)\) is strictly increasing on \((1,\infty)\) and \(\ell(1)=0\). Constant \(\eta_{0}\) is then uniquely determined by the Rankine-Hugoniot conditions (2.3) to be \[\eta_{0}\coloneqq-\frac{v_{2}}{\rho_{0}-1}>0\qquad\text{where }\rho_{0}\in(1,\infty)\text{ satisfies (2.22)}\,. \tag{2.23}\] Thus, the pseudo-potential of state (0) has been uniquely determined above by \((\gamma,v_{2})\) as \[\varphi_{0}(\boldsymbol{\xi})\coloneqq-\frac{1}{2}|\boldsymbol{\xi}|^{2}+v_{2}\eta_{0}\,.\] It is direct to verify that \[\varphi_{\mathrm{norm}}(\boldsymbol{\xi})\coloneqq\min\{\varphi_{0}(\boldsymbol{\xi}),\varphi_{2}(\boldsymbol{\xi})\}=\begin{cases}\varphi_{2}(\boldsymbol{\xi})&\text{for }\eta>\eta_{0}\,,\\ \varphi_{0}(\boldsymbol{\xi})&\text{for }0<\eta<\eta_{0}\,,\end{cases} \tag{2.24}\] is a solution to Problem 2.4 when \(\boldsymbol{\theta}=\boldsymbol{0}\); see Fig. 2.2(a).
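Since \(\ell(\cdot)\) is strictly increasing on \((1,\infty)\) with \(\ell(1)=0\), the normal-reflection state is easy to compute numerically. A sketch, reusing `ell` from the code in §2.1.1 (the value \(v_{2}=-0.5\) below is only an example):

```python
def normal_reflection(v2, gamma=1.4, tol=1e-12):
    # solve (2.22), ell(rho_0) = -v2, on (1, infinity), where ell is strictly
    # increasing with ell(1) = 0; then the shock location eta_0 from (2.23)
    lo, hi = 1.0 + 1e-9, 2.0
    while ell(hi, gamma) < -v2:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ell(mid, gamma) < -v2 else (lo, mid)
    rho0 = 0.5 * (lo + hi)
    return rho0, -v2 / (rho0 - 1.0)   # (rho_0, eta_0)

print(normal_reflection(-0.5))  # rho_0 > 1 and eta_0 = -v2/(rho_0 - 1) > 0
```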
Briefly, we mention the case: \(\boldsymbol{\theta}\in\left(\{0\}\times(0,\theta^{\mathrm{cr}})\right)\cup\left((0,\theta^{\mathrm{cr}})\times\{0\}\right)\), which is called the unilateral normal reflection; a normal reflection on one side with respect to the symmetric boundary \(L_{\mathrm{sym}}\) occurs by the same argument as for the normal reflection case \(\boldsymbol{\theta}=\boldsymbol{0}\) above, while the other side undergoes a regular reflection at the reflection point, of which the details will be discussed in §2.2 below; see the case: \(\boldsymbol{\theta}\in(0,\theta^{\mathrm{cr}})\times\{0\}\) in Fig. 2.2(b). Moreover, the unilateral normal reflections are essentially the same as the Prandtl-Meyer reflection configurations considered in [2].

Figure 2.2. Structure of solutions of Problem 2.4 involving normal shocks.

### 2.2. Regular reflection configurations

In this section, we introduce the detachment and sonic angles and describe the structure of three genuinely different configurations of the four-shock interactions.

#### 2.2.1. Detachment and sonic angles

For \(\boldsymbol{\theta}\in(0,\theta^{\mathrm{cr}})^{2}\), the incident shocks \(S_{12}\) and \(S_{32}\) intersect the symmetric line \(L_{\mathrm{sym}}\) at points \(P_{0}^{1}\) and \(P_{0}^{2}\) respectively, with coordinates given by (2.16). As discussed above, the symmetric line can be regarded as a rigid wall and, similarly to the previous works [10], we expect that, if the incident angle \(\theta_{1}\) is less than a detachment angle \(\theta^{\mathrm{d}}\), there exists a regular reflection of \(S_{12}\) at point \(P_{0}^{1}\), _i.e._, there exist both a uniform state (5) determined by a pseudo-potential \(\varphi_{5}\) and a straight reflected shock \(S_{25}\) passing through \(P_{0}^{1}\) such that \(S_{25}\) separates the uniform upstream state (2) from the uniform downstream state (5). Similarly, we expect that there exists a regular reflection of \(S_{32}\) at point \(P_{0}^{2}\) if \(\theta_{2}\) is less than \(\theta^{\mathrm{d}}\); that is, there exist both a uniform state (6) determined by a pseudo-potential \(\varphi_{6}\) and a straight reflected shock \(S_{26}\) passing through \(P_{0}^{2}\). The expected structure of the shock regular reflection on the upper half-plane can be seen in Fig. 2.1. We now determine the detachment angle \(\theta^{\mathrm{d}}\) and the sonic angle \(\theta^{\mathrm{s}}\). The regular reflection of \(S_{12}\) at point \(P_{0}^{1}\) can be reduced to the study of an algebraic system. By the argument in [10, §7.1], to determine the uniform state (5), it suffices to apply the Rankine-Hugoniot conditions at the reflection point \(P_{0}^{1}\) only. The pseudo-potential \(\varphi_{5}\) has form (2.4) with \(i=5\), and the straight reflected shock \(S_{25}\) must have the form: \[S_{25}\coloneqq\big\{\boldsymbol{\xi}\in\overline{\mathbb{R}_{+}^{2}}\,:\,\varphi_{5}(\boldsymbol{\xi})=\varphi_{2}(\boldsymbol{\xi})\big\}=\big\{\boldsymbol{\xi}\in\overline{\mathbb{R}_{+}^{2}}\,:\,\eta=\xi\tan\theta_{25}+a_{25}\big\}\] for some constants \((u_{5},v_{5},k_{5},\theta_{25},a_{25})\) depending on \((\gamma,v_{2},\theta_{1})\) to be determined, where \(\theta_{25}\) is the angle between the reflected shock \(S_{25}\) and the positive \(\xi\)-axis, and \(a_{25}\) is the \(\eta\)-intercept of the reflected shock \(S_{25}\).
Applying the Rankine-Hugoniot conditions (2.3) between states (2) and (5) at \(P_{0}^{1}\), we have \[\varphi_{5}(P_{0}^{1})=\varphi_{2}(P_{0}^{1})\,,\,\,\,D\varphi_{5}(P_{0}^{1})\cdot\boldsymbol{\tau}_{25}=D\varphi_{2}(P_{0}^{1})\cdot\boldsymbol{\tau}_{25}\,,\,\,\,\rho_{5}D\varphi_{5}(P_{0}^{1})\cdot\boldsymbol{\nu}_{25}=D\varphi_{2}(P_{0}^{1})\cdot\boldsymbol{\nu}_{25}\,, \tag{2.25}\] where \(\boldsymbol{\nu}_{25}\coloneqq\frac{D(\varphi_{2}-\varphi_{5})(P_{0}^{1})}{|D(\varphi_{2}-\varphi_{5})(P_{0}^{1})|}\), \(\boldsymbol{\tau}_{25}\coloneqq\boldsymbol{\nu}_{25}^{\perp}\), and \(\rho_{5}\coloneqq\rho(|D\varphi_{5}|,\varphi_{5})\) is given by (2.2). Furthermore, the pseudo-potential \(\varphi_{5}\) should satisfy the entropy condition: \[\rho_{5}>1\,,\quad\text{or equivalently}\,,\quad D\varphi_{2}(P_{0}^{1})\cdot\boldsymbol{\nu}_{25}>D\varphi_{5}(P_{0}^{1})\cdot\boldsymbol{\nu}_{25}>0\,; \tag{2.26}\] and the slip boundary condition: \[D\varphi_{5}\cdot(0,1)=0\qquad\text{on }L_{\mathrm{sym}}\,. \tag{2.27}\] A necessary condition for the regular reflection of \(S_{12}\) at \(P_{0}^{1}\) is clearly that state (2) must be pseudo-supersonic at \(P_{0}^{1}\). From Lemma 2.3, there exists a unique \(\theta^{+}\in(0,\theta^{\mathrm{cr}}]\) such that \(M_{2}^{(\theta_{1})}>1\) if and only if \(\theta_{1}\in(0,\theta^{+})\). However, note that \(M_{2}^{(\theta_{1})}|_{\theta_{1}=\theta^{+}}\) is not necessarily \(1\) if \(\theta^{+}=\theta^{\mathrm{cr}}\). Introduce the steady detachment angle \(\theta_{\mathrm{stdy}}^{\mathrm{d}}(\rho_{\infty},u_{\infty})\) for the steady potential flow with constant supersonic upstream state \(U_{\infty}=(\rho_{\infty},u_{\infty},0)\), as defined in [10, §7.1]. Also, for \(\theta_{1}\in(0,\theta^{\mathrm{cr}})\), denote by \(\hat{\theta}_{25}(\theta_{1})\) the acute angle between the pseudo-velocity \(D\varphi_{2}(P_{0}^{1})=(-\xi^{P_{0}^{1}},v_{2})\) and the symmetric line \(L_{\rm sym}\): \[\hat{\theta}_{25}(\theta_{1})\coloneqq\arccos\left(\frac{\xi^{P_{0}^{1}}}{|D\varphi_{2}(P_{0}^{1})|}\right)=\arcsin\left(\frac{-v_{2}}{|D\varphi_{2}(P_{0}^{1})|}\right)\in(0,\tfrac{\pi}{2})\,. \tag{2.28}\] Similarly to [10, §7.4], whenever \(\theta_{1}\in(0,\theta^{+})\), the existence and multiplicity of solutions of the algebraic system (2.25)-(2.27) are determined as follows:

(a) If \(\hat{\theta}_{25}(\theta_{1})<\theta^{\rm d}_{\rm stdy}(1,|D\varphi_{2}(P_{0}^{1})|)\), there are two solutions;
(b) If \(\hat{\theta}_{25}(\theta_{1})=\theta^{\rm d}_{\rm stdy}(1,|D\varphi_{2}(P_{0}^{1})|)\), there is one solution;
(c) If \(\hat{\theta}_{25}(\theta_{1})>\theta^{\rm d}_{\rm stdy}(1,|D\varphi_{2}(P_{0}^{1})|)\), there are no solutions.

We demonstrate that there exists at most one value \(\theta^{\rm d}\in(0,\theta^{+})\) such that \(\theta_{1}=\theta^{\rm d}\) satisfies the equality in (b) above. Indeed, using (2.28) and Lemma 2.3, it is clear that \(\hat{\theta}_{25}(\theta_{1})\) is a strictly increasing function of \(\theta_{1}\in(0,\theta^{\rm cr})\), whilst, for \(\theta^{\rm d}_{\rm stdy}(1,|D\varphi_{2}(P_{0}^{1})|)\), we state below a general fact about 2-D steady potential flow, from which it is clear that \(\theta^{\rm d}_{\rm stdy}(1,|D\varphi_{2}(P_{0}^{1})|)\) is strictly decreasing with \(\theta_{1}\in(0,\theta^{+})\). The proof of this statement is given in Appendix A.1.
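Before stating this general fact (Lemma 2.5 below), we record a numerical sketch that illustrates it: for 2-D steady potential flow with upstream state \((\rho_{\infty},u_{\infty})=(1,M_{\infty})\), it scans the shock polar for the maximal turning angle (the detachment angle) and for the first downstream-sonic crossing (the sonic angle). It reuses `h` from the sketch in §2.1.1 and is an illustration under the assumption \(\gamma=1.4\), not a substitute for the proof in Appendix A.1.

```python
def downstream_density(un, gamma=1.4):
    # shocked root rho > 1 of the steady Rankine-Hugoniot + Bernoulli relations:
    #   rho * (un/rho) = 1 * un  and  0.5*(un/rho)^2 + h(rho) = 0.5*un^2 + h(1),
    # valid when the upstream normal speed satisfies un > c(1) = 1
    g = lambda rho: 0.5 * (un / rho) ** 2 + h(rho, gamma) - 0.5 * un ** 2
    lo, hi = 1.0 + 1e-9, 2.0
    while g(hi) < 0.0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def steady_angles(M_inf, gamma=1.4, n=2000):
    # scan shock angles beta in (Mach angle, pi/2]: the detachment angle is the
    # maximal flow-turning angle, and the sonic angle is where the downstream
    # speed first drops to the sound speed c(rho) = rho^((gamma-1)/2)
    u = M_inf  # upstream speed, since c(1) = 1
    det, son = 0.0, None
    for beta in np.linspace(np.arcsin(1.0 / M_inf) + 1e-4, 0.5 * np.pi, n):
        un, ut = u * np.sin(beta), u * np.cos(beta)
        rho = downstream_density(un, gamma)
        theta = beta - np.arctan2(un / rho, ut)   # turning angle of the flow
        det = max(det, theta)
        if son is None and (un / rho) ** 2 + ut ** 2 <= rho ** (gamma - 1.0):
            son = theta
    return det, son

# both angles increase with the upstream Mach number, as Lemma 2.5 asserts:
for M in (1.5, 2.0, 3.0, 5.0):
    print(M, steady_angles(M))
```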
**Lemma 2.5**.: _Let \(\theta^{\rm d}_{\rm stdy}\) and \(\theta^{\rm s}_{\rm stdy}\) denote the steady detachment and sonic angles for the steady potential flow with constant supersonic upstream state \(U_{\infty}=(\rho_{\infty},u_{\infty},0)\). Then \(\theta^{\rm d}_{\rm stdy}\) and \(\theta^{\rm s}_{\rm stdy}\) are smooth, strictly increasing functions of the upstream Mach number \(M_{\infty}\coloneqq\frac{u_{\infty}}{c(\rho_{\infty})}>1\). Moreover, the following limits hold:_ \[\lim_{M_{\infty}\to 1^{+}}(\theta^{\rm d}_{\rm stdy},\theta^{\rm s}_{\rm stdy})=(0,0)\,,\qquad\lim_{M_{\infty}\to\infty}(\theta^{\rm d}_{\rm stdy},\theta^{\rm s}_{\rm stdy})=(\tfrac{\pi}{2},\arctan\sqrt{\tfrac{2}{\gamma-1}})\,.\]

With the above discussion, the following proposition gives the necessary and sufficient criterion on the incident angle \(\theta_{1}\) for the existence of a regular reflection of \(S_{12}\) at point \(P_{0}^{1}\).

**Proposition 2.6** (Local reflection theory).: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). There exists a unique \(\theta^{\rm d}=\theta^{\rm d}(\gamma,v_{2})\in(0,\theta^{\rm cr}],\) called the detachment angle, such that, for \(\theta_{1}\in(\theta^{\rm d},\theta^{\rm cr}),\) there are no states \((5)\) of form (2.4) satisfying (2.25)-(2.27) and, for each \(\theta_{1}\in(0,\theta^{\rm d}),\) there are exactly two states \((5)\) of form (2.4) satisfying (2.25)-(2.27): the weak reflection state \(\varphi^{\rm wk}_{5}\) and the strong reflection state \(\varphi^{\rm sg}_{5},\) distinguished by \(1<\rho^{\rm wk}_{5}<\rho^{\rm sg}_{5}\,,\) where \(\rho^{\rm wk}_{5}\coloneqq\rho(|D\varphi^{\rm wk}_{5}|,\varphi^{\rm wk}_{5})\) and \(\rho^{\rm sg}_{5}\coloneqq\rho(|D\varphi^{\rm sg}_{5}|,\varphi^{\rm sg}_{5})\) are the constant densities of the weak and strong reflection states \((5)\) respectively. Furthermore, denoting by \(O^{\rm wk}_{5}\coloneqq(u^{\rm wk}_{5},v^{\rm wk}_{5})\) and \(O^{\rm sg}_{5}\coloneqq(u^{\rm sg}_{5},v^{\rm sg}_{5})\) the sonic centres, and by \(c^{\rm wk}_{5}\) and \(c^{\rm sg}_{5}\) the sonic speeds of the weak and strong reflection states \((5)\) respectively, then_

1. \((\rho^{\rm wk}_{5},O^{\rm wk}_{5})\in C([0,\theta^{\rm d}])\cap C^{\infty}([0,\theta^{\rm d}))\) _and_ \((\rho^{\rm sg}_{5},O^{\rm sg}_{5})\in C((0,\theta^{\rm d}])\cap C^{\infty}((0,\theta^{\rm d}))\)_. In particular, the following limits exist:_ \[\lim_{\theta_{1}\to 0^{+}}(\rho^{\rm wk}_{5},O^{\rm wk}_{5},u^{\rm wk}_{5}\,\xi^{P_{0}^{1}})=(\rho_{0},\mathbf{0},-v_{2}\eta_{0})\,,\qquad\lim_{\theta_{1}\to\theta^{\rm d-}}(\rho^{\rm wk}_{5},\rho^{\rm sg}_{5},O^{\rm wk}_{5},O^{\rm sg}_{5})\,,\] _where_ \(\rho_{0}\) _and_ \(\eta_{0}\) _are the density and shock location of the normal reflection state_ \((0)\) _given in §2.1.3._
2. _For any_ \(\theta_{1}\in(0,\theta^{\rm d}),\) \[|D\varphi_{2}(P_{0}^{1})|>|D\varphi_{2}(P_{0}^{1})|\big{|}_{\theta_{1}=\theta^{\rm d}}>1\,, \tag{2.29}\] \[0<u^{\rm wk}_{5}<u^{\rm sg}_{5}<\xi^{P_{0}^{1}}\,,\qquad v^{\rm wk}_{5}=v^{\rm sg}_{5}=0\,. \tag{2.30}\]
3. _There exists a unique_ \(\theta^{\rm s}=\theta^{\rm s}(\gamma,v_{2})\in(0,\theta^{\rm d}],\) _called the sonic angle, such that_ \[|D\varphi^{\rm wk}_{5}(P_{0}^{1})|>c^{\rm wk}_{5}\qquad\text{for all }\,\theta_{1}\in(0,\theta^{\rm s})\,,\] \[|D\varphi^{\rm wk}_{5}(P_{0}^{1})|<c^{\rm wk}_{5}\qquad\text{for all }\,\theta_{1}\in(\theta^{\rm s},\theta^{\rm d}]\,.\] _Furthermore, if_ \(\theta^{\rm s}<\theta^{\rm cr},\) _then_ \(\theta^{\rm s}<\theta^{\rm d}\) _and_ \(|D\varphi^{\rm wk}_{5}(P_{0}^{1})|=c^{\rm wk}_{5}\) _when_ \(\theta_{1}=\theta^{\rm s}.\)

Proof.: The proof follows similarly to [10, Theorem 7.1.1], once \(\theta^{\rm d}\) and \(\theta^{\rm s}\) are determined. Thus, it remains to show the existence and uniqueness of \(\theta^{\rm d}\) and \(\theta^{\rm s}\). As discussed above, \(\hat{\theta}_{25}(\theta_{1})\) is strictly increasing with \(\theta_{1}\in(0,\theta^{\mathrm{cr}}),\) whilst \(\theta_{\mathrm{stdy}}^{\mathrm{d}}(1,|D\varphi_{2}(P_{0}^{1})|)\) is strictly decreasing with \(\theta_{1}\in(0,\theta^{+})\). Furthermore, Lemmas 2.3 and 2.5 show that \[\lim_{\theta_{1}\to 0^{+}}\hat{\theta}_{25}(\theta_{1})=0\ <\ \tfrac{\pi}{2}=\lim_{\theta_{1}\to 0^{+}}\theta_{\mathrm{stdy}}^{\mathrm{d}}(1,|D\varphi_{2}(P_{0}^{1})|)\,.\] Then it follows that there exists at most one solution \(\vartheta^{\mathrm{d}}\in(0,\theta^{+})\) of the equation: \[\hat{\theta}_{25}(\vartheta^{\mathrm{d}})=\theta_{\mathrm{stdy}}^{\mathrm{d}}(1,|D\varphi_{2}(P_{0}^{1}|_{\theta_{1}=\vartheta^{\mathrm{d}}})|)\,. \tag{2.31}\] We define the detachment angle \(\theta^{\mathrm{d}}\in(0,\theta^{+}]\) by \[\theta^{\mathrm{d}}\coloneqq\min\big\{\theta^{+},\,\inf\{\vartheta^{\mathrm{d}}\in(0,\theta^{+})\,:\,\vartheta^{\mathrm{d}}\text{ satisfies (2.31)}\}\big\}\,.\]

_The unit tangent vectors to \(S_{25}\) and \(S_{26}\) are given respectively by_ \[\boldsymbol{e}_{S_{25}}\coloneqq(\cos\theta_{25},\sin\theta_{25})\,,\qquad\boldsymbol{e}_{S_{26}}\coloneqq(\cos\theta_{26},\sin\theta_{26})\,. \tag{2.35}\] It follows from Proposition 2.6(i) and Definition 2.7 that \(\varphi_{j}=\varphi_{0}\) and \(\theta_{2j}=(6-j)\pi\) when \(\theta_{j-4}=0\) for \(j=5,6\).

**Lemma 2.8** (Further properties of states (5) and (6)).: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). Let \(\rho_{5}\) and \(\rho_{6}\) be the constant densities of states (5) and (6) given by Proposition 2.6, and let \(\theta_{25}\) and \(\theta_{26}\) be the reflected shock angles given by (2.34). Then, for \(j=5,6,\) there exists a constant \(\delta_{5,6}>0\) depending only on \((\gamma,v_{2})\) such that, for any \(\boldsymbol{\theta}\in\overline{\Theta},\)_ \[1+\delta_{5,6}\leq\rho_{j}\leq\delta_{5,6}^{-1}\,, \tag{2.36}\] \[\delta_{5,6}\leq|\theta_{2j}-\tfrac{\pi}{2}|\,. \tag{2.37}\] _Moreover, for any \(\bar{\theta}\in(0,\theta^{\mathrm{d}})\), there exists \(\delta_{5,6}^{(\bar{\theta})}>0\) depending only on \((\gamma,v_{2},\bar{\theta})\) such that_ \[\delta_{5,6}^{(\bar{\theta})}<|\theta_{2j}-(6-j)\pi|\qquad\text{whenever }\boldsymbol{\theta}\in\overline{\Theta}\cap\{\theta_{j-4}>\bar{\theta}\}\,. \tag{2.38}\]

Proof.: We prove the results only for state (5), since the proof for state (6) is similar. Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). By Proposition 2.6(i), we see that \(\rho_{5}\in C(\overline{\Theta})\), so that the uniform upper bound in (2.36) is clear. For the uniform lower bound, the entropy condition (2.26) gives \(\rho_{5}>1\).
Furthermore, the Rankine-Hugoniot conditions between states (2) and (5) imply that \[\ell(\rho_{5})=\big{|}(u_{2},v_{2})-(u_{5},v_{5})\big{|}=\sqrt{v_{2}^{2}+u_{5 }^{2}}\geq|v_{2}|>0\qquad\text{for any }\boldsymbol{\theta}\in\overline{\Theta}\,, \tag{2.39}\] with \(\ell(\cdot)\) given by (2.11). In particular, \(\ell(\cdot)\) is an increasing function on \([1,\infty)\) with \(\ell(1)=0\), and hence (2.39) implies the existence of a small constant \(\delta_{5,6}^{(1)}>0\), depending only on \((\gamma,v_{2})\), such that \(\rho_{5}>1+\delta_{5,6}^{(1)}\) for any \(\boldsymbol{\theta}\in\overline{\Theta}\). Next, we prove (2.37). From (2.34) and the geometric properties of the shock polar curve for 2-D steady potential flow, we have \[\tfrac{\pi}{2}+\hat{\theta}_{25}<\theta_{25}\leq\pi\qquad\text{for any } \boldsymbol{\theta}\in\overline{\Theta}\,, \tag{2.40}\] with \(\hat{\theta}_{25}\) given by (2.28). From Proposition 2.6(i) and (2.34), we see that \(\theta_{25}\to\pi^{-}\) as \(\theta_{1}\to 0^{+}\). By continuity, there exists a small constant \(\delta_{5,6}^{(2)}>0\) depending only on \((\gamma,v_{2})\) such that \(\theta_{25}>\frac{3\pi}{4}\) whenever \(\theta_{1}\in[0,\delta_{5,6}^{(2)})\). On the other hand, by (2.28) and Lemma 2.3, there exists a constant \(\delta_{5,6}^{(3)}>0\) depending only on \((\gamma,v_{2})\) such that \(\hat{\theta}_{25}>\delta_{5,6}^{(3)}\) whenever \(\theta_{1}\in[\delta_{5,6}^{(2)},\theta^{\mathrm{d}}]\). Together with (2.40), we conclude that \(\theta_{25}>\min\{\frac{3\pi}{4},\frac{\pi}{2}+\delta_{5,6}^{(3)}\}\) for all \(\theta_{1}\in[0,\theta^{\mathrm{d}}]\), which leads to (2.37). Finally, bound (2.38) follows directly from (2.40) and Lemma A.2, which states that \(\theta_{25}\) is a strictly decreasing function of \(\theta_{1}\in[0,\theta^{\mathrm{d}}]\). #### 2.2.2. Configurations for four-shock interactions The four-shock interaction configurations are split into three genuinely different cases, depending on parameters \(\boldsymbol{\theta}\in\Theta\) and the critical angles \(\theta^{\mathrm{s}}\) and \(\theta^{\mathrm{d}}\). We briefly describe the expected structure of each configuration and introduce some important notation. **Case I.**_Supersonic-supersonic reflection configuration._ When \(\boldsymbol{\theta}\in(0,\theta^{\mathrm{s}})^{2}\), the supersonic-supersonic reflection configuration is expected; see Fig. 2.3. In this case, both states (5) and (6) are pseudo-supersonic at the reflection points \(P_{0}^{1}\) and \(P_{0}^{2}\), respectively. **Case II.**_Supersonic-subsonic reflection configuration._ When \(\boldsymbol{\theta}\in[\theta^{\mathrm{s}},\theta^{\mathrm{d}})\times(0,\theta^ {\mathrm{s}})\), or \(\boldsymbol{\theta}\in(0,\theta^{\mathrm{s}})\times[\theta^{\mathrm{s}}, \theta^{\mathrm{d}})\), the supersonic-subsonic reflection configuration is expected; see Fig. 2.4 for the case: \(\boldsymbol{\theta}\in[\theta^{\mathrm{s}},\theta^{\mathrm{d}})\times(0, \theta^{\mathrm{s}})\). In this case, either state (5) is pseudo-sonic/pseudo-subsonic at \(P_{0}^{1}\) and state (6) is pseudo-supersonic at \(P_{0}^{2}\), or vice versa. **Case III.**_Subsonic-subsonic reflection configuration_. When \(\boldsymbol{\theta}\in[\theta^{\mathrm{s}},\theta^{\mathrm{d}})^{2}\), the subsonic-subsonic reflection configuration is expected; see Fig. 2.5. In this case, both states (5) and (6) are pseudo-subsonic at the reflection points \(P_{0}^{1}\) and \(P_{0}^{2}\), respectively. 
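In code, the case distinction is a direct comparison of each incident angle with \(\theta^{\rm s}\) and \(\theta^{\rm d}\) (a trivial sketch; the numerical values of \(\theta^{\rm s}\) and \(\theta^{\rm d}\) must come from Proposition 2.6 and are placeholders in the usage line below):

```python
def configuration(theta1, theta2, theta_s, theta_d):
    # Cases I-III of Section 2.2.2, by comparing each incident angle with theta_s
    def side(t):
        if not 0.0 <= t < theta_d:
            raise ValueError("incident angle outside [0, theta_d)")
        return "supersonic" if t < theta_s else "subsonic"
    return f"{side(theta1)}-{side(theta2)} reflection"

print(configuration(0.1, 0.4, theta_s=0.3, theta_d=0.6))  # 'supersonic-subsonic reflection'
```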
Figure 2.3. **Case I.** The supersonic-supersonic reflection configuration.

Figure 2.4. **Case II.** The supersonic-subsonic reflection configuration in the case: \(\boldsymbol{\theta}\in[\theta^{\rm s},\theta^{\rm d})\times(0,\theta^{\rm s})\), i.e., state (5) is pseudo-sonic/pseudo-subsonic at \(P_{0}^{1}\).

Figure 2.5. **Case III.** The subsonic-subsonic reflection configuration.

It follows from Remark 2.1(i) that the configurations of **Case II** and **Case III** are not possible unless \(v_{2}\in(v_{2}^{\rm s},0)\), for constant \(v_{2}^{\rm s}\) from Lemma A.1.

**Definition 2.9** (The sonic boundaries and intersection points).: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For any \(\boldsymbol{\theta}\in\overline{\Theta},\) let \(Q_{5}\) and \(Q_{6}\) be the orthogonal projections of \(O_{5}\) and \(O_{6}\) onto \(S_{25}\) and \(S_{26},\) respectively. When \(\theta_{1}\in(0,\theta^{\rm s}),\) the right sonic boundary and intersection points are given by_ \[\Gamma^{5}_{\rm sonic}\coloneqq\left\{\boldsymbol{\xi}\in\partial B_{c_{5}}(O_{5})\cap\overline{\mathbb{R}_{+}^{2}}\,:\,\xi^{Q_{5}}\leq\xi\leq\xi^{P_{0}^{1}},\,\varphi_{5}(\boldsymbol{\xi})\leq\varphi_{2}(\boldsymbol{\xi})\right\},\] \[\{P_{1}\}\coloneqq\Gamma^{5}_{\rm sonic}\cap L_{\rm sym}\,,\quad\{P_{2}\}\coloneqq\Gamma^{5}_{\rm sonic}\cap S_{25}\,,\] _whilst, for \(\theta_{1}\in[\theta^{\rm s},\theta^{\rm d}),\)_ \[\Gamma^{5}_{\rm sonic}\equiv\{P_{1}\}\equiv\{P_{2}\}\coloneqq\{P_{0}^{1}\}\,.\] _Similarly, when \(\theta_{2}\in(0,\theta^{\rm s}),\) the left sonic boundary and intersection points are given by_ \[\Gamma^{6}_{\rm sonic}\coloneqq\left\{\boldsymbol{\xi}\in\partial B_{c_{6}}(O_{6})\cap\overline{\mathbb{R}_{+}^{2}}\,:\,\xi^{P_{0}^{2}}\leq\xi\leq\xi^{Q_{6}},\,\varphi_{6}(\boldsymbol{\xi})\leq\varphi_{2}(\boldsymbol{\xi})\right\},\] \[\{P_{3}\}\coloneqq\Gamma^{6}_{\rm sonic}\cap S_{26}\,,\quad\{P_{4}\}\coloneqq\Gamma^{6}_{\rm sonic}\cap L_{\rm sym}\,,\] _whilst, for \(\theta_{2}\in[\theta^{\rm s},\theta^{\rm d}),\)_ \[\Gamma^{6}_{\rm sonic}\equiv\{P_{3}\}\equiv\{P_{4}\}\coloneqq\{P_{0}^{2}\}\,.\] _The open regions are given by_ \[\Omega_{5}\coloneqq\left\{\boldsymbol{\xi}\in\mathbb{R}_{+}^{2}\setminus\overline{B_{c_{5}}(O_{5})}\,:\,\varphi_{5}(\boldsymbol{\xi})<\varphi_{2}(\boldsymbol{\xi}),\,\xi>\xi^{P_{2}}\right\},\] \[\Omega_{6}\coloneqq\left\{\boldsymbol{\xi}\in\mathbb{R}_{+}^{2}\setminus\overline{B_{c_{6}}(O_{6})}\,:\,\varphi_{6}(\boldsymbol{\xi})<\varphi_{2}(\boldsymbol{\xi}),\,\xi<\xi^{P_{3}}\right\}.\] _Finally, define the symmetric boundary_ \[\Gamma_{\mathrm{sym}}\coloneqq\left\{(\xi,0)\in L_{\mathrm{sym}}\,:\,\xi^{P_{4}}<\xi<\xi^{P_{1}}\right\},\] _and the line segments_ \[S_{25}^{\mathrm{seg}}\coloneqq S_{25}\cap\left\{\xi^{P_{2}}\leq\xi\leq\xi^{P_{0}^{1}}\right\},\qquad S_{26}^{\mathrm{seg}}\coloneqq S_{26}\cap\left\{\xi^{P_{0}^{2}}\leq\xi\leq\xi^{P_{3}}\right\}.\]

Note that curves \(\Gamma_{\mathrm{sonic}}^{5}\), \(\Gamma_{\mathrm{sonic}}^{6}\), and \(\overline{\Gamma_{\mathrm{sym}}}\) as defined in Definition 2.9 above are fixed by \((\gamma,v_{2},\boldsymbol{\theta})\) and do not share any common points except their endpoints \(\{P_{1},P_{4}\}\). Also, note that \(\Omega_{j}=\varnothing\) if \(\theta_{j-4}\in[\theta^{\rm s},\theta^{\rm d})\) for \(j=5,6\).
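Definition 2.9 presupposes the state-(5) data \((\rho_{5},O_{5},c_{5})\) from Proposition 2.6, with \(O_{5}=(u_{5},0)\) and \(c_{5}=\rho_{5}^{(\gamma-1)/2}\). The sketch below (reusing `ell` and `rho_upstream` from the code in §2.1.1) solves the algebraic system (2.25)-(2.27) numerically by reducing it to one scalar equation for \(u_{5}\in(0,\xi^{P_{0}^{1}})\): continuity of \(\varphi\) at \(P_{0}^{1}\) gives \(k_{5}=-u_{5}\xi^{P_{0}^{1}}\), the slip condition gives \(v_{5}=0\), and the remaining normal Rankine-Hugoniot condition determines \(u_{5}\). This is an illustrative reduction under the assumption \(\gamma=1.4\), not the construction used in the proofs.

```python
def reflection_states(theta1, v2, gamma=1.4, n=4000):
    # states (5) solving (2.25)-(2.27): v5 = 0, k5 = -u5*xi0, unknown u5 in (0, xi0)
    # at the reflection point P_0^1 = (xi0, 0); weak root listed first (cf. (2.30))
    r1 = rho_upstream(theta1, v2, gamma)
    xi0 = ell(r1, gamma) * (1.0 / ((1.0 - r1) * np.sin(theta1)) - np.sin(theta1))
    B = 0.5 * v2 ** 2  # Bernoulli constant (2.5) with rho_2 = 1 and h(1) = 0

    def rho5(u5):  # Bernoulli relation (2.2) for the uniform state (5) at P_0^1
        arg = 1.0 + (gamma - 1.0) * (B + 0.5 * xi0 ** 2 - 0.5 * (u5 - xi0) ** 2)
        return arg ** (1.0 / (gamma - 1.0))

    def G(u5):  # normal Rankine-Hugoniot condition in (2.25), scaled by |nu_25|
        return rho5(u5) * u5 * (xi0 - u5) - (u5 * xi0 + v2 ** 2)

    us = np.linspace(1e-6, xi0 - 1e-6, n)
    roots = []
    for a, b in zip(us, us[1:]):
        if G(a) * G(b) < 0.0:
            for _ in range(100):
                m = 0.5 * (a + b)
                a, b = (m, b) if G(a) * G(m) > 0.0 else (a, m)
            u5 = 0.5 * (a + b)
            roots.append((rho5(u5), u5))
    return roots  # [] past detachment; [(rho_wk, u_wk), (rho_sg, u_sg)] otherwise

print(reflection_states(0.3, -0.5))  # two roots when a regular reflection exists
```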
For a suitable simple open curve segment \(\Gamma_{\mathrm{shock}}\) (yet to be determined) with endpoints \(P_{2}\) and \(P_{3}\), we use \(\Omega\) to denote the open bounded domain enclosed by the simple closed curve \(\Gamma_{\mathrm{shock}}\cup\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sym}}\cup\Gamma_{\mathrm{sonic}}^{6}\); see Figs. 2.3-2.5.

### 2.3. Admissible solutions and main theorems

We reformulate the asymptotic boundary value problem, Problem 2.4, into a free boundary problem. Our goal is to find a suitable curved free boundary \(\Gamma_{\mathrm{shock}}\) between points \(P_{2}\) and \(P_{3}\), and a solution \(\varphi\) of Problem 2.4 which satisfies (2.1)-(2.2) in the open region \(\Omega\) (as depicted in Figs. 2.3-2.5) and the corresponding boundary conditions on \(\partial\Omega\coloneqq\Gamma_{\mathrm{shock}}\cup\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sym}}\cup\Gamma_{\mathrm{sonic}}^{6}\). Furthermore, equation (2.1) should be elliptic in \(\Omega\), and the free boundary \(\Gamma_{\mathrm{shock}}\) should be a transonic shock that separates the hyperbolic and elliptic regions of the solution.

**Problem 2.10** (Free boundary problem).: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For any \(\boldsymbol{\theta}\in\overline{\Theta}\), find a free boundary \((\)curved reflected-diffracted shock\()\) \(\Gamma_{\mathrm{shock}}\) in region \(\mathbb{R}_{+}^{2}\cap\{\xi^{P_{3}}<\xi<\xi^{P_{2}}\}\) and a function \(\varphi\) defined in region \(\Omega\), as shown in Figs. 2.3-2.5, such that \(\varphi\) satisfies_

1. _Equation (2.1) in_ \(\Omega\) _so that (2.1) is elliptic inside_ \(\Omega\)_;_
2. \(\varphi=\varphi_{2}\) _and_ \(\rho D\varphi\cdot\boldsymbol{\nu}=\rho_{2}D\varphi_{2}\cdot\boldsymbol{\nu}\) _on the free boundary_ \(\Gamma_{\mathrm{shock}}\)_;_
3. \(\varphi=\varphi_{j}\) _and_ \(D\varphi=D\varphi_{j}\) _on_ \(\Gamma_{\mathrm{sonic}}^{j}\) _for_ \(j=5,6;\)
4. \(D\varphi\cdot(0,1)=0\) _on_ \(\Gamma_{\mathrm{sym}},\)

_where \(\boldsymbol{\nu}\) is the unit normal vector on \(\Gamma_{\mathrm{shock}}\) pointing into \(\Omega\)._

**Definition 2.11** (Admissible solutions).: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For any \(\boldsymbol{\theta}\in\overline{\Theta},\) a function \(\varphi\in W^{1,\infty}_{\mathrm{loc}}(\mathbb{R}_{+}^{2})\) is called an admissible solution to the four-shock interaction problem if \(\varphi\) is a solution of Problem 2.10 and satisfies the following properties_:
1. _There exists a relatively open curve segment_ \(\Gamma_{\mathrm{shock}}\) (_without self-intersection_) _with endpoints_ \(P_{2}=(\xi^{P_{2}},\eta^{P_{2}})\) _and_ \(P_{3}=(\xi^{P_{3}},\eta^{P_{3}})\) _given by_ Definition 2.9 _such that_ \(\Gamma_{\mathrm{shock}}\) _is_ \(C^{2}\) _in its relative interior,_ \(\Gamma_{\mathrm{shock}}^{\mathrm{ext}}\coloneqq S_{25}^{\mathrm{seg}}\cup\Gamma_{\mathrm{shock}}\cup S_{26}^{\mathrm{seg}}\) _is a_ \(C^{1}\)_-curve, including at_ \(P_{2}\) _and_ \(P_{3},\) _with_ \(S_{25}^{\mathrm{seg}}\) _and_ \(S_{26}^{\mathrm{seg}}\) _given in_ Definition 2.9_,_ \[\Gamma_{\mathrm{shock}}\subseteq\left\{\boldsymbol{\xi}\in\mathbb{R}_{+}^{2}\setminus\overline{B_{c_{2}}(O_{2})}\,:\,\xi^{P_{3}}<\xi<\xi^{P_{2}}\right\},\] _with_ \(\partial B_{c_{2}}(O_{2})\) _as the sonic circle of state (2), and curves_ \(\overline{\Gamma_{\mathrm{shock}}}\) _and_ \(\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sym}}\cup\Gamma_{\mathrm{sonic}}^{6}\) _share no common points except their endpoints_ \(\{P_{2},P_{3}\}\) _so that_ \(\Gamma_{\mathrm{shock}}\cup\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sym}}\cup\Gamma_{\mathrm{sonic}}^{6}\) _is a closed curve without self-intersection._
2. _For the open bounded domain_ \(\Omega\subseteq\mathbb{R}_{+}^{2}\) _enclosed by_ \(\Gamma_{\mathrm{shock}}\cup\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sym}}\cup\Gamma_{\mathrm{sonic}}^{6},\) _solution_ \(\varphi\) _is in_ \(C^{3}(\Omega)\cap C^{2}\big{(}\overline{\Omega}\setminus(\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sonic}}^{6})\big{)}\cap C^{1}(\overline{\Omega})\) _and satisfies_ \(\varphi\in C_{\mathrm{loc}}^{0,1}(\mathbb{R}_{+}^{2})\cap C_{\mathrm{loc}}^{1}(\overline{\mathbb{R}_{+}^{2}}\setminus\overline{\Gamma_{\mathrm{shock}}^{\mathrm{ext}}\cup S_{12}\cup S_{32}})\) _and_ \[\varphi=\begin{cases}\max\{\varphi_{5},\varphi_{6}\}&\text{ in }\,\Omega_{5}\cup\Omega_{6}\,,\\ \bar{\varphi}&\text{ in }\,\mathbb{R}_{+}^{2}\setminus(\Omega\cup\Omega_{5}\cup\Omega_{6})\,,\end{cases} \tag{2.41}\] _where_ \(\Omega_{5}\) _and_ \(\Omega_{6}\) _are given in_ Definition 2.9_, and_ \(\bar{\varphi}\) _is given by (2.20)._
3. _Equation (2.1) is strictly elliptic in_ \(\overline{\Omega}\setminus(\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sonic}}^{6}),\) _i.e., solution_ \(\varphi\) _satisfies_ \[|D\varphi(\boldsymbol{\xi})|<c(|D\varphi(\boldsymbol{\xi})|,\varphi(\boldsymbol{\xi}))\qquad\text{for all }\boldsymbol{\xi}\in\overline{\Omega}\setminus(\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sonic}}^{6})\,.\]
4. _In_ \(\Omega,\) _solution_ \(\varphi\) _satisfies_ \[\max\{\varphi_{5},\varphi_{6}\}\leq\varphi\leq\varphi_{2}\,, \tag{2.42}\] \[D(\varphi_{2}-\varphi)\cdot(\cos\theta,\sin\theta)\leq 0\qquad\text{ for all }\theta\in[\theta_{26},\theta_{25}]\,. \tag{2.43}\]

**Remark 2.2**.: _From the definition of admissible solutions, we have_

1. _The requirement that_ \(\varphi\) _is a solution of_ Problem 2.10 _includes the condition that_ \(\varphi=\varphi_{j}\) _and_ \(D\varphi=D\varphi_{j}\) _on_ \(\Gamma^{j}_{\mathrm{sonic}}\) _for_ \(j=5,6,\) _which becomes a one-point boundary condition in the case_ \(\theta_{j-4}\in(\theta^{s},\theta^{d}],\) _according to the definition of_ \(\Gamma^{j}_{\mathrm{sonic}}\) _given by_ Definition 2.9_._
2. _Let_ \(\varphi\) _be an admissible solution in the sense of_ Definition 2.11_, and let_ \(\boldsymbol{\nu}\) _be the unit normal vector on_ \(\Gamma_{\mathrm{shock}}\) _interior to_ \(\Omega\)_._
Similarly to [2, Lemma 2.26], it can be shown that \(\Gamma_{\mathrm{shock}}\) is a transonic shock, since \(\varphi\) satisfies \[D\varphi_{2}\cdot\boldsymbol{\nu}>D\varphi\cdot\boldsymbol{\nu}>0\,,\quad 0<\frac{D\varphi\cdot\boldsymbol{\nu}}{c(|D\varphi|,\varphi)}<1<D\varphi_{2}\cdot\boldsymbol{\nu}\qquad\text{ on }\Gamma_{\mathrm{shock}}\,.\]

3. _It follows from_ Proposition 2.6(i) _that_ \(\varphi_{j}\) _depends continuously on_ \(\theta_{j-4}\in[0,\theta^{d}]\) _for_ \(j=5,6.\) _Moreover, for any_ \(r>0,\) \[\lim_{\theta_{j-4}\to 0^{+}}\|\varphi_{j}-\varphi_{0}\|_{C^{0,1}(B_{r}(\boldsymbol{0})\cap\mathbb{R}^{2}_{+})}=0\qquad\text{ for }j=5,6\,,\] _i.e., the weak reflection state_ \((j)\) _coincides with the normal reflection state_ \((0)\) _when_ \(\theta_{j-4}=0.\) _One can verify directly that the normal reflection solution_ \(\varphi_{\mathrm{norm}}\in C^{0,1}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\) _as defined in (2.24) is an admissible solution corresponding to_ \(\boldsymbol{\theta}=\boldsymbol{0}\) _in the sense of_ Definition 2.11_._

The first main theorem of this paper provides the global existence of such admissible solutions in the sense of Definition 2.11. The proof is given in §4.4.

**Theorem 2.1** (Existence of admissible solutions).: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\mathrm{min}},0)\). For any \(\boldsymbol{\theta}\in\Theta,\) there exists an admissible solution \(\varphi\) of the four-shock interaction problem corresponding to parameters \(\boldsymbol{\theta}\) in the sense of Definition 2.11. Moreover, the global solution \(\varphi\) converges to the normal reflection solution \(\varphi_{\mathrm{norm}}\) in \(W^{1,1}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\) as \(\boldsymbol{\theta}\in\Theta\) converges to \(\boldsymbol{0},\) where \(\varphi_{\mathrm{norm}}\) is given by (2.24)._

The admissible solutions have additional regularity properties, given in Theorem 2.2 below. Moreover, the transonic shocks, as free boundaries of admissible solutions, satisfy additional convexity properties, given in Theorem 2.3 below. The proofs of Theorems 2.2-2.3 are given in §5.1-§5.2, respectively.

**Theorem 2.2** (Regularity of admissible solutions).: _Fix \(\gamma\geq 1,\) \(v_{2}\in(v_{\mathrm{min}},0),\) and \(\boldsymbol{\theta}\in\Theta\). Let \(\varphi\) be an admissible solution corresponding to parameters \(\boldsymbol{\theta}\) in the sense of Definition 2.11, with transonic shock \(\Gamma_{\mathrm{shock}}\). Then the following properties hold_:

1. \(\Gamma_{\mathrm{shock}}\) _is_ \(C^{\infty}\) _in its relative interior, and_:

**Case I.** _When_ \(\boldsymbol{\theta}\in[0,\theta^{s})^{2},\) \(\varphi\in C^{\infty}(\overline{\Omega}\setminus(\Gamma^{5}_{\mathrm{sonic}}\cup\Gamma^{6}_{\mathrm{sonic}}))\cap C^{1,1}(\overline{\Omega}),\) _and_ \(\overline{S^{\mathrm{seg}}_{25}\cup\Gamma_{\mathrm{shock}}\cup S^{\mathrm{seg}}_{26}}\) _is a_ \(C^{2,\alpha}\)_-curve for any_ \(\alpha\in(0,1),\) _including at points_ \(P_{2}\) _and_ \(P_{3}.\)

**Case II.** _When_ \(\boldsymbol{\theta}\in[\theta^{s},\theta^{d})\times[0,\theta^{s}),\) \(\varphi\in C^{\infty}(\overline{\Omega}\setminus(\Gamma^{6}_{\mathrm{sonic}}\cup\{P_{1}\}))\cap C^{1,1}(\overline{\Omega}\setminus\{P_{1}\})\cap C^{1,\bar{\alpha}}(\overline{\Omega}),\) _and_ \(\overline{S^{\mathrm{seg}}_{26}\cup\Gamma_{\mathrm{shock}}}\) _is a_ \(C^{1,\bar{\alpha}}\)_-curve for some_ \(\bar{\alpha}\in(0,1)\)_._
_Furthermore,_ \(\overline{S^{\mathrm{seg}}_{26}\cup\Gamma_{\mathrm{shock}}}\setminus\{P_{1}\}\) _is a_ \(C^{2,\alpha}\)_-curve for any_ \(\alpha\in(0,1),\) _including at_ \(P_{3}.\) _When_ \(\boldsymbol{\theta}\in[0,\theta^{s})\times[\theta^{s},\theta^{d}),\) \(\varphi\in C^{\infty}(\overline{\Omega}\setminus(\Gamma^{5}_{\mathrm{sonic}}\cup\{P_{4}\}))\cap C^{1,1}(\overline{\Omega}\setminus\{P_{4}\})\cap C^{1,\bar{\alpha}}(\overline{\Omega}),\) _and_ \(\overline{S^{\mathrm{seg}}_{25}\cup\Gamma_{\mathrm{shock}}}\) _is a_ \(C^{1,\bar{\alpha}}\)_-curve for some_ \(\bar{\alpha}\in(0,1)\)_. Furthermore,_ \(\overline{S^{\mathrm{seg}}_{25}\cup\Gamma_{\mathrm{shock}}}\setminus\{P_{4}\}\) _is a_ \(C^{2,\alpha}\)_-curve for any_ \(\alpha\in(0,1),\) _including at_ \(P_{2}.\) **Case III.** _When_ \(\boldsymbol{\theta}\in[\theta^{s},\theta^{d})^{2},\) \(\varphi\in C^{\infty}(\overline{\Omega}\setminus\{P_{1},P_{4}\})\cap C^{1,\bar{\alpha}}(\overline{\Omega}),\) _and_ \(\overline{\Gamma_{\mathrm{shock}}}\) _is a_ \(C^{1,\bar{\alpha}}\)_-curve for some_ \(\bar{\alpha}\in(0,1)\)_. Furthermore,_ \(\overline{\Gamma_{\mathrm{shock}}}\setminus\{P_{1},P_{4}\}\) _is a_ \(C^{2,\alpha}\)_-curve for any_ \(\alpha\in(0,1)\)_._ 2. _Let_ \(\mathcal{U}\coloneqq\{\boldsymbol{\xi}\in\mathbb{R}^{2}_{+}\,:\,\max\{\varphi_{5}(\boldsymbol{\xi}),\varphi_{6}(\boldsymbol{\xi})\}<\varphi_{2}(\boldsymbol{\xi})\}\)_. For any constant_ \(\sigma>0\) _and_ \(j=5,6,\) _define domain_ \(\mathcal{U}^{j}_{\sigma}\) _by_ \[\mathcal{U}^{j}_{\sigma}\coloneqq\begin{cases}\mathcal{U}\cap\big{\{}\boldsymbol{\xi}\in\mathbb{R}^{2}_{+}\,:\,\mathrm{dist}(\boldsymbol{\xi},\Gamma^{j}_{\mathrm{sonic}})<\sigma\big{\}}\cap B_{c_{j}}(O_{j})&\text{ if }\theta_{j-4}\in[0,\theta^{s})\,,\\ \varnothing&\text{ if }\theta_{j-4}\in[\theta^{s},\theta^{d})\,.\end{cases}\] _Fix any point_ \(\boldsymbol{\xi}_{0}\in(\Gamma^{5}_{\mathrm{sonic}}\cup\Gamma^{6}_{\mathrm{sonic}})\setminus\{P_{2},P_{3}\}\) _and denote_ \(d\coloneqq\mathrm{dist}(\boldsymbol{\xi}_{0},\Gamma_{\mathrm{shock}})\)_. Then, for a sufficiently small constant_ \(\varepsilon_{0}>0\) _and for any_ \(\alpha\in(0,1),\) _there exists a constant_ \(K<\infty\) _depending on_ \((\gamma,v_{2},\varepsilon_{0},\alpha,d)\) _and_ \(\|\varphi\|_{C^{1,1}(\Omega\cap(\mathcal{U}_{\varepsilon_{0}}^{5}\cup\mathcal{U}_{\varepsilon_{0}}^{6}))}\) _such that_ \[\|\varphi\|_{2,\alpha,\overline{\Omega\cap B_{d/2}(\boldsymbol{\xi}_{0})\cap(\mathcal{U}_{\varepsilon_{0}/2}^{5}\cup\mathcal{U}_{\varepsilon_{0}/2}^{6})}}\leq K\,;\] 3. _For any_ \(\boldsymbol{\xi}_{0}\in(\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sonic}}^{6})\setminus\{P_{2},P_{3}\},\) \[\lim_{\boldsymbol{\xi}\to\boldsymbol{\xi}_{0},\,\boldsymbol{\xi}\in\Omega}(D_{rr}\varphi-D_{rr}\max\{\varphi_{5},\varphi_{6}\})(\boldsymbol{\xi})=\frac{1}{\gamma+1}\,,\] _where_ \(r=|\boldsymbol{\xi}-O_{j}|\) _for_ \(\boldsymbol{\xi}\) _near_ \(\Gamma_{\mathrm{sonic}}^{j}\) _for_ \(j=5,6;\) 4. _For_ \(i=1,2,\) _whenever_ \(\theta_{i}\in[0,\theta^{s}),\) \(\lim_{\boldsymbol{\xi}\to P_{i+1},\,\boldsymbol{\xi}\in\Omega}D^{2}\varphi(\boldsymbol{\xi})\) _does not exist._ **Theorem 2.3** (Convexity of transonic shocks).: _Fix \(\boldsymbol{\theta}\in\Theta\)._
For any admissible solution of the four-shock interaction problem in the sense of Definition 2.11, the transonic shock \(\Gamma_{\mathrm{shock}}\) is uniformly convex on closed subsets of its relative interior in the sense described in Theorems B.4-B.5. Furthermore, for any weak solution of (2.1)-(2.2) in the sense of Definition 2.1, satisfying also all the properties of Definition 2.11 except (2.43), the transonic shock \(\Gamma_{\mathrm{shock}}\) is a strictly convex graph if and only if condition (2.43) holds._ ## 3. Uniform Estimates for Admissible Solutions To establish the first main theorem, Theorem 2.1, we apply the Leray-Schauder degree theorem to a suitable iteration map, similar to [2, 10]. To achieve this, in this section, we make several essential uniform estimates of the admissible solutions. In SS4, we construct a suitable iteration map between two function spaces with weighted Holder norms, where any fixed point of the iteration map is an admissible solution in the sense of Definition 2.11. The main results we establish in this section are the following: 1. The strict directional monotonicity of \(\varphi_{2}-\varphi\), and the directional monotonicity of \(\varphi-\varphi_{5}\) and \(\varphi-\varphi_{6}\); 2. The uniform positive lower bound of \(\mathrm{dist}(\Gamma_{\mathrm{shock}},\partial B_{1}(O_{2}))\); 3. The uniform estimate of the ellipticity of equation (2.1) in \(\Omega\); 4. The uniform estimates of admissible solutions in some suitable weighted Holder norms. Our problem shares many similarities with the Prandtl-Meyer shock reflection problem [2], and the von Neumann shock reflection-diffraction problem [10]. Many of the following proofs are similar to those within [2, 10], but have been adapted to the current setting with careful estimates. ### Monotonicity and ellipticity #### 3.1.1. Directional monotonicity and uniform bounds Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). Let \(\varphi\) be an admissible solution corresponding to parameters \(\boldsymbol{\theta}\in\Theta\) in the sense of Definition 2.11, and let the constant states \(\varphi_{2}\) and \(\varphi_{j}\), \(j=5,6\), be given by (2.17) and (2.33) respectively. We aim to establish the directional monotonicity properties for \(\phi\) that satisfies \[(c^{2}-\varphi_{\xi}^{2})\phi_{\xi\xi}-2\varphi_{\xi}\varphi_{\eta}\phi_{\xi \eta}+(c^{2}-\varphi_{\eta}^{2})\phi_{\eta\eta}=0\qquad\mbox{ in }\Omega\,, \tag{3.1}\] where \(\phi\) can be taken as either \(\varphi_{2}-\varphi\) or \(\varphi-\varphi_{j}\), \(j=5,6\), and \[c^{2}(|D\varphi|,\varphi)=1+(\gamma-1)\big{(}B-\frac{1}{2}|D\varphi|^{2}- \varphi\big{)}\,, \tag{3.2}\] where \(B\) is the Bernoulli constant given by (2.5). **Lemma 3.1**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For \(\boldsymbol{\theta}\in\Theta,\) let \(\varphi\) be an admissible solution in the sense of Definition 2.11 corresponding to parameters \(\boldsymbol{\theta}\). Then \(\varphi\) satisfies that, for \(j=5,6\),_ 1. \(\partial_{\boldsymbol{e}}(\varphi-\varphi_{j})\) _is not constant in_ \(\Omega\) _for any unit vector_ \(\boldsymbol{e}\in\mathbb{R}^{2};\)__ 2. _For vector_ \(\boldsymbol{e}_{S_{2j}}\) _given by (_2.35_),_ \[\partial_{\boldsymbol{e}_{S_{2j}}}(\varphi_{2}-\varphi)<0\qquad\mbox{in } \overline{\Omega}\setminus\Gamma_{\mathrm{sonic}}^{j}\,;\] 3. \(\partial_{\boldsymbol{e}_{S_{2j}}}(\varphi-\varphi_{j})\,,\,-\partial_{\eta}( \varphi-\varphi_{j})\geq 0\quad\mbox{in }\overline{\Omega}\,.\)__ Proof.: We first prove the claim. 
The proof of (i) is given by a slight modification of [2, Lemma 3.1]. Indeed, for \(\phi\coloneqq\varphi-\varphi_{5}\), if \(\partial_{\mathbf{e}}\phi\) is constant in \(\Omega\), then \(\partial_{\mathbf{e}}\phi=0\) in \(\Omega\) since \(D\phi=\mathbf{0}\) at \(P_{1}\).
Furthermore, \(\partial_{\mathbf{e}}\phi=\mathbf{e}\cdot(u_{6}-u_{5},0)\) at \(P_{4}\) so that \(\mathbf{e}\) must be parallel to \((0,1)\), since \(u_{5}-u_{6}>0\) when \(\mathbf{\theta}\neq\mathbf{0}\). Thus, \(\partial_{\eta}\phi\equiv 0\) in \(\Omega\) and hence \(\partial_{\xi\eta}\phi=\partial_{\eta\eta}\phi\equiv 0\) in \(\Omega\). By the strict ellipticity of equation (3.1) in \(\Omega\), \(\partial_{\xi\xi}\phi\equiv 0\) in \(\Omega\), which implies that there exist constants \((u,v,k)\) such that \(\phi=u\xi+v\eta+k\) in \(\Omega\). We note that \(v=0\) since \(\partial_{\eta}\phi\equiv 0\). Applying the boundary conditions: \(\phi=0\) and \(D\phi=\mathbf{0}\) at \(P_{1}\), we find that \(u=k=0\), so that \(\phi\equiv 0\) in \(\Omega\). However, \(D\phi(P_{4})=D\varphi_{6}(P_{4})-D\varphi_{5}(P_{4})=(u_{6}-u_{5},0)\neq\mathbf{0}\), which is a contradiction. The proof is similar for \(\phi=\varphi-\varphi_{6}\). This concludes the proof of (i). The proofs of (ii) and (iii) are similar to [2, Lemma 3.2] and [2, Lemma 3.6], respectively. For any \(\mathbf{\theta}\in\overline{\Theta}\), define the following fan-shaped set by \[\operatorname{Cone}(\mathbf{e}_{S_{25}},\mathbf{e}_{S_{26}})\coloneqq\left\{r(\cos \theta,\sin\theta)\,:\,\theta_{26}\leq\theta\leq\theta_{25},\,r\geq 0\right\}.\] Let \(\operatorname{Cone}^{0}(\mathbf{e}_{S_{25}},\mathbf{e}_{S_{26}})\) be the interior of \(\operatorname{Cone}(\mathbf{e}_{S_{25}},\mathbf{e}_{S_{26}})\). It follows from Lemma 3.1 that, if \(\varphi\) is an admissible solution corresponding to \(\mathbf{\theta}\in\Theta\), then \(\varphi\) satisfies \[\partial_{\mathbf{e}}(\varphi_{2}-\varphi)<0\qquad\text{in }\overline{\Omega}\,\text{ for any }\mathbf{e}\in\operatorname{Cone}^{0}(\mathbf{e}_{S_{25}},\mathbf{e}_{S_{26}})\,, \tag{3.3}\] and \(-\mathbf{\nu}(P)\in\left\{(\cos\theta,\sin\theta)\,:\,\theta_{25}-\frac{\pi}{2}< \theta<\theta_{26}+\frac{\pi}{2}\right\}\) for all \(P\in\Gamma_{\mathrm{shock}}\), where \(\mathbf{\nu}\) is the unit normal vector on \(\Gamma_{\mathrm{shock}}\) pointing into the interior of \(\Omega\). **Lemma 3.2**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For \(\mathbf{\theta}\in\Theta,\) let \(\varphi\) be an admissible solution in the sense of Definition 2.11 with parameters \(\mathbf{\theta}\). Then there exists a \(C^{1}\)-function \(f_{\mathrm{sh}}\) such that_ \[\Gamma_{\mathrm{shock}}=\left\{\mathbf{\xi}\in\mathbb{R}_{+}^{2}\,:\,\eta=f_{ \mathrm{sh}}(\xi),\,\xi^{P_{3}}<\xi<\xi^{P_{2}}\right\},\] _and \(f_{\mathrm{sh}}^{\prime}(\xi)\in(\tan\theta_{25},\tan\theta_{26})\) for \(\xi\in(\xi^{P_{3}},\xi^{P_{2}})\) with \(f_{\mathrm{sh}}^{\prime}(\xi^{P_{j-3}})=\tan\theta_{2j},\) for \(j=5,6\)._ Proof.: By Lemma 2.8, \(\mathbf{e}_{\eta}=(0,1)\in\operatorname{Cone}^{0}(\mathbf{e}_{S_{25}},\mathbf{e}_{S_{26}})\). From the strict directional monotonicity (3.3), we see that \(\partial_{\mathbf{e}_{\eta}}(\varphi_{2}-\varphi)<0\) on \(\overline{\Gamma_{\mathrm{shock}}}\), which implies the existence of a \(C^{1}\)-function \(f_{\mathrm{sh}}\) that solves the implicit relation \(\varphi_{2}(\xi,f_{\mathrm{sh}}(\xi))=\varphi(\xi,f_{\mathrm{sh}}(\xi))\). 
Taking the derivative, we have \[f_{\mathrm{sh}}^{\prime}(\xi)=-\frac{\partial_{\boldsymbol{e}_{\xi}}(\varphi_{2}-\varphi)(\xi,f_{\mathrm{sh}}(\xi))}{\partial_{\boldsymbol{e}_{\eta}}(\varphi_{2}-\varphi)(\xi,f_{\mathrm{sh}}(\xi))}\,.\] From Definition 2.11(i), we obtain that \(f_{\mathrm{sh}}^{\prime}(\xi^{P_{2}})=\tan\theta_{25}\) and \(f_{\mathrm{sh}}^{\prime}(\xi^{P_{3}})=\tan\theta_{26}\). Denoting by \(\boldsymbol{\nu}(P)\) the unit normal vector on \(\Gamma_{\mathrm{shock}}\) at a point \(P\in\Gamma_{\mathrm{shock}}\), interior to \(\Omega\), we have \[\boldsymbol{\nu}(P)=\frac{D(\varphi_{2}-\varphi)(P)}{|D(\varphi_{2}-\varphi)(P)|}=\frac{\left(f_{\mathrm{sh}}^{\prime}(\xi),-1\right)}{\sqrt{1+\left(f_{\mathrm{sh}}^{\prime}(\xi)\right)^{2}}}\,.\] Combining the above with property (ii) in Lemma 3.1, for any \((a_{1},a_{2})\in[0,\infty)^{2}\), \((a_{1},a_{2})\neq(0,0)\), \[a_{1}\big{(}f_{\mathrm{sh}}^{\prime}(\xi)\cos\theta_{25}-\sin\theta_{25}\big{)}+a_{2}\big{(}f_{\mathrm{sh}}^{\prime}(\xi)\cos\theta_{26}-\sin\theta_{26}\big{)}\] \[\quad=\sqrt{1+\left(f_{\mathrm{sh}}^{\prime}(\xi)\right)^{2}}\,\boldsymbol{\nu}(P)\cdot(a_{1}\boldsymbol{e}_{S_{25}}+a_{2}\boldsymbol{e}_{S_{26}})<0\qquad\text{ for any }\xi\in(\xi^{P_{3}},\xi^{P_{2}})\,.\] Choosing \((a_{1},a_{2})\) as \((1,0)\) and \((0,1)\) in turn, we obtain that \(f_{\mathrm{sh}}^{\prime}(\xi)>\tan\theta_{25}\) and \(f_{\mathrm{sh}}^{\prime}(\xi)<\tan\theta_{26}\), respectively, where we have used \(\theta_{25}\in(\frac{\pi}{2},\pi]\) and \(\theta_{26}\in[0,\frac{\pi}{2})\). **Lemma 3.3**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For any \(\boldsymbol{\theta}\in\Theta,\) let \(\varphi\) be an admissible solution corresponding to parameters \(\boldsymbol{\theta}\) in the sense of Definition 2.11, and let \(\Omega\) be the pseudo-subsonic region of \(\varphi\). Then there exists a constant \(C_{\mathrm{ub}}>0\) depending only on \((\gamma,v_{2})\) such that the following properties hold:_ \[\overline{\Omega}\subseteq B_{C_{\mathrm{ub}}}(\boldsymbol{0})\,, \tag{3.4}\] \[\|\varphi\|_{C^{0,1}(\overline{\Omega})}\leq C_{\mathrm{ub}}\,, \tag{3.5}\] \[\rho^{*}(\gamma)\leq\rho\leq C_{\mathrm{ub}}\quad\text{in }\Omega\,,\qquad 1<\rho\leq C_{\mathrm{ub}}\quad\text{on }\Gamma_{\mathrm{shock}}\,, \tag{3.6}\] _where \(\rho^{*}(\gamma)\coloneqq(\frac{2}{\gamma+1})^{\frac{1}{\gamma-1}}\) when \(\gamma>1,\) and \(\rho^{*}(1)\coloneqq e^{-\frac{1}{2}}=\lim_{\gamma\to 1+}\rho^{*}(\gamma)\)._ Proof.: We divide the proof into three steps. **1**. To prove (3.4), we use the expression of \(\Gamma_{\rm shock}\) from Lemma 3.2. In particular, it follows from Definition 2.11(i) and Lemma 3.2 that \(0<f_{\rm sh}(\xi)\leq\min\{\xi\tan\theta_{25}+a_{25},\xi\tan\theta_{26}+a_{26}\}\) whenever \(\xi^{P_{3}}<\xi<\xi^{P_{2}}\), which implies that \[\Omega\subseteq\big{\{}\boldsymbol{\xi}\,:\,u_{6}-c_{6}<\xi<u_{5}+c_{5},\,0<\eta\leq\max\{a_{25},a_{26}\}\big{\}}\,.\] From (2.16), Proposition 2.6(i), and Definition 2.7, we know that \((\rho_{5},u_{5},a_{25})\) depend continuously on \(\theta_{1}\in[0,\theta^{\rm d}]\), while \((\rho_{6},u_{6},a_{26})\) depend continuously on \(\theta_{2}\in[0,\theta^{\rm d}]\), from which (3.4) follows directly. **2**. From Definition 2.11(iv), we see that \(\inf_{\overline{\Omega}}\max\{\varphi_{5},\varphi_{6}\}\leq\varphi\leq\sup_{\overline{\Omega}}\varphi_{2}\).
By (3.4) and the expressions of \((\varphi_{2},\varphi_{5},\varphi_{6})\) given in (2.17) and (2.33), there exists a constant \(C_{1}>0\) depending only on \((\gamma,v_{2})\) such that \(-C_{1}\leq\min_{\overline{\Omega}}\max\{\varphi_{5},\varphi_{6}\}\leq\max_{\overline{\Omega}}\varphi_{2}\leq C_{1}\), which implies that \(\max_{\overline{\Omega}}|\varphi|\leq C_{1}\). Then the uniform bound (3.5) follows from \[|D\varphi|^{2}<c^{2}(|D\varphi|,\varphi)=\frac{2}{\gamma+1}\big{(}1+(\gamma-1)(B-\varphi)\big{)}\,,\] where we have used Definition 2.11(iii) and (3.2). **3**. The upper bounds in (3.6) follow directly from the expression of density (2.2) along with (3.5). For the lower bound in \(\Omega\), we combine the Bernoulli law (2.2) with Definition 2.11(iv) to obtain \[\frac{1}{2}|D\varphi|^{2}+h(\rho)=\frac{1}{2}|D\varphi_{2}|^{2}+(\varphi_{2}-\varphi)+h(\rho_{2})\geq\frac{1}{2}|D\varphi_{2}|^{2}\geq 0\,.\] Then, from Definition 2.11(iii), we have \[\frac{1}{\gamma-1}\big{(}\frac{1}{2}(\gamma+1)\rho^{\gamma-1}-1\big{)}=\frac{1}{2}c^{2}+h(\rho)\geq\frac{1}{2}|D\varphi|^{2}+h(\rho)\geq 0\,,\] so that \(\rho^{\gamma-1}\geq\frac{2}{\gamma+1}\), i.e., \(\rho\geq\rho^{*}(\gamma)\) in \(\Omega\) when \(\gamma>1\); when \(\gamma=1\), the same argument with \(c\equiv 1\) and \(h(\rho)=\ln\rho\) gives \(\frac{1}{2}+\ln\rho\geq 0\), i.e., \(\rho\geq e^{-\frac{1}{2}}=\rho^{*}(1)\). For the lower bound of \(\rho\) on \(\Gamma_{\rm shock}\), we combine Problem 2.10(ii) and Remark 2.2(iii) to obtain \[\rho=\frac{D\varphi_{2}\cdot\boldsymbol{\nu}}{D\varphi\cdot\boldsymbol{\nu}}>1\qquad\text{on }\Gamma_{\rm shock}\,.\qed\] #### 3.1.2. Uniform positive lower bound of \(\operatorname{dist}(\Gamma_{\rm shock},\partial B_{1}(O_{2}))\) We now obtain a uniform estimate for the positive lower bound of \(\operatorname{dist}(\Gamma_{\rm shock},\partial B_{1}(O_{2}))\) for any admissible solution. This allows us to make the uniform estimate for the ellipticity of equation (2.1) within the pseudo-subsonic domain of admissible solutions. **Proposition 3.4**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). There exists a constant \(C_{\rm sh}>0\) depending only on \((\gamma,v_{2})\) such that any admissible solution corresponding to parameters \(\boldsymbol{\theta}\in\Theta\) satisfies_ \[\operatorname{dist}\bigl{(}\Gamma_{\rm shock},\,\partial B_{1}(O_{2})\bigr{)}\geq C_{\rm sh}^{-1}\,. \tag{3.7}\] Proof.: The proof of (3.7) follows the same argument as for [2, Proposition 3.7] except in the case \(v_{2}\leq-1\). In the following, we focus on the case \(v_{2}\leq-1\) and give the proof in three steps. **1**. We first rewrite equation (2.1) as \[\operatorname{div}\!\mathcal{A}(D\varphi,\varphi)+\mathcal{B}(D\varphi,\varphi)=0\,,\] where \(\mathcal{A}(\boldsymbol{p},z)\coloneqq\rho(|\boldsymbol{p}|,z)\boldsymbol{p}\) and \(\mathcal{B}(\boldsymbol{p},z)\coloneqq 2\rho(|\boldsymbol{p}|,z)\) with \(\boldsymbol{p}=(p_{1},p_{2})\in\mathbb{R}^{2}\), \(z\in\mathbb{R}\), and \[\rho(|\boldsymbol{p}|,z)=\begin{cases}\big{(}1+(\gamma-1)(\frac{1}{2}v_{2}^{2}-\frac{1}{2}|\boldsymbol{p}|^{2}-z)\big{)}^{\frac{1}{\gamma-1}}&\quad\text{if }\gamma>1\,,\\ \exp(\frac{1}{2}v_{2}^{2}-\frac{1}{2}|\boldsymbol{p}|^{2}-z)&\quad\text{if }\gamma=1\,.\end{cases} \tag{3.8}\] We denote the sonic speed as \(c(|\boldsymbol{p}|,z)\coloneqq\rho^{(\gamma-1)/2}\) for \(\gamma\geq 1\). For a constant \(R>1\), define \[\mathcal{K}_{R}\coloneqq\Big{\{}(\boldsymbol{p},z)\in\mathbb{R}^{2}\times\mathbb{R}\,:\,|\boldsymbol{p}|+|z|\leq R,\,\rho(|\boldsymbol{p}|,z)\geq R^{-1},\,\frac{|\boldsymbol{p}|^{2}}{c^{2}(|\boldsymbol{p}|,z)}\leq 1-R^{-1}\Big{\}}\,.
\tag{3.9}\] For each \(R>1,\) there exists a constant \(\lambda_{R}>0\) depending only on \((\gamma,v_{2},R)\) such that, for any \(\boldsymbol{\kappa}=(\kappa_{1},\kappa_{2})\in\mathbb{R}^{2},\) \[\sum_{i,j=1}^{2}\partial_{p_{j}}\mathcal{A}_{i}(\boldsymbol{p},z)\kappa_{i} \kappa_{j}\geq\lambda_{R}|\boldsymbol{\kappa}|^{2}\qquad\text{for any }( \boldsymbol{p},z)\in\mathcal{K}_{R}\,.\] The next two lemmas follow directly from [2, Lemmas 3.8-3.9]. **Lemma 3.5**.: _For \(R>2,\) let \(\mathcal{K}_{R}\) be given by (3.9). Then there exist functions \((\tilde{\mathcal{A}},\tilde{\mathcal{B}})(\boldsymbol{p},z)\) in \(\mathbb{R}^{2}\times\mathbb{R}\) satisfying the following properties\(:\)_ 1. _For any_ \((\boldsymbol{p}_{0},z_{0})\in\mathcal{K}_{R},\ (\tilde{\mathcal{A}},\tilde{ \mathcal{B}})(\boldsymbol{p},z)=(\mathcal{A},\mathcal{B})(\boldsymbol{p},z)\) _for all_ \((\boldsymbol{p},z)\in B_{\varepsilon}((\boldsymbol{p}_{0},z_{0}));\)__ 2. _For any_ \((\boldsymbol{p},z)\in\mathbb{R}^{2}\times\mathbb{R}\) _and_ \(\boldsymbol{\kappa}=(\kappa_{1},\kappa_{2})\in\mathbb{R}^{2},\ \sum_{i,j=1}^{2} \partial_{p_{j}}\tilde{\mathcal{A}}_{i}(\boldsymbol{p},z)\kappa_{i}\kappa_{j }\geq\lambda_{R}|\boldsymbol{\kappa}|^{2};\)__ 3. \(|\tilde{\mathcal{B}}(\boldsymbol{p},z)|\leq C_{0}\) _and_ \(|D^{m}_{(\boldsymbol{p},z)}(\tilde{\mathcal{A}},\tilde{\mathcal{B}})( \boldsymbol{p},z)|\leq C_{m}\) _in_ \(\mathbb{R}^{2}\times\mathbb{R}\) _for each_ \(m=1,2,\cdots,\)__ _where the positive constants \(\varepsilon\) and \(\lambda_{R}\) depend only on \((\gamma,v_{2},R),\) and \(C_{m}\) depends on \((\gamma,v_{2},R,m).\)_ **Lemma 3.6**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For any given constants \(\alpha\in(0,1),\)\(m\in\mathbb{N},\) and \(r>0,\) there exist constants \(C,C_{m}>0\) depending only on \((\gamma,v_{2},\alpha,r),\) with \(C_{m}\) depending additionally on \(m,\) such that any admissible solution \(\varphi\) corresponding to parameters \(\boldsymbol{\theta}\in\Theta\) satisfies the following estimates\(:\)_ 1. _For any_ \(B_{4r}(P)\subseteq\Omega,\)__ \[\left\|\varphi\right\|_{2,\alpha,\overline{B_{2r}(P)}}\leq C\,,\qquad\left\| \varphi\right\|_{m,\overline{B_{r}(P)}}\leq C_{m}\,;\] 2. _If_ \(P\in\Gamma_{\mathrm{sym}},\) _and_ \(B_{4r}(P)\cap\Omega\) _is the half-ball_ \(B_{4r}^{+}(P)\coloneqq B_{4r}(P)\cap\{\eta>0\},\) _then_ \[\left\|\varphi\right\|_{2,\alpha,\overline{B_{2r}(P)}\cap\Omega}\leq C\,, \qquad\left\|\varphi\right\|_{m,\overline{B_{r}(P)}\cap\Omega}\leq C_{m}\,.\] **2.** Fix \(\varepsilon>0\) small enough such that \(2\xi^{P_{3}}<\xi^{P_{0}}\) for all \(\theta_{2}\in[0,\varepsilon),\) where \(\xi^{P_{0}}<0\) is the \(\xi\)-coordinate of \(P_{0}\coloneqq\lim\limits_{\theta_{2}\to 0^{+}}P_{3},\) which depends only on \((\gamma,v_{2})\). We define \[d_{\mathrm{sep}}\coloneqq\min\left\{-\frac{\xi^{P_{0}}}{2},\inf\limits_{ \theta_{2}\in[\varepsilon,\theta^{d}]}|u_{6}|\right\}. \tag{3.10}\] It is direct to verify that \(d_{\mathrm{sep}}>0\) by using \(u_{6}=v_{2}\tan\theta_{26}\) and property (2.38), and \(d_{\mathrm{sep}}\) depends only on \((\gamma,v_{2})\). Then, for any \(\boldsymbol{\theta}\in\Theta,\)\(\mathrm{dist}(P_{2},P_{3})\geq 2d_{\mathrm{sep}}\). We also define \[r_{1}\coloneqq\frac{1}{2}\inf\limits_{\boldsymbol{\theta}\in\Theta}\left\{|D \varphi_{5}(P_{0}^{1})|,|D\varphi_{6}(P_{0}^{2})|\right\}. 
\tag{3.11}\] By (2.25) and Lemma 2.8, it is direct to verify that \(r_{1}>0\) depends only on \((\gamma,v_{2}).\) For any \(r\in(0,r_{1})\) and for \(i=1,2\), we define two sets: \[\mathscr{A}\coloneqq\left\{\theta_{i}\in(0,\theta^{\mathrm{d}}]\,:\,|P_{0}^{i}-P_{i+1}|\geq\frac{r}{20}\right\}\cup\{0\}\,,\quad\mathscr{B}\coloneqq\left\{\theta_{i}\in[0,\theta^{\mathrm{d}}]\,:\,|P_{3i-2}-P_{i+1}|\geq\frac{r}{20}\right\}.\] From the continuous dependence of \(P_{0}^{i},P_{3i-2},\) and \(P_{i+1}\) on \(\theta_{i}\in(0,\theta^{\mathrm{d}}],\) we know that \(\mathscr{A}\) and \(\mathscr{B}\) are both closed sets. Then there exists a constant \(C_{1}>0\) such that \[\mathrm{dist}(P_{i+1},\Gamma_{\mathrm{sym}})\geq\frac{2}{C_{1}}\qquad\text{ for all }\theta_{i}\in\mathscr{A}\cup\mathscr{B}\,.\] If \(\mathrm{dist}(P_{i+1},\Gamma_{\mathrm{sym}})\leq\frac{1}{C_{1}},\) then \(\theta_{i}\notin\mathscr{A}\cup\mathscr{B},\) which implies that \[\max\{|P_{0}^{i}-P_{i+1}|,|P_{3i-2}-P_{i+1}|\}<\frac{r}{20}\,.\] We now seek a constant \(C_{r}>0\) depending only on \((\gamma,v_{2},r)\) such that, for any admissible solution of Problem 2.10 corresponding to parameters \(\boldsymbol{\theta}\in\Theta,\) \[\sup_{P\in\Gamma_{\mathrm{shock}}\cap B_{r/2}(P_{3i-2})}\mathrm{dist}(P,\Gamma_{\mathrm{sym}})>C_{r}^{-1}\qquad\text{if }|P_{3i-2}-P_{i+1}|\leq\frac{r}{10}\,. \tag{3.12}\] To obtain the above result, we follow the proof of [10, Lemma 9.4.2], in which we need to take a limit of parameters \(\boldsymbol{\theta}^{(n)}.\) In order to take the limit, we use a compactness lemma below. For concreteness, in the following, we use the notation defined in Definition 2.9, and append superscripts \((n)\) and \(*\) to indicate that the corresponding quantities are related to parameters \(\boldsymbol{\theta}^{(n)}\) and \(\boldsymbol{\theta}^{*}\), respectively. **Lemma 3.7**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). Let \(\big{\{}\boldsymbol{\theta}^{(n)}\big{\}}_{n\in\mathbb{N}}\subseteq\Theta\) be any sequence of parameters satisfying \(\lim_{n\to\infty}\boldsymbol{\theta}^{(n)}=\boldsymbol{\theta}^{*}\) for some \(\boldsymbol{\theta}^{*}\in\overline{\Theta}\). For each \(n\in\mathbb{N}\), let \(\varphi^{(n)}\) be an admissible solution corresponding to parameters \(\boldsymbol{\theta}^{(n)}\). Then there exists a subsequence \(\{\varphi^{(n_{j})}\}_{j\in\mathbb{N}}\) converging uniformly on any compact subset of \(\mathbb{R}^{2}_{+}\) to a function \(\varphi^{*}\in C^{0,1}_{\rm loc}(\overline{\mathbb{R}^{2}_{+}})\). Moreover, \(\varphi^{*}\) is a weak solution of equation (2.1) in \(\mathbb{R}^{2}_{+}\) and satisfies the following properties:_ 1. _For the corner points_ \(P_{m},\,m=1,2,3,4,\) _and the reflection points_ \(P^{i}_{0},\,i=1,2,\) \[\lim_{j\to\infty}P^{(n_{j})}_{m}=P^{*}_{m}\,,\qquad\lim_{j\to\infty}P^{i,(n_{j})}_{0}=P^{i,*}_{0}\,.\] _Note that_ \(\xi^{P^{1,*}_{0}}=\infty\) _in the case that_ \(\theta^{*}_{1}=0,\) _and_ \(\xi^{P^{2,*}_{0}}=-\infty\) _in the case that_ \(\theta^{*}_{2}=0.\) 2. _Let_ \(\eta=f^{(n_{j})}_{2l}(\xi),\,l=5,6,\) _and_ \(\eta=f^{(n_{j})}_{\rm sh}(\xi)\) _be the expressions of_ \(S^{(n_{j})}_{2l}\) _and_ \(\Gamma^{(n_{j})}_{\rm shock}\) _from_ Definition 2.7 _and_ Lemma 3.2, _respectively.
_Extend_ \(f^{(n_{j})}_{\rm sh}\) _by_ \[f^{(n_{j})}_{\rm sh}(\xi)=\left\{\begin{array}{ll}f^{(n_{j})}_{25}(\xi)&\quad\text{for $\xi^{P^{(n_{j})}_{2}}<\xi<\xi^{P^{1,(n_{j})}_{0}}$}\,,\\ f^{(n_{j})}_{26}(\xi)&\quad\text{for $\xi^{P^{2,(n_{j})}_{0}}<\xi<\xi^{P^{(n_{j})}_{3}}$}\,.\end{array}\right.\] _Then_ \(\{f^{(n_{j})}_{\rm sh}\}_{j\in\mathbb{N}}\) _converges uniformly to a function_ \(f^{*}_{\rm sh}\in C^{0,1}([\xi^{P^{2,*}_{0}},\xi^{P^{1,*}_{0}}])\)_._ 3. _Let smooth functions_ \(\eta=g^{(n_{j})}_{l,{\rm so}}(\xi)\) _be the expressions of_ \(\Gamma^{l,(n_{j})}_{\rm sonic}\) _for_ \(l=5,6\)_. Then_ \(\{g^{(n_{j})}_{5,{\rm so}}\}_{j\in\mathbb{N}}\) _converges pointwise to_ \(g^{*}_{5,{\rm so}}\) _on_ \((\xi^{P^{*}_{2}},\xi^{P^{*}_{1}}),\) _and_ \(\{g^{(n_{j})}_{6,{\rm so}}\}_{j\in\mathbb{N}}\) _converges pointwise to_ \(g^{*}_{6,{\rm so}}\) _on_ \((\xi^{P^{*}_{4}},\xi^{P^{*}_{3}})\)_._ 4. _Denote_ \(\widehat{\Omega}^{*}\coloneqq\big{\{}\boldsymbol{\xi}\in[\xi^{P^{*}_{4}},\xi^{P^{*}_{1}}]\times\overline{\mathbb{R}_{+}}\,:\,0\leq\eta\leq f^{*}_{\rm bd}(\xi)\big{\}},\) _where_ \(f^{*}_{\rm bd}(\cdot)\) _is given by_ \[f^{*}_{\rm bd}(\xi)=\left\{\begin{array}{ll}g^{*}_{6,{\rm so}}(\xi)&\quad\text{for $\xi^{P^{*}_{4}}\leq\xi\leq\xi^{P^{*}_{3}}$}\,,\\ f^{*}_{\rm sh}(\xi)&\quad\text{for $\xi^{P^{*}_{3}}<\xi\leq\xi^{P^{*}_{2}}$}\,,\\ g^{*}_{5,{\rm so}}(\xi)&\quad\text{for $\xi^{P^{*}_{2}}<\xi\leq\xi^{P^{*}_{1}}$}\,.\end{array}\right.\] _Denote by_ \(\Omega^{*}\) _the interior of_ \(\widehat{\Omega}^{*}\)_, set_ \(\Gamma^{*}_{\rm shock}\coloneqq\{(\xi,f^{*}_{\rm sh}(\xi))\,:\,\xi\in(\xi^{P^{*}_{3}},\xi^{P^{*}_{2}})\}\) _and_ \(\Gamma^{*}_{\rm sym}\coloneqq\{(\xi,0)\,:\,\xi\in(\xi^{P^{*}_{4}},\xi^{P^{*}_{1}})\},\) _and denote by_ \(\Gamma^{*,0}_{\rm sym}\) _the relative interior of_ \(\Gamma^{*}_{\rm sym}\setminus\Gamma^{*}_{\rm shock}\)_. Then_ \(\varphi^{*}\in C^{\infty}(\Omega^{*}\cup\Gamma^{*,0}_{\rm sym})\) _and_ \(\varphi^{(n_{j})}\to\varphi^{*}\) _in_ \(C^{2}_{\rm loc}(\Omega^{*}\cup\Gamma^{*,0}_{\rm sym})\)_._ 5. _The relative interior of_ \(\Gamma^{*}_{\rm shock}\) _does not intersect_ \(\Gamma^{*}_{\rm sym}\)_; that is,_ \(f^{*}_{\rm sh}(\xi)>0\) _for all_ \(\xi\in(\xi^{P^{*}_{3}},\xi^{P^{*}_{2}})\)_._ We comment on the proof of (v): if a nontrivial segment of \(\Gamma^{*}_{\rm shock}\) were contained in \(\{\eta=0\}\), then passing to the limit in the slip condition on \(\Gamma^{(n_{j})}_{\rm sym}\) would give \(\partial_{\eta}\varphi^{*}=0\) on this segment, while the Rankine-Hugoniot condition across it would require \(\rho(|D\varphi^{*}|,\varphi^{*})\,\partial_{\eta}\varphi^{*}=\partial_{\eta}\varphi_{2}=v_{2}\neq 0\), which contradicts the fact that \(\varphi^{*}\) is a weak solution of equation (2.1) in \(\mathbb{R}^{2}_{+}\). Then Lemma 3.7(v) is verified. We prove (3.12) by contradiction. If (3.12) were false, we could choose a sequence of parameters \(\{\boldsymbol{\theta}^{(n)}\}_{n\in\mathbb{N}}\) such that \[\sup_{P\in\Gamma_{\mathrm{shock}}^{(n)}\cap B_{r/2}(P_{3i-2}^{(n)})}\mathrm{dist}(P,\Gamma_{\mathrm{sym}}^{(n)})\leq\frac{1}{n}\qquad\text{if }|P_{3i-2}^{(n)}-P_{i+1}^{(n)}|\leq\frac{r}{10}\,.\] Then, by Lemma 3.7, we can further choose a subsequence \(\{\boldsymbol{\theta}^{(n_{j})}\}_{j\in\mathbb{N}}\) and take a limit to some \(\boldsymbol{\theta}^{*}\in\overline{\Theta}\), which provides a contradiction to Lemma 3.7(v). From (3.12), we can choose \(\hat{Q}_{i}\in\overline{\Gamma_{\mathrm{shock}}\cap B_{r/2}(P_{3i-2})}\) with \(\mathrm{dist}(\hat{Q}_{i},\Gamma_{\mathrm{sym}})\geq C_{r}^{-1}\), so that \(\hat{Q}_{i}\in\overline{\Gamma_{\mathrm{shock}}\cap B_{3r/4}(P_{0}^{i})}\). For \(i=1,2\), we take \(Q_{i}=P_{i+1}\in\overline{\Gamma_{\mathrm{shock}}}\) if \(\mathrm{dist}(P_{i+1},\Gamma_{\mathrm{sym}})>C_{1}^{-1}\); otherwise, we take \(Q_{i}=\hat{Q}_{i}\in\overline{\Gamma_{\mathrm{shock}}\cap B_{3r/4}(P_{0}^{i})}\) as chosen above.
Then it follows that \[\min\{\mathrm{dist}(Q_{1},Q_{2}),\,\mathrm{dist}(Q_{1},\Gamma_{\mathrm{sym}}),\,\mathrm{dist}(Q_{2},\Gamma_{\mathrm{sym}})\}\geq\min\{d_{\mathrm{sep}},\,C_{1}^{-1},\,C_{r}^{-1}\}\eqqcolon a\,.\] Combining with Lemma 3.7 and following similar proofs to those of [10, Lemmas 9.4.1 and 15.4.1], we obtain that, for \(a>0\) defined above, there exists a constant \(C_{2}>0\) depending only on \((\gamma,v_{2},a)\) such that, for any admissible solution corresponding to parameters \(\boldsymbol{\theta}\in\Theta\), \[\mathrm{dist}(\Gamma_{\mathrm{shock}}[Q_{1},Q_{2}],\,\Gamma_{\mathrm{sym}})\geq C_{2}^{-1}\,,\] where \(\Gamma_{\mathrm{shock}}[Q_{1},Q_{2}]\) represents the segment on \(\Gamma_{\mathrm{shock}}\) between points \(Q_{1}\) and \(Q_{2}\). Therefore, \[\mathrm{dist}(\Gamma_{\mathrm{shock}}\setminus(B_{r}(P_{0}^{1})\cup B_{r}(P_{0}^{2})),\,\Gamma_{\mathrm{sym}})>C_{2}^{-1}\,. \tag{3.13}\] **3.** Since \(v_{2}\leq-1\), \(B_{1}(O_{2})\subseteq\mathbb{R}\times(-\infty,0)\). By Lemma 2.3 and property (ii) of Proposition 2.6, the constant \(d_{\mathrm{ref}}\coloneqq|D\varphi_{2}(P_{0}^{1}|_{\theta_{1}=\theta^{\mathrm{d}}})|-1>0\) depends only on \((\gamma,v_{2})\) and satisfies \[\mathrm{dist}(P_{0}^{1},\,B_{1}(O_{2}))=|D\varphi_{2}(P_{0}^{1})|-1\geq d_{\mathrm{ref}}\,.\] By symmetry, we also see that \(\mathrm{dist}(P_{0}^{2},\,B_{1}(O_{2}))\geq d_{\mathrm{ref}}\). Denote \(\bar{r}\coloneqq\frac{1}{2}\min\{r_{1},d_{\mathrm{ref}}\}>0\) with \(r_{1}\) defined by (3.11). Then \[\mathrm{dist}(\Gamma_{\mathrm{shock}}\cap(B_{\bar{r}}(P_{0}^{1})\cup B_{\bar{r}}(P_{0}^{2})),\,B_{1}(O_{2}))\geq\bar{r}\,.\] Meanwhile, from the argument in Step **2**, there exists a constant \(C_{2}>0\) depending only on \((\gamma,v_{2},\bar{r})\) such that (3.13) holds with \(r\) replaced by \(\bar{r}\). Thus, for any admissible solution corresponding to parameters \(\boldsymbol{\theta}\in\Theta\), we have \[\mathrm{dist}(\Gamma_{\mathrm{shock}},\,B_{1}(O_{2}))\geq\min\{\bar{r},\,C_{2}^{-1}\}\eqqcolon C_{\mathrm{sh}}^{-1}\,.\] This completes the proof of (3.7). #### 3.1.3. Uniform estimate for the ellipticity of equation (3.1) We now show a uniform estimate for the ellipticity of equation (3.1), which may be degenerate near \(\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sonic}}^{6}\). Fix a function \(h^{*}\in C^{\infty}(\mathbb{R})\) such that \[h^{*}(s)=\begin{cases}s&\quad\text{for }s\leq\frac{1}{2}\,,\\ 1&\quad\text{for }s\geq 1\,,\end{cases}\] satisfying \(0\leq(h^{*})^{\prime}(\cdot)\leq 2\) on \(\mathbb{R}\). For each \(\boldsymbol{\theta}\in\overline{\Theta}\), define \[\hat{c}_{j}\coloneqq\min\left\{c_{j},\,|D\varphi_{j}(P_{0}^{j-4})|\right\},\quad q_{j}\coloneqq\mathrm{dist}(O_{j},\,S_{2j})\qquad\text{for }j=5,6\,. \tag{3.14}\] By Lemma 2.8 and [10, Lemma 6.1.2], we see that \(\hat{c}_{j}>q_{j}\) for every \(\boldsymbol{\theta}\in\overline{\Theta}\), so that there exists a constant \(\hat{\delta}>0\) depending only on \((\gamma,v_{2})\) such that \[\inf_{\boldsymbol{\theta}\in\overline{\Theta}}\min\{\hat{c}_{5}-q_{5},\,\hat{c}_{6}-q_{6}\}\geq\hat{\delta}\,.
\tag{3.15}\] For \(j=5,6\), define \(g_{j}^{*}:\mathbb{R}^{2}\to\mathbb{R}_{+}\) by \[g_{j}^{*}(\boldsymbol{\xi})\coloneqq\frac{1}{2}(\hat{c}_{j}-q_{j})h^{*}(\frac {\mathrm{dist}(\boldsymbol{\xi},\,\partial B_{\hat{c}_{j}}(O_{j}))}{\hat{c}_{j} -q_{j}})+\left(1-\frac{\hat{c}_{j}^{2}}{c_{j}^{2}}\right).\] We choose a function \(\chi\in C^{\infty}(\mathbb{R})\) such that \[\chi(\xi)=\left\{\begin{aligned} & 1\qquad\text{for }\xi\leq-\frac{d_{\text{sep}}}{4}\,,\\ & 0\qquad\text{for }\xi\geq\frac{d_{\text{sep}}}{4}\,,\end{aligned}\right.\] satisfying \(-\frac{4}{d_{\text{sep}}}\leq\chi^{\prime}(\cdot)\leq 0\) on \(\mathbb{R}\), where \(d_{\text{sep}}>0\) is given by (3.10) depending only on \((\gamma,v_{2})\). Finally, define a function \(g:\mathbb{R}^{2}\to\mathbb{R}_{+}\) by \[g(\boldsymbol{\xi})\coloneqq\chi(\xi)g_{6}^{*}(\boldsymbol{\xi})+(1-\chi(\xi ))g_{5}^{*}(\boldsymbol{\xi})\,. \tag{3.16}\] By Definition 2.11 and Lemma 3.3, there exist constants \(d>0\) and \(C>1\) depending only on \((\gamma,v_{2})\) such that, if \(\varphi\) is an admissible solution corresponding to \(\boldsymbol{\theta}\in\Theta\) with pseudo-subsonic region \(\Omega\), then \(g\) given by (3.16) satisfies the following properties: 1. For \(\boldsymbol{\xi}\in\Omega\) satisfying \(\text{dist}(\boldsymbol{\xi},\Gamma^{j}_{\text{sonic}})<d\), \(C^{-1}\text{dist}_{j}(\boldsymbol{\xi},\,\Gamma^{j}_{\text{sonic}})\leq g( \boldsymbol{\xi})\leq C\,\text{dist}_{j}(\boldsymbol{\xi},\,\Gamma^{j}_{\text {sonic}})\), where \(\text{dist}_{j}(\boldsymbol{\xi},\,\Gamma^{j}_{\text{sonic}})\coloneqq\text{ dist}(\boldsymbol{\xi},\,\Gamma^{j}_{\text{sonic}})+(c_{j}-\hat{c}_{j})\) for \(j=5,6\). 2. For each \(\varepsilon>0\), there exists a constant \(C_{\varepsilon}>1\) depending only on \((\gamma,v_{2},\varepsilon)\) such that, if a point \(\boldsymbol{\xi}\in\overline{\Omega}\) satisfies \(\text{dist}(\boldsymbol{\xi},\,\Gamma^{5}_{\text{sonic}}\cup\Gamma^{6}_{ \text{sonic}})>\varepsilon\), then \(C_{\varepsilon}^{-1}\leq g(\boldsymbol{\xi})\leq C_{\varepsilon}\). In fact, for property (a) above, it suffices to choose \(d\coloneqq\min\{\frac{d_{\text{sep}}}{2},\hat{\delta}\}\), with constants \(d_{\text{sep}}\) and \(\hat{\delta}\) given by (3.10) and (3.15) respectively. Then, choosing \(\varepsilon:=\frac{d}{2}\) in property (b), we deduce that there exists a constant \(C_{\flat}>1\) depending only on \((\gamma,v_{2})\) such that, for any \(\boldsymbol{\theta}\in\Theta\), \[C_{\flat}^{-1}\text{dist}^{\flat}(\boldsymbol{\xi},\,\Gamma^{5}_{\text{sonic }}\cup\Gamma^{6}_{\text{sonic}})\leq g(\boldsymbol{\xi})\leq C_{\flat}\, \text{dist}^{\flat}(\boldsymbol{\xi},\,\Gamma^{5}_{\text{sonic}}\cup\Gamma^{6} _{\text{sonic}})\qquad\text{for all }\boldsymbol{\xi}\in\overline{\Omega}\,, \tag{3.17}\] where \[\text{dist}^{\flat}(\boldsymbol{\xi},\,\Gamma^{5}_{\text{sonic}}\cup\Gamma^{6} _{\text{sonic}})\coloneqq\min\big{\{}\frac{d}{2},\,\text{dist}_{5}(\boldsymbol {\xi},\,\Gamma^{5}_{\text{sonic}}),\,\text{dist}_{6}(\boldsymbol{\xi},\, \Gamma^{6}_{\text{sonic}})\big{\}}\,. 
\tag{3.18}\] **Proposition 3.8**.: _There exists a constant \(\mu>0\) such that, if \(\varphi\) is an admissible solution corresponding to \(\boldsymbol{\theta}\in\Theta\) and \(\Omega\) is its pseudo-subsonic region, then the pseudo-Mach number_ \[M(\boldsymbol{\xi})\coloneqq\frac{|D\varphi(\boldsymbol{\xi})|}{c(|D\varphi( \boldsymbol{\xi})|,\varphi(\boldsymbol{\xi}))}\] _satisfies_ \[M^{2}(\boldsymbol{\xi})\leq 1-\mu g(\boldsymbol{\xi})\qquad\text{in }\overline{\Omega}\,, \tag{3.19}\] _and there exists a constant \(C>1\) such that_ \[C^{-1}\text{dist}^{\flat}(\boldsymbol{\xi},\,\Gamma^{5}_{\text{sonic}}\cup \Gamma^{6}_{\text{sonic}})|\boldsymbol{\kappa}|^{2}\leq\sum_{i,j=1}^{2}\mathcal{ A}^{i}_{p_{j}}(D\varphi(\boldsymbol{\xi}),\varphi(\boldsymbol{\xi}))\kappa_{i} \kappa_{j}\leq C|\boldsymbol{\kappa}|^{2} \tag{3.20}\] _for all \(\boldsymbol{\xi}\in\overline{\Omega}\) and \(\boldsymbol{\kappa}=(\kappa_{1},\kappa_{2})\in\mathbb{R}^{2}\), where \(\mathcal{A}(\boldsymbol{p},z)=\rho(|\boldsymbol{p}|,z)\boldsymbol{p},\,\text{ dist}^{\flat}(\,\cdot\,,\,\cdot\,)\) is given by (3.18), and constants \(\mu\) and \(C\) depend only on \((\gamma,v_{2})\)._ Proof.: Let \(\varphi\) be any admissible solution corresponding to \(\boldsymbol{\theta}\in\Theta\) with pseudo-subsonic region \(\Omega\) and curved transonic shock \(\Gamma_{\text{shock}}\). **1.** From Lemma 3.3, there exists a constant \(R>1\) depending only on \((\gamma,v_{2})\) such that \[\Omega\subseteq B_{R}(\boldsymbol{0})\,,\qquad\|c(|D\varphi|,\varphi)\|_{C^{0}( \overline{\Omega})}+\|g\|_{C^{2}(\overline{\Omega})}\leq R\,,\] where \(g(\cdot)\) is given by (3.16). Since \(O_{5},O_{6}\in\Gamma_{\text{sym}}\), \(\partial_{\eta}g=0\) on \(\Gamma_{\text{sym}}\). Then, by [10, Theorem 5.3.1], we can choose constants \(C_{0}>1\), \(\delta\in(0,\frac{3}{4C_{0}})\), and \(\mu_{1}\in(0,1)\) depending only on \((\gamma,v_{2})\) such that, for any \(\mu\in(0,\mu_{1}]\), either inequality \(M^{2}+\mu g\leq C_{0}\delta<1\) holds in \(\Omega\) or the maximum of \(M^{2}+\mu g\) over \(\overline{\Omega}\) cannot be attained in \(\Omega\cup\Gamma_{\text{sym}}\). In the first case, (3.19) follows immediately. Thus, we focus on the second case that the maximum of \(M^{2}+\mu g\) must be attained on \(\partial\Omega\setminus\Gamma_{\text{sym}}\). **2.** We claim that there exists a constant \(\mu_{2}\in(0,\mu_{1}]\) depending only on \((\gamma,v_{2})\) such that, whenever \(\mu\in(0,\mu_{2}]\), \((M^{2}+\mu g)\leq 1\) holds in \(\Omega\), from which (3.19) follows. Let the maximum of \(M^{2}+\mu g\) be attained at \(P_{\text{max}}\in\partial\Omega\setminus\Gamma_{\text{sym}}\), and let \((M^{2}+\mu g)(P_{\text{max}})>1\). Then, using Proposition 3.4, we may proceed as in Steps **2-3** of the proof of [2, Proposition 3.15] to find \(\mu_{2}\in(0,\mu_{1}]\) depending only on \((\gamma,v_{2})\) such that \(P_{\max}\notin\Gamma_{\mathrm{shock}}\) when \(\mu\in(0,\mu_{2}]\). Therefore, \(P_{\max}\in\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sonic}}^{6}\). However, reducing \(\mu_{2}\) depending only on \((\gamma,v_{2})\) if necessary, we have \[(M^{2}+\mu g)(P_{\max})=\sup_{\boldsymbol{\xi}\in\Gamma_{\mathrm{sonic}}^{5} \cup\Gamma_{\mathrm{sonic}}^{6}}(M^{2}+\mu g)(\boldsymbol{\xi})\leq 1\,,\] which is a contradiction. 
In the final inequality above, we have used that \(g|_{\Gamma_{\mathrm{sonic}}^{j}}=0\) for \(\theta_{j-4}\in[0,\theta^{\mathrm{s}})\), whilst \(g|_{\Gamma_{\mathrm{sonic}}^{j}}\leq C_{\flat}(c_{j}-|D\varphi_{j}(P_{0}^{j-4})|)\) for \(\theta_{j-4}\in[\theta^{\mathrm{s}},\theta^{\mathrm{d}})\), for \(j=5,6\). ### Interior Holder estimates away from the sonic boundaries For any set \(U\subseteq\mathbb{R}^{2}\) and any constant \(\varepsilon>0\), define the \(\varepsilon\)-neighbourhood of \(U\) as \[\mathcal{N}_{\varepsilon}(U)\coloneqq\{\boldsymbol{\xi}\in\mathbb{R}^{2}\,:\,\mathrm{dist}(\boldsymbol{\xi},\,U)<\varepsilon\}\,. \tag{3.21}\] From Proposition 3.4, there exists a constant \(C_{\mathrm{sh}}>0\) depending only on \((\gamma,v_{2})\) such that \[\mathrm{dist}(\Gamma_{\mathrm{shock}},\,\partial B_{1}(O_{2}))\geq C_{\mathrm{sh}}^{-1}\] for any admissible solution and parameters \(\boldsymbol{\theta}\in\Theta\). It follows that \[|D\varphi_{2}(\boldsymbol{\xi})|^{2}\geq 1+d_{0}\qquad\text{for any }\boldsymbol{\xi}\in\mathcal{N}_{\frac{1}{2C_{\mathrm{sh}}}}(\Gamma_{\mathrm{shock}})\] for some constant \(d_{0}>0\) depending only on \((\gamma,v_{2})\). Subsequently, we claim that there exist constants \(d_{0}^{\prime}\in(0,d_{0})\) and \(\varepsilon\in(0,\frac{1}{2C_{\mathrm{sh}}})\) depending only on \((\gamma,v_{2})\) such that, for any admissible solution \(\varphi\), \[|D\varphi_{2}(\boldsymbol{\xi})|-|D\varphi(\boldsymbol{\xi})|\geq d_{0}^{\prime}\qquad\text{for any }\boldsymbol{\xi}\in\overline{\Omega}\cap\mathcal{N}_{\varepsilon}(\Gamma_{\mathrm{shock}})\,. \tag{3.22}\] Indeed, in the case \(\gamma>1\), from Definition 2.11(iii)-(iv) and the Bernoulli law: \[\frac{1}{2}|D\varphi|^{2}+\varphi+\frac{\rho^{\gamma-1}-1}{\gamma-1}=B=\frac{1}{2}|D\varphi_{2}|^{2}+\varphi_{2}\,,\] we have \[\frac{\gamma+1}{2}\left(\rho^{\gamma-1}-1\right)\geq\left(\rho^{\gamma-1}-1\right)+\frac{\gamma-1}{2}\left(|D\varphi|^{2}-1\right)=(\gamma-1)(\varphi_{2}-\varphi)+\frac{\gamma-1}{2}\left(|D\varphi_{2}|^{2}-1\right)\geq\frac{\gamma-1}{2}d_{0}\,,\] and \[|D\varphi_{2}|^{2}-|D\varphi|^{2}=\frac{2\left(\rho^{\gamma-1}-1\right)}{\gamma-1}+2(\varphi-\varphi_{2})\geq\frac{2d_{0}}{\gamma+1}+2(\varphi-\varphi_{2})\,.\] Since \(\varphi\) is Lipschitz continuous in \(\overline{\Omega}\) from Lemma 3.3, and \(\varphi=\varphi_{2}\) on \(\Gamma_{\mathrm{shock}}\), we obtain \[|D\varphi_{2}(\boldsymbol{\xi})|^{2}-|D\varphi(\boldsymbol{\xi})|^{2}\geq\frac{d_{0}}{\gamma+1}\qquad\text{for any }\boldsymbol{\xi}\in\overline{\Omega}\cap\mathcal{N}_{\varepsilon}(\Gamma_{\mathrm{shock}})\,,\] with some \(\varepsilon\in(0,\frac{1}{2C_{\mathrm{sh}}})\) sufficiently small. Then (3.22) follows from the above inequality and the uniform boundedness property in Lemma 3.3. In the case \(\gamma=1\), it is straightforward to obtain (3.22) by using Definition 2.11(iii): for any admissible solution \(\varphi\), \(|D\varphi|^{2}\leq c^{2}(|D\varphi|,\varphi)\equiv 1\) in \(\overline{\Omega}\), since the sonic speed is a constant. For \(\boldsymbol{\xi}=(\xi,\eta)\in\mathbb{R}^{2}\setminus\{O_{2}\}\), define the polar coordinates \((r,\theta)\) centred at \(O_{2}=(u_{2},v_{2})\) by \[r(\cos\theta,\sin\theta)\coloneqq(\xi,\eta)-O_{2}\,.\] It follows from (3.22) that \[\partial_{r}(\varphi_{2}-\varphi)\leq-(|D\varphi_{2}|-|D\varphi|)\leq-d_{0}^{\prime}\qquad\text{in }\overline{\Omega}\cap\mathcal{N}_{\varepsilon}(\Gamma_{\mathrm{shock}})\,.
\tag{3.23}\] Therefore, we may apply the implicit function theorem to express \(\Gamma_{\mathrm{shock}}\) under these polar coordinates as \[\Gamma_{\mathrm{shock}}=\{\boldsymbol{\xi}(r,\theta)\,:\,r=f_{O_{2},\mathrm{sh}}(\theta),\,\theta_{P_{2}}<\theta<\theta_{P_{3}}\}\,,\] where \((f_{O_{2},\mathrm{sh}}(\theta_{P_{i}}),\theta_{P_{i}})\) are the polar coordinates of points \(P_{i}\), for \(i=2,3\). By Lemma 3.3 and (3.23), there exists a constant \(C_{1}>0\) depending only on \((\gamma,v_{2})\) such that \[\|f_{O_{2},\mathrm{sh}}\|_{C^{0,1}([\theta_{P_{2}},\theta_{P_{3}}])}\leq C_{1}\,.\] From the above, with the help of Lemma 3.5, we deduce the following properties; the proof is similar to that of [2, Lemmas 3.17-3.18]. **Lemma 3.9**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). Let \(\varphi\) be an admissible solution corresponding to parameters \(\boldsymbol{\theta}\in\Theta\)._ 1. _There exists a constant_ \(\delta^{\prime}>0\) _depending only on_ \((\gamma,v_{2})\) _such that_ \[\partial_{\boldsymbol{\nu}}\varphi_{2}-\delta^{\prime}>\partial_{\boldsymbol{\nu}}\varphi\geq\delta^{\prime}\qquad\text{on }\overline{\Gamma_{\rm shock}}\,,\] _where_ \(\boldsymbol{\nu}=\frac{D(\varphi_{2}-\varphi)}{|D(\varphi_{2}-\varphi)|}\) _is the unit normal vector on_ \(\Gamma_{\rm shock}\) _pointing into the interior of_ \(\Omega\)_._ 2. _For each_ \(d>0\) _and_ \(m=2,3,\cdots,\) _there exist positive constants_ \(s\) _and_ \(C_{m}\) _depending only on_ \((\gamma,v_{2},d)\) _such that, whenever_ \(P=\boldsymbol{\xi}(r_{P},\theta_{P})\in\Gamma_{\rm shock}\) _satisfies_ \(\operatorname{dist}(P,\Gamma_{\rm sonic}^{5}\cup\Gamma_{\rm sonic}^{6})\geq d,\) \[|f^{(m)}_{O_{2},\mathrm{sh}}(\theta_{P})|+\sup_{B_{s}(P)\cap\Omega}|D^{m}\varphi|\leq C_{m}\,.\] From Lemma 3.6, Proposition 3.8, and Lemma 3.9(ii), we have the following _a priori_ interior estimates. **Corollary 3.10**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For each \(d>0\) and \(m=2,3,\cdots,\) there exists a constant \(C_{m,d}>0\) depending only on \((\gamma,v_{2},m,d)\) such that any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\) satisfies_ \[\|\varphi\|_{m,\overline{\Omega\cap\{\operatorname{dist}(\boldsymbol{\xi},\Gamma_{\rm sonic}^{5}\cup\Gamma_{\rm sonic}^{6})>d\}}}\leq C_{m,d}\,.\] ### Weighted Holder estimates near the sonic boundaries From Definition 2.9, the sonic boundary \(\Gamma_{\rm sonic}^{5}\) depends continuously on \(\theta_{1}\in[0,\theta^{\rm d})\) and shrinks into a single point \(P_{0}^{1}=P_{1}=P_{2}\) when \(\theta_{1}\in[0,\theta^{\rm s})\) tends to \(\theta^{\rm s}\) from below (if additionally \(v_{2}\in[v_{2}^{\rm s},0)\)). Moreover, from Proposition 3.8, the ellipticity of equation (3.1) degenerates on the sonic boundary \(\Gamma_{\rm sonic}^{5}\) for \(\theta_{1}\in[0,\theta^{\rm s})\), while equation (3.1) is uniformly elliptic in the case that \(\theta_{1}\in(\theta^{\rm s},\theta^{\rm d})\) up to \(\Gamma_{\rm sonic}^{5}\) away from \(\Gamma_{\rm sonic}^{6}\). The above reasons lead us to consider the following four cases separately for the weighted Holder estimates of the admissible solutions near \(\Gamma_{\rm sonic}^{5}\). Since \(\theta^{\rm s}\) can be equal to \(\theta^{\rm cr}\) depending on \((\gamma,v_{2})\), some of the following cases may not occur (_cf._ Remark 2.1(i)): 1. \(\theta_{1}\in[0,\theta^{\rm s})\) _away from_ \(\theta^{\rm s}\); 2. \(\theta_{1}\in[0,\theta^{\rm s})\) _close to_ \(\theta^{\rm s}\); 3.
\(\theta_{1}\in[\theta^{\rm s},\theta^{\rm d})\) _close to_ \(\theta^{\rm s}\); 4. \(\theta_{1}\in[\theta^{\rm s},\theta^{\rm d})\) _away from_ \(\theta^{\rm s}\). Corresponding weighted \(C^{2,\alpha}\)-estimates near \(\Gamma_{\rm sonic}^{6}\) for the four cases with respect to \(\theta_{2}\in[0,\theta^{\rm d})\) are obtained immediately due to the symmetry of our problem. For any \(\boldsymbol{\theta}\in\Theta\), define an open set \(\Omega^{5}\subseteq\Omega\) by \[\Omega^{5}\coloneqq\left(\Omega\cap\{\boldsymbol{\xi}\in\mathbb{R}_{+}^{2}\, :\,\xi>u_{5}+q_{5}\sin\theta_{25}\}\right)\setminus\overline{B_{c_{5}^{\rm s} }(O_{5})}\,,\] where \(q_{5}=\operatorname{dist}(O_{5},S_{25})\) is given by (3.14), and \(c_{5}^{\ast}\coloneqq\frac{1}{2}\big{(}|O_{5}P_{2}|+\operatorname{dist}(O_{5}, S_{25})\big{)}\). For \(\boldsymbol{\xi}\in\overline{\Omega^{5}}\), we introduce the modified polar coordinates \((x,y)\) centred at \(O_{5}\) defined by \[(x,y)\coloneqq\mathcal{R}(\xi,\eta)\quad\iff\quad(\xi,\eta)-O_{5}=(c_{5}-x)( \cos y,\sin y)\qquad\text{with }y\in[0,\tfrac{\pi}{2}]\,. \tag{3.24}\] It is clear that \(\mathcal{R}(\Omega^{5})\subseteq\{(x,y)\,:\,0<x<c_{5}-c_{5}^{\ast}\}\). For the pseudo-potential \(\varphi_{5}\) given by (2.33), we have \[\varphi_{5}\circ\mathcal{R}^{-1}(x,y)=-\frac{1}{2}(c_{5}-x)^{2}+\frac{1}{2}u_{ 5}^{2}-u_{5}\xi^{P_{0}^{1}}\qquad\text{for }(x,y)\in\mathcal{R}(\Omega^{5})\,.\] Note that \(u_{5}\xi^{P_{0}^{1}}\to-v_{2}\eta_{0}\) as \(\theta_{1}\to 0^{+}\), where \(\eta_{0}>0\) is defined in SS2.1.3. For any admissible solution \(\varphi\) corresponding to parameters \(\boldsymbol{\theta}\in\Theta\), let \(\psi\) be given by \[\psi(x,y)\coloneqq(\varphi-\varphi_{5})\circ\mathcal{R}^{-1}(x,y)\qquad\text{ for }(x,y)\in\mathcal{R}(\Omega^{5})\,.\] Then \(\psi\) satisfies a free boundary problem for an elliptic equation: 1. _Equation for_ \(\psi\) _in_ \(\mathcal{R}(\Omega^{5})\): Direct calculation gives (3.25) \[\mathcal{N}(\psi)=\big{(}2x-(\gamma+1)\psi_{x}+\mathcal{O}_{1}\big{)}\psi_{xx}+ \mathcal{O}_{2}\psi_{xy}+\big{(}\frac{1}{c_{5}}+\mathcal{O}_{3}\big{)}\psi_{yy} -\big{(}1+\mathcal{O}_{4}\big{)}\psi_{x}+\mathcal{O}_{5}\psi_{y}=0\,,\] with \(\mathcal{O}_{j}=\mathcal{O}_{j}(\psi_{x},\psi_{y},\psi,x,c_{5})\) for \(j=1,\cdots,5\), defined as follows: \[\mathcal{O}_{1}(p_{x},p_{y},z,x,c) =-\frac{x^{2}}{c}+\frac{\gamma+1}{2c}(2x-p_{x})p_{x}-\frac{\gamma -1}{c}\Big{(}z+\frac{p_{y}^{2}}{2(c-x)^{2}}\Big{)}\,,\] \[\mathcal{O}_{2}(p_{x},p_{y},z,x,c) =-\frac{2(p_{x}+c-x)p_{y}}{c(c-x)^{2}}\,,\] (3.26) \[\mathcal{O}_{3}(p_{x},p_{y},z,x,c) =\frac{1}{c(c-x)^{2}}\Big{(}(2c-x)x-(\gamma-1)\big{(}z+(c-x)p_{x}- \frac{p_{x}^{2}}{2}\big{)}+\frac{(\gamma-1)p_{y}^{2}}{2(c-x)^{2}}\Big{)}\,,\] \[\mathcal{O}_{4}(p_{x},p_{y},z,x,c) =\frac{1}{c-x}\Big{(}x-\frac{\gamma-1}{c}\big{(}z+(c-x)p_{x}+ \frac{p_{x}^{2}}{2}+\frac{(\gamma+1)p_{y}^{2}}{2(\gamma-1)(c-x)^{2}}\big{)} \Big{)}\,,\] \[\mathcal{O}_{5}(p_{x},p_{y},z,x,c) =-\frac{2(p_{x}+c-x)p_{y}}{c(c-x)^{3}}\,.\] 2. 
_Boundary condition for_ \(\psi\) _on_ \(\mathcal{R}(\Gamma_{\mathrm{shock}}\cap\partial\Omega^{5})\): Denote the Rankine-Hugoniot condition for \(\varphi\) on \(\Gamma_{\mathrm{shock}}\) as (3.27) \[g^{\mathrm{sh}}(D\varphi,\varphi,\boldsymbol{\xi})\coloneqq\rho(|D\varphi|,\varphi)D\varphi\cdot\boldsymbol{\nu}-D\varphi_{2}\cdot\boldsymbol{\nu}=(\mathcal{A}(D\varphi,\varphi)-D\varphi_{2})\cdot\boldsymbol{\nu}=0\,,\] where \(\mathcal{A}(\boldsymbol{p},z)=\rho(|\boldsymbol{p}|,z)\boldsymbol{p}\), and \(\boldsymbol{\nu}=\frac{D(\varphi_{2}-\varphi)}{|D(\varphi_{2}-\varphi)|}\) is the unit interior normal vector on \(\Gamma_{\mathrm{shock}}\). For the constant \(\delta^{\prime}>0\) given in Lemma 3.9(i), we define a function \(\zeta\in C^{\infty}(\mathbb{R})\) by \[\zeta(t)=\begin{cases}t&\text{ for }t\geq\frac{3\delta^{\prime}}{4}\,,\\ \frac{\delta^{\prime}}{2}&\text{ for }t<\frac{\delta^{\prime}}{2}\,,\end{cases}\] satisfying \(\zeta^{\prime}(t)\geq 0\) on \(\mathbb{R}\). Also, we define an extension of \(g^{\mathrm{sh}}(\boldsymbol{p},z,\boldsymbol{\xi})\) onto \(\mathbb{R}^{2}\times\mathbb{R}\times\overline{\Omega^{5}}\): (3.28) \[g^{\mathrm{sh}}_{\mathrm{mod}}(\boldsymbol{p},z,\boldsymbol{\xi})\coloneqq\big{(}\tilde{\mathcal{A}}(\boldsymbol{p},z)-D\varphi_{2}(\boldsymbol{\xi})\big{)}\cdot\frac{D\varphi_{2}(\boldsymbol{\xi})-\boldsymbol{p}}{\zeta(|D\varphi_{2}(\boldsymbol{\xi})-\boldsymbol{p}|)}\,,\] where \(\tilde{\mathcal{A}}(\boldsymbol{p},z)\) is given by Lemma 3.5, and \(g^{\mathrm{sh}}_{\mathrm{mod}}(\boldsymbol{p},z,\boldsymbol{\xi})=g^{\mathrm{sh}}(\boldsymbol{p},z,\boldsymbol{\xi})\) if \(|D\varphi_{2}(\boldsymbol{\xi})-\boldsymbol{p}|\geq\frac{3\delta^{\prime}}{4}\). Then we define \[M^{(\theta_{1})}(\boldsymbol{p},z,\xi)\coloneqq g^{\mathrm{sh}}_{\mathrm{mod}}\big{(}\boldsymbol{p}+D\varphi_{5},\,z+\varphi_{5},\,(\xi,\,\xi\tan\theta_{25}+a_{25}+\tfrac{z}{v_{2}})\big{)}\] with \((D\varphi_{5},\varphi_{5})\) evaluated at \(\boldsymbol{\xi}=(\xi,\,\xi\tan\theta_{25}+a_{25}+\frac{z}{v_{2}})\). Note that, by Proposition 2.6(i) and Definition 2.7, \((\theta_{25},\,a_{25})\) depend continuously on \(\theta_{1}\in[0,\theta^{\mathrm{d}}]\) with the limit: \(\lim_{\theta_{1}\to 0^{+}}(\theta_{25},a_{25})=(\pi,\eta_{0})\), where \(\eta_{0}>0\) is given by (2.23). We prescribe the free boundary condition for \(\psi\) to be \[\mathcal{B}^{(\theta_{1})}(\psi_{x},\psi_{y},\psi,x,y)=0\qquad\text{on }\mathcal{R}(\Gamma_{\mathrm{shock}}\cap\partial\Omega^{5})\,,\] where \(\mathcal{B}^{(\theta_{1})}\) is given by (3.29) \[\mathcal{B}^{(\theta_{1})}(p_{x},p_{y},z,x,y)\coloneqq|D\bar{\phi}_{25}-(p_{1},p_{2})|M^{(\theta_{1})}(p_{1},p_{2},z,\xi)\,,\] with function \(\bar{\phi}_{25}(\boldsymbol{\xi})\coloneqq(\varphi_{2}-\varphi_{5})(\boldsymbol{\xi})=v_{2}(\eta-\xi\tan\theta_{25}-a_{25})\), and \[\xi=v_{2}\tan\theta_{25}+(c_{5}-x)\cos y\,,\qquad\begin{pmatrix}p_{1}\\ p_{2}\end{pmatrix}=\begin{pmatrix}\cos y&\sin y\\ \sin y&-\cos y\end{pmatrix}\begin{pmatrix}-p_{x}\\ -\frac{p_{y}}{c_{5}-x}\end{pmatrix}.\] 3. _Other conditions for_ \(\psi\): From Problem 2.10(iii)-(iv) and Definition 2.11(iv), \(\psi\) satisfies \[\psi\geq 0\ \text{ in }\mathcal{R}(\Omega^{5})\,,\qquad\psi=0\ \text{ on }\mathcal{R}(\Gamma_{\mathrm{sonic}}^{5})\,,\qquad\psi_{y}=0\ \text{ on }\mathcal{R}(\Gamma_{\mathrm{sym}}\cap\partial\Omega^{5})\,.\] **Definition 3.11** (Parabolic norm).: _Fix \(\alpha\in(0,1)\)._
Define the \(\alpha\)-th parabolic distance between points \(\mathbf{x}=(x,y),\;\tilde{\mathbf{x}}=(\tilde{x},\tilde{y})\in(0,\infty)\times\mathbb{R}\) to be_ \[\delta_{\alpha}^{\rm(par)}(\mathbf{x},\tilde{\mathbf{x}})\coloneqq\left(|x-\tilde{x}|^{2}+\max\{x,\tilde{x}\}|y-\tilde{y}|^{2}\right)^{\frac{\alpha}{2}}.\] _Then, given constants \(\sigma>0\) and \(m\in\mathbb{Z}_{+},\) the parabolic norms are defined as follows\(:\)_ 1. _For any open set_ \(\mathcal{D}\subseteq(0,\infty)\times\mathbb{R}\) _and any function_ \(u\in C^{2}(\mathcal{D})\) _in the_ \((x,y)\)_-coordinates, define_ \[\|u\|_{m,0,\mathcal{D}}^{(\sigma),{\rm(par)}}\coloneqq\sum_{0\leq k+l\leq m}\sup_{\mathbf{x}\in\mathcal{D}}\left(x^{k+\frac{l}{2}-\sigma}|\partial_{x}^{k}\partial_{y}^{l}u(\mathbf{x})|\right),\] \[[u]_{m,\alpha,\mathcal{D}}^{(\sigma),{\rm(par)}}\coloneqq\sum_{k+l=m}\sup_{\begin{subarray}{c}\mathbf{x},\tilde{\mathbf{x}}\in\mathcal{D}\\ \mathbf{x}\neq\tilde{\mathbf{x}}\end{subarray}}\Big{(}\min\big{\{}x^{k+\frac{l}{2}+\alpha-\sigma},\tilde{x}^{k+\frac{l}{2}+\alpha-\sigma}\big{\}}\frac{|\partial_{x}^{k}\partial_{y}^{l}u(\mathbf{x})-\partial_{x}^{k}\partial_{y}^{l}u(\tilde{\mathbf{x}})|}{\delta_{\alpha}^{\rm(par)}(\mathbf{x},\tilde{\mathbf{x}})}\Big{)}\,,\] \[\|u\|_{m,\alpha,\mathcal{D}}^{(\sigma),{\rm(par)}}\coloneqq\|u\|_{m,0,\mathcal{D}}^{(\sigma),{\rm(par)}}+[u]_{m,\alpha,\mathcal{D}}^{(\sigma),{\rm(par)}}\,.\] 2. _For any_ \(a>0,\) _fix an open interval_ \(I\coloneqq(0,a)\)_. For a function_ \(f\in C^{2}(I),\) _define_ \[\|f\|_{m,0,I}^{(\sigma),{\rm(par)}}\coloneqq\sum_{k=0}^{m}\sup_{x\in I}\left(x^{k-\sigma}|\partial_{x}^{k}f(x)|\right),\] \[[f]_{m,\alpha,I}^{(\sigma),{\rm(par)}}\coloneqq\sup_{\begin{subarray}{c}x,\tilde{x}\in I\\ x\neq\tilde{x}\end{subarray}}\Big{(}\min\left\{x^{\alpha+m-\sigma},\tilde{x}^{\alpha+m-\sigma}\right\}\frac{|\partial_{x}^{m}f(x)-\partial_{x}^{m}f(\tilde{x})|}{|x-\tilde{x}|^{\alpha}}\Big{)}\,,\] \[\|f\|_{m,\alpha,I}^{(\sigma),{\rm(par)}}\coloneqq\|f\|_{m,0,I}^{(\sigma),{\rm(par)}}+[f]_{m,\alpha,I}^{(\sigma),{\rm(par)}}\,.\] 3. _In the case_ \(\sigma=2,\) _write_ \(\|u\|_{2,\alpha,\mathcal{D}}^{\rm(par)}\coloneqq\|u\|_{2,\alpha,\mathcal{D}}^{(2),{\rm(par)}}\) _and_ \(\|f\|_{2,\alpha,I}^{\rm(par)}\coloneqq\|f\|_{2,\alpha,I}^{(2),{\rm(par)}}\)_._ Case 1. _Admissible solutions for_ \(\theta_{1}<\theta^{\rm s}\) _away from_ \(\theta^{\rm s}\)_._ **Proposition 3.12**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). There exists a constant \(\bar{\varepsilon}>0\) depending only on \((\gamma,v_{2})\) such that, for any \(\sigma\in(0,\theta^{\rm s})\) and any \(\alpha\in(0,1),\) there exist \(\varepsilon\in(0,\bar{\varepsilon}]\) depending only on \((\gamma,v_{2},\sigma)\) and \(C>0\) depending only on \((\gamma,v_{2},\alpha)\) such that any admissible solution \(\varphi=\psi\circ\mathcal{R}+\varphi_{5}\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{0\leq\theta_{1}\leq\theta^{\rm s}-\sigma\}\) satisfies_ \[\|\psi\|_{2,\alpha,\mathcal{R}(\Omega_{\varepsilon}^{5})}^{\rm(par)}+\|f_{5,{\rm sh}}-f_{5,0}\|_{2,\alpha,(0,\varepsilon)}^{\rm(par)}\leq C\,, \tag{3.30}\] _where \(\Omega_{\varepsilon}^{5}\subseteq\Omega\) is a neighbourhood of \(\Gamma_{\rm sonic}^{5},\) and the functions \(f_{5,{\rm sh}}\) and \(f_{5,0}\) represent \(\Gamma_{\rm shock}\) and \(S_{25}\) respectively, as defined in Step 1 below._ The proof of this proposition is similar to that of [2, Propositions 3.26 and 3.30]. We sketch the main steps of the proof, while omitting most details.
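Before turning to the individual steps, we record an elementary observation for orientation (it is not used in the argument; the scaling bound assumed here is only illustrative): if a function \(u\) on a domain \(\mathcal{D}\subseteq(0,\infty)\times\mathbb{R}\) satisfies the parabolic scaling bounds \(|\partial_{x}^{k}\partial_{y}^{l}u(x,y)|\leq Lx^{2-k-\frac{l}{2}}\) for \(0\leq k+l\leq 2\) and some constant \(L>0\), then each term of \(\|u\|_{2,0,\mathcal{D}}^{(2),{\rm(par)}}\) in Definition 3.11(i) with \(\sigma=2\) is bounded by \(L\), since \[x^{k+\frac{l}{2}-2}\,|\partial_{x}^{k}\partial_{y}^{l}u(x,y)|\leq x^{k+\frac{l}{2}-2}\cdot Lx^{2-k-\frac{l}{2}}=L\,.\] The first two bounds in (3.32) below are of exactly this form for \((k,l)=(0,0)\) and \((1,0)\), and estimate (3.30) asserts that \(\psi\) obeys the full parabolic scaling up to second derivatives, together with the corresponding Holder seminorm bound.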
**1.**_The description and properties of the reflected shock \(S_{25}\) and the free boundary \(\Gamma_{\rm shock}\) in a small neighbourhood of the sonic boundary \(\Gamma_{\rm sonic}^{5}\)._ It follows from the geometric properties of \(\overline{\Omega^{5}}\) that there exist positive constants \((\varepsilon_{1},\varepsilon_{0},\omega_{0})\) depending only on \((\gamma,v_{2}),\) with \(\varepsilon_{0}<\varepsilon_{1}\) and \(\omega_{0}\in(0,1),\) and a unique function \(f_{5,0}\in C^{\infty}([-\varepsilon_{0},\varepsilon_{0}])\) satisfying \[\mathcal{R}\big{(}S_{25}\cap\mathcal{N}_{\varepsilon_{1}}(\Gamma_{\rm sonic}^{5})\big{)}\cap\{|x|<\varepsilon_{0}\}=\big{\{}(x,y)\,:\,|x|<\varepsilon_{0},\,y=f_{5,0}(x)\big{\}}\,,\] where \(\mathcal{N}_{\varepsilon_{1}}(\Gamma_{\rm sonic}^{5})\) is defined by (3.21), and \(2\omega_{0}\leq f_{5,0}^{\prime}(x)\leq\omega_{0}^{-1}\) for \(x\in(-\varepsilon_{0},\varepsilon_{0})\). Then, for any \(\varepsilon\in(0,\varepsilon_{1}]\), we define a set \(\Omega_{\varepsilon}^{5}\subseteq\Omega\) by \[\Omega_{\varepsilon}^{5}\coloneqq\Omega\cap\mathcal{N}_{\varepsilon_{1}}(\Gamma_{\rm sonic}^{5})\cap\mathcal{R}^{-1}\big{(}\{x<\varepsilon\}\big{)}. \tag{3.31}\] Using (3.20) and (3.25), we can show that there exist \(\bar{\varepsilon}\in(0,\varepsilon_{0}]\), \(L\geq 1,\) \(\delta\in(0,\frac{1}{2}),\) and \(\omega\in(0,\omega_{0})\) depending only on \((\gamma,v_{2})\) such that, for all \((x,y)\in\mathcal{R}(\Omega_{\bar{\varepsilon}}^{5}),\) \[0\leq\psi(x,y)\leq Lx^{2}\,,\qquad 0\leq\psi_{x}\leq\frac{2-\delta}{1+\gamma}x\leq Lx\,,\qquad|\psi_{y}(x,y)|\leq Lx\,, \tag{3.32}\] and there exists a unique function \(f_{5,\mathrm{sh}}\in C^{1}([0,\bar{\varepsilon}])\) such that \[\mathcal{R}(\Omega^{5}_{\bar{\varepsilon}})=\left\{(x,y)\,:\,0<x<\bar{\varepsilon},\,0<y<f_{5,\mathrm{sh}}(x)\right\},\] \[\mathcal{R}(\Gamma_{\mathrm{shock}}\cap\partial\Omega^{5}_{\bar{\varepsilon}})=\left\{(x,y)\,:\,0<x<\bar{\varepsilon},\,y=f_{5,\mathrm{sh}}(x)\right\},\] and \(\omega\leq f^{\prime}_{5,\mathrm{sh}}(x)\leq L\) for any \(x\in(0,\bar{\varepsilon})\). **2.**_Ellipticity for the equation and the free boundary condition._ Equation (3.25) for \(\psi\) in \(\mathcal{R}(\Omega^{5})\) can be written as \[\sum_{i,j=1}^{2}\hat{A}^{(\theta_{1})}_{ij}(\psi_{x},\psi_{y},\psi,x)\partial_{i}\partial_{j}\psi+\sum_{i=1}^{2}\hat{A}^{(\theta_{1})}_{i}\partial_{i}\psi=0\,, \tag{3.33}\] with \((\partial_{1},\partial_{2})=(\partial_{x},\partial_{y})\) and \(\hat{A}^{(\theta_{1})}_{12}=\hat{A}^{(\theta_{1})}_{21}\). Then there exist \(\varepsilon_{5}\in(0,\frac{\bar{\varepsilon}}{4}]\) and \(\lambda_{5}>0\) depending only on \((\gamma,v_{2})\) such that, for all \((x,y)\in\mathcal{R}(\overline{\Omega^{5}_{4\varepsilon_{5}}})\), \[\frac{\lambda_{5}}{2}|\boldsymbol{\kappa}|^{2}\leq\sum_{i,j=1}^{2}\hat{A}^{(\theta_{1})}_{ij}(\psi_{x}(x,y),\psi_{y}(x,y),\psi(x,y),x)\frac{\kappa_{i}\kappa_{j}}{x^{2-\frac{i+j}{2}}}\leq\frac{2}{\lambda_{5}}|\boldsymbol{\kappa}|^{2}\qquad\text{for all }\boldsymbol{\kappa}\in\mathbb{R}^{2}\,.\] Moreover, \(\mathcal{B}^{(\theta_{1})}\) defined by (3.29) satisfies \(\mathcal{B}^{(\theta_{1})}(\boldsymbol{0},0,x,y)=0\) for all \((x,y)\in\mathbb{R}^{2}\). Let \(y_{P_{2}}\) be the \(y\)-coordinate of the point \(P_{2}\) defined in Definition 2.9.
For each \(m=2,3,\cdots\), there exist constants \(\delta_{\mathrm{bc}}>0\) and \(C>1\) depending only on \((\gamma,v_{2},m)\) such that, whenever \(|(p_{x},p_{y},z,x)|\leq\delta_{\mathrm{bc}}\) and \(|y-y_{P_{2}}|\leq\delta_{\mathrm{bc}}\), \[\big{|}D^{m}\mathcal{B}^{(\theta_{1})}(p_{x},p_{y},z,x,y)\big{|}\leq C\,.\] Furthermore, there exist constants \(\hat{\delta}_{\mathrm{bc}}>0\), \(C>1\), and \(\varepsilon^{\prime}>0\) depending only on \((\gamma,v_{2})\) such that, whenever \(|(p_{x},p_{y},z,x)|\leq\hat{\delta}_{\mathrm{bc}}\) and \(|y-y_{P_{2}}|\leq\hat{\delta}_{\mathrm{bc}}\), \[\partial_{a}\mathcal{B}^{(\theta_{1})}(p_{x},p_{y},z,x,y)\leq-C^{-1} \text{for }\partial_{a}=\partial_{p_{x}},\partial_{p_{y}}\text{, or }\partial_{z}\,,\] \[D_{(p_{x},p_{y})}\mathcal{B}^{(\theta_{1})}(p_{x},p_{y},z,x,y) \cdot\boldsymbol{\nu}^{(x,y)}_{\mathrm{sh}}\geq C^{-1} \text{on }\mathcal{R}(\Gamma_{\mathrm{shock}}\cap\partial\Omega^{5}_{ \varepsilon^{\prime}})\,,\] where the vector field \(\boldsymbol{\nu}^{(x,y)}_{\mathrm{sh}}\) is the unit normal vector to \(\mathcal{R}(\Gamma_{\mathrm{shock}})\) expressed in the \((x,y)\)-coordinates and oriented towards the interior of \(\mathcal{R}(\Omega)\). **3.**_Extension of the domain for the coefficients to equation (3.33)._ Let \(\varepsilon_{0}>0\) and \(L\geq 1\) be the constants defined as above. Then there exist constants \(\varepsilon\in(0,\frac{\varepsilon_{0}}{2}]\) and \(C>0\) depending only on \((\gamma,v_{2})\) such that, for any admissible solution \(\varphi\) corresponding to parameters \(\boldsymbol{\theta}\in\Theta\), function \(\psi=(\varphi-\varphi_{5})\circ\mathcal{R}^{-1}\) satisfies \[\sum_{i,j=1}^{2}\hat{A}^{(\mathrm{mod})}_{ij}(\psi_{x},\psi_{y},\psi,x) \partial_{i}\partial_{j}\psi+\sum_{i,j=1}^{2}\hat{A}^{(\mathrm{mod})}_{i}( \psi_{x},\psi_{y},\psi,x)\partial_{i}\psi=0\qquad\text{in }\mathcal{R}(\Omega^{5}_{ \varepsilon})\,, \tag{3.34}\] with \((\partial_{1},\partial_{2})\coloneqq(\partial_{x},\partial_{y})\) and coefficients \((\hat{A}^{(\mathrm{mod})}_{ij},\hat{A}^{(\mathrm{mod})}_{i})\) satisfying \[(\hat{A}^{(\mathrm{mod})}_{ij},\hat{A}^{(\mathrm{mod})}_{i})=(\hat{A}^{( \theta_{1})}_{ij},\hat{A}^{(\theta_{1})}_{i})\quad\text{ in }\{(p_{x},p_{y},z,x)\,:\,|(p_{x},p_{y})|\leq Lx,\,|z|\leq Lx^{2},\,0<x< \varepsilon\}\,.\] Moreover, the modified coefficients satisfy \[\big{|}(\hat{A}^{(\mathrm{mod})}_{11},\hat{A}^{(\mathrm{mod})}_{ 12},\hat{A}^{(\mathrm{mod})}_{2})(p_{x},p_{y},z,x)\big{|}\leq Cx\qquad\text{in }\mathbb{R}^{2}\times\mathbb{R}\times(0, \varepsilon)\,,\] \[\big{\|}(\hat{A}^{(\mathrm{mod})}_{22},\hat{A}^{(\mathrm{mod})}_{ 1})\big{\|}_{0,\mathbb{R}^{2}\times\mathbb{R}\times(0,\varepsilon)}+\big{\|}D_{(p _{x},p_{y},z,x)}(\hat{A}^{(\mathrm{mod})}_{ij},\hat{A}^{(\mathrm{mod})}_{i}) \big{\|}_{0,\mathbb{R}^{2}\times\mathbb{R}\times(0,\varepsilon)}\leq C\,.\] **4.**_Rescaling method in the local domain to obtain the weighted Holder estimate._ Using the strictly decreasing property of \(\eta^{P_{2}}\) with respect to \(\theta_{1}\in[0,\theta^{\mathrm{s}}]\) which is shown in Lemma A.2(ii), we obtain a constant \(l_{\mathrm{so}}>0\) depending only on \((\gamma,v_{2},\sigma)\) such that, for any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{0\leq\theta_{1}\leq\theta^{\mathrm{s}}-\sigma\}\), \[y_{P}=f_{5,\mathrm{sh}}(x_{P})\geq l_{\mathrm{so}}\qquad\text{for any }x_{P}\in[0,\bar{ \varepsilon}]\,.\] Subsequently, we define 
\(\varepsilon_{*}\coloneqq\min\{\frac{\bar{\varepsilon}}{2},\,l_{\mathrm{so}}^{2}\}\). For fixed \(\varepsilon\in(0,\varepsilon_{*}]\) and any \(z_{0}=(x_{0},y_{0})\in\mathcal{R}(\overline{\Omega_{\varepsilon}^{5}}\,\,|\, \overline{\Gamma_{\text{sonic}}^{5}})\), define the rescaled coordinates \((S,T)\) by \((x,y)\eqqcolon z_{0}+\frac{1}{4}(x_{0}S,\sqrt{x_{0}}T)\), which take values in a local domain \[Q_{r}^{(z_{0})}\coloneqq\big{\{}(S,T)\in(-1,1)^{2}\,:\,z=z_{0}+\frac{r}{4}(x_{ 0}S,\sqrt{x_{0}}T)\in\mathcal{R}(\Omega_{2\varepsilon}^{5})\big{\}}\qquad \text{for any }r\in(0,1]\,.\] Considering property (3.32), we define a rescaled function \(\psi^{(z_{0})}\) by \[\psi^{(z_{0})}(S,T)\coloneqq\frac{1}{x_{0}^{2}}\psi(x_{0}+\frac{x_{0}}{4}S,y_ {0}+\frac{\sqrt{x_{0}}}{4}T)\qquad\text{for }(S,T)\in Q_{1}^{(z_{0})}\,.\] With a suitable rescaling on the coefficients of the modified equation (3.34), we obtain a uniformly elliptic equation for \(\psi^{(z_{0})}\) under the \((S,T)\)-coordinates. Note that, for \(z_{0}\in\mathcal{R}(\overline{\Omega_{\varepsilon}^{5}}\cap\Gamma_{\text{ shock}})\), we can express \(\Gamma_{\text{shock}}\) locally under the rescaled coordinates as \(\Gamma_{\text{shock}}^{(z_{0})}\coloneqq\big{\{}(S,T)\in(-1,1)^{2}\,:\,T=F^{( z_{0})}(S)\big{\}}\subseteq\partial Q_{1}^{(z_{0})}\) with \(F^{(z_{0})}(S)\) defined by \[F^{(z_{0})}(S)\coloneqq\frac{4}{\sqrt{x_{0}}}\big{(}f_{5,\text{sh}}(x_{0}+ \frac{x_{0}}{4}S)-f_{5,\text{sh}}(x_{0})\big{)}\qquad\text{for }S\in(-1,1)\,.\] Then, in the local domain \(Q_{1}^{(z_{0})}\), we apply [10, Theorem 4.2.3] for \(z_{0}\in\mathcal{R}(\Omega_{\varepsilon}^{5})\), [10, Theorem 4.2.8] for \(z_{0}\in\mathcal{R}(\Gamma_{\text{shock}}\cap\partial\Omega_{\varepsilon}^{5})\), and [10, Theorem 4.2.10] for \(z_{0}\in\mathcal{R}(\Gamma_{\text{sym}}\cap\partial\Omega_{\varepsilon}^{5})\), respectively, to obtain the uniform bound: \[\sup_{z_{0}\in\mathcal{R}(\Omega_{\varepsilon}^{5})}\big{\|}\psi^{(z_{0})} \big{\|}_{2,\alpha,\overline{Q_{1/4}^{(z_{0})}}}+\sup_{z_{0}\in\mathcal{R}( \Gamma_{\text{shock}}\cap\partial\Omega_{\varepsilon}^{5})}\frac{1}{\sqrt{x _{0}}}\big{\|}F^{(z_{0})}\big{\|}_{2,\alpha,[-\frac{1}{4}\cdot\frac{1}{4}]} \leq C\,.\] Scaling back into the \((x,y)\)-coordinates, we obtain the weighted Holder estimates (3.30). Case 2. _Admissible solutions for \(\theta_{1}<\theta^{\text{s}}\) close to \(\theta^{\text{s}}\)._ **Proposition 3.13**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in[v_{2}^{\text{s}},0)\). Let \(\bar{\varepsilon}>0\) be the constant from Proposition 3.12. For any \(\alpha\in(0,1),\) there exist constants \(\varepsilon\in(0,\bar{\varepsilon}]\) and \(\sigma_{1}\in(0,\theta^{\text{s}})\) depending only on \((\gamma,v_{2}),\) and \(C>0\) depending only on \((\gamma,v_{2},\alpha)\) such that any admissible solution \(\varphi=\psi\circ\mathcal{R}+\varphi_{5}\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\text{s}}-\sigma_{1}\leq\theta_{1}< \theta^{\text{s}}\}\) satisfies_ \[\|\psi\|_{2,\alpha,\mathcal{R}(\Omega_{\varepsilon}^{5})}^{\text{(par)}}+\|f_{ 5,\text{sh}}-f_{5,0}\|_{2,\alpha,(0,\varepsilon)}^{\text{(par)}}\leq C\,,\] _where \(\Omega_{\varepsilon}^{5}\subseteq\Omega,\) and functions \(f_{5,\text{sh}}\) and \(f_{5,0}\) are defined in Step **1** of Case 1 _above._ The proof is similar to that of [2, Proposition 3.32]. The admissible solutions for Case 2 share the same properties as in Steps **1**-**3** of Case 1. 
We omit the details of the proof and only draw the main differences compared with the proof of Proposition 3.12 here. **1.**_Weighted Holder estimate near \(\mathcal{R}(P_{0}^{1})\)._ Because \(v_{2}\in[v_{2}^{\text{s}},0)\), \(y_{P_{2}}>0\) tends to zero as \(\theta_{1}\to\theta^{\text{s}-}\). Therefore, for the same rescaled coordinates \((S,T)\) and local domain \(Q_{r}^{(z_{0})}\) with \(z_{0}=(x_{0},y_{0})\in\mathcal{R}(\Omega_{\varepsilon}^{5})\) as given in Step **4** of Case 1 above, it is possible that "\(Q_{r}^{(z_{0})}\) does not fit into \(\Omega_{\varepsilon}^{5}\)", meaning that the image of \(Q_{r}^{(z_{0})}\) under transform \((S,T)\mapsto(x,y)\) intersects the boundaries: \(\mathcal{R}(\Gamma_{\text{shock}}\cap\partial\Omega_{\varepsilon}^{5})\) and \(\mathcal{R}(\Gamma_{\text{sym}}\cap\partial\Omega_{\varepsilon}^{5})\) simultaneously. For that reason, we define \(\Omega_{\text{fc}}^{5}\coloneqq\Omega_{\varepsilon}^{5}\cap\mathcal{R}^{-1}( \{x<y_{P_{2}}^{2}\})\). It follows that \(Q_{r}^{(z_{0})}\) fits into \(\Omega_{\text{fc}}^{5}\) for any \(z_{0}\in\mathcal{R}(\Omega_{\text{fc}}^{5})\) due to similar considerations as the choice of \(\varepsilon_{*}\) above. In this case, we follow the same idea as for Proposition 3.12 to obtain that there exists a constant \(C>0\) depending only on \((\gamma,v_{2},\alpha)\) such that, for \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\text{s}}-\sigma^{*}\leq\theta_{1}< \theta^{\text{s}}\}\), \(\|\psi\|_{2,\alpha,\mathcal{R}(\Omega_{\text{fc}}^{5})}^{\text{(par)}}\leq C\) for some small constant \(\sigma^{*}>0\) depending only on \((\gamma,v_{2})\). **2.**_Weighted Holder estimate away from \(\mathcal{R}(P_{0}^{1})\)._ Note that there exists sufficiently large \(k>1\) depending only on \((\gamma,v_{2})\) such that the following geometric relations hold: \[\{0<x<2\bar{\varepsilon},\,0<y<y_{P_{2}}+\frac{x}{k}\}\ \subseteq\ \mathcal{R}(\Omega_{2\bar{ \varepsilon}}^{5})\ \subseteq\ \{0<x<2\bar{\varepsilon},\,0<y<y_{P_{2}}+kx\}\,.\] In the rest of the domain, there exist constants \(\hat{\varepsilon}\in(0,\frac{\bar{\varepsilon}}{2}]\), \(\sigma^{\prime}\in(0,\sigma^{*}]\), and \(C^{*}>0\) depending only on \((\gamma,v_{2})\) such that, for \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\text{s}}-\sigma^{\prime}\leq\theta_{1 }<\theta^{\text{s}}\}\), \[0\leq\psi(x,y)\leq C^{*}x^{4}\qquad\text{in }\mathcal{R}(\Omega_{2\bar{ \varepsilon}}^{5})\cap\{x>\frac{1}{10}y_{P_{2}}^{2}\}\,. \tag{3.35}\] Then, for any \(z_{0}=(x_{0},y_{0})\in\mathcal{R}(\overline{\Omega}_{\varepsilon}^{5})\cap\{x>\frac {1}{5}y_{P_{2}}^{2}\}\), we introduce the rescaled coordinates \((S,T)\) as \((x,y)\eqqcolon z_{0}+\frac{\sqrt{x_{0}}}{10k}(x_{0}S,\sqrt{x_{0}}T)\), which take values in a local domain \(Q_{r}^{(z_{0})}\) defined by \[Q_{r}^{(z_{0})}\coloneqq\big{\{}(S,T)\in(-1,1)^{2}\,:\,z=z_{0}+\frac{r\sqrt{x_{ 0}}}{10k}(x_{0}S,\sqrt{x_{0}}T)\in\mathcal{R}(\Omega_{2\varepsilon}^{5})\big{\}} \qquad\text{for any }r\in(0,1]\,.\] Thus, the local domain \(Q_{r}^{(z_{0})}\) fits into \(\Omega_{2\varepsilon}^{5}\), meaning that the image of \(Q_{r}^{(z_{0})}\) under transform \((S,T)\mapsto(x,y)\) does not intersect both \(\mathcal{R}(\Gamma_{\text{shock}})\) and \(\mathcal{R}(\Gamma_{\text{sym}})\) simultaneously. 
Considering property (3.35), we define a rescaled function \(\psi^{(z_{0})}\) by \[\psi^{(z_{0})}(S,T)\coloneqq\frac{1}{x_{0}^{4}}\psi(x_{0}+\frac{x_{0}^{3/2}}{ 10k}S,y_{0}+\frac{x_{0}}{10k}T)\qquad\text{for }(S,T)\in Q_{1}^{(z_{0})}\,.\] The rescaled function \(\psi^{(z_{0})}\) satisfies a uniformly elliptic equation in the \((S,T)\)-coordinates after a suitable rescaling on the coefficients of the modified equation (3.34). The rest of the proof follows the same idea as in Step 4 of the proof of Proposition 3.12. Case 3. _Admissible solutions for \(\theta_{1}\geq\theta^{\text{s}}\) close to \(\theta^{\text{s}}\)._ **Proposition 3.14**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{2}^{\text{s}},0)\). Let \(\bar{\varepsilon}>0\) be the constant from Proposition 3.12. For each \(\alpha\in(0,1),\) there exist constants \(\varepsilon\in(0,\bar{\varepsilon}]\) and \(\sigma_{3}\in(0,\theta^{\text{d}}-\theta^{\text{s}})\) depending only on \((\gamma,v_{2})\), and a constant \(C>0\) depending only on \((\gamma,v_{2},\alpha)\) such that any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\text{s}}\leq\theta_{1}<\theta^{ \text{s}}+\sigma_{3}\}\) satisfies_ \[\begin{split}&\|\varphi-\varphi_{5}\|_{2,\alpha,\overline{\Omega }_{\varepsilon}^{5}}+\|f_{5,\text{sh}}-f_{5,0}\|_{2,\alpha,(0,\varepsilon)} \leq C\,,\\ &|D_{\boldsymbol{\xi}}^{m}(\varphi-\varphi_{5})(P_{0}^{1})|= \frac{\mathrm{d}^{m}}{\mathrm{d}x^{m}}\left(f_{5,\text{sh}}-f_{5,0}\right)(0 )=0\qquad\text{for }m=0,1,2\,,\end{split} \tag{3.36}\] _where \(\Omega_{\varepsilon}^{5}\subseteq\Omega\), and functions \(f_{5,\text{sh}}\) and \(f_{5,0}\) are defined in Step 1 below. _ The proof is similar to that of [2, Proposition 3.39], so we omit most details here and only emphasize the difference compared with the previous two cases. **1.**_The depiction and the properties of the reflected shock \(S_{25}\) and the free boundary \(\Gamma_{\text{shock}}\) in a small neighbourhood of the sonic boundary \(\Gamma_{\text{sonic}}^{5}\)._ Let \(\varepsilon_{1},\varepsilon_{0},\omega_{0}>0\) be chosen as in Step 1 of the proof of Proposition 3.12. For the coordinates: \((x,y)=\mathcal{R}(\boldsymbol{\xi})\) given by (3.24), we define \(\hat{x}\coloneqq x-x_{P_{2}}\). Note that \(x_{P_{2}}=0\) if \(\theta_{1}\in[0,\theta^{\text{s}}]\) and \(x_{P_{2}}>0\) if \(\theta_{1}\in(\theta^{\text{s}},\theta^{\text{s}}+\sigma_{2}]\). Then there exists a unique function \(f_{5,0}\in C^{\infty}([-\varepsilon_{0},\varepsilon_{0}])\) satisfying \[\mathcal{R}\big{(}S_{25}\cap\mathcal{N}_{\varepsilon_{1}}(\Gamma_{\text{sonic }}^{5})\big{)}\cap\{(x,y)\,:\,|\hat{x}|<\varepsilon_{0}\}=\big{\{}(x,y)\,:\,| \hat{x}|<\varepsilon_{0},\,y=f_{5,0}(\hat{x})\big{\}}\,,\] and \(2\omega_{0}\leq f_{5,0}^{\prime}(\hat{x})\leq\omega_{0}^{-1}\) for \(\hat{x}\in(-\varepsilon_{0},\varepsilon_{0})\). 
Reducing \(\bar{\varepsilon}\in(0,\varepsilon_{0}]\) further, we can choose \(\sigma_{2}\in(0,\theta^{\text{d}}-\theta^{\text{s}})\) depending only on \((\gamma,v_{2},\bar{\varepsilon})\) such that, for any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{0\leq\theta_{1}\leq\theta^{\text{s}}+\sigma_{ 2}\}\), \(\Omega_{\varepsilon}^{5}\) defined by (3.31) is nonempty due to the monotonicity of the pseudo-Mach number in Lemma 2.3, from which we can generalize the definition of \(\Omega_{\varepsilon}^{5}\) to \[\Omega_{\bar{\varepsilon}}^{5}\coloneqq\Omega\cap\mathcal{N}_{\varepsilon_{1}}( \Gamma_{\text{sonic}}^{5})\cap\mathcal{R}^{-1}\big{(}\big{\{}(x,y)\,:\,0<\hat{ x}<\bar{\varepsilon}\big{\}}\big{)}\,. \tag{3.37}\] There exist \(L\geq 1\), \(\delta\in(0,\frac{1}{2})\), and \(\omega\in(0,\omega_{0})\) depending only on \((\gamma,v_{2})\) such that, for all \((x,y)\in\mathcal{R}(\Omega_{\varepsilon}^{5})\), \[0\leq\psi(x,y)\leq Lx^{2}\,,\qquad 0\leq\psi_{x}\leq\frac{2-\delta}{1+\gamma}x \leq Lx\,,\qquad|\psi_{y}(x,y)|\leq Lx\,, \tag{3.38}\] for \(\psi\coloneqq(\varphi-\varphi_{5})\circ\mathcal{R}^{-1}\), and there exists a unique function \(f_{5,\text{sh}}\in C^{1}([0,\bar{\varepsilon}])\) such that \[\mathcal{R}(\Gamma_{\text{shock}}\cap\partial\Omega_{\bar{\varepsilon}}^{5})= \{(x,y)\,:\,0<\hat{x}<\bar{\varepsilon},\,y=f_{5,\text{sh}}(\hat{x})\}\,,\] and \(\omega\leq f_{5,\text{sh}}^{\prime}(\hat{x})\leq L\) for any \(\hat{x}\in(0,\bar{\varepsilon})\). **2.**_The free boundary condition and the modification on equation (3.33)._ Similar to [2, Lemma 3.37 and Corollary 3.38], it follows that, for \(\sigma_{2}\in(0,\theta^{\text{d}}-\theta^{\text{s}})\) defined in Step 1, there exists a constant \(\mu_{0}>0\) depending only on \((\gamma,v_{2},\sigma_{2})\) such that, for any \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\rm s}\leq\theta_{1}\leq\theta^{\rm d}- \sigma_{2}\}\), \(g_{\rm mod}^{\rm sh}\) defined by (3.28) satisfies the following properties: \[\partial_{a}g_{\rm mod}^{\rm sh}(D\varphi_{5}(P_{2}),\varphi_{5}(P_{2}),P_{2}) \leq-\mu_{0}\,,\] where \(\partial_{a}=\partial_{p_{1}},\partial_{p_{2}}\), or \(\partial_{z}\). From these properties above, we can show that \(\mathcal{B}^{(\theta_{1})}\) defined by (3.29) satisfies the same properties as in Step **2** of Case 1. The extension of the domain for the coefficients of equation (3.33) is the same as Step **3** of Case 1. **3.**_Rescaling method in the local domain to obtain the weighted Holder estimate._ Note that, from the properties of \(f_{5,0}\) and \(f_{5,{\rm sh}}\) in Step **1**, there exists \(k>1\) depending only on \((\gamma,v_{2})\) such that \[\{0<\hat{x}<\bar{\varepsilon},\,0<y<\frac{\hat{x}}{k}\}\ \subseteq\ \mathcal{R}( \Omega_{\bar{\varepsilon}}^{5})\ \subseteq\ \{0<\hat{x}<\bar{\varepsilon},\,0<y<k\hat{x}\}\,.\] Moreover, there exist constants \(\varepsilon\in(0,\frac{\bar{\varepsilon}}{2}],\sigma_{3}\in(0,\sigma_{2}]\), and \(C>1\) depending only on \((\gamma,v_{2})\) such that, for \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\rm s}\leq\theta_{1}\leq\theta^{\rm s }+\sigma_{3}\}\), \[x_{P_{2}}\leq\frac{\varepsilon}{10}\,,\qquad 0\leq\psi(x,y)\leq C(x-x_{P_{2}})^{ 5}\quad\text{in }\mathcal{R}(\Omega_{2\varepsilon}^{5})\,. 
\tag{3.39}\] Then, for any \(z_{0}=(x_{0},y_{0})\in\mathcal{R}(\overline{\Omega_{\varepsilon}^{5}}\setminus \{P_{2}\})\), we introduce the rescaled coordinates \((S,T)\) as \((x,y)\eqqcolon z_{0}+\frac{x_{0}-x_{P_{2}}}{10k}(\sqrt{x_{0}}S,T)\), taking values in a local domain \(Q_{r}^{(z_{0})}\) defined by \[Q_{r}^{(z_{0})}\coloneqq\left\{(S,T)\in(-1,1)^{2}\,:\,z=z_{0}+\frac{r(x_{0}-x_ {P_{2}})}{10k}(\sqrt{x_{0}}S,T)\in\mathcal{R}(\Omega_{2\varepsilon}^{5}) \right\}\qquad\text{for any }r\in(0,1]\,.\] Then the local domain \(Q_{r}^{(z_{0})}\) fits into \(\Omega_{2\varepsilon}^{5}\), meaning that the image of \(Q_{r}^{(z_{0})}\) under transform \((S,T)\mapsto(x,y)\) does not intersect both \(\mathcal{R}(\Gamma_{\rm shock})\) and \(\mathcal{R}(\Gamma_{\rm sym})\) simultaneously. Considering property (3.39), we define a rescaled function \(\psi^{(z_{0})}\) by \[\psi^{(z_{0})}(S,T)\coloneqq\frac{1}{(x_{0}-x_{P_{2}})^{5}}\psi(x_{0}+\frac{x _{0}-x_{P_{2}}}{10k}\sqrt{x_{0}}S,\,y_{0}+\frac{x_{0}-x_{P_{2}}}{10k}T)\qquad \text{for }(S,T)\in Q_{1}^{(z_{0})}.\] The rescaled function \(\psi^{(z_{0})}\) satisfies a uniformly elliptic equation in the \((S,T)\)-coordinates after a suitable rescaling of the coefficients of the modified equation (3.34). The rest of the proof follows the same idea as Step **4** of Case 1. Case 4. _Admissible solutions for \(\theta_{1}\geq\theta^{\rm s}\) away from \(\theta^{\rm s}\)._ Let \(\sigma_{3}>0\) be from Proposition 3.14. By Proposition 3.8, there exists a constant \(\delta\in(0,1)\) depending only on \((\gamma,v_{2})\) such that any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\rm s}+\frac{\sigma_{3}}{2}\leq \theta_{1}<\theta^{\rm d}\}\) satisfies \[\frac{|D\varphi|^{2}}{c^{2}(|D\varphi|,\varphi)}\leq 1-\delta\qquad\text{in } \overline{\Omega}\cap\{\xi\geq 0\}\,, \tag{3.40}\] where \(c(|D\varphi|,\varphi)\) is defined by (3.2). By Lemma 3.3 and (3.40), \((D\varphi(\boldsymbol{\xi}),\varphi(\boldsymbol{\xi}))\in\mathcal{K}_{R_{*}}\) for some constant \(R_{*}\geq 2\) depending only on \((\gamma,v_{2})\), where \(\mathcal{K}_{R}\) is defined by (3.9). In particular, there exist \(\lambda_{*}>0\) and \(r_{*}>0\), depending only on \((\gamma,v_{2})\), such that any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\rm s}+\frac{\sigma_{3}}{2}\leq \theta_{1}<\theta^{\rm d}\}\) satisfies \[\lambda_{*}|\boldsymbol{\kappa}|^{2}\leq\sum_{i,j=1}^{2}\partial_{p_{j}} \mathcal{A}_{i}(D\varphi(\boldsymbol{\xi}),\varphi(\boldsymbol{\xi}))\kappa_{i }\kappa_{j}\leq\lambda_{*}^{-1}|\boldsymbol{\kappa}|^{2} \tag{3.41}\] for any \(\boldsymbol{\xi}\in\overline{\Omega}\cap B_{2r_{*}}(P_{0}^{1})\) and any \(\boldsymbol{\kappa}=(\kappa_{1},\kappa_{2})\in\mathbb{R}^{2}\). Set \(\bar{\phi}(\boldsymbol{\xi})\coloneqq\varphi_{2}(\boldsymbol{\xi})-\varphi( \boldsymbol{\xi})\). 
Recalling equation (3.1) for function \(\phi=\varphi-\varphi_{5}\), the free boundary condition (3.27) on \(\Gamma_{\rm shock}\), and the slip boundary condition defined in Problem 2.10(iv) on \(\Gamma_{\rm sym}\), it follows that \(\bar{\phi}\) satisfies the system: \[\begin{cases}\sum\limits_{i,j=1}^{2}\bar{A}_{ij}(D\bar{\phi},\bar{\phi}, \boldsymbol{\xi})\partial_{i}\partial_{j}\bar{\phi}=0&\text{in }\Omega\cap B_{r_{*}}(P_{0}^{1})\,,\\ \bar{g}^{\rm sym}(D\bar{\phi},\bar{\phi},\boldsymbol{\xi})=0&\text{on }\Gamma_{\rm sym }\,,\\ \bar{g}^{\rm sh}(D\bar{\phi},\bar{\phi},\boldsymbol{\xi})=0&\text{on }\Gamma_{\rm shock }\,,\end{cases} \tag{3.42}\] with coefficients \(\bar{A}_{ij}\) and the boundary functions \(\bar{g}^{\rm sym}\) and \(\bar{g}^{\rm sh}\) defined by \[\begin{split}&\bar{A}_{ij}(\mathbf{p},z,\mathbf{\xi})\coloneqq c^{2}(|D \varphi_{2}-\mathbf{p}|,\varphi_{2}-z)\delta_{ij}-(\partial_{i}\varphi_{2}-p_{i})( \partial_{j}\varphi_{2}-p_{j})\,,\\ &\bar{g}^{\rm sym}(\mathbf{p},z,\mathbf{\xi})\coloneqq v_{2}-p_{2}-\eta \,,\\ &\bar{g}^{\rm sh}(\mathbf{p},z,\mathbf{\xi})\coloneqq-g^{\rm sh}(D\varphi_ {2}-\mathbf{p},\varphi_{2}-z,\mathbf{\xi})\,,\end{split} \tag{3.43}\] for \(g^{\rm sh}\) given by (3.27), where \((\partial_{1},\partial_{2})=(\partial_{\xi},\partial_{\eta})\). We define a vector \(\mathbf{e}\) as \[\mathbf{e}\coloneqq\frac{D\varphi_{2}(P^{1}_{0})}{|D\varphi_{2}(P^{1}_{0})|}= \frac{O_{2}-P^{1}_{0}}{|O_{2}-P^{1}_{0}|}\,,\] and choose \(\mathbf{e}^{\perp}\) as the clockwise rotation of \(\mathbf{e}\) by \(\frac{\pi}{2}\). Note that \(\mathbf{e}\) depends only on \((\gamma,v_{2},\theta_{1})\) and satisfies that, for any \(\mathbf{\xi}\in\Gamma_{\rm shock}\cap B_{r_{*}}(P^{1}_{0})\), \[\partial_{\mathbf{e}}(\varphi_{2}-\varphi)(\mathbf{\xi})=-\partial_{r}(\varphi_{2}- \varphi)+\big{(}\mathbf{e}-\frac{D\varphi(\mathbf{\xi})}{|D\varphi(\mathbf{\xi})|}\big{)} \cdot D(\varphi_{2}-\varphi)\geq\frac{d^{\prime}_{0}}{2}\,, \tag{3.44}\] where \(d^{\prime}_{0}>0\) is the constant from (3.23), depending only on \((\gamma,v_{2})\). Then we define the new coordinates \((S,T)\) by \[\mathbf{\xi}\eqqcolon P^{1}_{0}+S\mathbf{e}+T\mathbf{e}^{\perp}\,. \tag{3.45}\] Under these \((S,T)\)-coordinates, we can express \(S_{25},\Gamma_{\rm shock},\Gamma_{\rm sym}\), and the domain near \(P^{1}_{0}\) as \[\begin{split}& S_{25}\cap B_{r_{*}}(P^{1}_{0})=\{\mathbf{\xi}(S,T) \,:\,S=a_{\rm ref}T,\,T>0,\,(S,T)\in B_{r_{*}}(\mathbf{0})\}\,,\\ &\Gamma_{\rm shock}\cap B_{r_{*}}(P^{1}_{0})=\{\mathbf{\xi}(S,T)\,: \,S=f_{\mathbf{e}}(T),\,T>0,\,(S,T)\in B_{r_{*}}(\mathbf{0})\}\,,\\ &\Gamma_{\rm sym}\cap B_{r_{*}}(P^{1}_{0})=\{\mathbf{\xi}(S,T)\,:\,S= a_{\rm sym}T,\,T>0,\,(S,T)\in B_{r_{*}}(\mathbf{0})\}\,,\\ &\Omega\cap B_{r_{*}}(P^{1}_{0})=\{\mathbf{\xi}(S,T)\,:\,a_{\rm ref}T \leq f_{\mathbf{e}}(T)<S<a_{\rm sym}T,\,T>0,\,(S,T)\in B_{r_{*}}(\mathbf{0})\}\,,\end{split} \tag{3.46}\] where the positive constants \(a_{\rm ref}=\tan(\theta_{25}-\tilde{\theta}_{25}-\frac{\pi}{2})\) and \(a_{\rm sym}=\cot\hat{\theta}_{25}\) depend continuously on \(\theta_{1}\in(0,\theta^{\rm d})\), and there exists a constant \(C>0\) depending only on \((\gamma,v_{2})\) such that \(C^{-1}\leq a_{\rm ref}<a_{\rm sym}\leq C\) for all \(\theta_{1}\in[\theta^{\rm s},\theta^{\rm d})\). Note that \(\hat{\theta}_{25}\) is defined by (2.28), and we have used Proposition 2.6, including (2.40). 
**Definition 3.15** (Weighted Holder norm).: _For any \(\sigma\in\mathbb{R},\,\alpha\in(0,1),\) and \(m\in\mathbb{N},\) the weighted Holder norms are defined as follows_:__ 1. _For any open bounded connected set_ \(U\subseteq\mathbb{R}^{2},\) _let_ \(\Gamma\) _be a closed portion of_ \(\partial U\)_. Write_ \[\delta_{\mathbf{\xi}_{1}}\coloneqq{\rm dist}(\mathbf{\xi}_{1},\Gamma)\,,\quad\delta_{ \mathbf{\xi}_{1},\mathbf{\xi}_{2}}\coloneqq\min\{\delta_{\mathbf{\xi}_{1}},\delta_{\mathbf{ \xi}_{2}}\}\qquad\text{for any }\mathbf{\xi}_{1},\mathbf{\xi}_{2}\in U\,.\] _For any_ \(u\in C^{m}(U),\) _define_ \[\|u\|_{m,0,U}^{(\sigma),\Gamma}\coloneqq\sum_{0\leq|\mathbf{\beta}| \leq m}\sup_{\mathbf{\xi}_{1}\in U}\Big{(}\delta_{\mathbf{\xi}_{1}}^{\max\{|\mathbf{\beta} |+\sigma,0\}}\big{|}D^{\mathbf{\beta}}u(\mathbf{\xi}_{1})\big{|}\Big{)}\,,\] \[[u]_{m,\alpha,U}^{(\sigma),\Gamma}\coloneqq\|u\|_{m,0,U}^{(\sigma), \Gamma}+[u]_{m,\alpha,U}^{(\sigma),\Gamma}\,,\] _where_ \(D^{\mathbf{\beta}}\coloneqq\partial_{\xi}^{\beta_{1}}\partial_{\eta}^{\beta_{2}}\) _for_ \(\mathbf{\beta}\coloneqq(\beta_{1},\beta_{2})\) _with_ \(\beta_{1},\beta_{2}\in\mathbb{N}\) _and_ \(|\mathbf{\beta}|=\beta_{1}+\beta_{2}\)_._ 2. _For any open bounded interval_ \(I\subseteq\mathbb{R},\) _let_ \(x_{0}\in\partial I\) _be an endpoint of_ \(I\)_. For any_ \(f\in C^{m}(I),\) _define_ \[\|f\|_{m,0,I}^{(\sigma),\{x_{0}\}}\coloneqq\sum_{k=0}^{m}\sup_{x\in I}\Big{(}|x-x_ {0}|^{\max\{k+\sigma,0\}}\big{|}f^{(k)}(x)\big{|}\Big{)}\,,\] \[[f]_{m,\alpha,I}^{(\sigma),\{x_{0}\}}\coloneqq\sup_{\begin{subarray}{ c}x_{1},x_{2}\in I\\ x_{1}\neq x_{2}\end{subarray}}\Big{(}\min\big{\{}|x_{1}-x_{0}|,|x_{2}-x_{0}| \big{\}}^{\max\{m+\alpha+\sigma,0\}}\frac{|f^{(m)}(x_{1})-f^{(m)}(x_{2})|}{|x_{1 }-x_{2}|^{\alpha}}\Big{)}\,,\] \[\|f\|_{m,\alpha,I}^{(\sigma),\{x_{0}\}}\coloneqq\|f\|_{m,0,I}^{( \sigma),\{x_{0}\}}+[f]_{m,\alpha,I}^{(\sigma),\{x_{0}\}}\,.\] **Proposition 3.16**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{2}^{s},0)\). Let \(\sigma_{3}>0\) be the constant from Proposition 3.14. For small constants \(\sigma_{\rm s}\in(0,\frac{\sigma_{3}}{2}]\) and \(\sigma_{\rm d}\in(0,\frac{\theta^{\rm d}}{10})\), there exist \(r\in(0,r_{*})\), \(\alpha\in(0,1)\), and \(C>0\) depending only on \((\gamma,v_{2},\sigma_{\rm s},\sigma_{\rm d})\) such that any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\rm s}+\sigma_{\rm s}\leq\theta_{1} \leq\theta^{\rm d}-\sigma_{\rm d}\}\) satisfies the estimates_:__ \[\left\|\varphi\right\|_{2,\alpha,\Omega\cap B_{r}(P_{0}^{1})}^{(-1-\alpha),\{ P_{0}^{1}\}}+\left\|f_{\boldsymbol{\xi}}\right\|_{2,\alpha,(0,r)}^{(-1-\alpha),\{0 \}}\leq C\,,\qquad|D_{\boldsymbol{\xi}}^{m}(\varphi-\varphi_{5})(P_{0}^{1})|= 0\ \ \mbox{for }m=0,1\,. \tag{3.47}\] The proof is similar to the proof of [2, Proposition 3.42]. We omit the details of the proof and sketch only the main ideas here. **1**. _Hodograph transform._ Define \(\bar{\phi}^{(h)}(S,T)\coloneqq\bar{\phi}(\boldsymbol{\xi}(S,T))\) for function \(\bar{\phi}=\varphi_{2}-\varphi\) and the \((S,T)\)-coordinates given by (3.45). By property (3.44), we can apply the hodograph transform \(\boldsymbol{y}=(y_{1},y_{2})=(\bar{\phi}^{(h)}(S,T),T)\) to \(\bar{\phi}\) for the original system (3.42). 
We write \(S=:v(y_{1},y_{2})\) if it satisfies \(\bar{\phi}^{(h)}(S,y_{2})=y_{1}\) for any \[\boldsymbol{y}\in\mathcal{D}_{r_{*}}^{(\theta_{1})}\coloneqq\left\{(\bar{\phi }^{(h)}(S,T),T)\,:\,\boldsymbol{\xi}(S,T)\in\Omega\cap B_{r_{*}}(P_{0}^{1}) \right\},\] from which we obtain \[\partial_{y_{1}}v=\frac{1}{\partial_{S}\bar{\phi}^{(h)}}\,,\qquad\quad\partial _{y_{2}}v=-\frac{\partial_{T}\bar{\phi}^{(h)}}{\partial_{S}\bar{\phi}^{(h)}}\,.\] It follows from Lemma 3.3 and property (3.44) that there exists a constant \(K>1\) depending only on \((\gamma,v_{2})\) such that \[\frac{1}{K}\leq\partial_{y_{1}}v\leq\frac{2}{d_{0}^{\prime}}\,,\qquad|v|+|D_{ \boldsymbol{y}}v|\leq 2K\,. \tag{3.48}\] Under the hodograph transform, system (3.42) is equivalent to \[\left\{\begin{array}{ll}\sum\limits_{i,j=1}^{2}a_{ij}(D_{\boldsymbol{y}}v,v,\boldsymbol{y})\partial_{y_{i}y_{j}}v=0&\quad\mbox{in }\mathcal{D}_{r_{*}}^{(\theta_{1})}\,,\\ g_{h}^{\rm sym}(D_{\boldsymbol{y}}v,v,\boldsymbol{y})=0&\quad\mbox{on } \Gamma_{\rm sym}^{(h)}\,,\\ g_{h}^{\rm sh}(D_{\boldsymbol{y}}v,v,\boldsymbol{y})=0&\quad\mbox{on } \Gamma_{\rm shock}^{(h)}\,,\end{array}\right. \tag{3.49}\] for suitable functions \((a_{ij},g_{h}^{\rm sym},g_{h}^{\rm sh})\), with the boundary sets given by \[\Gamma_{\rm sym}^{(h)}\coloneqq\left\{(\bar{\phi}^{(h)}(S,T),T)\,:\, \boldsymbol{\xi}(S,T)\in\Gamma_{\rm sym}\cap B_{r_{*}}(P_{0}^{1})\right\},\] \[\Gamma_{\rm shock}^{(h)}\coloneqq\left\{(0,T)\,:\,\boldsymbol{ \xi}(S,T)\in\Gamma_{\rm shock}\cap B_{r_{*}}(P_{0}^{1})\right\}.\] Due to (3.48), we extend the domain of functions \((a_{ij},g_{h}^{\rm sym},g_{h}^{\rm sh})\) to the set: \[U\coloneqq\left\{(\boldsymbol{p},z,\boldsymbol{y})\in\mathbb{R}^{2}\times \mathbb{R}\times\mathcal{D}_{r_{*}}^{(\theta_{1})}\,:\,|\boldsymbol{p}|+|z| \leq 2K\right\},\] and continue to use the same notation as in (3.49) for this modified system. **2**. _Holder continuity of \(g_{h}^{\rm sym}\) at the corner._ From Lemma 3.3, there exists a constant \(\tau_{0}>0\) depending only on \((\gamma,v_{2})\) such that \[\Gamma_{\rm sym}^{(h)}\cup\Gamma_{\rm shock}^{(h)}\subseteq\{\boldsymbol{y} \in\mathbb{R}^{2}\,:\,y_{2}>\tau_{0}|y_{1}|\}\,.\] Moreover, \(\Gamma_{\rm shock}^{(h)}\) is in \(C^{2}\) up to endpoint \(\{\boldsymbol{0}\}\). From the definition of admissible solutions and the hodograph transform, we obtain that \(v\in C^{1}(\overline{\mathcal{D}_{r_{*}}^{(\theta_{1})}})\cap C^{2}(\mathcal{D }_{r_{*}}^{(\theta_{1})}\cup\Gamma_{\rm shock}^{(h)})\cap C^{3}(\mathcal{D}_{r _{*}}^{(\theta_{1})})\) satisfies (3.48). From (3.43) and the notation defined in Step **1**, we can obtain the boundedness of coefficients \((a_{ij},g_{h}^{\rm sym},g_{h}^{\rm sh})\) in the modified system (3.49). For any \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{\rm s}+\sigma_{\rm s}\leq\theta_{1} <\theta^{\rm d}\}\) with \(\sigma_{\rm s}\in(0,\frac{\sigma_{3}}{2}]\), we may reduce \(\lambda_{*}>0\) depending only on \((\gamma,v_{2},\sigma_{\rm s})\) in (3.41) to obtain the uniform ellipticity of the modified system (3.49) in \(\mathcal{D}_{r_{*}}^{(\theta_{1})}\) by using (3.48) and the hodograph transform. 
It follows from Lemma 3.3, (3.41), and (3.48) that there exists a constant \(\lambda_{1}>0\) depending only on \((\gamma,v_{2},\sigma_{\rm s})\) such that, for any \(y\in\Gamma_{\rm shock}^{(h)}\), \[\boldsymbol{\nu}\cdot D_{\boldsymbol{p}}g_{h}^{\rm sh}(D_{\boldsymbol{y}}v,v, \boldsymbol{y})=\frac{1}{v_{y_{1}}}D\bar{\phi}\cdot D_{\boldsymbol{q}}g^{\rm sh }(D\varphi,\varphi,\boldsymbol{\xi})\geq\lambda_{1}\,,\] where we have used the expression of \(D_{\boldsymbol{q}}g^{\text{sh}}(D\varphi,\varphi,\boldsymbol{\xi})\) on \(\Gamma^{(h)}_{\text{shock}}\), and \(\boldsymbol{\nu}=(1,0)\) is the unit normal vector to \(\Gamma^{(h)}_{\text{shock}}\). From (3.48), \(|D_{\boldsymbol{p}}g^{\text{sym}}_{h}(D_{\boldsymbol{y}}v,v,\boldsymbol{y})| \geq\frac{d_{0}^{\ell}}{2}>0\). Similar to the proof of [2, Lemma 3.40], one can show that, for small \(\sigma_{\text{d}}\in(0,\frac{\theta^{\text{d}}}{10})\), there exist constants \(M>0\) and \(\bar{r}\in(0,r_{*})\) depending only on \((\gamma,v_{2},\sigma_{\text{d}})\) such that any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{\theta^{s}\leq\theta_{1}<\theta^{\text{d}}- \sigma_{\text{d}}\}\) satisfies \[\partial_{q_{1}}g^{\text{sh}}_{\text{mod}}(D\varphi,\varphi,\boldsymbol{\xi })\leq-M^{-1}\qquad\text{for all }\boldsymbol{\xi}\in\Gamma_{\text{shock}}\cap B_{\bar{r}}(P_{0}^{1})\,.\] Then, for any \(\boldsymbol{y}\in\Gamma^{(h)}_{\text{shock}}\), \(|\text{det}D_{\boldsymbol{p}}(g^{\text{sym}}_{h},g^{\text{sh}}_{h})|>0\) holds. It follows from [10, Proposition 4.3.7] that there exist constants \(\alpha_{1}\in(0,1)\), \(C>0\), and \(r_{0}\in(0,\bar{r}]\) depending only on \((\gamma,v_{2},\sigma_{\text{s}},\sigma_{\text{d}})\) such that, for any \(\boldsymbol{y}\in\overline{\mathcal{D}_{\bar{r}}^{(\theta_{1})}\cap B_{r_{0}}( \boldsymbol{0})}\), \[\big{|}g^{\text{sym}}_{h}(D_{\boldsymbol{y}}v(\boldsymbol{y}),v(\boldsymbol{y }),\boldsymbol{y})-g^{\text{sym}}_{h}(D_{\boldsymbol{y}}v(\boldsymbol{0}),v( \boldsymbol{0}),\boldsymbol{0})\big{|}\leq C|\boldsymbol{y}|^{\alpha_{1}}\,. \tag{3.50}\] **3**. _Holder continuity of \(v(\boldsymbol{y})\) at the corner._ Notice that both \(g^{\text{sym}}_{h}\) and \(g^{\text{sh}}_{h}\) are Lipschitz continuous on \((p,z,\boldsymbol{y})\). Since \(v\in C^{1}(\overline{\mathcal{D}_{\bar{r}}^{(\theta_{1})}})\), inequality (3.50) also holds for any \(\boldsymbol{y}\in\Gamma^{(h)}_{\text{shock}}\) after replacing \(g^{\text{sym}}_{h}\) by \(g^{\text{sh}}_{h}\). Moreover, functions \((g^{\text{sym}}_{h},g^{\text{sh}}_{h})(\cdot,v(\boldsymbol{0}),\boldsymbol{0})\) are bounded in \(C^{1,\alpha_{1}}(\bar{B}_{2K}(D_{\boldsymbol{y}}v(\boldsymbol{0})))\), and \(|\text{det}D_{\boldsymbol{p}}(g^{\text{sym}}_{h},g^{\text{sh}}_{h})(D_{ \boldsymbol{y}}v(\boldsymbol{0}),v(\boldsymbol{0}),\boldsymbol{0})|>0\) follows from Step **2**. Then, from [10, Proposition 4.3.9], there exist \(r_{1}\in(0,r_{0})\) and \(C_{1}>0\) depending only on \((\gamma,v_{2},\sigma_{\text{s}},\sigma_{\text{d}})\) such that \[|D_{\boldsymbol{y}}v(\boldsymbol{y})-D_{\boldsymbol{y}}v(\boldsymbol{0})|\leq C _{1}|\boldsymbol{y}|^{\alpha_{1}}\qquad\text{for any }\boldsymbol{y}\in\Gamma^{(h)}_{\text{shock}}\cap B_{r_{1}}( \boldsymbol{0})\,. 
\tag{3.51}\] By the hodograph transform and the geometric properties of the domain near corner \(P_{0}^{1}\), it follows from (3.50)-(3.51) that \[|\partial_{\eta}\varphi(\boldsymbol{\xi})-\partial_{\eta}\varphi (P_{0}^{1})|\leq C|\boldsymbol{\xi}-P_{0}^{1}|^{\alpha_{1}}\qquad\text{for any } \boldsymbol{\xi}\in\overline{\Omega\cap B_{r_{1}}(P_{0}^{1})}\,,\] \[|D\varphi(\boldsymbol{\xi})-D\varphi(P_{0}^{1})|\leq C| \boldsymbol{\xi}-P_{0}^{1}|^{\alpha_{1}}\qquad\text{for any } \boldsymbol{\xi}\in\Gamma_{\text{shock}}\cap B_{r_{1}}(P_{0}^{1})\,. \tag{3.52}\] The open domain \(\Omega_{r_{1}}\coloneqq\Omega\cap B_{r_{1}}(P_{0}^{1})\) contains two Lipschitz boundaries \(\Gamma^{1}\coloneqq\Gamma_{\text{shock}}\cap B_{r_{1}}(P_{0}^{1})\) and \(\Gamma^{2}\coloneqq\Gamma_{\text{sym}}\cap B_{r_{1}}(P_{0}^{1})\) intersecting at corner \(P_{0}^{1}\). We also take \[B^{(1)}(\boldsymbol{p},z,\boldsymbol{\xi})\coloneqq(1,0)\cdot(\boldsymbol{p}-D \bar{\phi}(P_{0}^{1}))\,,\qquad B^{(2)}(\boldsymbol{p},z,\boldsymbol{\xi}) \coloneqq\bar{g}_{\text{sym}}(\boldsymbol{p},z,\boldsymbol{\xi})\,.\] Now we update system (3.42) for \(\bar{\phi}\in C^{1}(\overline{\Omega_{r_{1}}})\cap C^{2}(\Omega_{r_{1}}\cup \Gamma^{1})\cap C^{3}(\Omega_{r_{1}})\) into domain \(\Omega_{r_{1}}\), replacing the boundary conditions by \[B^{(1)}(D\bar{\phi},\bar{\phi},\boldsymbol{\xi})=\bar{h}(\boldsymbol{\xi})\ \ \text{on}\ \Gamma^{1}\,,\qquad B^{(2)}(D\bar{\phi},\bar{\phi},\boldsymbol{\xi})=0\ \ \text{on}\ \Gamma^{2}\,,\] where \(\bar{h}(\boldsymbol{\xi})\coloneqq(1,0)\cdot(D\bar{\phi}(\boldsymbol{\xi})-D \bar{\phi}(P_{0}^{1}))\), which satisfies \[|\bar{h}(\boldsymbol{\xi})-\bar{h}(P_{0}^{1})|\leq C|\boldsymbol{\xi}-P_{0}^{1} |^{\alpha_{1}}\qquad\text{for any }\boldsymbol{\xi}\in\Gamma_{\text{shock}}\cap B_{r_{1}}(P_{0}^{1})\,.\] We can also extend the domain of functions \((\bar{A}_{ij},B^{(1)},B^{(2)})\) for the updated system to the set: \[V\coloneqq\{(\boldsymbol{p},z,\boldsymbol{\xi})\in\mathbb{R}^{2}\times\mathbb{ R}\times\overline{\Omega_{r_{1}}}\,:\,|\boldsymbol{p}|+|z|\leq 2K_{1}\}\,.\] Using [10, Proposition 4.3.7] again, we obtain that there exist \(\alpha\in(0,\alpha_{1}]\), \(r_{2}\in(0,r_{1}]\), and \(C>0\) depending only on \((\gamma,v_{2},\sigma_{\text{s}},\sigma_{\text{d}})\) such that, for any \(\boldsymbol{\xi}\in\overline{\Omega\cap B_{r_{2}}(P_{0}^{1})}\), \[\big{|}B^{(1)}(D\bar{\phi}(\boldsymbol{\xi}),\bar{\phi}(\boldsymbol{\xi}), \boldsymbol{\xi})-B^{(1)}(D\bar{\phi}(P_{0}^{1}),\bar{\phi}(P_{0}^{1}),P_{0}^{ 1})\big{|}\leq C\big{|}\boldsymbol{\xi}-P_{0}^{1}\big{|}^{\alpha}\,,\] which, combined with (3.52), implies that \[\big{|}D\varphi(\boldsymbol{\xi})-D\varphi(P_{0}^{1})\big{|}\leq C\big{|} \boldsymbol{\xi}-P_{0}^{1}\big{|}^{\alpha}\qquad\text{for any }\boldsymbol{\xi}\in\overline{\Omega\cap B_{r_{2}}(P_{0}^{1})}\,. \tag{3.53}\] **4**. 
_Weighted Holder estimate near corner \(P_{0}^{1}\)._ From Lemma 3.2, Remark 2.2(i), and (3.53), there exists a constant \(\varepsilon_{\text{bd}}>0\) depending only on \((\gamma,v_{2})\) such that, for any \(\boldsymbol{\xi}_{0}\in\Gamma_{\text{bd}}\cap B_{r_{3}}(P_{0}^{1})\) for some \(r_{3}\in(0,r_{2}]\) with either \(\Gamma_{\text{bd}}\coloneqq\Gamma_{\text{shock}}\) or \(\Gamma_{\text{bd}}\coloneqq\Gamma_{\text{sym}}\), \[\big{\{}\boldsymbol{\xi}\in\partial\Omega\,:\,|\boldsymbol{\xi}-\boldsymbol{\xi}_{0}| <\varepsilon_{\text{bd}}|\boldsymbol{\xi}_{0}-P_{0}^{1}|\big{\}}\subseteq\Gamma_{ \text{bd}}\,.\] We now consider the following problem for \(\varphi\in C^{1}(\overline{\Omega\cap B_{r_{3}}(P_{0}^{1})})\cap C^{3}(\Omega \cap B_{r_{3}}(P_{0}^{1}))\): \[\left\{\begin{aligned} &\operatorname{div}\tilde{\mathcal{A}}(D \varphi,\varphi)+\tilde{\mathcal{B}}(D\varphi,\varphi)=0&&\text{in }\Omega\cap B_{r_{3}}(P_{0}^{1})\,,\\ & g_{\operatorname{mod}}^{\operatorname{sh}}(D\varphi,\varphi, \boldsymbol{\xi})=0&&\text{on }\Gamma_{\operatorname{shock}}\cap B_{r_{3}}(P_{0}^{1})\,,\\ & g_{\operatorname{sym}}(D\varphi,\varphi,\boldsymbol{\xi})=0&& \text{on }\Gamma_{\operatorname{sym}}\cap B_{r_{3}}(P_{0}^{1})\,,\end{aligned}\right.\] where \((\tilde{\mathcal{A}},\tilde{\mathcal{B}})(\boldsymbol{q},w)\) are defined on \(\mathbb{R}^{2}\times\mathbb{R}\) from Lemma 3.5, \(g_{\operatorname{mod}}^{\operatorname{sh}}(\boldsymbol{q},w,\boldsymbol{\xi})\) is defined on \(\mathbb{R}^{2}\times\mathbb{R}\times\overline{\Omega\cap B_{r_{3}}(P_{0}^{1})}\) by (3.28), and \(g_{\operatorname{sym}}(\boldsymbol{q},w,\boldsymbol{\xi})=(0,1)\cdot \boldsymbol{q}\). Combining the above with property (3.53) and using [10, Proposition 4.3.11(i)], we obtain that there exist \(\tilde{\alpha}\in(0,\alpha]\) and \(C>0\) depending only on \((\gamma,v_{2},\sigma_{\ast},\sigma_{\mathrm{d}})\) such that \(\varphi\in C^{1,\tilde{\alpha}}(\Omega\cap B_{r}(P_{0}^{1}))\) satisfies \(\|\varphi\|_{1,\,\tilde{\alpha},\,\Omega\cap B_{r}(P_{0}^{1})}\leq C\) and, for \(\Gamma_{\operatorname{shock}}\) given by (3.46), \(\|f_{\boldsymbol{e}}\|_{1,\tilde{\alpha},(0,r)}\leq C\). Finally, using [10, Proposition 4.3.11(ii)], we obtain (3.47). ### Compactness of the set of admissible solutions Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). For any \(\theta_{\ast}\in(0,\theta^{\mathrm{d}})\), the arguments in SS3.3 are also valid for the weighted \(C^{2,\alpha}\)-estimates near \(\Gamma_{\operatorname{sonic}}^{6}\) with respect to \(\theta_{2}\in[0,\theta^{\mathrm{d}})\). According to all the _a priori_ estimates obtained in Lemma 3.9, Corollary 3.10, and Propositions 3.12-3.14 and 3.16, there exists a constant \(\bar{\alpha}\in(0,1)\) depending only on \((\gamma,v_{2},\theta_{\ast})\) such that the set: \[\left\{\,\|\varphi\|_{1,\bar{\alpha},\overline{1}}+\|f_{\operatorname{sh}}\|_{ 1,\bar{\alpha},[\boldsymbol{\xi}^{p_{3}},\bar{\xi}^{p_{2}}]}\,:\,\,\,\varphi \text{ is an admissible solution}\,\,\,\,\right\}\] is bounded, where \(\eta=f_{\operatorname{sh}}(\xi)\) is the expression of \(\Gamma_{\operatorname{shock}}\) as a graph, given by Lemma 3.2. For each admissible solution, the corresponding pseudo-subsonic region \(\Omega\) is a bounded domain enclosed by \(\Gamma_{\operatorname{sonic}}^{5}\), \(\Gamma_{\operatorname{shock}}\), \(\Gamma_{\operatorname{sonic}}^{6}\), and \(\Gamma_{\operatorname{sym}}\). These four curves intersect only at \(P_{i}\) for \(i=1,2,3,4\). 
Note that \(\Gamma_{\operatorname{sonic}}^{5},O_{5},P_{1}\), and \(P_{2}\) depend continuously on \(\theta_{1}\in[0,\theta^{\mathrm{d}}]\), while \(\Gamma_{\operatorname{sonic}}^{6},O_{6},P_{3}\), and \(P_{4}\) depend continuously on \(\theta_{2}\in[0,\theta^{\mathrm{d}}]\). Combining the above results with (3.13) and Lemma 3.7, we have the following compactness result; the details of the proof can be found in [10, Proposition 11.6.1]. **Lemma 3.17**.: _Fix \(\gamma\geq 1\), \(v_{2}\in(v_{\min},0),\) and \(\theta_{\ast}\in(0,\theta^{\mathrm{d}})\). Let \(\left\{\boldsymbol{\theta}^{(j)}\right\}_{j\in\mathbb{N}}\subseteq\Theta\cap [0,\theta_{\ast}]^{2}\) be a sequence of parameters satisfying that \(\lim_{j\to\infty}\boldsymbol{\theta}^{(j)}=\boldsymbol{\theta}^{(\infty)}\) for some \(\boldsymbol{\theta}^{(\infty)}\in[0,\theta_{\ast}]^{2}\). For each \(j\in\mathbb{N}\), let \(\varphi^{(j)}\) be an admissible solution corresponding to parameters \(\boldsymbol{\theta}^{(j)},\) with pseudo-subsonic region \(\Omega^{(j)}\) and curved transonic shock \(\Gamma_{\operatorname{shock}}^{(j)}\). Then there exists a subsequence \(\left\{\varphi^{(j_{k})}\right\}_{k\in\mathbb{N}}\) such that the following properties hold_:__ 1. \(\left\{\varphi^{(j_{k})}\right\}_{k\in\mathbb{N}}\) _converges uniformly on any compact subset of_ \(\overline{\mathbb{R}_{+}^{2}}\) _to a function_ \(\varphi^{(\infty)}\in C^{0,1}_{\operatorname{loc}}\big{(}\overline{\mathbb{R}_{+ }^{2}}\big{)},\) _and the limit function_ \(\varphi^{(\infty)}\) _is an admissible solution corresponding to_ \(\boldsymbol{\theta}^{(\infty)};\)__ 2. \(\Omega^{(j_{k})}\to\Omega^{(\infty)}\) _in the Hausdorff metric_\(;\)__ 3. _If_ \(\boldsymbol{\xi}^{(j_{k})}\in\overline{\Omega^{(j_{k})}}\) _for each_ \(k\in\mathbb{N},\) _and_ \(\left\{\boldsymbol{\xi}^{(j_{k})}\right\}_{k\in\mathbb{N}}\) _converges to_ \(\boldsymbol{\xi}^{(\infty)}\in\overline{\Omega^{(\infty)}},\) _then_ \[\varphi^{(j_{k})}(\boldsymbol{\xi}^{(j_{k})})\to\varphi^{(\infty)}(\boldsymbol{ \xi}^{(\infty)})\,,\quad D\varphi^{(j_{k})}(\boldsymbol{\xi}^{(j_{k})})\to D \varphi^{(\infty)}(\boldsymbol{\xi}^{(\infty)})\qquad\text{ as }k\to\infty\,,\] _where, in the case that_ \(\boldsymbol{\xi}^{(j_{k})}\in\Gamma_{\operatorname{shock}}^{(j_{k})},\) _the derivative of_ \(\varphi^{(j_{k})}\) _at_ \(\boldsymbol{\xi}^{(j_{k})}\) _is defined as the limit of the derivative from the interior of_ \(\Omega^{(j_{k})}\)_:_ \[D\varphi^{(j_{k})}(\boldsymbol{\xi}^{(j_{k})})\coloneqq\lim_{\boldsymbol{\xi} \to\boldsymbol{\xi}^{(j_{k})},\,\boldsymbol{\xi}\in\Omega^{(j_{k})}}D\varphi^{(j _{k})}(\boldsymbol{\xi})\,,\] _and, similarly, for_ \(\boldsymbol{\xi}^{(\infty)}\in\Gamma_{\operatorname{shock}}^{(\infty)},\)__ \[D\varphi^{(\infty)}(\boldsymbol{\xi}^{(\infty)})\coloneqq\lim_{\boldsymbol{\xi} \to\boldsymbol{\xi}^{(\infty)},\,\boldsymbol{\xi}\in\Omega^{(\infty)}}D\varphi^{( \infty)}(\boldsymbol{\xi})\,.\] ## 4. Iteration Method and Existence of Admissible Solutions In this section, we carefully construct the iteration set and the iteration map based on the uniform estimates obtained in Section 3. Then we apply the Leray-Schauder degree theorem to show the existence of admissible solutions. The construction of the iteration set follows closely the process described in [2, Chapter 4] with the main difference that both sonic arcs \(\Gamma_{\operatorname{sonic}}^{5}\) and \(\Gamma_{\operatorname{sonic}}^{6}\) may degenerate into single points. 
In this section, we always fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\min},0)\). ### Mapping between the elliptic and standard domains #### 4.1.1. Mapping into the standard domain Define the standard domain to be the rectangle \(\mathcal{Q}^{\rm iter}\coloneqq(-1,1)\times(0,1)\). Before constructing a map between \(\Omega\) and \(\mathcal{Q}^{\rm iter}\), we introduce some useful notation and basic geometric properties. Let \(\varphi_{2}\) be given by (2.17), and let \((\varphi_{5},\varphi_{6})\) be given by (2.33). For any \(\delta_{0}>0\), we define \(S_{2j}^{\delta_{0}}\coloneqq\{\boldsymbol{\xi}\in\mathbb{R}^{2}:(\varphi_{2} -\varphi_{j})(\boldsymbol{\xi})=-\delta_{0}\}\) and \(q_{j}^{\delta_{0}}\coloneqq\operatorname{dist}(O_{j},S_{2j}^{\delta_{0}})\) for \(j=5,6\), and also write \[u_{5}^{\delta_{0}}\coloneqq u_{5}+q_{5}^{\delta_{0}}\sin\theta_{25}\geq 0\,, \qquad u_{6}^{\delta_{0}}\coloneqq u_{6}-q_{6}^{\delta_{0}}\sin\theta_{26} \leq 0\,.\] For constant \(\hat{\delta}>0\) from (3.15), we set \(\tilde{\delta}_{0}\coloneqq\frac{\hat{\delta}}{\hat{4}}\), which depends only on \((\gamma,v_{2})\). Then, for \(\hat{c}_{j}\) given by (3.14), a direct calculation gives that, for any \(\boldsymbol{\theta}\in\overline{\Theta}\), \[\hat{c}_{j}-q_{j}^{\delta_{0}}\geq\hat{\delta}-\frac{\delta_{0}}{\ell(\rho_{j })}\geq 2\tilde{\delta}_{0}\,,\] after choosing \(\delta_{0}>0\) small enough, depending only on \((\gamma,v_{2})\), and using Lemma 2.8. Therefore, \(S_{2j}^{\delta_{0}}\) and \(\partial B_{\hat{c}_{j}}(O_{j})\) intersect at two distinct points for any \(\boldsymbol{\theta}\in\overline{\Theta}\). **Definition 4.1** (Extended domain \(Q^{\boldsymbol{\theta}}\)).: _Let \(\delta_{0}>0\) be chosen as above. Define \(P_{2}^{\prime}\) and \(P_{3}^{\prime}\) by_ \[\{P_{2}^{\prime}\}\coloneqq S_{25}^{\delta_{0}}\cap\partial B_{\hat{c}_{5}}(O _{5})\cap\{\xi\geq u_{5}^{\delta_{0}}\}\,,\qquad\{P_{3}^{\prime}\}\coloneqq S _{26}^{\delta_{0}}\cap\partial B_{\hat{c}_{6}}(O_{6})\cap\{\xi\leq u_{6}^{ \delta_{0}}\}\,.\] _Denote by \(\Gamma_{\rm sonic}^{\delta,\delta_{0}}\) the arc of \(\partial B_{\hat{c}_{5}}(O_{5})\cap\{\xi\geq u_{5}^{\delta_{0}}\}\) with endpoints \(\{P_{1},P_{2}^{\prime}\}\), and denote by \(\Gamma_{\rm sonic}^{6,\delta_{0}}\) the arc of \(\partial B_{\hat{c}_{6}}(O_{6})\cap\{\xi\leq u_{6}^{\delta_{0}}\}\) with endpoints \(\{P_{4},P_{3}^{\prime}\}\). Define the extended domain \(Q^{\boldsymbol{\theta}}\) as the open bounded region enclosed by \(\Gamma_{\rm sonic}^{5,\delta_{0}},\Gamma_{\rm sonic}^{6,\delta_{0}},S_{25}^{ \delta_{0}},S_{26}^{\delta_{0}}\), and \(\Gamma_{\rm sym}\); see Fig. 4.1._ For any \(\boldsymbol{\xi}=(\xi,\eta)\in(\mathbb{R}\setminus[u_{6}^{\delta_{0}},u_{5}^{ \delta_{0}}])\times[0,\infty)\), define the \((x,y)\)-coordinates by \((x,y)\coloneqq\mathcal{R}(\boldsymbol{\xi})\) if and only if \[\boldsymbol{\xi}=\begin{cases}O_{6}+(c_{6}-x)\left(\cos(\pi-y),\,\sin(\pi-y) \right)&\text{ if }\xi<u_{6}^{\delta_{0}}\,,\\ O_{5}+(c_{5}-x)\left(\cos y,\,\sin y\right)&\text{ if }\xi>u_{5}^{\delta_{0}}\,, \end{cases} \tag{4.1}\] with \(y\in(0,\pi)\). 
For any constant \(\epsilon>0\), we define two sets \(\mathcal{D}_{\epsilon}^{5},\mathcal{D}_{\epsilon}^{6}\subseteq Q^{\boldsymbol{ \theta}}\) by \[\mathcal{D}_{\epsilon}^{5}\coloneqq\left\{\boldsymbol{\xi}\in Q^{\boldsymbol{ \theta}}\,:\,\xi>u_{5}^{\delta_{0}}\right\}\setminus\overline{B_{\hat{c}_{5} -\epsilon}(O_{5})}\,,\qquad\mathcal{D}_{\epsilon}^{6}\coloneqq\left\{ \boldsymbol{\xi}\in Q^{\boldsymbol{\theta}}\,:\,\xi<u_{6}^{\delta_{0}} \right\}\setminus\overline{B_{\hat{c}_{6}-\epsilon}(O_{6})}\,. \tag{4.2}\] Let \(\omega_{0}\in(0,\frac{\pi}{2})\) be the solution of \((q^{\delta_{0}}+\tilde{\delta_{0}})\cos\omega_{0}=q^{\delta_{0}}\), where \(q^{\delta_{0}}\coloneqq\max\{q_{5}^{\delta_{0}},q_{6}^{\delta_{0}}\}\). Then \[\tilde{\omega}_{0}\coloneqq\frac{1}{2}\inf_{\boldsymbol{\theta}\in\overline{ \Theta}}\left\{\theta_{25}-\frac{\pi}{2},\,\frac{\pi}{2}-\theta_{26},\,\omega_ {0}\right\}>0\] by (3.14)-(3.15) and Lemma 2.8. A direct computation shows that, for any \(\epsilon\in(0,\tilde{\delta}_{0})\), \[\mathcal{D}_{\epsilon}^{5}\subseteq\left\{\boldsymbol{\xi}\,:\,x _{P_{2}}<x<x_{P_{2}}+\epsilon,\,0<y<\theta_{25}-\frac{\pi}{2}-\tilde{\omega}_{0} \right\}\cap\left\{\xi>u_{5}^{\delta_{0}}+\tilde{\delta}_{0}\sin\tilde{\omega}_ {0}\right\},\] \[\mathcal{D}_{\epsilon}^{6}\subseteq\left\{\boldsymbol{\xi}\,:\,x _{P_{3}}<x<x_{P_{3}}+\epsilon,\,0<y<\frac{\pi}{2}-\theta_{26}-\tilde{\omega}_{0} \right\}\cap\left\{\xi<u_{6}^{\delta_{0}}-\tilde{\delta}_{0}\sin\tilde{\omega}_ {0}\right\}. \tag{4.3}\] In particular, we fix a constant \(k>4\) sufficiently large, depending only on \((\gamma,v_{2})\), such that \(\max\{\hat{c}_{5},\hat{c}_{6}\}\leq\frac{k\tilde{5}_{0}}{4}\sin\tilde{\omega}_{ 0}\) for any \(\boldsymbol{\theta}\in\overline{\Theta}\) and \[\mathcal{D}^{5}_{\frac{3\tilde{c}_{6}}{k}}\subseteq\big{\{}\xi>u_{5}^{\delta_ {0}}+\frac{3\hat{c}_{5}}{k}\big{\}}\,,\qquad\mathcal{D}^{6}_{\frac{3\tilde{c} _{6}}{k}}\subseteq\big{\{}\xi<u_{6}^{\delta_{0}}-\frac{3\hat{c}_{6}}{k}\big{\}}\,. \tag{4.4}\] For each \(\boldsymbol{\theta}\in\overline{\Theta}\), we define an invertible map \(\mathcal{G}_{1}^{\boldsymbol{\theta}}:\overline{Q^{\boldsymbol{\theta}}} \to[-1,1]\times[0,\infty)\), which flattens the extended sonic arcs \(\Gamma^{5,\delta_{0}}_{\mathrm{sonic}}\) and \(\Gamma^{6,\delta_{0}}_{\mathrm{sonic}}\). For each admissible solution \(\varphi\) corresponding to parameters \(\boldsymbol{\theta}\in\overline{\Theta}\), we further construct a map \(G_{2,\mathfrak{ds}_{\mathrm{n}}}\) from \(\mathcal{G}_{1}^{\boldsymbol{\theta}}(\Omega)\) to the standard domain \(\mathcal{Q}^{\mathrm{iter}}\), where \(\Omega\) is the pseudo-subsonic domain associated with \(\varphi\). Finally, we present the _a priori_ estimates in the standard domain for admissible solutions under map \(G_{2,\mathfrak{ds}_{\mathrm{n}}}\circ\mathcal{G}_{1}^{\boldsymbol{\theta}}\). **Construction of \(\mathcal{G}_{1}^{\boldsymbol{\theta}}\)**. For \(r_{0}\in\mathbb{R}\) and \(r_{\mathrm{d}}>0\), we fix a basic cut-off function \(\tilde{\zeta}(\,\cdot\,;r_{0},r_{\mathrm{d}})\in C^{4}(\mathbb{R})\) such that \(\tilde{\zeta}^{\prime}(\,\cdot\,;r_{0},r_{\mathrm{d}})\in[0,\frac{2}{r_{ \mathrm{d}}}]\) and \[\tilde{\zeta}(r;\,r_{0},r_{\mathrm{d}})=\left\{\begin{aligned} & 0&\quad\text{for}\;r\leq r_{0}\,,\\ & 1&\quad\text{for}\;r\geq r_{0}+r_{\mathrm{d}}\,.\end{aligned}\right. 
\tag{4.5}\] For \(j=5,6\), let the cut-off functions \((\zeta_{j},\chi_{j})\) be defined by \[\zeta_{j}(r)\coloneqq\tilde{\zeta}(r;\,(1-\frac{3}{k})\hat{c}_{j},\frac{\hat{ c}_{j}}{k})\,,\qquad\chi_{j}(\xi)\coloneqq\tilde{\zeta}((-1)^{j-1}\xi;\,(-1)^{j-1}u_{j}^ {\delta_{0}},\frac{2\hat{c}_{j}}{k})\,.\] Then we define \(F_{1}:\overline{Q^{\boldsymbol{\theta}}}\to\mathbb{R}^{2}\) by \(F_{1}(\xi,\eta)\coloneqq(h_{1}(\xi,\eta),\eta)\), where \[h_{1}(\boldsymbol{\xi})\coloneqq\zeta_{\mathrm{d}}(\boldsymbol{\xi})u_{r}+ \big{(}1-\zeta_{\mathrm{d}}(\boldsymbol{\xi})\big{)}\xi\,,\] and \(\zeta_{\mathrm{d}}\) and \(u_{r}\) are given by \[\zeta_{\mathrm{d}}(\boldsymbol{\xi})\coloneqq\chi_{5}(\xi)\zeta_ {5}(|\boldsymbol{\xi}-O_{5}|)+\chi_{6}(\xi)\zeta_{6}(|\boldsymbol{\xi}-O_{6}| )\,,\] \[u_{r}\coloneqq(u_{5}+|\boldsymbol{\xi}-O_{5}|)\chi_{5}(\xi)+(u_{ 6}-|\boldsymbol{\xi}-O_{6}|)\chi_{6}(\xi)\,.\] For each \(\boldsymbol{\theta}\in\overline{\Theta}\), we write \(s_{5}\coloneqq u_{5}+\hat{c}_{5}>0\) and \(s_{6}\coloneqq u_{6}-\hat{c}_{6}<0\), which depend continuously on \(\boldsymbol{\theta}\in\overline{\Theta}\). Let \(\mathcal{N}_{\varepsilon}(\Gamma)\) be defined by (3.21). By construction of map \(F_{1}\), we observe that \[h_{1}(\boldsymbol{\xi})=u_{j}+(-1)^{j-1}\mathrm{dist}(\boldsymbol{\xi},O_{j}) \qquad\text{ for any }\boldsymbol{\xi}\in Q^{\boldsymbol{\theta}}\cap\mathcal{N}_{\frac{2 \hat{c}_{j}}{k}}(\Gamma^{j,\delta_{0}}_{\mathrm{sonic}})\,,\] and \(\overline{F_{1}(Q^{\boldsymbol{\theta}})}\subseteq[s_{6},s_{5}]\times[0,\infty)\). Let the linear map \(L_{\boldsymbol{\theta}}:[s_{6},s_{5}]\to[-1,1]\) be given by \[L_{\boldsymbol{\theta}}(s^{\prime})\coloneqq d_{L}(s^{\prime}-s_{6})-1 \qquad\text{ with }d_{L}\coloneqq\frac{2}{s_{5}-s_{6}}\,, \tag{4.6}\] which maps interval \([s_{6},s_{5}]\) to the standard interval \([-1,1]\). For \(k>4\) given by (4.4), define the cut-off functions \(\tilde{\chi}_{j}\) by \[\tilde{\chi}_{j}(s^{\prime})\coloneqq\tilde{\zeta}((-1)^{j-1}s^{\prime};\,(-1) ^{j-1}u_{j}+(1-\frac{1}{k})\hat{c}_{j},\frac{\hat{c}_{j}}{2k})\qquad\text{ for }j=5,6\,.\] We define a function \(h_{2}:\overline{F_{1}(Q^{\boldsymbol{\theta}})}\to[0,\infty)\) by \[h_{2}(s^{\prime},t^{\prime\prime})\coloneqq\tilde{\chi}_{5}\arcsin\big{(} \frac{t^{\prime\prime}}{s^{\prime}-u_{5}}\big{)}+\tilde{\chi}_{6}\arcsin\big{(} \frac{t^{\prime\prime}}{u_{6}-s^{\prime}}\big{)}+(1-\tilde{\chi}_{5}-\tilde{ \chi}_{6})\,t^{\prime\prime}\,,\] where \(\tilde{\chi}_{5}\) and \(\tilde{\chi}_{6}\) are evaluated at \(s^{\prime}\). For maps \(F_{1}\) and \(h_{2}\) constructed above, we see that \[h_{2}\circ F_{1}(\boldsymbol{\xi})=y\qquad\text{for any }\boldsymbol{\xi}\in Q^{ \boldsymbol{\theta}}\cap\big{(}\mathcal{N}_{\frac{\tilde{c}_{5}}{2k}}(\Gamma^{5, \delta_{0}}_{\mathrm{sonic}})\cup\mathcal{N}_{\frac{\tilde{c}_{6}}{2k}}(\Gamma^{ 6,\delta_{0}}_{\mathrm{sonic}})\big{)}\,,\] for the \((x,y)\)-coordinates given by (4.1). Finally, we define map \(\mathcal{G}_{1}^{\boldsymbol{\theta}}:\overline{Q^{\boldsymbol{\theta}}}\to[-1,1] \times[0,\infty)\) by \[\mathcal{G}_{1}^{\boldsymbol{\theta}}(\boldsymbol{\xi})\coloneqq(L_{\boldsymbol {\theta}}\circ h_{1}(\boldsymbol{\xi}),\,h_{2}\circ F_{1}(\boldsymbol{\xi})) \qquad\text{ for any }\boldsymbol{\xi}\in\overline{Q^{\boldsymbol{\theta}}}\,, \tag{4.7}\] and write the coordinates \((s,t^{\prime})\coloneqq\mathcal{G}_{1}^{\boldsymbol{\theta}}(\boldsymbol{\xi})\). 
It can be verified directly that \(\mathcal{G}_{1}^{\boldsymbol{\theta}}(\Gamma_{\mathrm{sym}})=\{(s,0)\,:\,-1<s<1\}\). Moreover, map \(\mathcal{G}_{1}^{\boldsymbol{\theta}}:\overline{Q^{\boldsymbol{\theta}}}\to[- 1,1]\times[0,\infty)\) constructed above satisfies the following properties by using (3.3), (4.5), and Lemma 3.17; the proof is similar to [2, Lemmas 4.3 and 4.5]. **Lemma 4.2**.: _There exists a constant \(\mu_{\mathcal{G}_{1}}\in(0,1)\) depending only on \((\gamma,v_{2})\) such that, for each \(\boldsymbol{\theta}\in\overline{\Theta},\) map \(\mathcal{G}_{1}^{\boldsymbol{\theta}}\) defined by (4.7) satisfies the following_:__ 1. \(\|\mathcal{G}_{1}^{\boldsymbol{\theta}}\|_{C^{4}(\overline{\mathcal{Q}^{ \boldsymbol{\theta}}})}+\|(\mathcal{G}_{1}^{\boldsymbol{\theta}})^{-1}\|_{C^{4 }(\overline{\mathcal{G}_{1}^{\boldsymbol{\theta}}}(Q^{\boldsymbol{\theta}}))} \leq\mu_{\mathcal{G}_{1}}^{-1},\) _and_ \(|\mathrm{det}(D\mathcal{G}_{1}^{\boldsymbol{\theta}})|\geq\mu_{\mathcal{G}_{1}}\) _in_ \(\overline{Q^{\boldsymbol{\theta}}};\)__ 2. _For_ \(j=5,6\)_, and for any_ \((s,t^{\prime})\in\overline{\mathcal{G}_{1}^{\boldsymbol{\theta}}(Q^{ \boldsymbol{\theta}})},\)__\(\partial_{t^{\prime}}\big{(}(\varphi_{2}-\varphi_{j})\circ(\mathcal{G}_{1}^{ \boldsymbol{\theta}})^{-1}\big{)}(s,t^{\prime})\leq-\mu_{\mathcal{G}_{1}};\)__ 3. _For any_ \(\theta_{*}\in(0,\theta^{\mathrm{d}}),\) _there exists a constant_ \(\mu_{*}>0\) _depending only on_ \((\gamma,v_{2},\theta_{*})\) _such that any admissible solution_ \(\varphi\) _corresponding to_ \(\boldsymbol{\theta}\in\Theta\cap\{\theta_{1},\theta_{2}\leq\theta_{*}\}\) _satisfies_ \[\partial_{t^{\prime}}\big{(}(\varphi_{2}-\varphi)\circ(\mathcal{G}_{1}^{ \boldsymbol{\theta}})^{-1}\big{)}(s,t^{\prime})\leq-\mu_{*}<0\qquad\text{for all $(s,t^{\prime})\in \overline{\mathcal{G}_{1}^{\boldsymbol{\theta}}}(\Omega)$}\,.\] Using Lemma 4.2(ii), there exists a unique function \(f_{\boldsymbol{\theta}}\in C^{0,1}([-1,1])\) such that \[\mathcal{G}_{1}^{\boldsymbol{\theta}}\big{(}\{\boldsymbol{\xi}\in\mathbb{R}_{ +}^{2}\,:\,\varphi_{2}=\max\{\varphi_{5},\varphi_{6}\}-\delta_{0}\}\big{)}= \left\{(s,f_{\boldsymbol{\theta}}(s))\,:\,-1<s<1\right\},\] which follows from the implicit function theorem, and \[\mathcal{G}_{1}^{\boldsymbol{\theta}}(Q^{\boldsymbol{\theta}})=\left\{(s,t^{ \prime})\in\mathbb{R}_{+}^{2}\,:\,-1<s<1,\,0<t^{\prime}<f_{\boldsymbol{\theta }}(s)\right\}. \tag{4.8}\] Using (3.13), Definition 2.11(ii), Lemma 3.9(ii), Corollary 3.10, Propositions 3.12-3.14 and 3.16, and Lemma 4.2, we introduce a function \(\mathfrak{g}_{\mathrm{sh}}:[-1,1]\to[0,\infty)\) representing \(\Gamma_{\mathrm{shock}}\) in the \((s,t^{\prime})\)-coordinates and state its important properties; the proof is similar to [2, Proposition 4.6]. **Proposition 4.3**.: _For each admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta,\) there exists a unique function \(\mathfrak{g}_{\mathrm{sh}}:[-1,1]\to[0,\infty)\) satisfying the following properties_:__ 1. 
_In the_ \((s,t^{\prime})\)_-coordinates_:__ \[\mathcal{G}_{1}^{\boldsymbol{\theta}}(\Omega)=\left\{(s,t^{\prime})\,:\,-1<s <1,\,0<t^{\prime}<\mathfrak{g}_{\mathrm{sh}}(s)\right\},\] \[\mathcal{G}_{1}^{\boldsymbol{\theta}}(\Gamma_{\mathrm{shock}})=\left\{(s, \mathfrak{g}_{\mathrm{sh}}(s))\,:\,-1<s<1,\,t^{\prime}=\mathfrak{g}_{\mathrm{ sh}}(s)\right\},\] _and, for any interval_ \(I_{1}\Subset(-1,1),\)__\(\|\mathfrak{g}_{\mathrm{sh}}\|_{C^{3}(\overline{I_{1}})}\leq C_{{}_{I_{1}}}\) _for some constant_ \(C_{I_{1}}>0\) _depending only on_ \((\gamma,v_{2},I_{1})\)_._ 2. _Let_ \(Q_{0}^{\boldsymbol{\theta}}\) _be the bounded region enclosed by_ \(\Gamma_{\mathrm{sonic}}^{5},\Gamma_{\mathrm{sonic}}^{6},S_{25},S_{26},\) _and_ \(\Gamma_{\mathrm{sym}},\) _which satisfies_ \(\Omega\subseteq Q_{0}^{\boldsymbol{\theta}}\subseteq Q^{\boldsymbol{\theta}}\)_. For_ \(\mathcal{D}_{\epsilon}^{5}\) _and_ \(\mathcal{D}_{\epsilon}^{6}\) _given by (_4.2_), there exist unique functions_ \(\mathfrak{g}_{5}\) _and_ \(\mathfrak{g}_{6}\) _with_ \[\mathcal{G}_{1}^{\boldsymbol{\theta}}(Q_{0}^{\boldsymbol{\theta}}\cap\mathcal{ D}_{\epsilon}^{j})=\left\{(s,t^{\prime})\in\mathcal{Q}^{\mathrm{ inter}}\,:\,1+(-1)^{j}s<d_{L}\epsilon,\,0<t^{\prime}<\mathfrak{g}_{j}(s) \right\}\qquad\text{for $j=5,6$}\,.\] _Let_ \(\epsilon_{0}^{*}>0\) _be the smallest constant_ \(\varepsilon_{0}\) _in_ \(\mathrm{Step}\)__\(\mathbf{1}\) _of the proofs of_ Propositions 3.12 _and_ 3.14_. For_ \(d_{L}>0\) _given by (_4.6_), set_ \(\hat{\epsilon}_{0}^{*}\coloneqq d_{L}\epsilon_{0}^{*}\)_. There exists a constant_ \(C>0\) _depending only on_ \((\gamma,v_{2})\) _such that_ \[\|\mathfrak{g}_{5}\|_{C^{3}([-1,-1+\hat{\epsilon}_{0}^{*}])}+\|\mathfrak{g}_{6} \|_{C^{3}([-1,-1+\hat{\epsilon}_{0}^{*}])}\leq C\,.\] 3. _For each_ \(\theta_{*}\in(0,\theta^{\mathrm{d}}),\) _there exist_ \(\bar{\alpha}\in(0,1)\) _and_ \(C_{\theta_{*}}>0\) _depending only on_ \((\gamma,v_{2},\theta_{*})\) _such that, for any admissible solution corresponding to_ \(\boldsymbol{\theta}\in\Theta\cap\{\theta_{1},\theta_{2}\leq\theta_{*}\},\)__ \[\|\mathfrak{g}_{\mathrm{sh}}\|_{2,\bar{\alpha},(-1,-1+\hat{\epsilon}_{0}^{*})}^{(-1 \bar{\alpha}),\{-1\}}+\|\mathfrak{g}_{\mathrm{sh}}\|_{2,\bar{\alpha},(-1,-1+ \hat{\epsilon}_{0}^{*})}^{(-1-\bar{\alpha}),\{1\}}\leq C_{\theta_{*}}\,,\] \[\frac{\mathrm{d}^{m}}{\mathrm{d}s^{m}}(\mathfrak{g}_{\mathrm{sh}}-\mathfrak{g}_{j}) ((-1)^{j-1})=0\qquad\text{for $m=0,1$, and $j=5,6$}\,.\] _where_ \(\|\cdot\|_{2,\alpha,I}^{(\sigma),\{x_{0}\}}\) _is given by_ Definition 3.15(ii)_. Note that the properties above are equivalent to_ \[\|\mathfrak{g}_{\mathrm{sh}}-\mathfrak{g}_{5}\|_{2,\bar{\alpha},(1-\hat{\epsilon}_{ 0}^{*},1)}^{(1+\bar{\alpha}),(\mathrm{par})}+\|\mathfrak{g}_{\mathrm{sh}}- \mathfrak{g}_{6}\|_{2,\bar{\alpha},(-1,-1+\hat{\epsilon}_{0}^{*})}^{(1+\bar{ \alpha}),(\mathrm{par})}\leq C_{\theta_{*}}^{\prime}\] _for some constant_ \(C_{\theta_{*}}^{\prime}>0\) _depending only on_ \((\gamma,v_{2},\theta_{*}),\) _where_ \(\|\cdot\|_{2,\alpha,I}^{(1+\alpha),(\mathrm{par})}\) _is given by_ Definition 3.11(ii) _with replacement of_ \(x\) _by_ \(1-|s|\)_._ 4. 
_For each_ \(\theta_{*}\in(0,\theta^{\mathrm{d}}),\) _there exists a constant_ \(\hat{k}>1\) _depending only on_ \((\gamma,v_{2},\theta_{*})\) _such that, for any admissible solution_ \(\varphi\) _corresponding to_ \(\boldsymbol{\theta}\in\Theta\cap\{\theta_{1},\theta_{2}\leq\theta_{*}\}\) _and any_ \(s\in[-1,1],\) \[\min\{\mathfrak{g}_{\mathrm{sh}}(-1)+\frac{s+1}{\hat{k}},\,\mathfrak{g}_{\mathrm{sh}}(1)+\frac{1-s}{\hat{k}},\,\frac{1}{\hat{k}}\}\leq\mathfrak{g}_{\mathrm{sh}}(s)\leq\min\{\mathfrak{g}_{\mathrm{sh}}(-1)+\hat{k}(s+1),\,\mathfrak{g}_{\mathrm{sh}}(1)+\hat{k}(1-s),\,f_{\boldsymbol{\theta}}(s)-\frac{1}{\hat{k}}\}\,.\]

**Remark 4.1**.: _By Propositions 3.12-3.13, for each \(\alpha\in(0,1),\) there exist \(\hat{\epsilon}_{3}>0\) and \(C_{\alpha}>0\) depending only on \((\gamma,v_{2},\alpha)\) such that, for any admissible solution corresponding to \(\boldsymbol{\theta}\in\Theta,\)_ \[\|\mathfrak{g}_{\rm sh}-\mathfrak{g}_{j}\|_{2,\alpha,I_{j}}^{(\rm par)}\leq C_{\alpha}\qquad\text{whenever }\theta_{j-4}\leq\theta^{\rm s}\,,\] _for \(j=5,6,\) where \(I_{j}\coloneqq\{s\in(-1,1)\,:\,1+(-1)^{j}s<\hat{\epsilon}_{3}\},\) and \(\|\cdot\|_{2,\alpha,I_{j}}^{(\rm par)}\) is given by Definition 3.11(ii) with replacement of \(x\) by \(1-|s|.\)_

_By Proposition 3.14, for each \(\alpha\in(0,1),\) there exist constants \(\hat{\epsilon}_{4}>0\) and \(C^{\prime}_{\alpha}>0\) depending only on \((\gamma,v_{2},\alpha)\) such that, for any admissible solution corresponding to \(\boldsymbol{\theta}\in\Theta,\)_ \[\|\mathfrak{g}_{\rm sh}-\mathfrak{g}_{j}\|_{2,\alpha,I_{j}^{\prime}}\leq C^{\prime}_{\alpha}\,,\qquad\frac{{\rm d}^{m}}{{\rm d}s^{m}}(\mathfrak{g}_{\rm sh}-\mathfrak{g}_{j})((-1)^{j-1})=0\quad\text{for }m=0,1,2\,,\] _whenever \(\theta_{j-4}\in[\theta^{\rm s},\theta^{\rm s}+\sigma_{3}]\) for \(j=5,6,\) where \(I^{\prime}_{j}\coloneqq\{s\in(-1,1)\,:\,1+(-1)^{j}s<\hat{\epsilon}_{4}\}.\)_

**Construction of \(u^{(\varphi,\boldsymbol{\theta})}\).** For each \(\boldsymbol{\theta}\in\overline{\Theta}\setminus\{\boldsymbol{0}\},\) it can be verified directly that \(S_{25}\) and \(S_{26}\) given by Definition 2.7 intersect at the unique point \(P_{I}\coloneqq(\xi^{P_{I}},\eta^{P_{I}})\) with \[\xi^{P_{I}}\coloneqq\frac{u_{5}\xi^{P_{0}^{1}}-u_{6}\xi^{P_{0}^{2}}}{u_{5}-u_{6}}\,,\qquad\eta^{P_{I}}\coloneqq\frac{u_{5}u_{6}}{u_{5}-u_{6}}\,\frac{\xi^{P_{0}^{1}}-\xi^{P_{0}^{2}}}{v_{2}}\,. \tag{4.9}\] Similarly, \(S_{25}^{\delta_{0}}\) and \(S_{26}^{\delta_{0}}\) intersect at the unique point \(P_{I}^{\delta_{0}}\coloneqq(\xi^{P_{I}},\eta^{P_{I}^{\delta_{0}}})\) with \(\eta^{P_{I}^{\delta_{0}}}\coloneqq\eta^{P_{I}}-\frac{\delta_{0}}{v_{2}}>\eta^{P_{I}}.\) In Lemma A.3, we show the limit: \[\lim_{\boldsymbol{\theta}\to\boldsymbol{0},\,\boldsymbol{\theta}\in\overline{\Theta}\setminus\{\boldsymbol{0}\}}(\xi^{P_{I}},\eta^{P_{I}})=(0,\eta_{0})\,,\] where \(\eta_{0}\) is given by (2.23). Then we define \(P_{I}|_{\boldsymbol{\theta}=\boldsymbol{0}}\coloneqq(0,\eta_{0})\) so that \(P_{I}\) is continuous with respect to \(\boldsymbol{\theta}\in\overline{\Theta}\). By (4.9) and Lemma A.2(ii), \(\xi^{P_{I}}\geq 0\) for any \(\boldsymbol{\theta}\in\overline{\Theta}\cap\{\theta_{2}=0\}\). Therefore, there exists a constant \(\sigma_{\rm sep}>0\) small enough, depending only on \((\gamma,v_{2}),\) such that \[\xi^{P_{3}}\leq-d_{\rm sep}<-\tfrac{d_{\rm sep}}{2}\leq\xi^{P_{I}}\qquad\text{for any }\boldsymbol{\theta}\in\overline{\Theta}\cap\{0\leq\theta_{2}\leq\sigma_{\rm sep}\}\,,\] where \(d_{\rm sep}>0\) is given by (3.10).
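As a quick consistency check of (4.9) (a side remark; it is not needed for the argument), suppose, as the notation above suggests, that \(S_{2j}=\{\varphi_{2}=\varphi_{j}\}\) is the straight line of slope \(u_{j}/v_{2}\) through the point \((\xi^{P_{0}^{j-4}},0)\) for \(j=5,6\); this holds for uniform states \(\varphi_{j}\) with \(O_{j}\in\{\eta=0\}\). Under this assumption, \(P_{I}\) is obtained by solving a \(2\times 2\) linear system:

\[\frac{u_{5}}{v_{2}}\big{(}\xi-\xi^{P_{0}^{1}}\big{)}=\frac{u_{6}}{v_{2}}\big{(}\xi-\xi^{P_{0}^{2}}\big{)}\;\Longrightarrow\;\xi^{P_{I}}=\frac{u_{5}\xi^{P_{0}^{1}}-u_{6}\xi^{P_{0}^{2}}}{u_{5}-u_{6}}\,,\]

\[\eta^{P_{I}}=\frac{u_{5}}{v_{2}}\big{(}\xi^{P_{I}}-\xi^{P_{0}^{1}}\big{)}=\frac{u_{5}}{v_{2}}\cdot\frac{u_{6}\big{(}\xi^{P_{0}^{1}}-\xi^{P_{0}^{2}}\big{)}}{u_{5}-u_{6}}=\frac{u_{5}u_{6}}{u_{5}-u_{6}}\,\frac{\xi^{P_{0}^{1}}-\xi^{P_{0}^{2}}}{v_{2}}\,,\]

which recovers (4.9).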
Also, by Lemma A.2(ii), there exists a small constant \(\delta_{\rm sep}>0\) depending only on \((\gamma,v_{2},\sigma_{\rm sep})\) such that \[\eta^{P_{3}}\leq\eta_{0}-\delta_{\rm sep}<\eta_{0}\leq\min\{a_{25},a_{26}\}\leq\eta^{P_{I}}\qquad\text{for any }\boldsymbol{\theta}\in\overline{\Theta}\cap\{\sigma_{\rm sep}\leq\theta_{2}\leq\theta^{\mathrm{d}}\}\,.\] From the above two inequalities, using \(P_{3},P_{I}\in S_{26}\) and Lemma 2.8, we obtain the uniform positive lower bound: \[\inf_{\boldsymbol{\theta}\in\overline{\Theta}}(\xi^{P_{I}}-\xi^{P_{3}})>0\,.\]

**Definition 4.4**.: _Let \(\tilde{\zeta}(r;r_{0},r_{\rm d})\) be given by (4.5), and let \(\xi^{P_{I}}\) be given by (4.9). For \(\boldsymbol{\theta}\in\overline{\Theta},\) define_ \[\chi^{*}_{\boldsymbol{\theta}}(\xi)\coloneqq\tilde{\zeta}(-\xi;-\xi^{P_{I}},\,\frac{\xi^{P_{I}}-\xi^{P_{3}}}{5C_{\rm bg}})\] _for some constant \(C_{\rm bg}>1\). The smooth background function \(\varphi^{*}_{\boldsymbol{\theta}}(\boldsymbol{\xi})\) is defined by_ \[\varphi^{*}_{\boldsymbol{\theta}}(\boldsymbol{\xi})\coloneqq\varphi_{5}(\boldsymbol{\xi})\,(1-\chi^{*}_{\boldsymbol{\theta}}(\xi))+\varphi_{6}(\boldsymbol{\xi})\,\chi^{*}_{\boldsymbol{\theta}}(\xi)\,.\] _For constant \(\delta_{0}>0\) from Definition 4.1, constant \(C_{\rm bg}\) is fixed sufficiently large, depending only on \((\gamma,v_{2}),\) such that \(\max\{\varphi_{5},\varphi_{6}\}-\varphi^{*}_{\boldsymbol{\theta}}\leq\tfrac{\delta_{0}}{2}.\)_

From the construction above, it can be checked that \(\varphi^{*}_{\boldsymbol{\theta}}\leq\max\{\varphi_{5},\varphi_{6}\}\) in \(\mathbb{R}^{2}\) and there exists a constant \(\bar{k}>1\) sufficiently large, depending only on \((\gamma,v_{2}),\) such that, for \(j=5,6,\) \[\varphi^{*}_{\boldsymbol{\theta}}=\varphi_{j}\qquad\text{in }\mathcal{D}^{j}_{\hat{c}_{j}/\bar{k}}\,, \tag{4.10}\] where \(\hat{c}_{j}\) is given by (3.14). Then \[\big{\{}\boldsymbol{\xi}\in\mathbb{R}^{2}_{+}\,:\,\xi^{P_{3}}<\xi<\xi^{P_{2}},\,(\varphi_{2}-\varphi^{*}_{\boldsymbol{\theta}})(\boldsymbol{\xi})=0\big{\}}\subseteq Q^{\boldsymbol{\theta}}\,. \tag{4.11}\] Using Lemma 4.2(ii) and Definition 4.4, we can directly obtain that, for any \(\boldsymbol{\theta}\in\overline{\Theta}\), there exists a constant \(\bar{\mu}>0\) depending only on \((\gamma,v_{2})\) such that \[\partial_{t^{\prime}}\big{(}(\varphi_{2}-\varphi_{\boldsymbol{\theta}}^{*})\circ(\mathcal{G}_{1}^{\boldsymbol{\theta}})^{-1}\big{)}(s,t^{\prime})\leq-\bar{\mu}\qquad\text{for any }(s,t^{\prime})\in\overline{\mathcal{G}_{1}^{\boldsymbol{\theta}}(Q^{\boldsymbol{\theta}})}\,. \tag{4.12}\]

For any admissible solution \(\varphi\) corresponding to parameters \(\boldsymbol{\theta}\in\Theta\), let \(\mathfrak{g}_{\text{sh}}:[-1,1]\to[0,\infty)\) be the unique function given by Proposition 4.3. Then \(G_{2,\mathfrak{g}_{\text{sh}}}:\mathcal{G}_{1}^{\boldsymbol{\theta}}(Q^{\boldsymbol{\theta}})\to\mathbb{R}^{2}\) is defined by \[(s,t)=G_{2,\mathfrak{g}_{\text{sh}}}(s,t^{\prime})\coloneqq(s,\frac{t^{\prime}}{\mathfrak{g}_{\text{sh}}(s)})\qquad\text{for any }(s,t^{\prime})\in\mathcal{G}_{1}^{\boldsymbol{\theta}}(Q^{\boldsymbol{\theta}})\,. \tag{4.13}\] By Proposition 4.3(iv), map \(G_{2,\mathfrak{g}_{\text{sh}}}\) is well defined and invertible with \(G_{2,\mathfrak{g}_{\text{sh}}}^{-1}(s,t)=(s,t\mathfrak{g}_{\text{sh}}(s))\). Then \(G_{2,\mathfrak{g}_{\text{sh}}}\circ\mathcal{G}_{1}^{\boldsymbol{\theta}}(\Omega)=\mathcal{Q}^{\text{iter}}=(-1,1)\times(0,1)\) by Proposition 4.3(i).
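For orientation, the effect of \(G_{2,\mathfrak{g}_{\rm sh}}\) can be checked in one line from (4.13) and Proposition 4.3(i): it rescales the vertical variable so that the curved shock is flattened onto the top side of the unit square. Indeed, for \(\boldsymbol{\xi}\in\Omega\) with \((s,t^{\prime})=\mathcal{G}_{1}^{\boldsymbol{\theta}}(\boldsymbol{\xi}),\)

\[0<t^{\prime}<\mathfrak{g}_{\mathrm{sh}}(s)\;\Longrightarrow\;t=\frac{t^{\prime}}{\mathfrak{g}_{\mathrm{sh}}(s)}\in(0,1)\,,\qquad\boldsymbol{\xi}\in\Gamma_{\mathrm{shock}}\;\Longrightarrow\;t^{\prime}=\mathfrak{g}_{\mathrm{sh}}(s)\;\Longrightarrow\;t=1\,,\]

while points with \(t^{\prime}=0\) (in particular, the image of \(\Gamma_{\mathrm{sym}}\)) stay on \(\{t=0\}\), and \(G_{2,\mathfrak{g}_{\rm sh}}^{-1}\circ G_{2,\mathfrak{g}_{\rm sh}}(s,t^{\prime})=(s,\frac{t^{\prime}}{\mathfrak{g}_{\mathrm{sh}}(s)}\,\mathfrak{g}_{\mathrm{sh}}(s))=(s,t^{\prime})\).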
Therefore, \(u^{(\varphi,\boldsymbol{\theta})}\), defined as \[u^{(\varphi,\boldsymbol{\theta})}(s,t)\coloneqq(\varphi-\varphi_{\boldsymbol{\theta}}^{*})\circ(G_{2,\mathfrak{g}_{\text{sh}}}\circ\mathcal{G}_{1}^{\boldsymbol{\theta}})^{-1}(s,t)\,, \tag{4.14}\] is well defined for all \((s,t)\in\mathcal{Q}^{\text{iter}}\).

#### 4.1.1. A priori estimates for \(u^{(\varphi,\boldsymbol{\theta})}\)

For \(\epsilon_{0}^{*}>0\) from Proposition 4.3(ii), define a constant \(\epsilon_{0}^{\prime}>0\) depending only on \((\gamma,v_{2})\) as \[\epsilon_{0}^{\prime}\coloneqq\min_{\boldsymbol{\theta}\in\overline{\Theta}}d_{L}\epsilon_{0}^{*}\;\leq\;\hat{\epsilon}_{0}^{*}\,, \tag{4.15}\] where \(d_{L}>0\) is given by (4.6). For \(r\in(0,1)\) and for \(j=5,6\), we define \[\mathcal{Q}_{r}^{\text{int}}\coloneqq\left\{(s,t)\in\mathcal{Q}^{\text{iter}}\,:\,|s|<1-r\right\},\qquad\mathcal{Q}_{r}^{j}\coloneqq\left\{(s,t)\in\mathcal{Q}^{\text{iter}}\,:\,1+(-1)^{j}\,s<r\right\}. \tag{4.16}\]

**Definition 4.5** (Weighted Hölder norms in the standard domain).: _For \(\alpha\in(0,1)\) and any \(u\in C^{2}(\mathcal{Q}^{\text{iter}}),\) introduce the norm_: \[\|u\|_{2,\alpha,\mathcal{Q}^{\text{iter}}}^{(*)}\coloneqq\|u\|_{2,\alpha,\overline{\mathcal{Q}^{\text{int}}_{\epsilon_{0}^{\prime}/4}}}+\sum_{j=5,6}\left(\|u\|_{2,\alpha,\mathcal{Q}^{j}_{\epsilon_{0}^{\prime}}}^{(1+\alpha),(\text{par})}+\|u\|_{1,\alpha,\mathcal{Q}^{j}_{\epsilon_{0}^{\prime}}}^{(1+\alpha),(\text{subs})}\right),\] _where \(\|\cdot\|_{2,\alpha,U}\) is the standard Hölder norm, and the weighted Hölder norms \(\|\cdot\|_{m,\alpha,U}^{(\sigma),(\text{subs})}\) and \(\|\cdot\|_{m,\alpha,U}^{(\sigma),(\text{par})},\) for an open set \(U\subseteq\mathcal{Q}^{\text{iter}},\,\sigma>0,\) and \(m\in\mathbb{N},\) are given as follows_:

1. _For any_ \(\boldsymbol{s}=(s,t),\,\tilde{\boldsymbol{s}}=(\tilde{s},\tilde{t})\in\mathcal{Q}^{\text{iter}},\) _define_ \[\delta_{\alpha}^{(\text{subs})}(\boldsymbol{s},\tilde{\boldsymbol{s}})\coloneqq\left((s-\tilde{s})^{2}+(\max\{1-|s|,1-|\tilde{s}|\})^{2}(t-\tilde{t})^{2}\right)^{\frac{\alpha}{2}}.\] _The Hölder norms with subsonic scaling are given by_ \[\|u\|_{m,0,U}^{(\sigma),(\text{subs})}\coloneqq\sum_{0\leq k+l\leq m}\sup_{\boldsymbol{s}\in U}\left((1-|s|)^{k-\sigma}|\partial_{s}^{k}\partial_{t}^{l}u(\boldsymbol{s})|\right),\] \[[u]_{m,\alpha,U}^{(\sigma),(\text{subs})}\coloneqq\sum_{k+l=m}\sup_{\begin{subarray}{c}\boldsymbol{s},\tilde{\boldsymbol{s}}\in U\\ \boldsymbol{s}\neq\tilde{\boldsymbol{s}}\end{subarray}}\left(\min\left\{(1-|s|)^{\alpha+k-\sigma},(1-|\tilde{s}|)^{\alpha+k-\sigma}\right\}\frac{|\partial_{s}^{k}\partial_{t}^{l}u(\boldsymbol{s})-\partial_{s}^{k}\partial_{t}^{l}u(\tilde{\boldsymbol{s}})|}{\delta_{\alpha}^{(\text{subs})}(\boldsymbol{s},\tilde{\boldsymbol{s}})}\right),\] \[\|u\|_{m,\alpha,U}^{(\sigma),(\text{subs})}=\|u\|_{m,0,U}^{(\sigma),(\text{subs})}+[u]_{m,\alpha,U}^{(\sigma),(\text{subs})}\,.\] 2.
_For any_ \(\boldsymbol{s}=(s,t),\tilde{\boldsymbol{s}}=(\tilde{s},\tilde{t})\in\mathcal{Q}^{\text{iter}},\) _define_ \[\delta_{\alpha}^{(\text{par})}(\boldsymbol{s},\tilde{\boldsymbol{s}})\coloneqq\left((s-\tilde{s})^{2}+\max\{1-|s|,1-|\tilde{s}|\}(t-\tilde{t})^{2}\right)^{\frac{\alpha}{2}}.\] _The Hölder norms with parabolic scaling are given by_ \[\|u\|_{m,0,U}^{(\sigma),(\text{par})}\coloneqq\sum_{0\leq k+l\leq m}\sup_{\boldsymbol{s}\in U}\left((1-|s|)^{k+\frac{l}{2}-\sigma}|\partial_{s}^{k}\partial_{t}^{l}u(\boldsymbol{s})|\right),\] \[[u]_{m,\alpha,U}^{(\sigma),(\text{par})}\coloneqq\sum_{k+l=m}\sup_{\begin{subarray}{c}\boldsymbol{s},\tilde{\boldsymbol{s}}\in U\\ \boldsymbol{s}\neq\tilde{\boldsymbol{s}}\end{subarray}}\left(\min\left\{(1-|s|)^{\alpha+k+\frac{l}{2}-\sigma},(1-|\tilde{s}|)^{\alpha+k+\frac{l}{2}-\sigma}\right\}\frac{|\partial_{s}^{k}\partial_{t}^{l}u(\boldsymbol{s})-\partial_{s}^{k}\partial_{t}^{l}u(\tilde{\boldsymbol{s}})|}{\delta_{\alpha}^{(\text{par})}(\boldsymbol{s},\tilde{\boldsymbol{s}})}\right),\] \[\|u\|_{m,\alpha,U}^{(\sigma),(\text{par})}=\|u\|_{m,0,U}^{(\sigma),(\text{par})}+[u]_{m,\alpha,U}^{(\sigma),(\text{par})}\,.\] _Denote by \(C_{(*)}^{2,\alpha}(\mathcal{Q}^{\text{iter}})\) the set of all functions in \(C^{2}(\mathcal{Q}^{\text{iter}})\) whose \(\|\cdot\|_{2,\alpha,\mathcal{Q}^{\text{iter}}}^{(*)}\)-norms are finite._

Note that \(C^{2,\alpha}_{(*)}(\mathcal{Q}^{\rm iter})\) is compactly embedded into \(C^{2,\tilde{\alpha}}_{(*)}(\mathcal{Q}^{\rm iter})\) whenever \(0\leq\tilde{\alpha}<\alpha<1\), which follows from the properties given in [10, Lemma 4.6.3 and Corollary 17.2.7]. Using Corollary 3.10 and Propositions 3.12-3.14 and 3.16, we have the following estimate for admissible solutions \(\varphi\) after mapping the pseudo-subsonic domain \(\Omega\) to the standard domain \(\mathcal{Q}^{\rm iter}\); the proof is similar to [2, Proposition 4.12].

**Proposition 4.6**.: _For each \(\theta_{*}\in(0,\theta^{\rm d}),\) there exist constants \(M>0\) and \(\bar{\alpha}\in(0,\frac{1}{3}]\) depending only on \((\gamma,v_{2},\theta_{*})\) such that, for any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap\{\theta_{1},\theta_{2}\leq\theta_{*}\},\) \(u=u^{(\varphi,\boldsymbol{\theta})}:\mathcal{Q}^{\rm iter}\to\mathbb{R}\), defined by (4.14), satisfies \(u\in C^{2,\bar{\alpha}}_{(*)}(\mathcal{Q}^{\rm iter})\) and_ \[\|u\|_{2,\bar{\alpha},\mathcal{Q}^{\rm iter}}^{(*)}=\|u\|_{2,\bar{\alpha},\overline{\mathcal{Q}^{\rm int}_{\epsilon_{0}^{\prime}/4}}}+\sum_{j=5,6}\Big{(}\|u\|_{2,\bar{\alpha},\mathcal{Q}^{j}_{\epsilon_{0}^{\prime}}}^{(1+\bar{\alpha}),({\rm par})}+\|u\|_{1,\bar{\alpha},\mathcal{Q}^{j}_{\epsilon_{0}^{\prime}}}^{(1+\bar{\alpha}),({\rm subs})}\Big{)}\leq M\,.\]

#### 4.1.2. Mapping to approximate admissible solutions

For each \(\boldsymbol{\theta}\in\overline{\Theta}\), let the extended domain \(Q^{\boldsymbol{\theta}}\) be given by Definition 4.1, and let \(\varphi_{\boldsymbol{\theta}}^{*}\) be given by Definition 4.4. For each \(s^{*}\in(-1,1)\), define \(Q^{\boldsymbol{\theta}}(s^{*})\coloneqq Q^{\boldsymbol{\theta}}\cap(\mathcal{G}_{1}^{\boldsymbol{\theta}})^{-1}(\{s=s^{*}\})\).
Then, for \(i=1,2\), \[\inf_{Q^{\boldsymbol{\theta}}((-1)^{i-1})}(\varphi_{2}-\varphi_{\boldsymbol{\theta}}^{*})<0\leq\sup_{Q^{\boldsymbol{\theta}}((-1)^{i-1})}(\varphi_{2}-\varphi_{\boldsymbol{\theta}}^{*})\,,\] where the inequality on the right above becomes a strict inequality when \(\theta_{i}\in[0,\theta^{\rm s})\) and an equality when \(\theta_{i}\in[\theta^{\rm s},\theta^{\rm d}]\). Moreover, by the choice of \(C_{\rm bg}>0\) in Definition 4.4, \[\sup_{Q^{\boldsymbol{\theta}}(s)}(\varphi_{2}-\varphi_{\boldsymbol{\theta}}^{*})-\inf_{Q^{\boldsymbol{\theta}}(s)}(\varphi_{2}-\varphi_{\boldsymbol{\theta}}^{*})\geq\frac{\delta_{0}}{2}>0\qquad\text{for any }s\in(-1,1)\,.\]

**Definition 4.7**.: _Fix \(\alpha\in(0,1),\) \(\theta_{*}\in(0,\theta^{\rm d}),\) and \(\boldsymbol{\theta}\in\Theta\cap\{\theta_{1},\theta_{2}\leq\theta_{*}\}\). Let \(u\in C^{1,\alpha}(\overline{\mathcal{Q}^{\rm iter}})\) be a function satisfying_ \[\inf_{Q^{\boldsymbol{\theta}}(s)}(\varphi_{2}-\varphi_{\boldsymbol{\theta}}^{*})<u(s,1)<\sup_{Q^{\boldsymbol{\theta}}(s)}(\varphi_{2}-\varphi_{\boldsymbol{\theta}}^{*})\qquad\text{for any }s\in(-1,1)\,. \tag{4.17}\]

1. _For each_ \(s\in(-1,1)\)_, let_ \(\vec{t}^{\prime}>0\) _be the unique solution of_ \((\varphi_{2}-\varphi_{\boldsymbol{\theta}}^{*})\circ(\mathcal{G}_{1}^{\boldsymbol{\theta}})^{-1}(s,\vec{t}^{\prime})=u(s,1),\) _which exists by (4.12) and (4.17). Define_ \(\mathfrak{g}^{(u,\boldsymbol{\theta})}_{\rm sh}:(-1,1)\to(0,\infty)\) _by_ \(\mathfrak{g}^{(u,\boldsymbol{\theta})}_{\rm sh}(s)\coloneqq\vec{t}^{\prime}\)_._
2. _For_ \(\mathcal{G}_{1}^{\boldsymbol{\theta}}\) _and_ \(G_{2,\mathfrak{g}^{(u,\boldsymbol{\theta})}_{\rm sh}}\) _given by (4.7) and (4.13) respectively, define_ \(\mathfrak{F}_{(u,\boldsymbol{\theta})}:\mathcal{Q}^{\rm iter}\to Q^{\boldsymbol{\theta}}\) _by_ \(\mathfrak{F}_{(u,\boldsymbol{\theta})}\coloneqq(\mathcal{G}_{1}^{\boldsymbol{\theta}})^{-1}\circ G_{2,\mathfrak{g}^{(u,\boldsymbol{\theta})}_{\rm sh}}^{-1}\)_._
3. _Define sets_ \(\Omega(u,\boldsymbol{\theta})\coloneqq\mathfrak{F}_{(u,\boldsymbol{\theta})}(\mathcal{Q}^{\rm iter})\) _and_ \(\Gamma_{\rm shock}(u,\boldsymbol{\theta})\coloneqq\mathfrak{F}_{(u,\boldsymbol{\theta})}((-1,1)\times\{1\})\)_. Finally, define function_ \(\varphi^{(u,\boldsymbol{\theta})}:\Omega(u,\boldsymbol{\theta})\to\mathbb{R}\) _by_ \(\varphi^{(u,\boldsymbol{\theta})}(\boldsymbol{\xi})\coloneqq(u\circ\mathfrak{F}_{(u,\boldsymbol{\theta})}^{-1})(\boldsymbol{\xi})+\varphi_{\boldsymbol{\theta}}^{*}(\boldsymbol{\xi})\) _for all_ \(\boldsymbol{\xi}\in\Omega(u,\boldsymbol{\theta})\)_._

We call \(\varphi^{(u,\boldsymbol{\theta})}\) above an approximate admissible solution if it satisfies the same boundary conditions on \(\Gamma_{\rm sonic}^{5}\cup\Gamma_{\rm sonic}^{6}\) as the admissible solutions in Definition 2.11. For \(\alpha\in(0,1)\) and \(\theta_{*}\in(0,\theta^{\rm d})\), define the space of such approximate admissible solutions: \[\mathfrak{G}_{\alpha}^{\theta_{*}}\coloneqq\bigg{\{}(u,\boldsymbol{\theta})\in C^{1,\alpha}(\overline{\mathcal{Q}^{\rm iter}})\times[0,\theta_{*}]^{2}\,:\,\begin{array}{l}(u,\boldsymbol{\theta})\text{ satisfies }(4.17)\text{ and }\\ (u,Du)(\pm 1,\,\cdot)=(0,\boldsymbol{0})\end{array}\bigg{\}}.
\tag{4.18}\]

By construction, for any \((u,\boldsymbol{\theta})\in\mathfrak{G}_{\alpha}^{\theta_{*}}\), map \(\mathfrak{F}_{(u,\boldsymbol{\theta})}\) satisfies that \(P_{1}=\mathfrak{F}_{(u,\boldsymbol{\theta})}(1,0)\), \(P_{2}=\mathfrak{F}_{(u,\boldsymbol{\theta})}(1,1)\), \(P_{3}=\mathfrak{F}_{(u,\boldsymbol{\theta})}(-1,1)\), and \(P_{4}=\mathfrak{F}_{(u,\boldsymbol{\theta})}(-1,0)\) for points \(P_{i}\), \(i=1,2,3,4\), given by Definition 2.9. Furthermore, \(\varphi^{(u,\boldsymbol{\theta})}=\varphi_{2}\) on \(\Gamma_{\rm shock}(u,\boldsymbol{\theta})\). Other properties of set \(\mathfrak{G}_{\alpha}^{\theta_{*}}\) are given below; the proof is similar to [10, Lemmas 12.2.7 and 17.2.13].

**Lemma 4.8**.: _Fix \(\alpha\in(0,1)\) and \(\theta_{*}\in(0,\theta^{\rm d})\). For each \((u,\boldsymbol{\theta})\in\mathfrak{G}_{\alpha}^{\theta_{*}}\), \(\mathfrak{g}^{(u,\boldsymbol{\theta})}_{\rm sh}\in C^{1,\alpha}([-1,1])\) and the following properties hold:_

1. \(\Omega(u,\boldsymbol{\theta})\cup\Gamma_{\rm shock}(u,\boldsymbol{\theta})\subseteq Q^{\boldsymbol{\theta}}\subseteq\mathbb{R}_{+}^{2}\)_. Moreover,_ \(\Gamma_{\rm shock}(u,\boldsymbol{\theta})\) _is a_ \(C^{1,\alpha}\)_-curve up to its endpoints_ \(P_{2}\) _and_ \(P_{3},\) _and is tangential to_ \(S_{25}\) _and_ \(S_{26}\) _at_ \(P_{2}\) _and_ \(P_{3}\) _respectively. If_ \(f_{j,0},j=5,6,\) _are given as in_ \({\rm Step\ 1}\) _of the proofs of_ Propositions 3.12 _and_ 3.14_, then_ \[\big{(}\mathfrak{g}^{(u,\boldsymbol{\theta})}_{\rm sh},\frac{\mathrm{d}}{\mathrm{d}s}\mathfrak{g}^{(u,\boldsymbol{\theta})}_{\rm sh}\big{)}\big{|}_{s=(-1)^{j-1}}=\big{(}f_{j,0}(0)\,,\frac{(-1)^{j}}{d_{L}}f_{j,0}^{\prime}(0)\big{)}\,,\] _where_ \(d_{L}\) _is given by (4.6)._

2. _Let_ \(\mathcal{N}_{\varepsilon}(\Gamma)\) _denote the open_ \(\varepsilon\)_-neighbourhood of a set_ \(\Gamma,\) _as given by (3.21). Let_ \(\delta_{0}\) _be from_ Definition 4.1_, and let the coordinates_ \((x,y)=\mathcal{R}(\boldsymbol{\xi})\) _be given by (4.1). Then there exists a constant_ \(\epsilon_{0}>0\) _depending only on_ \((\gamma,v_{2})\) _such that, for_ \(j=5,6,\) _and any_ \(\epsilon\in(0,\epsilon_{0}),\) \[\begin{split}\hat{\Omega}_{\epsilon}^{j}&\coloneqq\mathcal{N}_{\epsilon_{0}}(\Gamma_{\text{sonic}}^{j,\delta_{0}})\cap\big{\{}\boldsymbol{\xi}=\mathcal{R}^{-1}(x,y)\,:\,x_{P_{j-3}}<x<x_{P_{j-3}}+\epsilon\big{\}}\cap\Omega(u,\boldsymbol{\theta})\\ &\quad=\big{\{}\boldsymbol{\xi}=\mathcal{R}^{-1}(x,y)\,:\,x_{P_{j-3}}<x<x_{P_{j-3}}+\epsilon,\,0<y<\hat{f}_{j,\text{sh}}(x)\big{\}}\,,\\ \Gamma_{\text{shock}}(u,\boldsymbol{\theta})\cap\partial\hat{\Omega}_{\epsilon}^{j}&=\big{\{}\boldsymbol{\xi}=\mathcal{R}^{-1}(x,\hat{f}_{j,\text{sh}}(x))\,:\,x_{P_{j-3}}<x<x_{P_{j-3}}+\epsilon\big{\}}\,,\\ \Gamma_{\text{sonic}}^{j}\cap\partial\hat{\Omega}_{\epsilon}^{j}&=\big{\{}\boldsymbol{\xi}=\mathcal{R}^{-1}(x_{P_{j-3}},y)\,:\,0<y<\hat{f}_{j,\text{sh}}(x_{P_{j-3}})\big{\}}=\Gamma_{\text{sonic}}^{j}\,,\\ \Gamma_{\text{sym}}\cap\partial\hat{\Omega}_{\epsilon}^{j}&=\big{\{}\boldsymbol{\xi}=\mathcal{R}^{-1}(x,0)\,:\,x_{P_{j-3}}<x<x_{P_{j-3}}+\epsilon\big{\}}\,,\end{split}\] _where_ \(\hat{f}_{j,\text{sh}}(x)\coloneqq\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}\circ L_{\boldsymbol{\theta}}(u_{j}+(-1)^{j-1}(c_{j}-x))\) _and_ \(L_{\boldsymbol{\theta}}\) _is given by (4.6)._ 3.
_Let_ \((u,\boldsymbol{\theta}),(\tilde{u},\tilde{\boldsymbol{\theta}})\in\mathfrak{G}_{\alpha}^{\theta_{*}}\) _satisfy_ \(\|(u,\tilde{u})\|_{C^{1,\alpha}(\overline{\mathcal{Q}^{\text{iter}}})}<M\) _for some constant_ \(M>0\)_. Then there exists a constant_ \(C>0\) _depending only on_ \((\gamma,v_{2},\theta_{*},M,\alpha)\) _such that_ \[\begin{split}&\|\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}\|_{C^{1,\alpha}([-1,1])}+\|\mathfrak{F}_{(u,\boldsymbol{\theta})}\|_{C^{1,\alpha}(\overline{\mathcal{Q}^{\text{iter}}})}\leq C\,,\\ &\|\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}-\mathfrak{g}_{\text{sh}}^{(\tilde{u},\tilde{\boldsymbol{\theta}})}\|_{C^{1,\alpha}([-1,1])}+\|\mathfrak{F}_{(u,\boldsymbol{\theta})}-\mathfrak{F}_{(\tilde{u},\tilde{\boldsymbol{\theta}})}\|_{C^{1,\alpha}(\overline{\mathcal{Q}^{\text{iter}}})}\\ &\quad+\|\varphi^{(u,\boldsymbol{\theta})}\circ\mathfrak{F}_{(u,\boldsymbol{\theta})}-\varphi^{(\tilde{u},\tilde{\boldsymbol{\theta}})}\circ\mathfrak{F}_{(\tilde{u},\tilde{\boldsymbol{\theta}})}\|_{C^{1,\alpha}(\overline{\mathcal{Q}^{\text{iter}}})}+\|\varphi_{\boldsymbol{\theta}}^{*}\circ\mathfrak{F}_{(u,\boldsymbol{\theta})}-\varphi_{\tilde{\boldsymbol{\theta}}}^{*}\circ\mathfrak{F}_{(\tilde{u},\tilde{\boldsymbol{\theta}})}\|_{C^{1,\alpha}(\overline{\mathcal{Q}^{\text{iter}}})}\\ &\quad\leq C\big{(}\|u-\tilde{u}\|_{C^{1,\alpha}(\overline{\mathcal{Q}^{\text{iter}}})}+|\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}}|\big{)}\,.\end{split}\]

4. _Let_ \(\epsilon_{0}>0\) _be the constant from_ (ii)_, and let_ \(\hat{\epsilon}_{0}\coloneqq d_{L}\epsilon_{0}\) _with_ \(d_{L}>0\) _given by (4.6). Assume that, for constants_ \(\alpha\in(0,1),\,\sigma\in(1,2],\) _and_ \(M>0,\) (4.19) \[\|u\|_{2,\alpha,\mathcal{Q}^{\text{iter}}\cap\{|s|<1-\frac{\hat{\epsilon}_{0}}{10}\}}+\|u\|_{2,\alpha,\mathcal{Q}^{\text{iter}}\cap\{|s|>1-\hat{\epsilon}_{0}\}}^{(\sigma),(\text{par})}\leq M\,.\] _Then there exists a constant_ \(C>0\) _depending only on_ \((\gamma,v_{2},\theta_{*},\alpha,\sigma)\) _such that_ \[\|\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}\|_{2,\alpha,[-1+\frac{\hat{\epsilon}_{0}}{10},1-\frac{\hat{\epsilon}_{0}}{10}]}+\|\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}-\mathfrak{g}_{5}\|_{2,\alpha,(1-\hat{\epsilon}_{0},1)}^{(\sigma),(\text{par})}+\|\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}-\mathfrak{g}_{6}\|_{2,\alpha,(-1,-1+\hat{\epsilon}_{0})}^{(\sigma),(\text{par})}\leq CM\,,\] _where_ \(\mathfrak{g}_{5}\) _and_ \(\mathfrak{g}_{6}\) _are defined in_ Proposition 4.3(ii)_._

_Furthermore, for_ \(I_{\text{so}}\coloneqq(-1,-1+\hat{\epsilon}_{0})\cup(1-\hat{\epsilon}_{0},1),\) _define_ \(\mathfrak{F}_{(0,\boldsymbol{\theta})}:I_{\text{so}}\times(0,\infty)\to\mathbb{R}^{2}\) _by_ \[\mathfrak{F}_{(0,\boldsymbol{\theta})}(s,t^{\prime})\coloneqq\left\{\begin{array}{ll}(G_{2,\mathfrak{g}_{5}}\circ\mathcal{G}_{1}^{\boldsymbol{\theta}})^{-1}(s,t^{\prime})&\text{ if }\,s\in I_{\text{so}}\cap\{s>0\}\,,\\ (G_{2,\mathfrak{g}_{6}}\circ\mathcal{G}_{1}^{\boldsymbol{\theta}})^{-1}(s,t^{\prime})&\text{ if }\,s\in I_{\text{so}}\cap\{s<0\}\,.\end{array}\right.\] _Then there exists a constant_ \(C_{0}>0\) _depending only on_ \((\gamma,v_{2},\theta_{*})\) _such that_ \[\|\mathfrak{F}_{(0,\boldsymbol{\theta})}\|_{C^{3}(\overline{I_{\text{so}}}\times[0,1])}+\|\mathfrak{F}_{(u,\boldsymbol{\theta})}\|_{2,\alpha,\mathcal{Q}^{\text{iter}}\cap\{|s|<1-\frac{\hat{\epsilon}_{0}}{10}\}}+\|\mathfrak{F}_{(u,\boldsymbol{\theta})}-\mathfrak{F}_{(0,\boldsymbol{\theta})}\|_{2,\alpha,I_{\text{so}}\times(0,1)}^{(\sigma),(\text{par})}\leq C_{0}\,.\]
5. _Let_ \(f_{\boldsymbol{\theta}}\) _be given by (4.8). For constants_ \(M>0\) _and_ \(\delta_{\text{sh}}>0,\) _assume that_ \((u,\boldsymbol{\theta})\in\mathfrak{G}_{\alpha}^{\theta_{*}}\) _satisfies (4.19) and, for any_ \(s\in[-1,1],\) \[\begin{split}&\min\big{\{}\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}(-1)+\frac{s+1}{M},\,\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}(1)+\frac{1-s}{M},\,\delta_{\text{sh}}\big{\}}\leq\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}(s)\\ &\quad\leq\min\big{\{}\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}(-1)+M(s+1),\,\mathfrak{g}_{\text{sh}}^{(u,\boldsymbol{\theta})}(1)+M(1-s),f_{\boldsymbol{\theta}}(s)-\frac{1}{M}\big{\}}\,.\end{split}\] _Then, for any_ \(\epsilon\in(0,\frac{1}{4}\min\big{\{}\)_._

6. _Let_ \((u,\boldsymbol{\theta}),(\tilde{u},\tilde{\boldsymbol{\theta}})\in\mathfrak{G}_{\alpha}^{\theta_{*}}\) _satisfy_ \(\|(u,\tilde{u})\|_{C^{1,\alpha}(\overline{\mathcal{Q}^{\mathrm{iter}}})}<M\) _for some constant_ \(M>0\)_. For any open set_ \(K\Subset\mathcal{Q}^{\mathrm{iter}}\) _with_ \(\delta=\mathrm{dist}(K,\overline{\mathcal{Q}^{\mathrm{iter}}}\cap\{|s|=1\}),\) _there exists a constant_ \(C_{\delta}>0\) _depending only on_ \((\gamma,v_{2},\theta_{*},\alpha,\sigma,\delta,M)\) _such that_ \[\|\mathfrak{F}_{(u,\boldsymbol{\theta})}-\mathfrak{F}_{(\tilde{u},\tilde{\boldsymbol{\theta}})}\|_{C^{2,\alpha}(\overline{K})}\leq C_{\delta}\big{(}\|(u-\tilde{u})(\cdot,1)\|_{C^{2,\alpha}([-1+\delta,1-\delta])}+|\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}}|\big{)}\,,\] \[\|(\varphi^{(u,\boldsymbol{\theta})}\circ\mathfrak{F}_{(u,\boldsymbol{\theta})}-\varphi^{(\tilde{u},\tilde{\boldsymbol{\theta}})}\circ\mathfrak{F}_{(\tilde{u},\tilde{\boldsymbol{\theta}})},\,\psi^{(u,\boldsymbol{\theta})}\circ\mathfrak{F}_{(u,\boldsymbol{\theta})}-\psi^{(\tilde{u},\tilde{\boldsymbol{\theta}})}\circ\mathfrak{F}_{(\tilde{u},\tilde{\boldsymbol{\theta}})})\|_{C^{2,\alpha}(\overline{K})}\leq C_{\delta}\big{(}\|u-\tilde{u}\|_{C^{2,\alpha}(\overline{K})}+|\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}}|\big{)}\,,\] _where_ \(\psi^{(u,\boldsymbol{\theta})}\) _is given by_ \(\psi^{(u,\boldsymbol{\theta})}\coloneqq\varphi^{(u,\boldsymbol{\theta})}-\varphi_{\boldsymbol{\theta}}^{*}\) _for each_ \((u,\boldsymbol{\theta})\in\mathfrak{G}_{\alpha}^{\theta_{*}}\)_._

### 4.2. Definition and properties of the iteration set

For any \(\theta_{*}\in(0,\theta^{\mathrm{d}})\), we now define the iteration set \(\mathcal{K}\subseteq C^{1,\alpha}(\overline{\mathcal{Q}^{\mathrm{iter}}})\times[0,\theta_{*}]^{2}\) for some \(\alpha\in(0,1)\), as a set of functions satisfying properties similar to those of admissible solutions. In the definition of \(\mathcal{K}\), we construct a nonlinear boundary value problem in a fixed domain and obtain the well-posedness and _a priori_ estimates for the solutions.

#### 4.2.1. Definition of the iteration set

Before giving the definition of the iteration set, we collect some useful estimates for admissible solutions.

**Lemma 4.9**.: _Fix \(\theta_{*}\in(0,\theta^{\mathrm{d}})\). For any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\overline{\Theta}\cap\{\theta_{1},\theta_{2}\leq\theta_{*}\},\) denote \(\psi\coloneqq(\varphi-\max\{\varphi_{5},\varphi_{6}\})\circ\mathcal{R}^{-1}\)._
_Then there exist constants \(C_{1},\,\bar{\epsilon}>0\) depending only on \((\gamma,v_{2})\), and \(C_{2}^{*}>0\) depending on \((\gamma,v_{2},\theta_{*}),\) such that, for any \(\epsilon\in(0,\bar{\epsilon}]\),_ \[\|\varphi-\varphi_{5}\|_{C^{0,1}(\overline{\Omega})}+\|\varphi-\varphi_{6}\|_{C^{0,1}(\overline{\Omega})}\leq C_{1}\,,\] \[|D_{(x,y)}\psi(x,y)|\leq C_{2}^{*}x\qquad\text{for }(x,y)\in\mathcal{R}\big{(}\overline{\Omega}\cap(\mathcal{D}_{\epsilon}^{5}\cup\mathcal{D}_{\epsilon}^{6})\big{)}\,.\]

Proof.: The first result follows directly from (2.33), Proposition 2.6, and Lemma 3.3. Without loss of generality, we assume that \(\theta_{*}\in(\theta^{\mathrm{s}},\theta^{\mathrm{d}})\). Then, for sufficiently small \(\sigma>0\) and any \(\boldsymbol{\theta}\in[0,\theta^{\mathrm{s}}+\sigma]\times[0,\theta_{*}]\), \[|D_{(x,y)}\psi(x,y)|\leq C_{2}x\qquad\text{for }(x,y)\in\mathcal{R}\big{(}\overline{\Omega}\cap\mathcal{D}_{\epsilon}^{5}\big{)}\,,\] whenever \(\epsilon\in(0,\bar{\epsilon}]\), where \(\bar{\epsilon}>0\) is the minimum of the constants \(\varepsilon>0\) in Propositions 3.12-3.14. From the relations \(\ell(\rho_{5})\cos\theta_{25}=v_{2}\) and \(M_{2}^{(\theta_{1})}=|(-\xi^{P_{0}^{1}},v_{2})|\), and Lemmas A.2(i) and 2.3, \[\frac{\mathrm{d}x_{P_{0}^{1}}}{\mathrm{d}\theta_{1}}=\frac{\mathrm{d}}{\mathrm{d}\theta_{1}}\big{(}c_{5}-(\xi^{P_{0}^{1}}-u_{5})\big{)}=\frac{\mathrm{d}}{\mathrm{d}\theta_{1}}\big{(}c_{5}-\xi^{P_{0}^{1}}+v_{2}\tan\theta_{25}\big{)}>0\,. \tag{4.20}\] Then there exists \(C_{2}^{\prime}>0\) depending only on \((\gamma,v_{2},\theta_{*})\) such that, for any \(\boldsymbol{\theta}\in[\theta^{\mathrm{s}}+\frac{\sigma}{2},\theta_{*}]\times[0,\theta_{*}]\), \[|D_{(x,y)}\psi(x,y)|\leq C_{2}^{\prime}x^{\bar{\alpha}}\leq C_{2}^{\prime}\big{(}x_{P_{0}^{1}}|_{\theta_{1}=\theta^{\mathrm{s}}+\frac{\sigma}{2}}\big{)}^{\bar{\alpha}-1}x\qquad\text{for }(x,y)\in\mathcal{R}\big{(}\overline{\Omega}\cap\mathcal{D}_{\epsilon}^{5}\big{)}\,,\] after possibly reducing \(\bar{\epsilon}\) further so that \(\bar{\epsilon}<r\), for \(r>0\) from Proposition 3.16 and \(\bar{\alpha}\in(0,\frac{1}{3}]\) from Proposition 4.6. From the symmetry of the four-shock interaction problem, we can repeat the above discussion for \(\psi(x,y)\) in \(\mathcal{R}(\overline{\Omega}\cap\mathcal{D}_{\epsilon}^{6})\).

Define \(u^{(\mathrm{norm})}\in C^{3}(\overline{\mathcal{Q}^{\mathrm{iter}}})\) by (4.14) with \(\boldsymbol{\theta}=\boldsymbol{0}\) and \(\varphi=\varphi_{0}\) from §2.1.3. Note that \(\varphi_{\boldsymbol{\theta}}^{*}\equiv\varphi_{0}\) in \(Q^{\boldsymbol{\theta}}\) by Definition 4.4 because \(\varphi_{5}=\varphi_{6}=\varphi_{0}\) when \(\boldsymbol{\theta}=\boldsymbol{0}\), which yields that \[u^{(\mathrm{norm})}\equiv 0\qquad\text{in }\mathcal{Q}^{\mathrm{iter}}\,.\]

Let \(\bar{\alpha}\in(0,\frac{1}{3}]\) be the constant from Proposition 4.6, \(\epsilon_{0}>0\) from Lemma 4.8(ii), and \(\bar{\epsilon}>0\) from Lemma 4.9. For constants \(\alpha\in(0,\frac{\bar{\alpha}}{2}]\), \(\delta_{1}\), \(\delta_{2}\), \(\delta_{3}\), \(\epsilon\in(0,\frac{1}{2}\min\{\epsilon_{0},\bar{\epsilon}\})\), and \(N_{1}>1\) to be specified later, we now define the iteration set \(\mathcal{K}\subseteq C_{(*)}^{2,\alpha}(\mathcal{Q}^{\mathrm{iter}})\times[0,\theta_{*}]^{2}\), where space \(C_{(*)}^{2,\alpha}(\mathcal{Q}^{\mathrm{iter}})\) is given by Definition 4.5.
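To illustrate the weighted norms of Definition 4.5 just invoked (a simple model computation, not taken from the text), consider \(u(s,t)=(1-s)^{1+\alpha}\) on \(\mathcal{Q}^{5}_{r}\), where \(s\) is close to \(1\). Then

\[\partial_{s}^{k}u(s,t)=c_{k}\,(1-s)^{1+\alpha-k}\qquad\text{with }c_{0}=1,\;c_{1}=-(1+\alpha),\;c_{2}=(1+\alpha)\alpha\,,\]

so that, for \(\sigma=1+\alpha\) and \(l=0\),

\[(1-s)^{k+\frac{l}{2}-\sigma}\,|\partial_{s}^{k}\partial_{t}^{l}u(s,t)|=|c_{k}|\qquad\text{for }k=0,1,2\,,\]

and hence \(\|u\|^{(1+\alpha),(\mathrm{par})}_{2,0,\mathcal{Q}^{5}_{r}}<\infty\). The weights are calibrated exactly so that functions vanishing to order \(1+\alpha\) at the sonic edges \(s=\pm 1\), as \(u-u^{(\mathrm{norm})}\) is required to do below, remain bounded in the norm.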
For simplicity, in the following, we use \(\big{(}\mathfrak{g}_{\mathrm{sh}},\varphi,\Omega,\Gamma_{\mathrm{shock}}\big{)}\) to denote \(\big{(}\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})},\varphi^{(u,\boldsymbol{\theta})},\Omega(u,\boldsymbol{\theta}),\Gamma_{\mathrm{shock}}(u,\boldsymbol{\theta})\big{)}\) that are given by Definition 4.7.

**Definition 4.10** (Iteration set).: _Fix \(\theta_{*}\in(0,\theta^{\mathrm{d}})\). The iteration set \(\mathcal{K}\subseteq C_{(*)}^{2,\alpha}(\mathcal{Q}^{\mathrm{iter}})\times[0,\theta_{*}]^{2}\) is the set of all \((u,\boldsymbol{\theta})\) satisfying the following properties_:

1. \(\|u-u^{(\mathrm{norm})}\|_{2,\alpha,\mathcal{Q}^{\mathrm{iter}}}^{(*)}<\mathscr{K}_{1}(\max\{\theta_{1},\theta_{2}\})\) _for_ \(\mathscr{K}_{1}\in C^{0,1}(\mathbb{R})\) _given by_ \[\mathscr{K}_{1}(\theta)=\begin{cases}\delta_{1}&\text{ if }\,\theta\leq\frac{\delta_{1}}{N_{1}}\,,\\ N_{0}&\text{ if }\,\theta\geq\frac{2\delta_{1}}{N_{1}}\,,\\ \text{linear}&\text{ otherwise}\,,\end{cases}\] _with_ \(N_{0}\coloneqq\max\{10M,1\}\) _for_ \(M=M(\gamma,v_{2},\theta_{*})>0\) _from_ Proposition 4.6_._

2. \((u,\boldsymbol{\theta})\in\mathfrak{G}_{\alpha}^{\theta_{*}}\)_, where_ \(\mathfrak{G}_{\alpha}^{\theta_{*}}\) _is defined by (4.18)._

3. \(\Gamma_{\mathrm{shock}}\) _satisfies_ \(\mathrm{dist}(\Gamma_{\mathrm{shock}},B_{1}(O_{2}))>N_{2}^{-1},\) _and_ \(\mathfrak{g}_{\mathrm{sh}}\) _satisfies_ \(\mathfrak{g}_{\mathrm{sh}}(\pm 1)\geq 0\) _and_ \[\min_{i=1,2}\big{\{}\mathfrak{g}_{\mathrm{sh}}((-1)^{i-1})+\frac{1+(-1)^{i}s}{N_{3}},\,\frac{1}{N_{3}}\big{\}}\leq\mathfrak{g}_{\mathrm{sh}}(s)\leq\min_{i=1,2}\big{\{}\mathfrak{g}_{\mathrm{sh}}((-1)^{i-1})+N_{3}(1+(-1)^{i}s),\,f_{\boldsymbol{\theta}}(s)-\frac{1}{N_{3}}\big{\}}\qquad\text{for all }s\in(-1,1),\] _with_ \(N_{2}\coloneqq 2C_{\mathrm{sh}}\) _for_ \(C_{\mathrm{sh}}=C_{\mathrm{sh}}(\gamma,v_{2})>0\) _from_ Proposition 3.4_,_ \(N_{3}\coloneqq 2\hat{k}\) _for_ \(\hat{k}=\hat{k}(\gamma,v_{2},\theta_{*})>0\) _from_ Proposition 4.3(iv)_, and_ \(f_{\boldsymbol{\theta}}\) _given by (4.8)._

4. _Let coordinates_ \((x,y)=\mathcal{R}(\boldsymbol{\xi})\) _be given by (4.1) and, for_ \(r>0,\) _let_ \(\mathcal{D}_{r}^{5}\) _and_ \(\mathcal{D}_{r}^{6}\) _be given by (4.2).
_Then_ \(\varphi\) _and_ \(\psi\coloneqq(\varphi-\max\left\{\varphi_{5},\varphi_{6}\right\})\circ\mathcal{R}^{-1}\) _satisfy_ (4.21) \[\begin{split}&\varphi-\max\left\{\varphi_{5},\varphi_{6}\right\}>\mathscr{K}_{2}(\max\{\theta_{1},\theta_{2}\})\qquad\quad\text{ in }\overline{\Omega}\setminus(\mathcal{D}_{\epsilon/10}^{5}\cup\mathcal{D}_{\epsilon/10}^{6})\,,\\ &\partial_{\boldsymbol{e}_{S_{2j}}}(\varphi_{2}-\varphi)<-\mathscr{K}_{2}(\max\{\theta_{1},\theta_{2}\})\qquad\quad\text{ in }\overline{\Omega}\setminus\mathcal{D}_{\epsilon/10}^{j}\text{ for }j=5,6\,,\end{split}\] _and_ (4.22) \[\begin{split}&|\partial_{x}\psi(x,y)|<\mathscr{K}_{3}(\theta_{j-4})x\qquad\quad\text{ in }\mathcal{R}\big{(}\overline{\Omega}\cap(\mathcal{D}_{\epsilon_{0}}^{j}\setminus\mathcal{D}_{\epsilon/10}^{j})\big{)}\text{ for }j=5,6\,,\\ &|\partial_{y}\psi(x,y)|<N_{4}x\qquad\quad\text{ in }\mathcal{R}\big{(}\overline{\Omega}\cap\big{(}(\mathcal{D}_{\epsilon_{0}}^{5}\setminus\mathcal{D}_{\epsilon/10}^{5})\cup(\mathcal{D}_{\epsilon_{0}}^{6}\setminus\mathcal{D}_{\epsilon/10}^{6})\big{)}\big{)}\,,\\ &|D_{(x,y)}\psi|<N_{4}\epsilon\qquad\quad\text{ in }\mathcal{R}\big{(}\overline{\Omega}\cap(\overline{\mathcal{D}_{\epsilon}^{5}}\cup\overline{\mathcal{D}_{\epsilon}^{6}})\big{)}\,,\\ &\min\left\{\partial_{\boldsymbol{\nu}}(\varphi_{2}-\varphi),\,\partial_{\boldsymbol{\nu}}\varphi\right\}>\mu_{1}\qquad\text{ on }\overline{\Gamma_{\mathrm{shock}}}\,,\\ &\|\varphi-\varphi_{j}\|_{C^{0,1}(\overline{\Omega})}<N_{5}\qquad\quad\text{ for }j=5,6\,,\end{split}\] _for_ \(\boldsymbol{e}_{S_{25}}\) _and_ \(\boldsymbol{e}_{S_{26}}\) _given by (2.35), and the unit normal vector_ \(\boldsymbol{\nu}\) _on_ \(\Gamma_{\mathrm{shock}}\) _towards the interior of_ \(\Omega\)_. In the above conditions, functions_ \(\mathscr{K}_{2},\mathscr{K}_{3}\in C^{0,1}(\mathbb{R})\) _are defined by_ \[\mathscr{K}_{2}(\theta)\coloneqq\delta_{2}\min\big{\{}\theta-\frac{\delta_{1}}{N_{1}^{2}},\,\frac{\delta_{1}}{N_{1}^{2}}\big{\}}\,,\qquad\mathscr{K}_{3}(\theta)\coloneqq\begin{cases}\frac{2-\mu_{0}}{1+\gamma}&\text{ if }\,0\leq\theta\leq\theta^{\mathrm{s}}+\frac{\sigma_{2}}{2}\,,\\ N_{4}&\text{ if }\,\theta\geq\theta^{\mathrm{s}}+\sigma_{2}\,,\\ \text{linear}&\text{ otherwise}\,,\end{cases}\] _for constants_ \(\mu_{1},\,\sigma_{2},\,\mu_{0},\,\epsilon_{0},\,N_{4}\)_, and_ \(N_{5}\) _chosen as follows_:

* _Let_ \(\delta^{\prime}=\delta^{\prime}(\gamma,v_{2})>0\) _be from_ Lemma 3.9_. Choose_ \(\mu_{1}\coloneqq\frac{\delta^{\prime}}{2}\)_._
* _Let_ \(\sigma_{2}=\sigma_{2}(\gamma,v_{2})>0\) _and_ \(\delta=\delta(\gamma,v_{2})>0\) _be from_ Step \(\mathbf{1}\) _of the proof of_ Proposition 3.14_. Choose_ \(\mu_{0}\coloneqq\frac{\delta}{2}\)_._
* _Let_ \(\epsilon_{0}=\epsilon_{0}(\gamma,v_{2})>0\) _be from_ Lemma 4.8(ii)_._
* _Let_ \(C_{1}=C_{1}(\gamma,v_{2})>0\) _and_ \(C_{2}^{*}=C_{2}^{*}(\gamma,v_{2},\theta_{*})>0\) _be from_ Lemma 4.9_. Choose_ \(N_{4}\coloneqq 10C_{2}^{*}\) _and_ \(N_{5}\coloneqq 10C_{1}\)_._

5. _The density function_ \(\rho(|D\varphi|,\varphi)\) _defined by (3.8) satisfies_ \[\frac{\rho^{*}(\gamma)}{2}<\rho(|D\varphi|,\varphi)<2C_{\mathrm{ub}}\qquad\text{ in }\overline{\Omega}\setminus(\mathcal{D}_{\epsilon/10}^{5}\cup\mathcal{D}_{\epsilon/10}^{6})\] _for constants_ \(\rho^{*}(\gamma)>0\) _and_ \(C_{\mathrm{ub}}=C_{\mathrm{ub}}(\gamma,v_{2})>0\) _given by (3.6) in_ Lemma 3.3_._

6. _Let the sound speed_ \(c(|D\varphi|,\varphi)\) _be as defined in (3.2).
_Function_ \(\varphi\) _satisfies_ \[\frac{|D\varphi(\boldsymbol{\xi})|^{2}}{c^{2}(|D\varphi(\boldsymbol{\xi})|,\varphi(\boldsymbol{\xi}))}<1-\tilde{\mu}\,\mathrm{dist}^{\flat}(\boldsymbol{\xi},\Gamma_{\mathrm{sonic}}^{5}\cup\Gamma_{\mathrm{sonic}}^{6})\qquad\text{ for }\boldsymbol{\xi}\in\overline{\Omega}\setminus\left(\mathcal{D}_{\epsilon/10}^{5}\cup\mathcal{D}_{\epsilon/10}^{6}\right),\] _where_ \(\tilde{\mu}\coloneqq\frac{\mu_{\mathrm{el}}}{2}\) _for_ \(\mu_{\mathrm{el}}=\frac{\mu}{C_{\mathrm{b}}}>0\) _with_ \(C_{\mathrm{b}}=C_{\mathrm{b}}(\gamma,v_{2})>0\) _from_ (3.17)_,_ \(\mu=\mu(\gamma,v_{2})>0\) _from_ Proposition 3.8_, and_ \(\mathrm{dist}^{\flat}(\cdot,\cdot)\) _defined in (3.18)._

7. _The iteration boundary value problem_ (4.23) \[\begin{cases}\mathcal{N}_{(u,\boldsymbol{\theta})}(\hat{\phi})=A_{11}\hat{\phi}_{\xi\xi}+2A_{12}\hat{\phi}_{\xi\eta}+A_{22}\hat{\phi}_{\eta\eta}=0&\text{in }\Omega\,,\\ \mathcal{M}_{(u,\boldsymbol{\theta})}(D\hat{\phi},\hat{\phi},\boldsymbol{\xi})=0&\text{on }\Gamma_{\text{shock}}\,,\\ \hat{\phi}=\max\{\varphi_{5},\varphi_{6}\}+\frac{1}{2}|\boldsymbol{\xi}|^{2}&\text{on }\Gamma_{\text{sonic}}^{5}\cup\Gamma_{\text{sonic}}^{6}\,,\\ \partial_{\eta}\hat{\phi}=0&\text{on }\Gamma_{\text{sym}}\,,\end{cases}\] _has a unique solution_ \(\hat{\phi}\in C^{2}(\Omega)\cap C^{1}(\overline{\Omega}),\) _where the nonlinear operators_ \(\mathcal{N}_{(u,\boldsymbol{\theta})}\) _and_ \(\mathcal{M}_{(u,\boldsymbol{\theta})}\) _are determined later. Moreover, solution_ \(\hat{\phi},\) _under the transformation_: \[\hat{u}\coloneqq(\hat{\phi}-\frac{1}{2}|\boldsymbol{\xi}|^{2}-\varphi_{\boldsymbol{\theta}}^{*})\circ\mathfrak{F}_{(u,\boldsymbol{\theta})}\qquad\text{in }\mathcal{Q}^{\text{iter}}\,, \tag{4.24}\] _satisfies the estimate_: \[\|\hat{u}-u\|_{2,\alpha,\mathcal{Q}^{\text{iter}}}^{(*)}<\delta_{3}\,.
\tag{4.25}\]

**Definition 4.11** (Extended iteration set).: _Define the extended iteration set \(\mathcal{K}^{\text{ext}}\) as_ \[\mathcal{K}^{\text{ext}}\coloneqq\left\{(u,\boldsymbol{\theta})\in C^{2,\alpha}_{(*)}(\mathcal{Q}^{\text{iter}})\times[0,\theta_{*}]^{2}\,:\,(u,\boldsymbol{\theta})\text{ satisfies Definition 4.10(i)-(vi)}\right\}.\]

**Construction of \(\mathcal{N}_{(u,\boldsymbol{\theta})}\) in (4.23).** For any \(\hat{\phi}\in C^{2}(\Omega)\), set \[\hat{\psi}\coloneqq\big{(}\hat{\phi}-(\varphi_{\boldsymbol{\theta}}^{*}+\frac{1}{2}|\boldsymbol{\xi}|^{2})\big{)}\circ\mathcal{R}^{-1}\equiv\big{(}\hat{\phi}-\phi_{\boldsymbol{\theta}}^{*}\big{)}\circ\mathcal{R}^{-1}\qquad\text{in }\mathcal{R}(\Omega\cap\mathcal{D}_{2\epsilon_{\text{eq}}})\,,\] and define the nonlinear operator
\(\mathcal{N}_{(u,\boldsymbol{\theta})}^{\text{polar}}(\hat{\phi})\) in \(\Omega\cap\mathcal{D}_{2\epsilon_{\text{eq}}}\) by \[\mathcal{N}_{(u,\boldsymbol{\theta})}^{\text{polar}}(\hat{\phi})\coloneqq\Big{(}\big{(}2x-(\gamma+1)x\,\zeta_{1}(\frac{\hat{\psi}_{x}}{x})+\mathcal{O}_{1}^{\text{mod}}\big{)}\hat{\psi}_{xx}+\mathcal{O}_{2}^{\text{mod}}\hat{\psi}_{xy}+\big{(}\frac{1}{c_{\boldsymbol{\theta}}}+\mathcal{O}_{3}^{\text{mod}}\big{)}\hat{\psi}_{yy}-\big{(}1+\mathcal{O}_{4}^{\text{mod}}\big{)}\hat{\psi}_{x}+\mathcal{O}_{5}^{\text{mod}}\hat{\psi}_{y}\Big{)}\circ\mathcal{R}\,,\] with the modified remainder terms \(\mathcal{O}_{j}^{\text{mod}}\), \(j=1,\cdots,5\), given by \[\mathcal{O}_{j}^{\text{mod}}\coloneqq\mathcal{O}_{j}\big{(}x^{3/4}\zeta_{1}\big{(}\frac{\hat{\psi}_{x}}{x^{3/4}}\big{)},N_{4}(\gamma+1)x\,\zeta_{1}\big{(}\frac{\hat{\psi}_{y}}{N_{4}(\gamma+1)x}\big{)},\psi(x,y),x,c_{\boldsymbol{\theta}}\big{)}\,,\] where \(\mathcal{O}_{j}(p_{x},p_{y},z,x,c)\) are given by (3.26), constant \(N_{4}>0\) is from Definition 4.10(iv), and \(c_{\boldsymbol{\theta}}\coloneqq c_{5}(1-\chi_{\boldsymbol{\theta}}^{*})+c_{6}\chi_{\boldsymbol{\theta}}^{*}\) with function \(\chi_{\boldsymbol{\theta}}^{*}\) from Definition 4.4. For \((\boldsymbol{p},\boldsymbol{\xi})\in\mathbb{R}^{2}\times(\Omega\cap\mathcal{D}_{2\epsilon_{\text{eq}}})\), coefficients \(A_{ij}^{(2)}(\boldsymbol{p},\boldsymbol{\xi})\), \(i,j=1,2\), are defined to be the second-order coefficients of operator \(c_{\boldsymbol{\theta}}\mathcal{N}_{(u,\boldsymbol{\theta})}^{\text{polar}}(\hat{\phi})\) in the \(\boldsymbol{\xi}\)-coordinates, that is, \[c_{\boldsymbol{\theta}}\mathcal{N}_{(u,\boldsymbol{\theta})}^{\text{polar}}(\hat{\phi})\eqqcolon\sum_{i,j=1}^{2}A_{ij}^{(2)}(D_{\boldsymbol{\xi}}\hat{\phi},\boldsymbol{\xi})\partial_{i}\partial_{j}\hat{\phi}\qquad\text{in }\Omega\cap\mathcal{D}_{2\epsilon_{\text{eq}}}\,.\] In particular, no lower-order terms appear in this representation: the first-order terms generated by rewriting \(c_{\boldsymbol{\theta}}\mathcal{N}_{(u,\boldsymbol{\theta})}^{\text{polar}}\) in the \(\boldsymbol{\xi}\)-coordinates vanish identically in \(\Omega\cap\mathcal{D}_{2\epsilon_{\text{eq}}}\), which can be verified by direct computation.

**3. Subsonic cut-off near \(\Gamma_{\text{sonic}}\).** For any \(\sigma\in(0,1)\), there exists a constant \(C_{\sigma}>0\) depending only on \((\gamma,v_{2},\theta_{*},\sigma)\) such that, for any \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}^{\text{ext}}}\), there is a function \(v_{\sigma}^{(u,\boldsymbol{\theta})}\in C^{4}(\overline{\Omega})\) with

1. \(\|v_{\sigma}^{(u,\boldsymbol{\theta})}-\phi\|_{C^{1}(\overline{\Omega})}\leq\sigma^{2}\) and \(\|v_{\sigma}^{(u,\boldsymbol{\theta})}\|_{C^{4}(\overline{\Omega})}\leq C_{\sigma}\);
2. For any sequence \(\{(u_{k},\boldsymbol{\theta}_{k})\}\subset\overline{\mathcal{K}^{\text{ext}}}\) converging to \((u,\boldsymbol{\theta})\) in \(C^{1,\alpha}(\overline{\mathcal{Q}^{\text{iter}}})\times[0,\theta_{*}]^{2}\), \[v_{\sigma}^{(u_{k},\boldsymbol{\theta}_{k})}\circ\mathfrak{F}_{(u_{k},\boldsymbol{\theta}_{k})}\to v_{\sigma}^{(u,\boldsymbol{\theta})}\circ\mathfrak{F}_{(u,\boldsymbol{\theta})}\qquad\text{in }C^{1,\alpha}(\overline{\mathcal{Q}^{\text{iter}}})\,.\]

The proof of this statement can be found in [2, Lemma 4.26].
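For intuition only, here is a standard way such a family \(v_{\sigma}^{(u,\boldsymbol{\theta})}\) can be produced (a sketch under simplifying assumptions, not the construction of [2, Lemma 4.26]): extend \(\phi\) to a \(C^{1,\alpha}\)-function on \(\mathbb{R}^{2}\) and smooth it by convolution with a standard mollifier \(\omega_{\varrho}\) (our notation) at a scale \(\varrho=\varrho(\sigma)\):

\[v_{\sigma}\coloneqq\phi*\omega_{\varrho}\,,\qquad\|v_{\sigma}-\phi\|_{C^{1}}\leq C\varrho^{\alpha}\,\|\phi\|_{C^{1,\alpha}}\,,\qquad\|v_{\sigma}\|_{C^{4}}\leq C\varrho^{-3}\,\|\phi\|_{C^{1}}\,,\]

so that choosing \(\varrho\) with \(C\varrho^{\alpha}\|\phi\|_{C^{1,\alpha}}\leq\sigma^{2}\) yields property (a) with \(C_{\sigma}\) of order \(\varrho(\sigma)^{-3}\). The continuity property (b) with respect to \((u,\boldsymbol{\theta})\) is the delicate point, and is precisely what the cited lemma establishes.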
For any \(\sigma>0\), fix a family of cut-off functions \(\varsigma_{\sigma}\in C^{\infty}(\mathbb{R})\) satisfying \[\varsigma_{\sigma}(t)\coloneqq\varsigma_{1}\big{(}\frac{t}{\sigma}\big{)}\qquad\text{ with }\;\varsigma_{1}(t)=\begin{cases}1&\text{ for }t<1\,,\\ 0&\text{ for }t>2\,,\end{cases}\quad\text{and }\,0\leq\varsigma_{1}\leq 1\,\text{ on }\mathbb{R}\,. \tag{4.28}\]

For a constant \(\sigma_{\text{cf}}\in(0,1)\) to be specified later, we define the subsonic coefficients: \[A_{ij}^{\text{subs}}(\boldsymbol{p},\boldsymbol{\xi})\coloneqq\varsigma_{\sigma_{\text{cf}}}A_{ij}^{\text{pot}}(\boldsymbol{p},\phi(\boldsymbol{\xi}),\boldsymbol{\xi})+(1-\varsigma_{\sigma_{\text{cf}}})A_{ij}^{\text{pot}}(Dv_{\sigma_{\text{cf}}}^{(u,\boldsymbol{\theta})}(\boldsymbol{\xi}),\phi(\boldsymbol{\xi}),\boldsymbol{\xi})\qquad\text{for }i,j=1,2\,,\] with \(\varsigma_{\sigma_{\text{cf}}}=\varsigma_{\sigma_{\text{cf}}}(|\boldsymbol{p}-Dv_{\sigma_{\text{cf}}}^{(u,\boldsymbol{\theta})}(\boldsymbol{\xi})|)\). In particular, from now on, we choose \(\sigma_{\text{cf}}\coloneqq\sqrt{\delta_{1}}\).

Fix a cut-off function \(\chi_{\text{eq}}\in C^{\infty}(\mathbb{R})\) satisfying \[\chi_{\text{eq}}(\theta)=\begin{cases}1&\text{ for }\theta\leq\theta^{\mathrm{s}}+\frac{\sigma_{3}}{2}\,,\\ 0&\text{ for }\theta\geq\theta^{\mathrm{s}}+\sigma_{3}\,,\end{cases}\qquad\chi_{\text{eq}}^{\prime}(\theta)\leq 0\,\text{ on }\mathbb{R}\,.\]

Then, for \((\boldsymbol{p},\boldsymbol{\xi})\in\mathbb{R}^{2}\times(\Omega\cap\mathcal{D}_{2\epsilon_{\text{eq}}})\), coefficients \(A_{ij}^{(3)}(\boldsymbol{p},\boldsymbol{\xi})\), \(i,j=1,2\), are defined by \[A_{ij}^{(3)}(\boldsymbol{p},\boldsymbol{\xi})\coloneqq\chi_{\text{eq}}(\theta_{l-4})A_{ij}^{(2)}(\boldsymbol{p},\boldsymbol{\xi})+(1-\chi_{\text{eq}}(\theta_{l-4}))A_{ij}^{\text{subs}}(\boldsymbol{p},\boldsymbol{\xi})\qquad\text{for }\boldsymbol{\xi}\in\Omega\cap\mathcal{D}_{2\epsilon_{\text{eq}}}^{l},\,l=5,6\,.\]

**4. Coefficients \(A_{ij}(\boldsymbol{p},\boldsymbol{\xi})\).** We now complete the definition of coefficients \(A_{ij}(\boldsymbol{p},\boldsymbol{\xi})\) for the nonlinear differential operator \(\mathcal{N}_{(u,\boldsymbol{\theta})}\) given by (4.26). Let \(\epsilon_{0}\) be from Lemma 4.8(ii). For each \(\epsilon\in(0,\frac{\epsilon_{0}}{2})\), fix a family of cut-off functions \(\zeta_{2}^{(\epsilon,\boldsymbol{\theta})}\in C^{4}(\overline{Q^{\boldsymbol{\theta}}})\) with the following properties:

1. Function \(\zeta_{2}^{(\epsilon,\cdot)}(\cdot):(\boldsymbol{\xi},\boldsymbol{\theta})\mapsto\zeta_{2}^{(\epsilon,\boldsymbol{\theta})}(\boldsymbol{\xi})\) is continuous on \(\cup_{\boldsymbol{\theta}\in[0,\theta_{*}]^{2}}Q^{\boldsymbol{\theta}}\times\{\boldsymbol{\theta}\}\);
2. There exists a constant \(C_{\epsilon}>0\) depending only on \((\gamma,v_{2},\epsilon)\) such that \(\|\zeta_{2}^{(\epsilon,\boldsymbol{\theta})}\|_{C^{4}(\overline{Q^{\boldsymbol{\theta}}})}\leq C_{\epsilon}\);
3. \(\zeta_{2}^{(\epsilon,\boldsymbol{\theta})}(\boldsymbol{\xi})=\begin{cases}1&\text{ for }\boldsymbol{\xi}\in\Omega\setminus\mathcal{D}_{\epsilon}\,,\\ 0&\text{ for }\boldsymbol{\xi}\in\Omega\cap\mathcal{D}_{\epsilon/2}\,.\end{cases}\)

Such a family of cut-off functions is constructed in [2, Definition 4.28].
Coefficients \(A_{ij}(\boldsymbol{p},\boldsymbol{\xi}),\,i,j=1,2,\) are defined by \[A_{ij}(\boldsymbol{p},\boldsymbol{\xi})\coloneqq\zeta_{2}^{(\epsilon_{\text{eq}},\boldsymbol{\theta})}(\boldsymbol{\xi})A_{ij}^{(1)}(\boldsymbol{\xi})+(1-\zeta_{2}^{(\epsilon_{\text{eq}},\boldsymbol{\theta})}(\boldsymbol{\xi}))A_{ij}^{(3)}(\boldsymbol{p},\boldsymbol{\xi})\qquad\text{for }(\boldsymbol{p},\boldsymbol{\xi})\in\mathbb{R}^{2}\times\Omega\,. \tag{4.29}\]

We have the following lemma concerning the properties of coefficients \(A_{ij}\); see [2, Lemma 4.30].

**Lemma 4.12**.: _There exist positive constants \(\epsilon^{(1)},\delta_{1}^{(1)},\epsilon_{\text{eq}}\in(0,\frac{\epsilon_{0}}{2}),\lambda_{0}\in(0,1),\) and \(N_{\text{eq}}\geq 1,\) with \((\epsilon^{(1)},\delta_{1}^{(1)},\lambda_{0})\) depending only on \((\gamma,v_{2})\) and \((\epsilon_{\text{eq}},N_{\text{eq}})\) depending only on \((\gamma,v_{2},\theta_{*}),\) and a constant \(C>0\) depending only on \((\gamma,v_{2},\theta_{*},\alpha),\) such that, whenever \((\epsilon,\delta_{1})\) from Definition 4.10 are chosen from \((0,\epsilon^{(1)}]\times(0,\delta_{1}^{(1)}],\) for each \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}^{\text{ext}}},\) coefficients \(A_{ij}(\boldsymbol{p},\boldsymbol{\xi})\) of \(\mathcal{N}_{(u,\boldsymbol{\theta})}\) given by (4.29) satisfy the following properties:_

* \(A_{12}(\boldsymbol{p},\boldsymbol{\xi})=A_{21}(\boldsymbol{p},\boldsymbol{\xi})\) _for all_ \((\boldsymbol{p},\boldsymbol{\xi})\in\mathbb{R}^{2}\times\Omega,\) _and_ \(A_{ij}(\boldsymbol{p},\boldsymbol{\xi})=A_{ij}^{(1)}(\boldsymbol{\xi})\) _if_ \(\boldsymbol{\xi}\in\Omega\setminus\mathcal{D}_{\epsilon_{\text{eq}}}\)_. Furthermore,_ \(D_{\boldsymbol{p}}^{m}A_{ij}\in C^{1,\alpha}(\mathbb{R}^{2}\times(\overline{\Omega}\setminus\Gamma_{\text{sonic}}))\) _for_ \(m=0,1,2,\) \[\lambda_{0}\operatorname{dist}(\boldsymbol{\xi},\,\Gamma_{\text{sonic}})|\boldsymbol{\kappa}|^{2}\leq\sum_{i,j=1}^{2}A_{ij}(\boldsymbol{p},\boldsymbol{\xi})\kappa_{i}\kappa_{j}\leq\lambda_{0}^{-1}|\boldsymbol{\kappa}|^{2}\qquad\text{for any }\boldsymbol{\kappa}=(\kappa_{1},\kappa_{2})\in\mathbb{R}^{2}\,,\] _for any_ \((\boldsymbol{p},\boldsymbol{\xi})\in\mathbb{R}^{2}\times\Omega,\) _and_ \[\|A_{ij}\|_{L^{\infty}(\mathbb{R}^{2}\times\Omega)}+\|A_{ij}(\boldsymbol{p},\boldsymbol{\cdot})\|_{C^{3/4}(\overline{\Omega})}+\|D_{\boldsymbol{p}}A_{ij}(\boldsymbol{p},\cdot)\|_{L^{\infty}(\Omega)}\leq N_{\text{eq}}\qquad\text{for any }\boldsymbol{p}\in\mathbb{R}^{2}\,,\] \[\|A_{ij}\|_{C^{1,\alpha}(\mathbb{R}^{2}\times\overline{\Omega\setminus\mathcal{D}_{\epsilon_{\text{eq}}}})}+s^{5}\|D_{\boldsymbol{p}}^{m}A_{ij}\|_{C^{1,\alpha}(\mathbb{R}^{2}\times(\overline{\Omega}\setminus\mathcal{N}_{s}(\Gamma_{\text{sonic}})))}\leq C\qquad\text{for each }s\in(0,\tfrac{\epsilon_{0}}{2})\,.\]
* \(A_{ij}(\boldsymbol{p},\boldsymbol{\xi})=A_{ij}^{(3)}(\boldsymbol{p},\boldsymbol{\xi})\) _for any_ \((\boldsymbol{p},\boldsymbol{\xi})\in\mathbb{R}^{2}\times(\Omega\cap\mathcal{D}_{\epsilon_{\text{eq}}/2}^{l}),\,l=5,6\)_._
_Furthermore, if_ \(\theta_{l-4}\leq\theta^{\text{s}}+\frac{\sigma_{3}}{4},\) _then_ \(A_{ij}(\boldsymbol{p},\boldsymbol{\xi})=A_{ij}^{(2)}(\boldsymbol{p},\boldsymbol{\xi});\) _whereas, if_ \(\theta_{l-4}\in[\theta^{\text{s}}+\delta,\theta_{*}]\) _for some_ \(\delta\in(0,\frac{\sigma_{3}}{2}),\) _then_ \[\lambda_{0}(\operatorname{dist}(\boldsymbol{\xi},\,\Gamma_{\text{sonic}})+\delta)|\boldsymbol{\kappa}|^{2}\leq\sum_{i,j=1}^{2}A_{ij}(\boldsymbol{p},\boldsymbol{\xi})\kappa_{i}\kappa_{j}\leq\lambda_{0}^{-1}|\boldsymbol{\kappa}|^{2}\qquad\text{for any }\boldsymbol{\xi}\in\Omega\cap\mathcal{D}_{\epsilon_{\text{eq}}/2}^{l}\,,\] \[\sup_{\boldsymbol{p}\in\mathbb{R}^{2}}\|D_{\boldsymbol{p}}^{m}A_{ij}(\boldsymbol{p},\cdot)\|_{C^{1,\alpha}(\overline{\Omega\cap\mathcal{D}_{\epsilon_{\text{eq}}/2}^{l}})}\leq C\qquad\text{for }m=0,1,2\,.\]

* _Suppose that_ \(\epsilon\) _from_ Definition 4.10 _satisfies_ \(\epsilon\in(0,\frac{\epsilon_{0}}{2})\)_. Then the equation_ \(\mathcal{N}_{(u,\boldsymbol{\theta})}(\phi)=0\) _coincides with (3.1) in_ \(\Omega\setminus\mathcal{D}_{\epsilon/10}\)_. In addition, for_ \(l=5,6,\) _if_ \(x_{P_{0}^{l-4}}\geq\frac{\epsilon}{10}\) _or_ \(\theta_{l-4}\geq\theta^{\text{s}}+\frac{\sigma_{3}}{2},\) _then the equation_ \(\mathcal{N}_{(u,\boldsymbol{\theta})}(\phi)=0\) _coincides with (3.1) in_ \(\Omega\setminus\mathcal{D}_{\epsilon/10}^{11-l}\)_._

**Construction of \(\mathcal{M}_{(u,\boldsymbol{\theta})}\) in (4.23).** We turn our attention to the definition of the nonlinear operator \(\mathcal{M}_{(u,\boldsymbol{\theta})}(\boldsymbol{p},z,\boldsymbol{\xi})\) in (4.23). The construction is presented in the following four steps that follow closely [2, §4.4.2].

**1.** For \(g^{\text{sh}}\) given by (3.27), set \[\mathcal{M}_{0}(\boldsymbol{p},z,\boldsymbol{\xi})\coloneqq g^{\text{sh}}(\boldsymbol{p}-\boldsymbol{\xi},z-\frac{1}{2}|\boldsymbol{\xi}|^{2},\boldsymbol{\xi})\,,\qquad\mathcal{M}_{1}(\boldsymbol{p},z,\xi)\coloneqq\mathcal{M}_{0}(\boldsymbol{p},z,(\xi,\frac{z}{v_{2}}))\,.\]

For \(j=5,6\), write \(\phi_{j}\coloneqq\varphi_{j}+\frac{1}{2}|\boldsymbol{\xi}|^{2}\), which is independent of the \(\eta\)-coordinate since \(O_{j}\in\{\eta=0\}\). Then \((\xi,\frac{1}{v_{2}}\phi_{j}(\boldsymbol{\xi}))\in\{\varphi_{j}=\varphi_{2}\}\) for any \(\boldsymbol{\xi}\in\mathbb{R}^{2}\), and \(\mathcal{M}_{1}\) is homogeneous in the sense that \[\mathcal{M}_{1}(D\phi_{j}(\boldsymbol{\xi}),\phi_{j}(\boldsymbol{\xi}),\xi)=0\qquad\text{for all }\boldsymbol{\xi}=(\xi,\eta)\in\mathbb{R}^{2}\text{ and }j=5,6\,.\]

Furthermore, observe that \(\boldsymbol{\xi}=(\xi,\eta)\in\Gamma_{\text{shock}}\) if and only if \(\boldsymbol{\xi}\in\overline{\Omega}\) with \(\eta=\frac{1}{v_{2}}\phi(\boldsymbol{\xi})\), and hence \[\begin{cases}\mathcal{M}_{1}(D\phi(\boldsymbol{\xi}),\phi(\boldsymbol{\xi}),\xi)=\mathcal{M}_{0}(D\phi(\boldsymbol{\xi}),\phi(\boldsymbol{\xi}),\boldsymbol{\xi})\,,\\ D_{\boldsymbol{p}}\mathcal{M}_{1}(D\phi(\boldsymbol{\xi}),\phi(\boldsymbol{\xi}),\xi)=D_{\boldsymbol{p}}\mathcal{M}_{0}(D\phi(\boldsymbol{\xi}),\phi(\boldsymbol{\xi}),\boldsymbol{\xi})\,,\end{cases}\qquad\text{for }\boldsymbol{\xi}\in\Gamma_{\text{shock}}\,.
\tag{4.30}\]

**2.** For the cut-off function \(\varsigma_{\sigma}\) given by (4.28) and a constant \(\sigma_{\rm bc}>0\) to be determined later, we set \[\begin{split}\mathcal{M}(\boldsymbol{p},z,\boldsymbol{\xi})\coloneqq&\big{(}\varsigma_{\sigma_{\rm bc}}^{(5)}\varsigma_{\sigma_{\rm bc}}^{(6)}+(1-\varsigma_{\sigma_{\rm bc}}^{(5)})\varsigma_{\sigma_{\rm bc}}^{(6)}+\varsigma_{\sigma_{\rm bc}}^{(5)}(1-\varsigma_{\sigma_{\rm bc}}^{(6)})\big{)}\mathcal{M}_{1}(\boldsymbol{p},z,\boldsymbol{\xi})\\ &+(1-\varsigma_{\sigma_{\rm bc}}^{(5)})(1-\varsigma_{\sigma_{\rm bc}}^{(6)})\mathcal{M}_{0}(\boldsymbol{p},z,\boldsymbol{\xi})\,,\end{split} \tag{4.31}\] where \(\varsigma_{\sigma_{\rm bc}}^{(j)}\coloneqq\varsigma_{\sigma_{\rm bc}}(|(\boldsymbol{p},z)-(D\phi_{j},\phi_{j}(\boldsymbol{\xi}))|)\) for \(j=5,6\). There exist constants \(\sigma_{\rm bc},\delta_{\rm bc},\bar{\epsilon}_{\rm bc}>0\) depending only on \((\gamma,v_{2})\) such that, if \(\epsilon\) from Definition 4.10 satisfies \(\epsilon\in(0,\bar{\epsilon}_{\rm bc})\), then \(\mathcal{M}(\boldsymbol{p},z,\boldsymbol{\xi})\) satisfies that, for all \(\boldsymbol{\xi}\in\Gamma_{\rm shock}\), \[\delta_{\rm bc}\leq D_{\boldsymbol{p}}\mathcal{M}(D\phi(\boldsymbol{\xi}),\phi(\boldsymbol{\xi}),\boldsymbol{\xi})\cdot\boldsymbol{\nu}_{\rm sh}(\boldsymbol{\xi})\leq\delta_{\rm bc}^{-1}\,,\qquad D_{z}\mathcal{M}(D\phi(\boldsymbol{\xi}),\phi(\boldsymbol{\xi}),\boldsymbol{\xi})\leq-\delta_{\rm bc}\,,\] where \(\boldsymbol{\nu}_{\rm sh}\) is the unit normal vector to \(\Gamma_{\rm shock}\) towards the interior of \(\Omega\). These results follow from the calculations in [2, Lemma 4.32]. Indeed, the first follows from a direct calculation by using (4.30), [2, Eq. A.18], the fact that \(\mathcal{M}_{0}\) is \(C^{1}\), and the properties in Definition 4.10(iv)-(v). For the second, direct computation gives \[D_{z}\mathcal{M}_{0}(D\phi(\boldsymbol{\xi}),\phi(\boldsymbol{\xi}),\boldsymbol{\xi})=-\rho^{2-\gamma}D\varphi\cdot\boldsymbol{\nu}_{\rm sh}(\boldsymbol{\xi})<0\qquad\text{for }\boldsymbol{\xi}\in\Gamma_{\rm shock}\,,\] whilst, for \(j=5,6\), and any \(\boldsymbol{\xi}=(\xi,\eta)\in Q^{\boldsymbol{\theta}}\), we have \[D_{z}\mathcal{M}_{1}(D\phi_{j}(\boldsymbol{\xi}),\phi_{j}(\boldsymbol{\xi}),\xi)=-\rho_{j}^{2-\gamma}\operatorname{dist}(O_{j},S_{2j})+(-1)^{j}\frac{\rho_{j}-1}{v_{2}}\cos\theta_{2j}<0\,,\] after which an argument similar to [2, Lemma 4.32] applies. Constant \(\sigma_{\rm bc}>0\) is now fixed.

**3.** For \(j=5,6\), set \[\mathcal{M}^{(j)}(\boldsymbol{q},z,\boldsymbol{\xi})\coloneqq\mathcal{M}(\boldsymbol{q}+D\phi_{j},z+\phi_{j}(\boldsymbol{\xi}),\boldsymbol{\xi})\,. \tag{4.32}\] Then, for the coordinates \((x,y)=\boldsymbol{x}=\mathcal{R}(\boldsymbol{\xi})\) given by (4.1), define \[\hat{\mathcal{M}}^{(j)}(q_{x},q_{y},z,\boldsymbol{x})\coloneqq\mathcal{M}^{(j)}\big{(}(-1)^{j}(q_{x}\cos y+\frac{q_{y}\sin y}{c_{j}-x}),-q_{x}\sin y+\frac{q_{y}\cos y}{c_{j}-x},z,\mathcal{R}^{-1}(\boldsymbol{x})\big{)}\,. \tag{4.33}\]

**4.** Finally, we extend the definition of \(\mathcal{M}\) in (4.31) to all \((\boldsymbol{p},z,\boldsymbol{\xi})\in\mathbb{R}^{2}\times\mathbb{R}\times\overline{\Omega}\). For each \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}^{\rm ext}}\) and a constant \(\sigma>0\), let \(v_{\sigma}^{(u,\boldsymbol{\theta})}\in C^{4}(\overline{\Omega})\) be given as in Step **3** in the construction of \(\mathcal{N}_{(u,\boldsymbol{\theta})}\) above.
We define a linear operator \(\mathcal{L}_{\sigma}^{(u,\mathbf{\theta})}(\mathbf{p},z,\mathbf{\xi})\) by \[\begin{split}\mathcal{L}_{\sigma}^{(u,\mathbf{\theta})}(\mathbf{p},z, \mathbf{\xi})\coloneqq&\,\mathcal{M}(Dv_{\sigma}^{(u,\mathbf{\theta})}( \mathbf{\xi}),v_{\sigma}^{(u,\mathbf{\theta})}(\mathbf{\xi}),\mathbf{\xi})+D_{\mathbf{p}}\mathcal{ M}(Dv_{\sigma}^{(u,\mathbf{\theta})}(\mathbf{\xi}),v_{\sigma}^{(u,\mathbf{\theta})}(\mathbf{\xi}), \mathbf{\xi})\cdot\mathbf{p}\\ &+D_{z}\mathcal{M}(Dv_{\sigma}^{(u,\mathbf{\theta})}(\mathbf{\xi}),v_{ \sigma}^{(u,\mathbf{\theta})}(\mathbf{\xi}),\mathbf{\xi})z\,,\end{split}\] The operator, \(\mathcal{M}_{(u,\mathbf{\theta})}(\mathbf{p},z,\mathbf{\xi})\), in (4.23) is then defined by \[\mathcal{M}_{(u,\mathbf{\theta})}(\mathbf{p},z,\mathbf{\xi})\coloneqq\varsigma_{\sigma} \mathcal{M}(\mathbf{p},z,\mathbf{\xi})+(1-\varsigma_{\sigma})\mathcal{L}_{\sigma}^{(u, \mathbf{\theta})}(\mathbf{p}-Dv_{\sigma}^{(u,\mathbf{\theta})}(\mathbf{\xi}),z-v_{\sigma}^{(u, \mathbf{\theta})}(\mathbf{\xi}),\mathbf{\xi}) \tag{4.34}\] for \(\varsigma_{\sigma}\coloneqq\varsigma_{\sigma}(|(\mathbf{p},z)-(Dv_{\sigma}^{(u,\bm {\theta})}(\mathbf{\xi}),v_{\sigma}^{(u,\mathbf{\theta})}(\mathbf{\xi}))|)\) given by (4.28). We have the following lemma concerning the properties of the nonlinear operator \(\mathcal{M}_{(u,\mathbf{\theta})}(\mathbf{p},z,\mathbf{\xi})\); see [2, Lemma 4.34]. **Lemma 4.13**.: _Let \(\bar{\epsilon}_{\rm bc}\) be the constant from_ Step **2** _above. Then there exist positive constants \(\epsilon_{\rm bc}\in(0,\bar{\epsilon}_{\rm bc}),\delta_{1}^{(2)},N_{1}^{(1)}, \delta_{\rm bc},C,C_{\theta_{*}}\), and \(\epsilon_{\mathcal{M}}\in(0,\epsilon_{\rm bc}]\) with \((\epsilon_{\rm bc},\delta_{1}^{(2)},N_{1}^{(1)},\delta_{\rm bc},C)\) depending only on \((\gamma,v_{2})\), \(\epsilon_{\mathcal{M}}\) depending on \((\gamma,v_{2},\theta_{*})\), and \(C_{\theta_{*}}\) depending on \((\gamma,v_{2},\theta_{*},\alpha)\) such that, whenever parameters \((\epsilon,\delta_{1},N_{1})\) from_ Definition 4.10 _are chosen from \((0,\bar{\epsilon}_{\rm bc}]\times(0,\delta_{1}^{(2)}]\times[N_{1}^{(1)},\infty)\), the following statements hold: For each \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\rm ext}}\), operator \(\mathcal{M}_{(u,\mathbf{\theta})}:\mathbb{R}^{2}\times\mathbb{R}\times\overline{ \Omega}\to\mathbb{R}\) given by (4.34) with \(\sigma=\sigma_{\rm ef}\equiv\sqrt{\delta_{1}}\) satisfies_ 1. 
\(\mathcal{M}_{(u,\mathbf{\theta})}\in C^{3}(\mathbb{R}^{2}\times\mathbb{R}\times \overline{\Omega})\) _and, for all_ \((\mathbf{p},z,\mathbf{\xi})\in\mathbb{R}^{2}\times\mathbb{R}\times\overline{\Omega}\) _and_ \(\mathbf{\xi}_{\rm sh}\in\overline{\Gamma_{\rm shock}},\)__ \[\|(\mathcal{M}_{(u,\mathbf{\theta})}(\mathbf{0},0,\cdot),D^{m}_{(\mathbf{p},z) }\mathcal{M}_{(u,\mathbf{\theta})}(\mathbf{p},z,\cdot))\|_{C^{3}(\overline{\Omega})} \leq C_{\theta_{*}}\qquad\text{for }m=1,2,3\,,\] \[\mathcal{M}_{(u,\mathbf{\theta})}(\mathbf{p},z,\mathbf{\xi})=\mathcal{M}(\bm {p},z,\mathbf{\xi})\qquad\text{for }|(\mathbf{p},z)-(D\phi(\mathbf{\xi}),\phi(\mathbf{\xi}))|<\frac{1}{2} \sqrt{\delta_{1}}\,,\] \[|D_{(\mathbf{p},z)}\mathcal{M}_{(u,\mathbf{\theta})}(\mathbf{p},z,\mathbf{\xi})-D _{(\mathbf{p},z)}\mathcal{M}(D\phi(\mathbf{\xi}),\phi(\mathbf{\xi}),\mathbf{\xi})|\leq C\sqrt{ \delta_{1}}\,,\] \[\delta_{\rm bc}\leq D_{\mathbf{p}}\mathcal{M}_{(u,\mathbf{\theta})}(\mathbf{ p},z,\mathbf{\xi}_{\rm sh})\cdot\mathbf{\nu}_{\rm sh}\leq\frac{1}{\delta_{\rm bc}}, \qquad D_{z}\mathcal{M}_{(u,\mathbf{\theta})}(\mathbf{p},z,\mathbf{\xi}_{\rm sh})\leq- \delta_{\rm bc}\,,\] _where_ \(\mathcal{M}(\mathbf{p},z,\mathbf{\xi})\) _is defined by (_4.31_), and_ \(\mathbf{\nu}_{\rm sh}\) _is the unit normal vector to_ \(\Gamma_{\rm shock}\) _pointing into_ \(\Omega\)_._ 2. _Denote_ \(\mathcal{B}^{(u,\mathbf{\theta})}_{\sigma,\Gamma_{\rm shock}}(\mathbf{p},z,\mathbf{\xi}) \coloneqq\mathcal{L}^{(u,\mathbf{\theta})}_{\sigma}(\mathbf{p}-Dv^{(u,\mathbf{\theta})}_{ \sigma}(\mathbf{\xi}),z-v^{(u,\mathbf{\theta})}_{\sigma}(\mathbf{\xi}),\mathbf{\xi})\) _and set_ \[\mathcal{B}^{(u,\mathbf{\theta})}_{\sigma,\Gamma_{\rm shock}}(\mathbf{p},z,\mathbf{\xi}) \coloneqq(b^{\rm sh}_{1}(\mathbf{\xi}),b^{\rm sh}_{2}(\mathbf{\xi}),b^{\rm sh}_{0}(\bm {\xi}),h^{\rm sh}(\mathbf{\xi}))\cdot(p_{1},p_{2},z,1)\,.\] _Then_ \(\|(b^{\rm sh}_{i},h^{\rm sh})\|_{C^{3}(\overline{\Gamma_{\rm shock}})}\leq C _{\theta_{*}}\) _for_ \(i=0,1,2\)_, and, for all_ \((\mathbf{p},z,\mathbf{\xi})\in\mathbb{R}^{2}\times\mathbb{R}\times\overline{\Omega},\)__ \[\|\big{(}\mathcal{M}_{(u,\mathbf{\theta})}-\mathcal{B}_{\sigma_{\rm cf },\Gamma_{\rm shock}}\big{)}(\mathbf{p},z,\mathbf{\xi})\|\leq C\sqrt{\delta_{1}}\, \Big{|}(\mathbf{p},z)-\big{(}Dv^{(u,\mathbf{\theta})}_{\sigma_{\rm cf}}(\mathbf{\xi}),v^ {(u,\mathbf{\theta})}_{\sigma_{\rm cf}}(\mathbf{\xi})\big{)}\Big{|}\,,\] \[\big{|}D_{(\mathbf{p},z)}\big{(}\mathcal{M}_{(u,\mathbf{\theta})}- \mathcal{B}_{\sigma_{\rm cf},\Gamma_{\rm shock}}\big{)}(\mathbf{p},z,\mathbf{\xi})\big{|} \leq C\sqrt{\delta_{1}}\,.\] 3. \(\mathcal{M}_{(u,\mathbf{\theta})}\) _is homogeneous in the sense that, for_ \(j=5,6,\)__ \[\mathcal{M}_{(u,\mathbf{\theta})}(D\phi_{j}(\mathbf{\xi}),\phi_{j}(\mathbf{\xi}),\mathbf{\xi}) =0\qquad\text{for all }\begin{cases}\mathbf{\xi}\in\Gamma_{\rm shock}\cap\mathcal{D}_{ \epsilon_{\mathcal{M}}}&\text{for any }\mathbf{\theta}\in[0,\theta_{*}]^{2}\,,\\ \mathbf{\xi}\in\Gamma_{\rm shock}&\text{when }\max\{\theta_{1},\theta_{2}\}\in[0, \frac{\delta_{1}}{N_{1}}]\,.\end{cases}\] 4. _For coordinates_ \((x,y)=\mathcal{R}(\mathbf{\xi})\) _given by (_4.1_),_ \(j=5,6,\) _and_ \(\mathbf{\xi}\in\overline{\Gamma_{\rm shock}\cap\mathcal{D}^{j}_{\ell_{\rm bc}}},\) _define_ \(\hat{\mathcal{M}}^{(j)}_{(u,\mathbf{\theta})}(q_{x},q_{y},z,x,y)\) _by (_4.33_) with_ \(\mathcal{M}\) _replaced by_ \(\mathcal{M}_{(u,\mathbf{\theta})}\) _in (_4.32_). 
Then, for_ \(j=5,6,\)__\(\hat{\mathcal{M}}^{(j)}_{(u,\mathbf{\theta})}\) _satisfies the following properties when_ \(\Gamma_{\rm shock}\cap\mathcal{D}^{j}_{\ell_{\rm bc}}\) _is non-empty:_ \[\|\hat{\mathcal{M}}^{(j)}_{(u,\mathbf{\theta})}\|_{C^{3}(\mathbb{R}^{2} \times\mathbb{R}\times\overline{\mathcal{R}(\Gamma_{\rm shock}\cap\mathcal{D}^{j} _{\rm bc})})}\leq C_{\theta_{*}}\,,\] \[\hat{\mathcal{M}}^{(j)}_{(u,\mathbf{\theta})}(\mathbf{q},z,x,y)=\hat{ \mathcal{M}}^{(j)}(\mathbf{q},z,x,y)\qquad\text{in }\mathcal{R}(\Gamma_{\rm shock}\cap\mathcal{D}^{j}_{\ell_{\rm bc}})\,\text{ for any }|(\mathbf{q},z)|\leq\tfrac{\delta_{\rm bc}}{C}\,,\] \[\partial_{a}\hat{\mathcal{M}}^{(j)}_{(u,\mathbf{\theta})}(\mathbf{q},z,x,y) \leq-\delta_{\rm bc}\qquad\text{in }\mathcal{R}(\Gamma_{\rm shock}\cap\mathcal{D}^{j}_{\ell_{\rm M}})\,\text{ for any }(\mathbf{q},z)\in\mathbb{R}^{2}\times\mathbb{R},\] _with_ \(\partial_{a}=\partial_{q_{x}},\)__\(\partial_{q_{y}}\) _or_ \(\partial_{z},\) _provided that_ \(\Gamma_{\rm shock}\cap\mathcal{D}^{j}_{\ell_{\rm M}}\) _is non-empty._ 5. \(\mathcal{M}_{(u,\mathbf{\theta})}(D\phi(\mathbf{\xi}),\phi(\mathbf{\xi}),\mathbf{\xi})=0\) _on_ \(\Gamma_{\rm shock}\) _if and only if_ \(\varphi=\phi-\frac{1}{2}|\mathbf{\xi}|^{2}\) _satisfies the Rankine-Hugoniot jump condition (_3.27_) on_ \(\Gamma_{\rm shock}=\{\varphi=\varphi_{2}\}\)_._ #### 4.2.2. Well-posedness of the boundary value problem (4.23) Since the complete definition of the iteration boundary value problem (4.23) has been given, we now consider its well-posedness. **Lemma 4.14**.: _Fix \(\gamma\geq 1,\)\(v_{2}\in(v_{\rm min},0)\), and \(\theta_{*}\in(0,\theta^{\rm d})\). Let constant \(\sigma_{2}\) be from_ Step **1** _of the proof of_ Proposition 3.14_. Let \(\bar{\alpha}\in(0,\frac{1}{3}]\) and \(\epsilon_{0}>0\) be from_ Proposition 4.6_and_ Lemma 4.8(ii) _respectively, and let \(\alpha\in(0,\frac{\bar{\alpha}}{2}]\) be from_ Definition 4.10_. Then there exist constants \(\epsilon^{(\rm w)}\in(0,\epsilon_{0}],\)\(\delta^{(\rm w)}_{1}\in(0,1),\)\(N^{(\rm w)}_{1}\geq 1,\) and \(\alpha^{*}_{1}\in(0,\bar{\alpha}]\) depending only on \((\gamma,v_{2},\theta_{*})\) such that, whenever parameters \((\epsilon,\delta_{1},N_{1})\) in_ Definition 4.10 _are chosen with_ \(\epsilon\in(0,\epsilon^{(\rm w)}],\)__\(\delta_{1}\in(0,\delta^{(\rm w)}_{1}],\) _and_ \(N_{1}\geq N^{(\rm w)}_{1},\) _the following statements hold:_ Case (i). _If_ \(\max\{\theta_{1},\theta_{2}\}\leq\theta^{\rm s}+\sigma_{2},\) _then the boundary value problem (_4.23_) associated with_ \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\rm ext}}\cap\{\max\{\theta_{1},\theta_{2} \}\leq\theta^{\rm s}+\sigma_{2}\}\) _has a unique solution_ \(\hat{\phi}\in C^{2}(\Omega _for \(\phi_{\mathbf{\theta}}\coloneqq\max\{\varphi_{5},\varphi_{6}\}+\frac{1}{2}|\mathbf{\xi}|^{2}\). Furthermore, for each \(d\in(0,\epsilon_{0}),\) there exists a constant \(C_{d}>0\) depending only on \((\gamma,v_{2},\theta_{*},\alpha,d)\) such that_ \[\|\hat{\phi}\|_{2,\alpha_{1}^{*},\Omega\setminus\mathcal{D}_{d}}\leq C_{d}\,. \tag{4.36}\] Case (ii). 
_For each \(\delta\in(0,\frac{\sigma_{2}}{2}),\) whenever \(\max\{\theta_{1},\theta_{2}\}\in[\theta^{\mathrm{s}}+\delta,\theta_{*}],\) the boundary value problem (4.23) associated with \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}}\cap\{\max\left\{ \theta_{1},\theta_{2}\right\}\geq\theta^{\mathrm{s}}+\delta\}\) has a unique solution \(\hat{\phi}\in C^{2}(\Omega)\cap C^{1}(\overline{\Omega}\setminus(\Gamma^{5}_ {\mathrm{sonic}}\cup\Gamma^{6}_{\mathrm{sonic}}))\cap C^{0}(\overline{\Omega})\). Moreover, there exists a constant \(C^{\prime}>0\) depending only on \((\gamma,v_{2},\theta_{*},\alpha,\delta)\) such that solution \(\hat{\phi}\) satisfies (4.35), while there exists a constant \(C^{\prime}_{d}>0\) depending only on \((\gamma,v_{2},\theta_{*},\alpha,\delta,d)\) such that (4.36) holds._ Proof.: For Case (i), we fix \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}}\cap\{\max\{\theta_{1},\theta_{2}\}\leq\theta^{\mathrm{s}}+\sigma_{2}\}\) and fix a constant \(h>0\). Using \(\mathcal{G}_{1}^{\mathbf{\theta}}\) defined by (4.7), we rewrite the boundary value problem (4.23) associated with \((u,\mathbf{\theta})\) as a new boundary value problem for the unknown function: \[v(\mathbf{x})\coloneqq\hat{\phi}\circ(L_{h}\circ\mathcal{G}_{1}^{\mathbf{\theta}})^{- 1}(\mathbf{x})\qquad\text{for any }\mathbf{x}\in L_{h}\circ\mathcal{G}_{1}^{\mathbf{\theta}}( \Omega(u,\mathbf{\theta}))\,,\] where \(\mathbf{x}\coloneqq L_{h}(s,t^{\prime})=(\frac{s+1}{2}h,t^{\prime})\) and \((s,t^{\prime})=\mathcal{G}_{1}^{\mathbf{\theta}}(\mathbf{\xi})\) for any \(\mathbf{\xi}\in\Omega(u,\mathbf{\theta})\). We write \[\Gamma_{0}\coloneqq L_{h}\circ\mathcal{G}_{1}^{\mathbf{\theta}}(\Gamma^{6}_{ \mathrm{sonic}}),\ \ \Gamma_{1}\coloneqq L_{h}\circ\mathcal{G}_{1}^{\mathbf{\theta}}(\Gamma_{\mathrm{ shock}}(u,\mathbf{\theta})),\ \ \Gamma_{2}\coloneqq L_{h}\circ\mathcal{G}_{1}^{\mathbf{\theta}}(\Gamma_{\mathrm{sonic}}),\ \ \Gamma_{3}\coloneqq L_{h}\circ\mathcal{G}_{1}^{\mathbf{\theta}}(\Gamma_{\mathrm{ sym}})\,.\] Moreover, we take \[g_{\mathrm{so}}(\mathbf{x})\coloneqq(1-\varsigma_{h})\phi_{5}\circ(L_{h}\circ \mathcal{G}_{1}^{\mathbf{\theta}})^{-1}(\mathbf{x})+\varsigma_{h}\phi_{6}\circ(L_{h} \circ\mathcal{G}_{1}^{\mathbf{\theta}})^{-1}(\mathbf{x})\,,\] where \(\varsigma_{h}\coloneqq\varsigma_{1}(\frac{1}{2}+\frac{2}{h}x_{1})\), and the cut-off function \(\varsigma_{1}(\cdot)\in C^{\infty}(\mathbb{R})\) is given by (4.28). Using Lemmas 4.2 and 4.12-4.13, we can choose suitable constants \(\epsilon^{\mathrm{(w)}}\in(0,\epsilon]\), \(\delta_{1}^{\mathrm{(w)}}>0\), and \(N_{1}^{\mathrm{(w)}}\geq 1\) such that the new boundary value problem for \(v\) satisfies the conditions in Proposition B.1 when \(\epsilon\in(0,\epsilon^{\mathrm{(w)}}]\), \(\delta_{1}\in(0,\delta_{1}^{\mathrm{(w)}}]\), and \(N_{1}\geq N_{1}^{\mathrm{(w)}}\). Therefore, Proposition B.1 gives the existence and uniqueness of a solution \(\hat{\phi}\in C(\overline{\Omega})\cap C^{2,\alpha_{1}}(\overline{\Omega} \setminus(\Gamma^{5}_{\mathrm{sonic}}\cup\Gamma^{6}_{\mathrm{sonic}}))\) of the boundary value problem (4.23) satisfying (4.35)-(4.36). For Case (ii), fix any \(\delta\in(0,\frac{\sigma_{2}}{2})\). 
In the subcase: \(\min\{\theta_{1},\theta_{2}\}<\theta^{\mathrm{s}}+\delta\leq\max\{\theta_{1}, \theta_{2}\}\) (without loss of generality, we consider the case: \(\theta_{1}<\theta^{\mathrm{s}}+\delta\leq\theta_{2}\)), one can follow the same analysis as above to check all the conditions for Proposition B.2, from which we obtain the existence of a unique \(\hat{\phi}\) satisfying (4.35)-(4.36) for any \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}}\cap\{\theta_{1}< \theta^{\mathrm{s}}+\delta\leq\theta_{2}\}\). In the other subcase: \(\min\{\theta_{1},\theta_{2}\}\in[\theta^{\mathrm{s}}+\delta,\theta_{*}]\), we can apply Proposition B.3 to prove the same results. For each \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}}\), the corresponding pseudo-subsonic region \(\Omega=\Omega(u,\mathbf{\theta})\) depends continuously on \((u,\mathbf{\theta})\). We rewrite (4.23) as a boundary value problem for \[\hat{u}(s,t)\coloneqq(\hat{\phi}-\frac{1}{2}|\mathbf{\xi}|^{2}-\varphi_{\mathbf{\theta }}^{*})\circ\mathfrak{F}_{(u,\mathbf{\theta})}(s,t)\qquad\text{in }\mathcal{Q}^{\mathrm{iter}}\,, \tag{4.37}\] where \(\varphi_{\mathbf{\theta}}^{*}\) is given by Definition 4.4, and \(\mathfrak{F}=\mathfrak{F}_{(u,\mathbf{\theta})}\) is given by Definition 4.7(ii). We obtain \[\left\{\begin{array}{ll}\sum_{i,j=1}^{2}\mathcal{A}_{ij}^{(u,\mathbf{ \theta})}(D_{(s,t)}\hat{u},s,t)\partial_{i}\partial_{j}\hat{u}+\sum_{i=1}^{2} \mathcal{A}_{i}^{(u,\mathbf{\theta})}(D_{(s,t)}\hat{u},s,t)\partial_{i}\hat{u}=f^{( u,\mathbf{\theta})}&\text{in }\mathcal{Q}^{\mathrm{iter}}\,,\\ \mathcal{M}_{(u,\mathbf{\theta})}(D_{(s,t)}\hat{u},\hat{u},s)=0&\text{on } \partial_{\mathrm{sh}}\mathcal{Q}^{\mathrm{iter}}\,,\\ \hat{u}=0&\text{on }\partial_{\mathrm{so}}\mathcal{Q}^{\mathrm{iter}}\,,\\ \mathscr{B}_{(u,\mathbf{\theta})}^{\mathrm{(w)}}(D_{(s,t)}\hat{u},s)\coloneqq \sum_{i=1}^{2}b_{i}^{\mathrm{(w)}}(s)\partial_{i}\hat{u}=0&\text{on }\partial_{\mathrm{w}}\mathcal{Q}^{\mathrm{iter}}\,,\end{array}\right. \tag{4.38}\] where \((\partial_{1},\partial_{2})\coloneqq(\partial_{s},\partial_{t})\) and \[\partial_{\mathrm{sh}}\mathcal{Q}^{\mathrm{iter}}\coloneqq(-1,1)\times\{1\}\,, \quad\partial_{\mathrm{so}}\mathcal{Q}^{\mathrm{iter}}\coloneqq\{-1,1\} \times(0,1)\,,\quad\partial_{\mathrm{w}}\mathcal{Q}^{\mathrm{iter}} \coloneqq(-1,1)\times\{0\}\,.\] From Lemmas 4.8 and 4.12-4.14, we can obtain the following results; see [2, Lemma 4.36]. **Lemma 4.15**.: _For every \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}}\), let \(\mathcal{A}_{ij}^{(u,\mathbf{\theta})},\mathcal{A}_{i}^{(u,\mathbf{\theta})},f^{(u,\mathbf{ \theta})},\mathscr{M}_{(u,\mathbf{\theta})},\mathscr{B}_{(u,\mathbf{\theta})}^{(w)},\) and \(b_{i}^{(w)}\) be given as in (4.38). Then the following properties hold:_ 1. \(\mathcal{A}_{ij}^{(u,\mathbf{\theta})},\mathcal{A}_{i}^{(u,\mathbf{\theta})}\in C(\mathbb{R} ^{2}\times\mathcal{Q}^{\rm iter})\), \(f^{(u,\mathbf{\theta})}\in C(\mathcal{Q}^{\rm iter})\), \(\mathscr{M}_{(u,\mathbf{\theta})}\in C(\mathbb{R}^{2}\times\mathbb{R}\times \partial_{\rm sh}\mathcal{Q}^{\rm iter}),\) and \(\mathscr{B}_{(u,\mathbf{\theta})}^{(\rm w)}\in C(\mathbb{R}^{2}\times\mathbb{R} \times\partial_{\rm w}\mathcal{Q}^{\rm iter});\) 2. 
_If a sequence_ \(\{(u_{k},\mathbf{\theta}_{k})\}_{k\in\mathbb{N}}\subseteq\overline{\mathcal{K}^{ \rm ext}}\) _converges to_ \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\rm ext}}\) _in_ \(C_{(*)}^{2,\alpha}(\mathcal{Q}^{\rm iter})\times[0,\mathbf{\theta}_{*}]^{2}\)_, then the following sequences converge uniformly_: * \((\mathcal{A}_{ij}^{(u_{k},\mathbf{\theta}_{k})},\mathcal{A}_{i}^{(u_{k},\mathbf{ \theta}_{k})})\to(\mathcal{A}_{ij}^{(u,\mathbf{\theta})},\mathcal{A}_{i}^{(u,\bm {\theta})})\) _on compact subsets of_ \(\mathbb{R}^{2}\times\mathcal{Q}^{\rm iter},\)__ * \(f^{(u_{k},\mathbf{\theta}_{k})}\to f^{(u,\mathbf{\theta})}\) _on compact subsets of_ \(\mathcal{Q}^{\rm iter},\)__ * \(\mathscr{M}_{(u_{k},\mathbf{\theta}_{k})}\to\mathscr{M}_{(u,\mathbf{\theta})}\) _on compact subsets of_ \(\mathbb{R}^{2}\times\mathbb{R}\times\partial_{\rm sh}\mathcal{Q}^{\rm iter},\)__ * \(\mathscr{B}_{(u_{k},\mathbf{\theta}_{k})}^{(\rm w)}\to\mathscr{B}_{(u,\mathbf{\theta})}^ {(\rm w)}\) _on compact subsets of_ \(\mathbb{R}^{2}\times\mathbb{R}\times\partial_{\rm w}\mathcal{Q}^{\rm iter}.\)__ **Corollary 4.16**.: _Let \(\bar{\alpha}\in(0,1)\) be from Proposition 4.6, and let \(\alpha_{1}^{*}\in(0,\bar{\alpha}],\,\epsilon^{(\rm w)},\delta_{1}^{(\rm w)}\), and \(N_{1}^{(\rm w)}\) be from Lemma 4.14. Let \(\epsilon,\delta_{1},\) and \(N_{1}\) from_ Definition 4.10 _satisfy \(\epsilon\in(0,\epsilon^{(\rm w)}],\,\delta_{1}\in(0,\delta_{1}^{(\rm w)}]\), and \(N_{1}\geq N_{1}^{(\rm w)}\)._ 1. _For each_ \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\rm ext}},\,\hat{\phi}\) _solves the boundary value problem (_4.23_) if and only if_ \(\hat{u}\) _given by (_4.37_) solves the boundary value problem (_4.38_). Thus, (_4.38_) has a unique solution_ \(\hat{u}\in C^{2}(\mathcal{Q}^{\rm iter})\cap C^{1}(\overline{\mathcal{Q}^{ \rm iter}}\setminus\overline{\partial_{\rm so}\mathcal{Q}^{\rm iter}})\cap C( \overline{\mathcal{Q}^{\rm iter}})\)_. Furthermore, there exists a constant_ \(C\geq 1\) _depending on_ \((\gamma,v_{2},\theta_{*},\alpha)\) _such that_ \[|\hat{u}(s,t)|\leq C(1-|s|)\qquad\text{in }\mathcal{Q}^{\rm iter}\,.\] _For each_ \(\hat{d}\in(0,\frac{1}{2}),\) _there exists_ \(C_{\hat{d}}>0\) _depending on_ \((\gamma,v_{2},\theta_{*},\hat{d},\alpha)\) _such that_ \[\|\hat{u}\|_{2,\alpha_{1}^{*}\mathcal{Q}^{\rm iter}\cap\{1-|s|>\hat{d}\}}\leq C _{\hat{d}}\,.\] 2. _Let_ \(\{(u_{k},\mathbf{\theta}_{k})\}_{k\in\mathbb{N}}\subseteq\overline{\mathcal{K}^{ \rm ext}}\) _converge to_ \((u,\mathbf{\theta})\in\overline{\mathcal{K}^{\rm ext}}\) _in_ \(C^{1}(\overline{\mathcal{Q}^{\rm iter}})\times[0,\theta_{*}]^{2}\)_, and let_ \(\hat{u}_{k}\) _be the solution of the boundary value problem (_4.38_) associated with_ \((u_{k},\mathbf{\theta}_{k})\)_. Then there exists a unique solution_ \(\hat{u}\in C^{2}(\mathcal{Q}^{\rm iter})\cap C^{1}(\overline{\mathcal{Q}^{\rm iter }}\setminus\overline{\partial_{\rm so}\mathcal{Q}^{\rm iter}})\cap C(\overline{ \mathcal{Q}^{\rm iter}})\) _of the boundary value problem (_4.38_) associated with_ \((u,\mathbf{\theta})\)_. 
Moreover,_ \(\hat{u}_{k}\) _converges to_ \(\hat{u}\) _uniformly in_ \(\overline{\mathcal{Q}^{\rm iter}}\) _and, for any_ \(\alpha^{\prime}\in[0,\alpha_{1}^{*}]\)_,_ * \(\hat{u}_{k}\to\hat{u}\) _in_ \(C^{1,\alpha^{\prime}}(K)\) _for any compact subset_ \(K\subseteq\overline{\mathcal{Q}^{\rm iter}}\setminus\overline{\partial_{\rm so} \mathcal{Q}^{\rm iter}},\)__ * \(\hat{u}_{k}\to\hat{u}\) _in_ \(C^{2,\alpha^{\prime}}(K)\) _for any compact subset_ \(K\subseteq\mathcal{Q}^{\rm iter}.\)__ 3. _If_ \((u,\mathbf{\theta})\in\overline{\mathcal{K}},\,\text{then}\,\,(u,\mathbf{\theta})\) _satisfies_ Definition 4.10(vii) _with nonstrict inequality in (_4.25_)._ **Remark 4.2**.: _For a constant \(M>0,\) define a set \(\mathcal{K}_{M}^{E}\) by_ \[\mathcal{K}_{M}^{E}\coloneqq\left\{(u,\mathbf{\theta})\in C_{(*)}^{2,\alpha}( \mathcal{Q}^{\rm iter})\times[0,\mathbf{\theta}_{*}]^{2}\,:\:\text{\rm{\rm{and}}}\,\, \|u\|_{2,\alpha,\mathcal{Q}^{\rm iter}}^{(*)}\leq M\right\}.\] _Let \(\overline{\mathcal{K}_{M}^{E}}\) be the closure of \(\mathcal{K}_{M}^{E}\) in \(C_{(*)}^{2,\alpha}(\mathcal{Q}^{\rm iter})\times[0,\mathbf{\theta}_{*}]^{2}\). Then_ Lemma 4.15 _and_ Corollary 4.16 _still hold when \(\overline{\mathcal{K}^{\rm ext}}\) is replaced by \(\overline{\mathcal{K}_{M}^{E}}\) for some \(M>0\)._ #### 4.2.3. Properties of the iteration set From the well-posedness of the iteration boundary value problem (4.23), we can obtain the following _a priori_ estimates on the iteration set. Let \(\bar{\alpha}\in(0,\frac{1}{3}]\) be the constant from Proposition 4.6, and let \((\epsilon^{(\rm w)},\delta_{1}^{(\rm w)},N_{1}^{(\rm w)})\) be the constants from Lemma 4.14. **Lemma 4.17** (_A priori_ estimates).: _There exist positive constants \(\alpha^{(\rm ap)}\in(0,\frac{\bar{\alpha}}{2}),\,\epsilon^{(\rm ap)}\in(0,\epsilon^{( \rm w)}],\,\delta_{1}^{(\rm ap)}\in(0,\delta_{1}^{(\rm w)}],\,N_{1}^{(\rm adm )}\geq N_{1}^{(\rm w)},\,\text{and}\,\,\delta_{3}^{(\rm ap)}\,\,\text{with}\,\,( \alpha^{(\rm ap)},\epsilon^{(\rm ap)},\delta_{1}^{(\rm ap)})\) depending only on \((\gamma,v_{2},\theta_{*}),\,N_{1}^{(\rm adm)}\) depending only on \((\gamma,v_{2},\theta_{*},\delta_{1},\delta_{2},N_{1})\) such that, whenever parameters \((\alpha,\epsilon,\delta_{1},\delta_{3},N_{1})\) in_ Definition 4.10 _are chosen according to_ * \((\alpha,\epsilon,\delta_{1})\in(0,\alpha^{(\rm ap)}]\times(0,\epsilon^{(\rm ap)}] \times(0,\delta_{1}^{(\rm ap)}]\,,\)__ * \(N_{1}\in[N_{1}^{(\rm adm)}(\delta_{1}),\infty)\,,\)__ * \(\delta_{3}\in(0,\delta_{3}^{(\rm ap)}(\delta_{1},\delta_{2},N_{1})]\,,\)__ _with parameter_ \(\delta_{2}>0\) _to be determined later in_ SS4.3_, the following statements hold_ _._ 1. _For any admissible solution_ \(\varphi\) _corresponding to parameter_ \(\boldsymbol{\theta}\in\Theta\cap\{\theta_{1},\theta_{2}\leq\theta_{*}\},\) _function_ \(u=u^{(\varphi,\boldsymbol{\theta})}\) _given by (_4.14_) satisfies that_ \((u,\boldsymbol{\theta})\in\mathcal{K}\)_._ 2. 
_There exists a constant_ \(C>0\) _depending only on_ \((\gamma,v_{2},\theta_{*})\) _such that, for each_ \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}},\) _the unique solution_ \(\hat{\phi}\in C^{2}(\Omega)\cap C^{1}(\overline{\Omega}\setminus\Gamma_{ \mathrm{sonic}})\cap C^{0}(\overline{\Omega})\) _of the boundary value problem (_4.23_) associated with_ \((u,\boldsymbol{\theta})\) _satisfies_ (4.39) \[\|\hat{u}\|_{2,2\alpha,\mathcal{Q}^{\mathrm{iter}}}^{(*)}\leq C\,,\] _for_ \(\hat{u}:\overline{\mathcal{Q}^{\mathrm{iter}}}\to\mathbb{R}\) _given by (_4.24_), whenever_ \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}}\) _satisfies_ (4.40) \[\|u^{\sharp}-u\|_{C^{1}(\overline{\mathcal{Q}^{\mathrm{iter}}})}+| \boldsymbol{\theta}^{\sharp}-\boldsymbol{\theta}|\leq\delta^{\sharp}\] _for some_ \((u^{\sharp},\boldsymbol{\theta}^{\sharp})\in\overline{\mathcal{K}}\) _and some sufficiently small positive constant_ \(\delta^{\sharp}\) _depending only on_ \((\gamma,v_{2},\theta_{*},\delta_{2},\delta_{3},u^{\sharp},\boldsymbol{\theta}^{ \sharp})\)_._ _Proof._ We describe the proof in four steps, which follow [2, Corollary 4.40, Lemma 4.42, Lemma 4.44, Corollary 4.45] closely. **1.** The proof of statement (i) above follows similarly to [2, Corollary 4.40]. Indeed, let \(u=u^{(\varphi,\boldsymbol{\theta})}\) be given by (4.14) for any admissible solution \(\varphi\) corresponding to \(\boldsymbol{\theta}\in\Theta\cap[0,\theta_{*}]^{2}\). Then \((u,\boldsymbol{\theta})\) automatically satisfies properties (ii)-(v) of Definition 4.10 through the choices of constants \(\{N_{i}\}_{i=2}^{5},\sigma_{2},\mu_{0},\mu_{1},\tilde{\mu},\rho^{*}(\gamma)\), and \(C_{\mathrm{ub}}\) in Definition 4.10(iii)-(v). Property (i) of Definition 4.10 follows from the choice of \(N_{0}\) in Definition 4.10(i) and the choice of \(N_{1}\) below, in which we use an argument similar to [2, Lemma 4.39]. Indeed, by Lemma 4.15, Corollary 4.16, and Remark 4.2, for any \(\alpha\in(0,\frac{\alpha}{2}]\), we have the following continuity property of admissible solutions: For any sequence \(\{\boldsymbol{\theta}_{k}\}_{k\in\mathbb{N}}\subseteq\Theta\) with \(\boldsymbol{\theta}_{k}\to\mathbf{0}\) as \(k\to\infty\), let \(\varphi^{(\boldsymbol{\theta}_{k})}\) be any admissible solution corresponding to \(\boldsymbol{\theta}_{k}\), and let \(u^{(\boldsymbol{\theta}_{k})}\) be given by (4.14) with \(\varphi=\varphi^{(\boldsymbol{\theta}_{k})}\) and \(\boldsymbol{\theta}=\boldsymbol{\theta}_{k}\). Then there exists a subsequence of \(\{u^{(\boldsymbol{\theta}_{k})}\}_{k\in\mathbb{N}}\) converging in \(C^{2,\alpha}_{(*)}(\mathcal{Q}^{\mathrm{iter}})\) to \(u^{(\mathrm{norm})}=0\). Using this continuity property, for any \(\delta_{1}\in(0,\delta_{1}^{(\mathrm{w})}]\), there exists \(N_{1}^{(\mathrm{adm})}\geq N_{1}^{(\mathrm{w})}\) depending on \((\gamma,v_{2},\theta_{*},\delta_{1})\) such that \(\|u-u^{(\mathrm{norm})}\|_{2,\alpha,\mathcal{Q}^{\mathrm{iter}}}^{(*)}<\frac{ \delta_{1}}{2}\) whenever \(\max\{\theta_{1},\theta_{2}\}\in(0,\frac{2\delta_{1}}{N_{1}^{(\mathrm{adm})}}]\). Therefore, we have shown that \((u,\boldsymbol{\theta})\in\mathcal{K}^{\mathrm{ext}}\). Finally, property (vii) of Definition 4.10 now follows from Corollary 4.16 because \((u,\boldsymbol{\theta})\in\mathcal{K}^{\mathrm{ext}}\). We conclude that \((u,\boldsymbol{\theta})\in\mathcal{K}\). **2.** It remains to prove statement (ii). Fix any \((u^{\sharp},\boldsymbol{\theta}^{\sharp})\in\overline{\mathcal{K}}\). 
We claim that there exist constants \(\epsilon^{(\mathrm{lb})}\in(0,\epsilon^{(\mathrm{w})})\) depending only on \((\gamma,v_{2},\theta_{*})\), \(\delta_{3}^{(\mathrm{ap})}>0\) depending only on \((\gamma,v_{2},\theta_{*},\delta_{1},\delta_{2},N_{1})\), and \(\delta^{\sharp}\) depending only on \((\gamma,v_{2},\theta_{*},\delta_{2},\delta_{3},u^{\sharp},\boldsymbol{\theta}^{ \sharp})\) such that \(\hat{\phi}-(\varphi_{\boldsymbol{\theta}}^{*}+\frac{1}{2}|\boldsymbol{\xi}|^{2 })>0\) in \(\Omega\), whenever \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}}\) satisfies (4.40). The proof of this claim follows from that for [2, Lemma 4.42], which is split into two cases: (i) \(\max\{\theta_{1},\theta_{2}\}\in[\frac{2\delta_{1}}{N_{1}^{2}},\theta_{*}]\), and (ii) \(\max\{\theta_{1},\theta_{2}\}\in[0,\frac{2\delta_{1}}{N_{1}^{2}}]\). In particular, we can choose \(\delta_{3}^{(\mathrm{ap})}=\frac{\delta_{1}\delta_{2}}{2N_{1}^{2}}\) and \(\epsilon^{(\mathrm{lb})}=\bar{k}^{-1}\inf_{\boldsymbol{\theta}\in\overline{ \Theta}}\{\hat{c}_{5},\hat{c}_{6}\}\) for constant \(\bar{k}>1\) that has been fixed after Definition 4.4 such that (4.10) holds. **3.** We obtain the _a priori_ estimates for \(\hat{\phi}\) near \(\Gamma_{\mathrm{sonic}}\). Let coordinates \((x,y)=\mathcal{R}(\boldsymbol{\xi})\) be given by (4.1). For \(j=5,6\), we have the following estimates (with estimates (b)-(c) below only valid when \(\theta_{*}\in(\theta^{\mathrm{s}},\theta^{\mathrm{d}})\)): 1. Whenever \(\theta_{j-4}\in[0,\theta^{\mathrm{s}})\), for each \(\alpha^{\prime}\in(0,1)\), there exist positive constants \(\epsilon_{\mathrm{p}}\in(0,\epsilon_{0}]\) and \(C_{\alpha^{\prime}}\) depending only on \((\gamma,v_{2},\theta_{*},\alpha^{\prime})\) with \[\|(\hat{\phi}-\phi_{j})\circ\mathcal{R}^{-1}\|_{2,\alpha^{\prime},\mathcal{R}( \Omega\cap\mathcal{D}_{\mathrm{rp}}^{j})}^{(2),(\mathrm{par})}\leq C_{\alpha^{ \prime}}\,;\] 2. There exists a constant \(\delta_{\mathrm{p}}\in(0,\theta_{*}-\theta^{\mathrm{s}})\) depending only on \((\gamma,v_{2},\theta_{*})\) such that, whenever \(\theta_{j-4}\in[\theta^{\mathrm{s}},\theta^{\mathrm{s}}+\delta_{\mathrm{p}}]\), for each \(\alpha^{\prime}\in(0,1)\), there exist positive constants \(\epsilon_{\mathrm{p}}\in(0,\epsilon_{0}]\) depending only on \((\gamma,v_{2},\theta_{*})\) and \(C_{\alpha^{\prime}}\) depending only on \((\gamma,v_{2},\theta_{*},\alpha^{\prime})\) so that \[\|\hat{\phi}-\phi_{j}\|_{C^{2,\alpha^{\prime}}(\Omega\cap\mathcal{D}_{\mathrm{rp }}^{j})}\leq C_{\alpha^{\prime}}\,,\qquad D^{m}(\hat{\phi}-\phi_{j})(P_{0}^{j -4})=0\quad\text{for }m=0,1,2\,;\] 3. For \(\delta_{\mathrm{p}}>0\) as above, there exist constants \(\hat{\alpha}\in(0,\frac{1}{3}]\) and \(C>0\) depending only on \((\gamma,v_{2},\theta_{*})\) such that, whenever \(\theta_{j-4}\in[\theta^{s}+\frac{1}{2}\phi_{\mathrm{p}},\theta_{*}]\), \[\|\hat{\phi}-\phi_{j}\|_{2,\hat{\alpha},\hat{\alpha},\hat{\Omega}\cap \mathcal{D}^{j}_{0}}^{(-1-\hat{\alpha}),\{P^{j-4}_{0}\}}\leq C\,,\qquad D^{m} (\hat{\phi}-\phi_{j})(P^{j-4}_{0})=0\quad\text{for }m=0,1\,.\] Using Step **2**, the proof of these _a priori_ estimates follows from that for [2, Lemma 4.44], whereby it is necessary to further reduce constants \(\epsilon\) and \(\delta_{1}\) depending only on \((\gamma,v_{2},\theta_{*})\), and we define \(\alpha^{(\mathrm{ap})}\coloneqq\frac{1}{2}\min\{\alpha^{*}_{1},\hat{\alpha}\}\) with \(\alpha^{*}_{1}\) given in Lemma 4.14. 
In particular, the estimate in (i) is similar to that for Propositions 3.12-3.13, with the free boundary replaced by a fixed boundary. The estimate in (ii) is similar to that for Proposition 3.14, with the free boundary replaced again by a fixed boundary. Finally, the estimate in (iii) is similar to that for Proposition 3.16. **4.** Following the method from [2, Proposition 4.12], we are able to combine the estimates from Step **3** with the interior estimates from Lemma 4.14 to obtain (4.39). Let parameters \((\alpha,\epsilon,\delta_{1},\delta_{3},N_{1})\) in Definition 4.10 be chosen as in Lemma 4.17. It is simple to verify that \(\mathcal{K}^{\mathrm{ext}}\subseteq C^{2,\alpha}_{(*)}(\mathcal{Q}^{\mathrm{ iter}})\times[0,\theta_{*}]^{2}\) is relatively open. Indeed, the argument is similar to that for [10, Lemmas 12.8.1 and 17.5.1] and [2, Lemma 4.41]. Furthermore, following the proofs of [10, Propositions 12.8.2 and 17.5.3] and [2, Lemma 4.46], and applying the _a priori_ estimates from Lemma 4.17, we conclude that the iteration set \(\mathcal{K}\subseteq C^{2,\alpha}_{(*)}(\mathcal{Q}^{\mathrm{iter}})\times[0, \theta_{*}]^{2}\) is relatively open. **Proposition 4.18** (Openness of the iteration set).: _For the choice of parameters \((\alpha,\epsilon,\delta_{1},\delta_{3},N_{1})\) given in Lemma 4.17, the iteration set \(\mathcal{K}\) defined by Definition 4.10 is a relatively open subset of \(C^{2,\alpha}_{(*)}(\mathcal{Q}^{\mathrm{iter}})\times[0,\theta_{*}]^{2}\)._ ### Definition of the iteration map Fix \(\theta_{*}\in(0,\theta^{\mathrm{d}})\). For the iteration set \(\mathcal{K}\) given by Definition 4.10, let parameters \((\alpha,\epsilon,\delta_{1},\delta_{3},N_{1})\) be chosen as in Lemma 4.17. Define \[\mathcal{K}(\boldsymbol{\theta})\coloneqq\left\{u\in C^{2,\alpha}_{(*)}( \mathcal{Q}^{\mathrm{iter}})\,:\,(u,\boldsymbol{\theta})\in\mathcal{K}\right\} \qquad\text{for each }\boldsymbol{\theta}\in[0,\theta_{*}]^{2}\,,\] and similarly for \(\overline{\mathcal{K}}(\boldsymbol{\theta})\). We now define an iteration map \(\mathcal{I}:\overline{\mathcal{K}}\to C^{2,\alpha}_{(*)}(\mathcal{Q}^{ \mathrm{iter}})\) satisfying the following properties: 1. For each \(\boldsymbol{\theta}\in[0,\theta_{*}]^{2}\), there exists \(u\in\mathcal{K}(\boldsymbol{\theta})\) satisfying \(\mathcal{I}(u,\boldsymbol{\theta})=u\); 2. If \(u\) satisfies \(\mathcal{I}(u,\boldsymbol{\theta})=u\), then \(\varphi^{(u,\boldsymbol{\theta})}\) defined in Definition 4.7(iii) is an admissible solution corresponding to \(\boldsymbol{\theta}\). Let \(\hat{\phi}=\hat{\varphi}^{(u,\boldsymbol{\theta})}+\frac{1}{2}|\boldsymbol{ \xi}|^{2}\in C^{2}(\Omega)\cap C^{1}(\overline{\Omega})\) solve the iteration boundary value problem (4.23) associated with \((u,\boldsymbol{\theta})\) in Definition 4.10. Accordingly, from the definition of \(\hat{u}:\overline{\mathcal{Q}^{\mathrm{iter}}}\to\mathbb{R}\) given by (4.24), we have \[\hat{\varphi}^{(u,\boldsymbol{\theta})}(\boldsymbol{\xi})=\hat{u}\circ( \mathfrak{F}_{(u,\boldsymbol{\theta})})^{-1}(\boldsymbol{\xi})+\varphi^{*}_{ \boldsymbol{\theta}}(\boldsymbol{\xi})\,,\] where \(\varphi^{*}_{\boldsymbol{\theta}}(\boldsymbol{\xi})\) is given by Definition 4.4. We also define functions \((w,w_{2},\hat{w})\) by \[(w,w_{2},\hat{w})(s,t^{\prime})\coloneqq(\varphi-\varphi^{*}_{\boldsymbol{ \theta}},\varphi_{2}-\varphi^{*}_{\boldsymbol{\theta}},\hat{\varphi}^{(u, \boldsymbol{\theta})}-\varphi^{*}_{\boldsymbol{\theta}})\circ(\mathcal{G}^{ \boldsymbol{\theta}}_{1})^{-1}(s,t^{\prime})\,. 
\tag{4.41}\] From (4.12) and the implicit function theorem, for any \(\boldsymbol{\theta}\in\overline{\Theta}\), there exists a unique function \(\mathfrak{g}_{2}:[-1,1]\to\overline{\mathbb{R}_{+}}\) satisfying \(w_{2}(s,\mathfrak{g}_{2}(s))=0\) on \([-1,1]\). Then, for \(Q^{\boldsymbol{\theta}}\) defined by Definition 4.1, \[\left\{(s,t^{\prime})\,:\,-1<s<1,\,t^{\prime}=\mathfrak{g}_{2}(s)\right\} \subseteq\mathcal{G}^{\boldsymbol{\theta}}_{1}(Q^{\boldsymbol{\theta}})\,.\] It follows from (4.11), Lemma 4.2, and Definition 4.4 that \(\|\mathfrak{g}_{2}\|_{C^{3}([-1,1])}\leq C\) for some constant \(C>0\) depending only on \((\gamma,v_{2})\). For any \(g\in C^{0,1}([-1,1])\) satisfying \(g(s)>0\) for all \(s\in(-1,1)\), introduce the following sets: \[\begin{split}& R_{g}\coloneqq\left\{(s,t^{\prime})\in\mathbb{R}_{+}^{2} \,:\,-1<s<1,\,0<t^{\prime}<g(s)\right\},\\ &\Sigma_{g}\coloneqq\left\{(s,t^{\prime})\in\mathbb{R}_{+}^{2}\,: \,-1<s<1,\,t^{\prime}=g(s)\right\}.\end{split} \tag{4.42}\] For each \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}}\), \(\mathfrak{g}_{\mathrm{sh}}=\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})}:[-1,1]\to\overline{\mathbb{R}_{+}}\) given by Definition 4.7 is a Lipschitz function on \([-1,1]\) and is positive on \((-1,1)\). Then we can define sets \(R_{\mathfrak{g}_{\mathrm{sh}}}\) and \(\Sigma_{\mathfrak{g}_{\mathrm{sh}}}\) with \(g=\mathfrak{g}_{\mathrm{sh}}\) in (4.42). Note that \(w\) and \(\hat{w}\) are defined on \(R_{\mathfrak{g}_{\mathrm{sh}}}\), while \(w_{2}\) is defined on \(R_{\infty}\). The _regularized distance_ function \(\delta_{\mathfrak{g}_{\mathrm{sh}}}\in C^{\infty}(\overline{R_{\infty}}\setminus \overline{R_{\mathfrak{g}_{\mathrm{sh}}}})\) is given by Lemma B.4. Let \(C_{\mathrm{rd}}>0\) be the constant from Lemma B.4, which depends only on \(\mathrm{Lip}[\mathfrak{g}_{\mathrm{sh}}]\). We define \[\delta_{\mathfrak{g}_{\mathrm{sh}}}^{*}(s,t^{\prime})\coloneqq C_{\mathrm{rd} }^{-1}\delta_{\mathfrak{g}_{\mathrm{sh}}}(s,t^{\prime})\qquad\text{for any }(s,t^{\prime})\in \overline{R_{\infty}}\setminus\overline{R_{\mathfrak{g}_{\mathrm{sh}}}}\,. \tag{4.43}\] Then, for all \((s,t^{\prime})\in R_{(1+\kappa)_{\mathfrak{g}_{\mathrm{sh}}}}\setminus \overline{R_{\mathfrak{g}_{\mathrm{sh}}}}\) and all \(\lambda\in[1,2]\), \[(s,t^{\prime}-\lambda\delta_{\mathfrak{g}_{\mathrm{sh}}}^{*}(s,t^{\prime})) \in\{s\}\times\big{[}\,\frac{\mathfrak{g}_{\mathrm{sh}}(s)}{3},\,\mathfrak{g }_{\mathrm{sh}}(s)-(t^{\prime}-\mathfrak{g}_{\mathrm{sh}}(s))\big{]}\Subset R_ {\mathfrak{g}_{\mathrm{sh}}}\,, \tag{4.44}\] where constant \(\kappa\in(0,\frac{1}{3}]\) depends only on \(\mathrm{Lip}[\mathfrak{g}_{\mathrm{sh}}]\). Let \(\left\{(u_{k},\boldsymbol{\theta}_{k})\right\}_{k\in\mathbb{N}}\subseteq \overline{\mathcal{K}^{\mathrm{ext}}}\) converge to \((u,\boldsymbol{\theta})\) in \(C^{2,\alpha}_{(*)}(\mathcal{Q}^{\mathrm{iter}})\times[0,\theta_{*}]^{2}\). Note that \[\|\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})}\|_{2,\alpha,(-1,1)}^{( -1-\alpha),\{-1,1\}}\leq CN_{0}\,,\] where \(N_{0}>0\) is from Definition 4.10(i), and \(C>0\) depends only on \((\gamma,v_{2},\alpha)\). 
Moreover, for \(\mathfrak{g}_{5}\) and \(\mathfrak{g}_{6}\) defined in Proposition 4.3(ii), \[\frac{\mathrm{d}^{m}}{\mathrm{d}s^{m}}(\mathfrak{g}_{\mathrm{sh}}^{(u, \boldsymbol{\theta})}-\mathfrak{g}_{5})(1)=\frac{\mathrm{d}^{m}}{\mathrm{d}s^ {m}}(\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})}-\mathfrak{g}_{6})(-1 )=0\qquad\text{for }m=0,1\,.\] Using Lemma 4.8(iii) and (v)-(vi), we see that \(\mathfrak{g}_{\mathrm{sh}}^{(u_{k},\boldsymbol{\theta}_{k})}\) converges to \(\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})}\) in \(C^{0,1}([-1,1])\) and \[\lim_{k\to\infty}\|\delta_{\mathfrak{g}_{\mathrm{sh}}^{(u_{k},\boldsymbol{ \theta}_{k})}}-\delta_{\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})}} \|_{C^{m}(K)}=0\] for any compact set \(K\subseteq\overline{R_{\infty}}\setminus\overline{R_{\mathfrak{g}_{\mathrm{sh}}}}\) and \(m\in\mathbb{N}\cup\{0\}\), which follows by Lemma B.4. From [10, Lemma 13.9.2], there exists a function \(\Psi\in C_{\mathrm{c}}^{\infty}(\mathbb{R})\) satisfying \[\operatorname{supp}\Psi\subseteq[1,2]\,,\qquad\int_{-\infty}^{\infty}\lambda^{ m}\Psi(\lambda)\,\mathrm{d}\lambda=1-\operatorname{sgn}(m)\quad\text{for }m=0,1,2\,.\] **Definition 4.19** (Extension operator).: _For each \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}}\), write \(\mathfrak{g}_{\mathrm{sh}}=\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})}\), and let \(\delta_{\mathfrak{g}_{\mathrm{sh}}}^{*}\) be defined by (4.43) satisfying (4.44) with constants \(C_{\mathrm{rd}}>0\) and \(\kappa\in(0,\frac{1}{3}]\) depending only on \((\gamma,v_{2},\theta_{*})\). For any \(v\in C^{0}(\overline{R_{\mathfrak{g}_{\mathrm{sh}}}})\cap C^{2}(R_{\mathfrak{ g}_{\mathrm{sh}}}\cup\Sigma_{\mathfrak{g}_{\mathrm{sh}}}),\) define the extension of \(v(s,t^{\prime})\) by_ \[\mathcal{E}_{\mathfrak{g}_{\mathrm{sh}}}(v)(s,t^{\prime})\coloneqq\left\{ \begin{aligned} & v(s,t^{\prime})&\text{for }(s,t^{\prime})\in \overline{R_{\mathfrak{g}_{\mathrm{sh}}}}\,,\\ &\int_{-\infty}^{\infty}v(s,t^{\prime}-\lambda\delta_{\mathfrak{g }_{\mathrm{sh}}}^{*}(s,t^{\prime}))\Psi(\lambda)\,\mathrm{d}\lambda& \text{for }(s,t^{\prime})\in R_{(1+\kappa)_{\mathfrak{g}_{\mathrm{sh}}}} \setminus\overline{R_{\mathfrak{g}_{\mathrm{sh}}}}\,.\end{aligned}\right.\] For simplicity, we use the following notation: For any \(a,b\in[-1,1]\), denote \[R_{\mathfrak{g}_{\mathrm{sh}}}^{(u,\boldsymbol{\theta})}[a,b]\coloneqq\left\{(s,t^{\prime})\in\mathbb{R}_{+}^{2}\,:\,a<s<b\,,0<t^{\prime}<\mathfrak{g}_{ \mathrm{sh}}^{(u,\boldsymbol{\theta})}(s)\right\}.\] We often use the notation \(R_{\mathfrak{g}_{\mathrm{sh}}}\) to represent \(R_{\mathfrak{g}_{\mathrm{sh}}}^{(u,\boldsymbol{\theta})}\). Combining the above with Lemma 4.8(iii) and the property that the uniform bound of \(\mathrm{Lip}[\mathfrak{g}_{\mathrm{sh}}]\) depends only on \((\gamma,v_{2},\theta_{*})\), we obtain the following proposition, for which the details of the proof can be found in [10, Lemma 13.9.6, and Theorems 13.9.5 and 13.9.8] for each case, respectively. **Proposition 4.20** (Properties of the extension operator \(\mathcal{E}\)).: _Fix \(\alpha\in(0,1)\). For each \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}^{\mathrm{ext}}},\) let the extension operator \(\mathcal{E}_{\mathfrak{g}_{\mathrm{sh}}}:C^{2}(R_{\mathfrak{g}_{\mathrm{sh}}}\cup \Sigma_{\mathfrak{g}_{\mathrm{sh}}})\to C^{2}(R_{(1+\kappa)_{\mathfrak{g}_{ \mathrm{sh}}}})\) be given by_ Definition 4.19_. We introduce the notation for the following three different cases_: 1. 
_Fix any_ \((b_{1},b_{2})\) _with_ \(-1<b_{1}<b_{2}<1,\) _and_ \(C=C_{\mathrm{int}}>0\) _depending only on_ \((\gamma,v_{2},\theta_{*},\alpha)\)_. For any_ \(\alpha^{\prime}\in(0,\alpha),\) _denote the function spaces_:_ \[X\coloneqq C^{2,\alpha}(R_{\mathfrak{g}_{\mathrm{sh}}}^{(u,\boldsymbol{\theta})}[b _{1},b_{2}]),\quad Y\coloneqq C^{2,\alpha}(R_{(1+\kappa)_{\mathfrak{g}_{\mathrm{sh}}} }^{(u,\boldsymbol{\theta})}[b_{1},b_{2}]),\quad Y^{-}\coloneqq C^{2,\alpha^{ \prime}}(R_{(1+\frac{\kappa}{2})_{\mathfrak{g}_{\mathrm{sh}}}}^{(u, \boldsymbol{\theta})}[b_{1},b_{2}])\,;\] 2. _Fix_ \(\sigma>0\) _and_ \(\epsilon\in(0,\frac{1}{4}]\)_. Fix_ \((b_{1},b_{2})=(-1,-1+\epsilon)\) _or_ \((1-\epsilon,1),\) _and_ \(C=C_{\mathrm{par}}>0\) _depending only on_ \((\gamma,v_{2},\theta_{*},\alpha,\sigma)\)_. For any_ \(\alpha^{\prime}\in(0,\alpha)\) _and_ \(\sigma^{\prime}\in(0,\sigma),\) _denote the function spaces_:__ \[X\coloneqq C^{2,\alpha}_{(\sigma),(\mathrm{par})}(R_{\mathfrak{g}_{ \mathrm{sh}}}^{(u,\boldsymbol{\theta})}[b_{1},b_{2}])\,,\qquad Y\coloneqq C^{2, \alpha}_{(\sigma),(\mathrm{par})}(R_{(1+\kappa)_{\mathfrak{g}_{\mathrm{sh}}}}^{(u, \boldsymbol{\theta})}[b_{1},b_{2}])\,,\] \[Y^{-}\coloneqq C^{2,\alpha^{\prime}}_{(\sigma^{\prime}),(\mathrm{par})}(R _{(1+\frac{\kappa}{2})_{\mathfrak{g}_{\mathrm{sh}}}}^{(u,\boldsymbol{\theta})}[b_{1},b_{2}])\,;\] _._ 3. _Fix_ \((b_{1},b_{2})\) _with either_ \((b_{1},b_{2})=(b_{1}^{(1)},b_{2}^{(1)})=(\frac{1}{2},1)\) _or_ \((b_{1},b_{2})=(b_{1}^{(2)},b_{2}^{(2)})=(-1,-\frac{1}{2}),\) _and_ \(C=C_{\rm sub}>0\) _depending only on_ \((\gamma,v_{2},\theta_{*},\alpha)\)_. For any_ \(\alpha^{\prime}\in(0,\alpha),\) _denote the function spaces_:__ \[X\coloneqq C_{(-1-\alpha),\{s=(-1)^{i-1}\}}^{2,\alpha}(R_{\mathfrak{ g}_{\rm sh}}^{(u,\boldsymbol{\theta})}[b_{1}^{(i)},b_{2}^{(i)}])\,,\quad Y \coloneqq C_{(-1-\alpha),\{s=(-1)^{i-1}\}}^{2,\alpha}(R_{(1+\kappa)\mathfrak{ g}_{\rm sh}}^{(u,\boldsymbol{\theta})}[b_{1}^{(i)},b_{2}^{(i)}])\,,\] \[Y^{-}\coloneqq C_{(-1-\alpha^{\prime}),\{s=(-1)^{i-1}\}}^{2, \alpha^{\prime}}(R_{(1+\frac{\kappa}{2})\mathfrak{g}_{\rm sh}}^{(u, \boldsymbol{\theta})}[b_{1}^{(i)},b_{2}^{(i)}])\,.\] _Then the extension operator \(\mathcal{E}_{\mathfrak{g}_{\rm sh}}\) satisfies the following:_ 1. _There exists_ \(C>0\) _for each case above that_ \(\|\mathcal{E}_{\mathfrak{g}_{\rm sh}}(v)\|_{Y}\leq C\|v\|_{X}\) _. Furthermore, for_ Case (c), _if_ \((v,Dv)=(0,\boldsymbol{0})\) _on_ \(\overline{R_{\mathfrak{g}_{\rm sh}}^{(u,\boldsymbol{\theta})}}\cap\{x_{1}=(-1 )^{i-1}\}\) _for_ \(i=1,2,\) _then_ \[(\mathcal{E}_{\mathfrak{g}_{\rm sh}}(v),D\mathcal{E}_{\mathfrak{g}_{\rm sh}} (v))=(0,\boldsymbol{0})\qquad\text{on $\overline{R_{(1+\kappa)\mathfrak{g}_{\rm sh}}^{(u, \boldsymbol{\theta})}}\cap\{x_{1}=(-1)^{i-1}\}$}\,.\] 2. _For each case,_ \(\mathcal{E}_{\mathfrak{g}_{\rm sh}}:X\to Y\) _is linear and continuous._ 3. _Suppose that_ \(\{(u_{k},\boldsymbol{\theta}_{k})\}_{k\in\mathbb{N}}\subseteq\overline{ \mathcal{K}^{\rm ext}}\) _converges to_ \((u,\boldsymbol{\theta})\) _in_ \(C_{(*)}^{2,\tilde{\alpha}}(\mathcal{Q}^{\rm iter})\times[0,\theta_{*}]^{2}\) _for some_ \(\tilde{\alpha}\in(0,1)\)_. Write_ \(X_{k}\) _for the function space_ \(X\) _with_ \((u,\boldsymbol{\theta})\) _replaced by_ \((u_{k},\boldsymbol{\theta}_{k})\)_. 
If sequence_ \(\{v_{k}\}_{k\in\mathbb{N}}\subseteq X_{k}\) _satisfies_ \(\|v_{k}\|_{X_{k}}\leq M\) _for all_ \(k\in\mathbb{N}\) _for some constant_ \(M>0\) _and_ \(\{v_{k}\}_{k\in\mathbb{N}}\) _converges uniformly to_ \(v\) _on any compact set_ \(K\Subset R_{\mathfrak{g}_{\rm sh}}^{(u,\boldsymbol{\theta})}\) _for some_ \(v\in X,\) _then_ \[\mathcal{E}_{\mathfrak{g}_{\rm sh}^{(u_{k},\boldsymbol{\theta}_{k})}}(v_{k}) \to\mathcal{E}_{\mathfrak{g}_{\rm sh}^{(u,\boldsymbol{\theta})}}(v)\qquad \text{in $Y^{-}$}\,,\] _where_ \(\mathcal{E}_{\mathfrak{g}_{\rm sh}^{(u_{k},\boldsymbol{\theta}_{k})}}(v_{k})\) _is well defined on_ \(\overline{R_{(1+\frac{\kappa}{2})\mathfrak{g}_{\rm sh}}^{(u,\boldsymbol{ \theta})}[b_{1},b_{2}]}\) _for large enough_ \(k\)_._ The proof of the following result is similar to [2, Lemma 5.5]. **Lemma 4.21**.: _Let parameters \((\alpha,\epsilon,\delta_{1},\delta_{3},N_{1})\) in Definition 4.10 be chosen as in Lemma 4.17, with \(\delta_{2}>0\) to be specified later. Then there exists a constant \(\delta_{3}^{(\rm imp)}>0\) depending only on \((\gamma,v_{2},\theta_{*},\delta_{2})\) such that, if \(\delta_{3}\in(0,\delta_{3}^{(\rm imp)}),\) then, for each \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}},\) there is a unique function \(\mathfrak{\hat{g}}_{\rm sh}:[-1,1]\to\overline{\mathbb{R}_{+}}\) such that_ \[\big{(}w_{2}-\mathcal{E}_{\mathfrak{g}_{\rm sh}^{(u,\boldsymbol{\theta})}}( \hat{w})\big{)}(s,\mathfrak{\hat{g}}_{\rm sh}(s))=0\qquad\text{for all $s\in[-1,1]$}\,. \tag{4.45}\] _Furthermore, there exists a constant \(C>0\) depending only on \((\gamma,v_{2},\theta_{*})\) such that \(\mathfrak{\hat{g}}_{\rm sh}\) satisfies_ \[\left\{\begin{aligned} &\|\mathfrak{\hat{g}}_{\rm sh}- \mathfrak{g}_{2}\|_{2,2\alpha,(-1,1)}^{(-1-2\alpha),\{-1,1\}}\leq C\,,\qquad \|\mathfrak{\hat{g}}_{\rm sh}-\mathfrak{g}_{\rm sh}^{(u,\boldsymbol{\theta})} \|_{2,\frac{\alpha}{2},(-1,1)}\leq C\delta_{3}\,,\\ &\frac{\mathrm{d}^{m}}{\mathrm{d}s^{m}}(\mathfrak{\hat{g}}_{\rm sh }-\mathfrak{g}_{2})(\pm 1)=\frac{\mathrm{d}^{m}}{\mathrm{d}s^{m}}(\mathfrak{\hat{g}}_{\rm sh }-\mathfrak{g}_{\rm sh}^{(u,\boldsymbol{\theta})})(\pm 1)=0\qquad\text{for $m=0,1$}\,.\end{aligned}\right. \tag{4.46}\] **Definition 4.22** (Iteration map).: _Let parameters \((\alpha,\epsilon,\delta_{1},\delta_{3},N_{1})\) in Definition 4.10 be chosen as in Lemma 4.17, with \(\delta_{2}>0\) to be specified later. Furthermore, let \(\delta_{3}\in(0,\delta_{3}^{(\rm imp)})\) for constant \(\delta_{3}^{(\rm imp)}>0\) from Lemma 4.21. For each \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}},\) let \(\tilde{u}:\overline{\mathcal{Q}^{\rm iter}}\to\mathbb{R}\) be given by_ \[\tilde{u}\coloneqq\mathcal{E}_{\mathfrak{g}_{\rm sh}^{(u,\boldsymbol{\theta})}}( \hat{w})\circ(G_{2,\mathfrak{\hat{g}}_{\rm sh}})^{-1}\,,\] _where \(G_{2,\mathfrak{\hat{g}}_{\rm sh}}\) is given by (4.13) with \(\mathfrak{\hat{g}}_{\rm sh}:[-1,1]\to\mathbb{R}_{+}\) from (4.45). Then define the iteration map \(\mathcal{I}:\overline{\mathcal{K}}\to C_{(*)}^{2,\alpha}(\mathcal{Q}^{\rm iter})\) by_ \[\mathcal{I}(u,\boldsymbol{\theta})=\tilde{u}\,.\] Note that map \(\mathcal{I}:\overline{\mathcal{K}}\to C_{(*)}^{2,\alpha}(\mathcal{Q}^{\rm iter})\) defined as above is reasonable, since \[\|\mathcal{I}(u,\boldsymbol{\theta})\|_{2,2\alpha,\mathcal{Q}^{\rm iter}}^{(*)} \leq C\qquad\text{for all $(u,\boldsymbol{\theta})\in\mathcal{K}$} \tag{4.47}\] for \(C>0\) depending only on \((\gamma,v_{2},\theta_{*})\), which follows from Propositions 4.18 and 4.20, and Lemma 4.21. 
### Proof of Theorem 2.1: Existence of admissible solutions Fix \(\theta_{*}\in(0,\theta^{\mathrm{d}})\). For the iteration map \(\mathcal{I}\) defined in Definition 4.22, we now prove that, for any fixed point \(u\in\overline{\mathcal{K}}(\boldsymbol{\theta})\) of \(\mathcal{I}(\cdot,\boldsymbol{\theta})\) for some \(\boldsymbol{\theta}\in\Theta\cap[0,\theta_{*}]^{2}\), then \(\varphi=\varphi^{(u,\boldsymbol{\theta})}\) defined in Definition 4.7(iii) is an admissible solution corresponding to \(\boldsymbol{\theta}\) in the sense of Definition 2.11. **Proposition 4.23** (Fixed points of map \(\mathcal{I}(\cdot,\boldsymbol{\theta})\)).: _Let parameters \((\alpha,\epsilon,\delta_{1},\delta_{3},N_{1})\) in Definition 4.10 be chosen as in Definition 4.22. Then parameters \((\epsilon,\delta_{1})\) can be chosen even smaller, depending only on \((\gamma,v_{2},\theta_{*})\), such that, for each \(\boldsymbol{\theta}\in\Theta\cap[0,\theta_{*}]^{2}\), \(u\in\mathcal{K}(\boldsymbol{\theta})\) is a fixed point of \(\mathcal{I}(\cdot,\boldsymbol{\theta}):\overline{\mathcal{K}}(\boldsymbol{ \theta})\to C^{2,\alpha}_{(*)}(\mathcal{Q}^{\mathrm{iter}})\) if and only if \(\varphi=\varphi^{(u,\boldsymbol{\theta})}\) defined in Definition 4.7(iii) is an admissible solution corresponding to \(\boldsymbol{\theta}\) in the sense of Definition 2.11 after extending \(\varphi\) to \(\mathbb{R}^{2}_{+}\) via (2.41)._ Proof.: Since the proof is similar to that for [2, Proposition 5.8], we omit the details of the proof here and sketch only the main ideas in the following four steps. **1.** For \(\boldsymbol{\theta}\in\Theta\cap[0,\theta_{*}]^{2}\), let \(\varphi\) be an admissible solution corresponding to \(\boldsymbol{\theta}\), and let \(u\coloneqq u^{(\varphi,\boldsymbol{\theta})}\) be given by (4.14), which satisfies \(u\in\mathcal{K}(\boldsymbol{\theta})\) by Lemma 4.17(i). It follows directly from Definition 4.7(i) and the boundary condition on \(\Gamma_{\mathrm{shock}}\) that \(\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})}=\mathfrak{g}_{\mathrm{ sh}}\). It is clear that \(\hat{\phi}\coloneqq\phi=\varphi+\frac{1}{2}|\boldsymbol{\xi}|^{2}\) solves the iteration boundary value problem (4.23), which indicates that \(\hat{w}=w\) in (4.41). Then \(\hat{\mathfrak{g}}_{\mathrm{sh}}=\mathfrak{g}_{\mathrm{sh}}\) by solving (4.45). We conclude that \[\mathcal{I}(u,\boldsymbol{\theta})=\mathcal{E}_{\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})}}(\hat{w})\circ(G_{2,\hat{\mathfrak{g}}_{\mathrm{sh}}}) ^{-1}=\mathcal{E}_{\mathfrak{g}_{\mathrm{sh}}}(w)\circ(G_{2,\mathfrak{g}_{ \mathrm{sh}}})^{-1}=u\,,\] so \(u\in\mathcal{K}(\boldsymbol{\theta})\) is a fixed point of \(\mathcal{I}(\cdot,\boldsymbol{\theta})\). **2.** On the other hand, for any fixed point \(u\) of map \(\mathcal{I}(\cdot,\boldsymbol{\theta}):\overline{\mathcal{K}}(\boldsymbol{ \theta})\to C^{2,\alpha}_{(*)}(\mathcal{Q}^{\mathrm{iter}})\), let \(\varphi^{(u,\boldsymbol{\theta})}\) be given by Definition 4.7(iii). We now show that \(\varphi^{(u,\boldsymbol{\theta})}\) is an admissible solution in the sense of Definition 2.11 corresponding to \(\boldsymbol{\theta}\). 
Using Definition 4.7(i), (4.45), and Definition 4.22, we have \[w_{2}(s,\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})})=u(s,1)= \mathcal{E}_{\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{\theta})}}(\hat{w})( s,\hat{\mathfrak{g}}_{\mathrm{sh}})=w_{2}(s,\hat{\mathfrak{g}}_{\mathrm{sh}})\,,\] from which we obtain that \(\hat{\mathfrak{g}}_{\mathrm{sh}}=\mathfrak{g}_{\mathrm{sh}}^{(u,\boldsymbol{ \theta})}\), \(\hat{u}=u\), and \(\hat{\varphi}^{(u,\boldsymbol{\theta})}=\varphi^{(u,\boldsymbol{\theta})}\). Define \(v\coloneqq\hat{\varphi}^{(u,\boldsymbol{\theta})}-\varphi_{2}\). Then \(v\) solves the following strictly elliptic equation: \[\mathcal{L}_{(u,\boldsymbol{\theta})}(v)=\sum_{i,j=1}^{2}A_{ij}(D\hat{\phi}( \boldsymbol{\xi}),\boldsymbol{\xi})\partial_{i}\partial_{j}v=0\,.\] From the boundary conditions of the iteration boundary value problem (4.23) and \(\hat{u}=u\) above, we see that \(v\leq 0\) on \(\Gamma_{\mathrm{shock}}\cup\Gamma_{\mathrm{sonic}}\) and \(\partial_{\eta}v=-v_{2}>0\) on \(\Gamma_{\mathrm{sym}}\). From the maximum principle and Hopf's lemma, we obtain that \(\hat{\varphi}^{(u,\boldsymbol{\theta})}\leq\varphi_{2}\) on \(\overline{\Omega}\). When \(\max\{\theta_{1},\theta_{2}\}\in[\frac{2\delta_{1}}{N_{1}^{2}},\theta_{*}]\), we see that \(\hat{\varphi}^{(u,\boldsymbol{\theta})}\geq\max\{\varphi_{5},\varphi_{6}\}\) in \(\Omega\), which follows directly from (4.21) in Definition 4.10 and Step **1** of Lemma 4.17. When \(\max\{\theta_{1},\theta_{2}\}\in(0,\frac{2\delta_{1}}{N_{1}^{2}}]\), we can show the same result by applying the maximum principle as in the proof of [2, Lemma 4.42]. Then inequality (2.42) in Definition 2.11(iv) is proved. Similarly, for inequality (2.43) in Definition 2.11(iv), we also split into two cases. Denote \(w\coloneqq\partial_{\mathfrak{e}_{S_{25}}}(\varphi_{2}-\varphi^{(u, \boldsymbol{\theta})})\). We introduce \((X_{1},X_{2})\)-coordinates by \(\boldsymbol{\xi}\coloneqq X_{1}\boldsymbol{e}_{S_{25}}+X_{2}\boldsymbol{e}_{S_ {25}}^{\perp}\). From the equation for \(v\), one can derive the following strictly elliptic equation for \(w=-\partial_{X_{1}}v\): \[\hat{\mathcal{L}}_{(u,\boldsymbol{\theta})}(w)\coloneqq\sum_{i,j=1}^{2}\hat{A} _{ij}\partial_{X_{i}X_{j}}w+\sum_{i=1}^{2}\hat{A}_{i}\partial_{X_{i}}w=0\qquad \text{ in }\Omega\,.\] When \(\max\{\theta_{1},\theta_{2}\}\in[\frac{2\delta_{1}}{N_{1}^{2}},\theta_{*}]\), it follows from (4.21) in Definition 4.10 that \(w<0\) in \(\overline{\Omega}\setminus\mathcal{D}^{5}_{\epsilon/10}\). In domain \(\Omega\cap\mathcal{D}^{5}_{\epsilon}\), one can check \[\left\{\begin{aligned} & w=0&& \text{ on }\Gamma^{5}_{\mathrm{sonic}}\,,\\ &\boldsymbol{b}_{\mathrm{w}}\cdot Dw=0&&\text{ with } \boldsymbol{b}_{\mathrm{w}}\cdot\boldsymbol{\nu}>0\text{ on }\Gamma_{\mathrm{sym}}\,,\\ &\boldsymbol{b}_{\mathrm{sh}}\cdot Dw=0&&\text{ with } \boldsymbol{b}_{\mathrm{sh}}\cdot\boldsymbol{\nu}>0\text{ on }\Gamma_{\mathrm{ shock}}\,,\end{aligned}\right. \tag{4.48}\] where \(\boldsymbol{\nu}\) is the unit normal vector on \(\Gamma_{\rm sym}\cup\Gamma_{\rm shock}\) pointing to \(\Omega\). Note that we have used [10, Lemma 13.4.5] to derive the boundary condition on \(\Gamma_{\rm shock}\). By the maximum principle and Hopf's lemma, we obtain that \(w\leq 0\) in \(\Omega\cap\mathcal{D}_{\epsilon}^{5}\) when \(\epsilon\in(0,\epsilon_{\rm sh}]\), for some sufficiently small \(\epsilon_{\rm sh}>0\) depending only on \((\gamma,v_{2},\theta_{*})\). 
When \(\max\{\theta_{1},\theta_{2}\}\in(0,\frac{2\delta_{1}}{\Sigma_{1}^{2}}]\), using property (i) of Definition 4.10, one can follow the same procedure as the first case to derive that \(w\leq 0\) in \(\Omega\) when \(\delta_{1}\in(0,\delta_{\rm sh}]\), for some \(\delta_{\rm sh}>0\) depending only on \((\gamma,v_{2},\theta_{*})\). Therefore, \(\hat{\varphi}^{(u,\boldsymbol{\theta})}\) satisfies (2.43) when \(\epsilon\in(0,\epsilon_{\rm sh}]\) and \(\delta_{1}\in(0,\delta_{\rm sh}]\), since the argument above also works for \(\partial_{\epsilon_{S_{26}}}(\varphi_{2}-\varphi^{(u,\boldsymbol{\theta})})\). **3.** Now we check that \(\mathcal{N}_{(u,\boldsymbol{\theta})}(\phi)=0\) in Definition 4.10(vii) coincides with (3.1) in \(\Omega\). From Lemma 4.12(iii), this property holds in \(\Omega\setminus\mathcal{D}_{\epsilon/10}\), so it remains to show that this property holds in \(\Omega\cap\mathcal{D}_{\epsilon/2}\). We only consider domain \(\Omega\cap\mathcal{D}_{\epsilon/2}^{5}\), since the proof is the same for domain \(\Omega\cap\mathcal{D}_{\epsilon/2}^{6}\) due to the symmetry between \(\theta_{1}\) and \(\theta_{2}\) in the problem. For any \(\boldsymbol{\xi}\in\Omega\cap\mathcal{D}_{\epsilon}\), let coordinates \((x,y)=\mathcal{R}(\boldsymbol{\xi})\) be given by (4.1). Using the monotone property (4.20), we choose a constant \(\epsilon^{\prime}_{\rm sh}\in(0,\epsilon_{\rm sh}]\) sufficiently small, depending only on \((\gamma,v_{2},\theta_{*})\), such that \(\theta_{1}\leq\theta^{\rm s}+\min\{\frac{\sigma_{3}}{2},\delta_{\rm p}\}\) whenever \(x_{P_{0}^{1}}<\frac{\epsilon^{\prime}_{\rm sh}}{10}\), for \(\delta_{\rm p}>0\) the constant from Step **3**(b) in the proof of Lemma 4.17. Choose \(\epsilon\in(0,\epsilon^{\prime}_{\rm sh}]\), and suppose \(x_{P_{0}^{1}}<\frac{\epsilon}{10}\). Denote \(\tilde{w}\coloneqq Ax-\partial_{x}\psi(x,y)\) for \(A\coloneqq\frac{1}{1+\gamma}(2-\frac{\mu_{0}}{5})\) and \(\psi=(\varphi^{(u,\boldsymbol{\theta})}-\varphi_{5})\circ\mathcal{R}^{-1}\). It is clear that \(\tilde{w}=0\) on \(\mathcal{R}(\Gamma_{\rm sonic}^{5})\), and \(\tilde{w}_{y}=0\) on \(\mathcal{R}(\Gamma_{\rm sym}\cap\partial\mathcal{D}_{\epsilon/2}^{5})\). From Lemma 4.13(i) and (iii)-(iv), the boundary condition: \(\mathcal{M}_{(u,\boldsymbol{\theta})}(D\hat{\phi},\hat{\phi},\boldsymbol{\xi} )=0\) in (4.23) can be written as \[(b_{1},b_{2},b_{0})\cdot(D\hat{\phi},\hat{\phi})=0\qquad\text{on $\Gamma_{\rm shock }\cap\mathcal{D}_{\epsilon}^{5}$}\,,\] for some \(b_{0},b_{1},b_{2}\in[-\frac{1}{\delta_{\rm bc}},-\delta_{\rm bc}]\). Using the _a priori_ estimates (4.39) in the case: \(\theta_{1}\leq\theta^{\rm s}+\delta_{\rm p}\), we see that \(|\psi_{x}|\leq\frac{1}{\delta_{\rm bc}^{2}}(|\psi_{y}|+|\psi|)\leq Cx^{\frac{ 3}{2}}\) for any \((x,y)\in\mathcal{R}(\Gamma_{\rm shock}\cap\mathcal{D}_{\epsilon}^{5})\). Reducing \(\epsilon^{\prime}_{\rm sh}>0\) further, depending only on \((\gamma,v_{2})\), we obtain that \(\tilde{w}>0\) on \(\mathcal{R}(\Gamma_{\rm shock}\cap\mathcal{D}_{\epsilon}^{5})\) for any \(\epsilon\in(0,\epsilon^{\prime}_{\rm sh}]\). 
By Lemma 4.12(ii) and the fact that \(\hat{\phi}=\phi\), we have \[\sum_{i,j=1}^{2}A_{ij}(D\hat{\phi},\boldsymbol{\xi})\partial_{i}\partial_{j} \hat{\phi}=\sum_{i,j=1}^{2}A_{ij}^{(3)}(D\hat{\phi},\boldsymbol{\xi})\partial_ {i}\partial_{j}\hat{\phi}=c_{\boldsymbol{\theta}}\mathcal{N}_{(u,\boldsymbol{ \theta})}^{\rm polar}(\hat{\phi})\,,\] from where we can derive the following equation for \(\tilde{w}\): \[a_{11}\tilde{w}_{xx}+2a_{12}\tilde{w}_{xy}+a_{22}\tilde{w}_{yy}+a_{1}\tilde{w} _{x}+a_{2}\tilde{w}_{y}=-A((\gamma+1)A-1)+E(x,y)\qquad\text{in $\mathcal{R}(\Omega\cap \mathcal{D}_{\epsilon/2}^{5})$}\,.\] From expression (3.26) and the _a priori_ estimates (4.39) in Lemma 4.17, reducing \(\epsilon^{\prime}_{\rm sh}>0\) further if necessary, we obtain directly that \(E(x,y)<A((\gamma+1)A-1)\) in \(\mathcal{R}(\Omega\cap\mathcal{D}_{\epsilon/2}^{5})\) for any \(\epsilon\in(0,\epsilon^{\prime}_{\rm sh}]\). Therefore, the property: \(\tilde{w}=Ax-\partial_{x}\psi(x,y)\geq 0\) holds in \(\mathcal{R}(\Omega\cap\mathcal{D}_{\epsilon/2}^{5})\), which follows from Lemma 4.12(i), the maximum principle, and Hopf's lemma. Now we show the inequality: \(\partial_{x}\psi(x,y)\geq-Ax\). From Step **2**, we have \[\big{(}\partial_{\epsilon_{S_{25}}}(\varphi_{2}-\varphi)\big{)}\circ\mathcal{R}^ {-1}=\psi_{x}\cos(\theta_{25}-y)-\frac{\sin(\theta_{25}-y)}{c_{5}-x}\psi_{y} \leq 0\qquad\text{in $\mathcal{R}(\Omega\cap\mathcal{D}_{\epsilon/2}^{5})$}\,.\] This implies that \[\psi_{x}\geq-\frac{\tan(\frac{\pi}{2}-\tilde{\omega}_{0})}{c_{5}-x}\psi_{y} \geq-Cx^{\frac{3}{2}}\qquad\text{in $\mathcal{R}(\Omega\cap\mathcal{D}_{\epsilon/2}^{5})$}\,,\] by using property (4.3) and the _a priori_ estimates (4.39) in Lemma 4.17. Then \(\partial_{x}\psi(x,y)\geq-Ax\) after further reducing \(\epsilon^{\prime}_{\rm sh}>0\). Thus, \(|\psi_{x}|\leq Ax\) is proved, which indicates that \(\mathcal{N}_{(u,\boldsymbol{\theta})}(\phi)=0\) in Definition 4.10(vii) coincides with (3.1) in \(\Omega\cap\mathcal{D}_{\epsilon/2}^{5}\). Moreover, if \(x_{P_{0}^{1}}\geq\frac{\epsilon}{10}\), the same conclusion holds in \(\Omega\cap\mathcal{D}_{\epsilon/2}^{5}\subseteq\Omega\setminus\mathcal{D}_{ \epsilon/10}^{6}\) by Lemma 4.12(iii). **4.** Definition 2.11(iii) follows directly from Lemma 4.12(i) and the fact proved in Step **3**. Let the sound speed \(c(|D\varphi|,\varphi)\) be given by (3.2). The strict ellipticity implies that \[M\coloneqq\frac{|\partial_{x}\varphi(\boldsymbol{\xi})|}{c(|D\varphi(\boldsymbol{ \xi})|,\varphi(\boldsymbol{\xi}))}<1\qquad\text{on $\Gamma_{\rm shock}$}\,.\] Define \(M_{2}:=|\partial_{\boldsymbol{\nu}}\varphi_{2}(\boldsymbol{\xi})|\). Similar to [2, Eq. (2.4.9)], we have the following equality: \[(1+\tfrac{\gamma-1}{2}M^{2})M^{-\tfrac{2(\gamma-1)}{\gamma+1}}=(1+\tfrac{\gamma- 1}{2}M_{2}^{2})M_{2}^{-\tfrac{2(\gamma-1)}{\gamma+1}}\,.\] Then \(M_{2}>1\), and so \(|D\varphi_{2}|>1=c(|D\varphi_{2}|,\varphi_{2})\) holds on \(\Gamma_{\text{shock}}\), _i.e._, \(\Gamma_{\text{shock}}\subseteq\mathbb{R}_{+}^{2}\setminus\overline{B_{c_{2}} (O_{2})}\). Finally, for the property: \(\Gamma_{\text{shock}}\subseteq\{\xi^{P_{3}}<\xi<\xi^{P_{2}}\}\), we follow the same proof as in Lemma 3.2 by using all the properties above. Therefore, all parts of Definition 2.11 have been verified, which implies that \(\varphi^{(u,\boldsymbol{\theta})}\) is an admissible solution in the sense of Definition 2.11. **Proof of Theorem 2.1.** We now complete the proof of Theorem 2.1 in five steps. 
**1.** We prove that the iteration map \(\mathcal{I}:\overline{\mathcal{K}}\to C^{2,\alpha}_{(*)}(\mathcal{Q}^{\text{ iter}})\) is a compact map in the sense of Definition B.5. Let \(\{(u_{k},\boldsymbol{\theta}_{k})\}_{k\in\mathbb{N}}\subseteq\mathcal{K}\) converge to \((u,\boldsymbol{\theta})\) in \(C^{2,\alpha}_{(*)}(\mathcal{Q}^{\text{iter}})\times[0,\theta_{*}]^{2}\). For each \(k\in\mathbb{N}\), write \(\Omega^{(k)}\coloneqq\Omega(u_{k},\boldsymbol{\theta}_{k})\) and \(\mathfrak{g}^{(k)}_{\text{sh}}\coloneqq\mathfrak{g}^{(u_{k},\boldsymbol{ \theta}_{k})}_{\text{sh}}\), as given by Definition 4.7, and \(\Gamma^{(k)}_{\text{sonic}}\coloneqq\Gamma^{5}_{\text{sonic}}\cup\Gamma^{6}_{ \text{sonic}}\) for the sonic arcs related to parameters \(\boldsymbol{\theta}_{k}\). By Lemma 4.14, there exists a unique solution \(\hat{\phi}^{(k)}\in C^{2}(\Omega^{(k)})\cap C^{1}(\overline{\Omega^{(k)}} \setminus\Gamma^{(k)}_{\text{sonic}})\cap C^{0}(\overline{\Omega^{(k)}})\) of the iteration boundary value problem (4.23) related to \((u_{k},\boldsymbol{\theta}_{k})\). For any \((s,t^{\prime})\in R_{\mathfrak{g}^{(k)}_{\text{sh}}}\), define \[\hat{w}^{(k)}(s,t^{\prime})\coloneqq\big{(}\hat{\phi}^{(k)}+\frac{1}{2}| \boldsymbol{\xi}|^{2}-\varphi^{*}_{\boldsymbol{\theta}_{k}}\big{)}\circ( \mathcal{G}^{\boldsymbol{\theta}_{k}}_{1})^{-1}(s,t^{\prime})\,,\] where \(R_{\mathfrak{g}^{(k)}_{\text{sh}}}\), \(\varphi^{*}_{\boldsymbol{\theta}_{k}}\), and \(\mathcal{G}^{\boldsymbol{\theta}_{k}}_{1}\) are given by (4.42), Definition 4.4, and (4.7), respectively. Then \[\hat{w}^{(k)}=\hat{u}^{(k)}\circ G_{2,\mathfrak{g}^{(k)}_{\text{sh}}}\] for \(\hat{u}^{(k)}:\overline{\mathcal{Q}^{\text{iter}}}\to\mathbb{R}\) given by (4.24) with \(\hat{\phi}=\hat{\phi}^{(k)}\), and \(G_{2,\mathfrak{g}^{(k)}_{\text{sh}}}\) given by (4.13). Subsequently, we obtain functions \(\hat{\mathfrak{g}}^{(k)}_{\text{sh}}:[-1,1]\to\overline{\mathbb{R}_{+}}\) via Lemma 4.21. Since \((u_{k},\boldsymbol{\theta}_{k})\) converges to \((u,\boldsymbol{\theta})\) in \(C^{1,\alpha}(\mathcal{Q}^{\text{iter}})\times[0,\theta_{*}]^{2}\), we find that \(\mathfrak{g}^{(k)}_{\text{sh}}\to\mathfrak{g}_{\text{sh}}\) in \(C^{1,\alpha}([-1,1])\) by Lemma 4.8(iii). Fix any compact subset \(K\subseteq R_{\mathfrak{g}_{\text{sh}}}\). From (4.13), Definition 4.10(iii), and the convergence of \(\{\mathfrak{g}^{(k)}_{\text{sh}}\}_{k\in\mathbb{N}}\), we see that \(G_{2,\mathfrak{g}^{(k)}_{\text{sh}}}\to G_{2,\mathfrak{g}_{\text{sh}}}\) in \(C^{1,\alpha}(K)\). Then, by Corollary 4.16 and the convergence of \(\{\hat{u}^{(k)}\}_{k\in\mathbb{N}}\), we obtain that \(\hat{w}^{(k)}\to\hat{w}\) in \(C^{1,\alpha}(K)\). By Lemma 4.17, Proposition 4.20(iii), and the local convergence of \(\{\hat{w}^{(k)}\}_{k\in\mathbb{N}}\), for any \(b_{1},b_{2}\in[-1,1]\) chosen as in each case of Proposition 4.20(a)-(c), \(\{\mathcal{E}_{\mathfrak{g}^{(k)}_{\text{sh}}}(\hat{w}^{(k)})\}_{k\in\mathbb{N}}\) converges to \(\mathcal{E}_{\mathfrak{g}_{\text{sh}}}(\hat{w})\) in the corresponding function space. 
Combining the statement above with the convergence of \(\{\hat{\mathfrak{g}}^{(k)}_{\text{sh}}\}_{k\in\mathbb{N}}\) from Lemma 4.21, we conclude that \[\mathcal{E}_{\mathfrak{g}^{(k)}_{\text{sh}}}(\hat{w}^{(k)})\circ(G_{2,\hat{\mathfrak{g}}^{(k)}_{\text{sh}}})^{-1}\to\mathcal{E}_{\mathfrak{g}_{\text{sh}}}(\hat{w})\circ(G_{2,\hat{\mathfrak{g}}_{\text{sh}}})^{-1}\qquad\text{ in }C^{2,\alpha}_{(*)}(\mathcal{Q}^{\text{iter}})\,.\] Since \(C^{2,2\alpha}_{(*)}(\mathcal{Q}^{\text{iter}})\) is compactly embedded into \(C^{2,\alpha}_{(*)}(\mathcal{Q}^{\text{iter}})\), we obtain from (4.47) that \(\mathcal{I}(U)\) is precompact in \(C^{2,\alpha}_{(*)}(\mathcal{Q}^{\text{iter}})\) for any bounded subset \(U\subseteq\overline{\mathcal{K}}\) so that the iteration map \(\mathcal{I}\) is compact. **2.** Next, we show that constants \(N_{1},\delta_{2}>0\) in Definition 4.10 can be chosen, depending only on \((\gamma,v_{2},\theta_{*})\), such that, for any \(\boldsymbol{\theta}\in\Theta\cap[0,\theta_{*}]^{2}\), no fixed point of \(\mathcal{I}(\cdot,\boldsymbol{\theta})\) lies on boundary \(\partial\mathcal{K}(\boldsymbol{\theta})\), where \(\partial\mathcal{K}(\boldsymbol{\theta})\) is considered relative to \(C^{2,\alpha}_{(*)}(\mathcal{Q}^{\text{iter}})\). Suppose that \((u,\boldsymbol{\theta})\in\overline{\mathcal{K}}\) satisfies \(\mathcal{I}(u,\boldsymbol{\theta})=u\). For \(\varphi=\varphi^{(u,\boldsymbol{\theta})}\) defined in \(\Omega=\Omega(u,\boldsymbol{\theta})\) by Definition 4.7(iii), extend \(\varphi\) to \(\mathbb{R}_{+}^{2}\) via (2.41). Then \(\varphi\) is an admissible solution corresponding to parameters \(\boldsymbol{\theta}\in\Theta\cap[0,\theta_{*}]^{2}\) by Proposition 4.23. From (3.17), Propositions 3.4, 3.8, and 4.3(iv), and Lemma 3.3, constants \((N_{2},\tilde{\mu},\rho^{*}(\gamma),C_{\text{ub}})\) are fixed so that any admissible solution satisfies the strict inequalities in Definition 4.10(iii) and (v)-(vi). Moreover, from Lemma 4.17(i), \(u\) satisfies the strict inequality given in Definition 4.10(i) when \(N_{1}\geq N_{1}^{(\text{adm})}\). For condition (iv) of Definition 4.10, constants \((\mu_{0},\mu_{1},N_{4},N_{5},\sigma_{2})\) are fixed such that any admissible solution corresponding to \(\boldsymbol{\theta}\in\Theta\cap[0,\theta_{*}]^{2}\) satisfies the strict inequalities (4.22). Next, we focus on the strict inequalities (4.21). It is direct to see that the strict inequalities (4.21) hold when \(\max\{\theta_{1},\theta_{2}\}<\frac{\delta_{1}}{N_{1}^{2}}\), since \(\mathscr{K}_{2}(\max\{\theta_{1},\theta_{2}\})<0\) and inequality (2.42) in Definition 2.11(iv) and Lemma 3.1(ii) hold for any admissible solution corresponding to \(\boldsymbol{\theta}\in\Theta\cap[0,\theta_{*}]^{2}\). When \(\max\{\theta_{1},\theta_{2}\}\geq\frac{\delta_{1}}{N_{1}^{2}}\), we similarly have \[\varphi-\max\{\varphi_{5},\varphi_{6}\}>0\qquad\text{in}\ \overline{\Omega}\setminus(\mathcal{D}_{\epsilon/10}^{5}\cup\mathcal{D}_{\epsilon/10}^{6})\,,\] \[\partial_{\boldsymbol{e}_{S_{2j}}}(\varphi_{2}-\varphi)<0\qquad\text{in}\ \overline{\Omega}\setminus\mathcal{D}_{\epsilon/10}^{j}\text{ for }j=5,6\,.\] By Corollary 3.10, and Propositions 3.12-3.14 and 3.16, we obtain that \(\varphi\in C^{1,\alpha}(\Omega)\).
Combining this with Lemma 3.17, there exist both \(\tau_{0}>0\) depending only on \((\gamma,v_{2},\theta_{*})\) and \(\tau_{1}>0\) depending only on \((\gamma,v_{2},\theta_{*},\delta_{1},N_{1})\) such that, for all \(\boldsymbol{\theta}\in\Theta\cap\{\frac{\delta_{1}}{N_{1}^{2}}\leq\max\{\theta_{1},\theta_{2}\}\leq\theta_{*}\}\), \[\psi=\varphi-\max\{\varphi_{5},\varphi_{6}\}\geq\tau_{0}\qquad\text{in}\ \overline{\Omega}\setminus(\mathcal{D}_{\epsilon/10}^{5}\cup\mathcal{D}_{\epsilon/10}^{6})\,,\] \[\partial_{\boldsymbol{e}_{S_{2j}}}(\varphi_{2}-\varphi)\leq-\tau_{1}\qquad\text{in}\ \overline{\Omega}\setminus\mathcal{D}_{\epsilon/10}^{j}\text{ for }j=5,6\,.\] For any fixed \(\delta_{1}>0\), we can choose \(\delta_{2}>0\) small enough such that \[\mathscr{K}_{2}(\max\{\theta_{1},\theta_{2}\})\leq\frac{\delta_{1}\delta_{2}}{N_{1}^{2}}<\min\{\tau_{0},\tau_{1}\}\qquad\text{ for all }\boldsymbol{\theta}\in\Theta\cap\{\frac{\delta_{1}}{N_{1}^{2}}\leq\max\{\theta_{1},\theta_{2}\}\leq\theta_{*}\}\,,\] which implies that the strict inequalities (4.21) hold when \(\max\{\theta_{1},\theta_{2}\}\geq\frac{\delta_{1}}{N_{1}^{2}}\). Finally, let \((\alpha,\epsilon,\delta_{1},\delta_{3},N_{1})\) be chosen as in Lemma 4.17 with \(\delta_{2}\) chosen as above. Then the strict inequality (4.25) of Definition 4.10(vii) holds by Proposition 4.18. Therefore, \(u\in\mathcal{K}(\boldsymbol{\theta})\) for any \(\boldsymbol{\theta}\in\Theta\cap[0,\theta_{*}]^{2}\), which completes this step. **3.** Now, we can fix constants \((\alpha,\epsilon,\delta_{1},\delta_{2},\delta_{3},N_{1})\) in Definition 4.10 as follows:

* Choose constants \((\alpha,\epsilon,\delta_{1},\delta_{3},N_{1})\) as in Lemma 4.17 and reduce \((\epsilon,\delta_{1})\) as in Proposition 4.23, depending only on \((\gamma,v_{2},\theta_{*})\);
* Fix \(\delta_{2}\) as in Step **2**, depending only on \((\gamma,v_{2},\theta_{*},\delta_{1},N_{1})\);
* Adjust \(\delta_{3}>0\) as in Definition 4.22, depending only on \((\gamma,v_{2},\theta_{*},\delta_{2})\).

**4.**_Leray-Schauder degree theory._ When \(\boldsymbol{\theta}=\boldsymbol{0}\), \(\varphi_{5}=\varphi_{6}=\varphi_{0}\) with \(\varphi_{0}\) defined in §2.1.3, and the iteration boundary value problem in the \((s,t^{\prime})\)-coordinates is a homogeneous problem. From Lemma 4.14, the iteration boundary value problem (4.23) corresponding to any \((u,\boldsymbol{0})\in\overline{\mathcal{K}}\) has a unique solution \(\hat{\phi}=\frac{1}{2}|\boldsymbol{\xi}|^{2}+\varphi_{\boldsymbol{0}}^{*}=v_{2}\eta_{0}\), where \(\eta_{0}>0\) is defined in §2.1.3. It follows from Definition 4.10 that the unique fixed point \(u\equiv 0\) of \(\mathcal{I}(\cdot,\boldsymbol{0})\) lies in \(\mathcal{K}(\boldsymbol{0})\), and \(\mathbf{Ind}(\mathcal{I}(\cdot,\boldsymbol{0}),\mathcal{K}(\boldsymbol{0}))=1\). From the arguments above, map \(\mathcal{I}(\cdot,\boldsymbol{\theta}):\overline{\mathcal{K}(\boldsymbol{\theta})}\to C^{2,\alpha}_{(*)}(\mathcal{Q}^{\text{iter}})\) belongs to \(V(\mathcal{K}(\boldsymbol{\theta}),C^{2,\alpha}_{(*)}(\mathcal{Q}^{\text{iter}}))\), as defined in Definition B.6 in Appendix B.
From Theorem B.2 and for any \(t\in[0,1]\), we have \[\mathbf{Ind}(\mathcal{I}(\cdot,t\boldsymbol{\theta}),\mathcal{K}(t\boldsymbol{\theta}))=\mathbf{Ind}(\mathcal{I}(\cdot,\boldsymbol{0}),\mathcal{K}(\boldsymbol{0}))\qquad\text{for all }\boldsymbol{\theta}\in[0,\theta_{*}]^{2}\,.\] We obtain the existence of a fixed point of map \(\mathcal{I}(\cdot,\boldsymbol{\theta})\), which is equivalent to the existence of an admissible solution corresponding to \(\boldsymbol{\theta}\in[0,\theta_{*}]^{2}\) in the sense of Definition 2.11 by Proposition 4.23. The arbitrary choice of \(\theta_{*}\in(0,\theta^{\text{d}})\) completes the proof of the existence of admissible solutions for any \(\boldsymbol{\theta}\in[0,\theta^{\text{d}})^{2}\). **5.**_Proof of the continuity at \(\boldsymbol{\theta}=\boldsymbol{0}\)._ Let \(\varphi_{\text{norm}}\) be the admissible solution for \(\boldsymbol{\theta}=\boldsymbol{0}\), which is given by (2.24), and let \(\Omega^{(\boldsymbol{0})}\) be the corresponding pseudo-subsonic domain. For any sequence \(\{\boldsymbol{\theta}_{k}\}_{k\in\mathbb{N}}\subseteq\Theta\) with \(\boldsymbol{\theta}_{k}\to\boldsymbol{0}\) as \(k\to\infty\), let \(\varphi^{(\boldsymbol{\theta}_{k})}\) be an admissible solution corresponding to \(\boldsymbol{\theta}_{k}\), and let \(\Omega^{(k)}\) be the corresponding pseudo-subsonic domain. Let \(G\subseteq\mathbb{R}_{+}^{2}\) be any open set satisfying \(\overline{G}\Subset\mathbb{R}_{+}^{2}\). Choose any compact subset \(K\Subset G\setminus\partial\Omega^{(\boldsymbol{0})}\). Then, by Lemma 3.17(ii), passing to a subsequence (still denoted) \(k\), \(K\cap\Omega^{(\boldsymbol{0})}\Subset\Omega^{(k)}\) and \(K\setminus\Omega^{(\boldsymbol{0})}\Subset G\setminus\Omega^{(k)}\). Using Proposition 4.3(iv), Proposition 4.6, Lemma 4.8(v), and the continuity property of admissible solutions given in Step **1** of the proof of Lemma 4.17, we have \[\|\varphi^{(\boldsymbol{\theta}_{k})}-\varphi_{\text{norm}}\|_{1,\alpha,K\cap\Omega^{(\boldsymbol{0})}}\to 0\qquad\text{ as }k\to\infty\,.\] Similarly, using (2.24), Proposition 2.6(i), and (2.41), we have \[\|\varphi^{(\boldsymbol{\theta}_{k})}-\varphi_{\text{norm}}\|_{W^{1,1}(K\setminus\Omega^{(\boldsymbol{0})})}\to 0\qquad\text{as }k\to\infty\,.\] Finally, by (2.41) and the uniform bound in Lemma 3.3, there exists a constant \(C>0\) depending only on \((\gamma,v_{2},G)\) such that \(\|\varphi^{(\boldsymbol{\theta}_{k})}-\varphi_{\mathrm{norm}}\|_{0,1,G}\leq C\) for all \(k\in\mathbb{N}\), which leads to \[\|\varphi^{(\boldsymbol{\theta}_{k})}-\varphi_{\mathrm{norm}}\|_{W^{1,1}(G)}\leq\|\varphi^{(\boldsymbol{\theta}_{k})}-\varphi_{\mathrm{norm}}\|_{W^{1,1}(K)}+2C\,\mathrm{meas}(G\setminus K)\,.\] This completes the proof of Theorem 2.1.

## 5. Optimal Regularity of Solutions and Convexity of Free Boundaries

With the _a priori_ estimates obtained in §3-§4, we can now give the complete proofs of Theorems 2.2-2.3, respectively.

### Proof of Theorem 2.2: Optimal regularity of solutions

This section is devoted to the complete proof of Theorem 2.2. Let \(\varphi\) be an admissible solution in the sense of Definition 2.11. Proof.: Fix \(\boldsymbol{\theta}\in\Theta\). By the symmetry of the problem, it suffices to prove the statement near \(\Gamma^{5}_{\mathrm{sonic}}\). **1**. _Proof of_ Theorem 2.2(i).
First, it follows from Lemmas 3.6 and 3.9 that \(\Gamma_{\mathrm{shock}}\) is \(C^{\infty}\) in its relative interior and \(\varphi\in C^{\infty}(\overline{\Omega}\setminus(\Gamma^{5}_{\mathrm{sonic}}\cup\Gamma^{6}_{\mathrm{sonic}}))\). By Definition 2.9, \(\Gamma^{5}_{\mathrm{sonic}}\) is a closed arc of a circle when \(\theta_{1}\in[0,\theta^{\mathrm{s}})\), and becomes a point \(P_{1}\) when \(\theta_{1}\in[\theta^{\mathrm{s}},\theta^{\mathrm{d}})\). If \(\theta_{1}\in[0,\theta^{\mathrm{s}})\), it follows from Propositions 3.12-3.13 that \(\varphi\) is \(C^{1,1}\) up to \(\Gamma^{5}_{\mathrm{sonic}}\), whilst, if \(\theta_{1}\in[\theta^{\mathrm{s}},\theta^{\mathrm{d}})\), then Propositions 3.14 and 3.16 imply that \(\varphi\) is \(C^{1,\alpha}\) up to \(\Gamma^{5}_{\mathrm{sonic}}=\{P_{1}\}\) for some \(\alpha\in(0,1)\). Furthermore, from Propositions 3.12-3.14 and 3.16, we see that \(\overline{\Gamma_{\mathrm{shock}}\cup S^{\mathrm{seg}}_{25}}\) is a \(C^{1,\alpha}\)-curve, including at \(P_{2}\), for any \(\theta_{1}\in[0,\theta^{\mathrm{d}})\). It remains to show that \(\overline{\Gamma_{\mathrm{shock}}\cup S^{\mathrm{seg}}_{25}}\) is a \(C^{2,\alpha}\)-curve when \(\theta_{1}\in[0,\theta^{\mathrm{s}})\). For \(\theta_{1}\in[0,\theta^{\mathrm{s}})\), let coordinates \((x,y)=\mathcal{R}(\boldsymbol{\xi})\) be given by (3.24). Let \(\bar{\varepsilon}>0\), \(f_{5,0}\), and \(f_{5,\mathrm{sh}}\) be from Step **1** of the proof of Proposition 3.12. We extend \(f_{5,\mathrm{sh}}\) to \((-\bar{\varepsilon},\bar{\varepsilon})\) by \[f_{5,\mathrm{sh}}(x)\coloneqq f_{5,0}(x)\qquad\text{for }x\in(-\bar{\varepsilon},0]\,, \tag{5.1}\] so that \[(f_{5,\mathrm{sh}}-f_{5,0})(0)=(f_{5,\mathrm{sh}}-f_{5,0})^{\prime}(0)=0\,. \tag{5.2}\] Define \(\psi\coloneqq(\varphi-\varphi_{5})\circ\mathcal{R}^{-1}\) and \(\bar{\psi}_{25}\coloneqq(\varphi_{2}-\varphi_{5})\circ\mathcal{R}^{-1}\). Then \(\bar{\psi}_{25}(x,f_{5,0}(x))=0\), which implies \[\bar{\psi}_{25}(x,f_{5,\mathrm{sh}}(x))-\bar{\psi}_{25}(x,f_{5,0}(x))=\psi(x,f_{5,\mathrm{sh}}(x))\qquad\text{for }x\in(0,\bar{\varepsilon})\,.\] Differentiating the above equality with respect to \(x\) twice, we obtain the following expression: \[(f_{5,\mathrm{sh}}-f_{5,0})^{\prime\prime}(x)=\frac{A_{1}(x)+A_{2}(x)+A_{3}(x)}{\partial_{y}\bar{\psi}_{25}(x,f_{5,\mathrm{sh}}(x))}\,, \tag{5.3}\] where, for \((a_{0},a_{1},a_{2})\coloneqq(1,2,1)\), \[A_{1}(x)\coloneqq\sum_{k=0}^{2}a_{k}\big{(}(f^{\prime}_{5,0}(x))^{k}\partial_{x}^{2-k}\partial_{y}^{k}\bar{\psi}_{25}(x,f_{5,0}(x))-(f^{\prime}_{5,\mathrm{sh}}(x))^{k}\partial_{x}^{2-k}\partial_{y}^{k}\bar{\psi}_{25}(x,f_{5,\mathrm{sh}}(x))\big{)}\,,\] \[A_{2}(x)\coloneqq\big{(}\partial_{y}\bar{\psi}_{25}(x,f_{5,0}(x))-\partial_{y}\bar{\psi}_{25}(x,f_{5,\mathrm{sh}}(x))\big{)}\,f_{5,0}^{\prime\prime}(x)\,,\] \[A_{3}(x)\coloneqq f_{5,\mathrm{sh}}^{\prime\prime}(x)\partial_{y}\psi(x,f_{5,\mathrm{sh}}(x))+\sum_{k=0}^{2}a_{k}(f_{5,\mathrm{sh}}^{\prime}(x))^{k}\partial_{x}^{2-k}\partial_{y}^{k}\psi(x,f_{5,\mathrm{sh}}(x))\,.\] These expressions follow from the second-order chain rule \(\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}g(x,f(x))=\sum_{k=0}^{2}a_{k}(f^{\prime}(x))^{k}\partial_{x}^{2-k}\partial_{y}^{k}g(x,f(x))+f^{\prime\prime}(x)\partial_{y}g(x,f(x))\), applied with \(g=\bar{\psi}_{25}\) for \(f=f_{5,0},f_{5,\mathrm{sh}}\), and with \(g=\psi\) for \(f=f_{5,\mathrm{sh}}\); in particular, \(A_{3}(x)=\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\psi(x,f_{5,\mathrm{sh}}(x))\). By (5.2), \(A_{1}(0)=A_{2}(0)=0\). We differentiate the boundary condition in the tangential direction along \(\Gamma_{\mathrm{shock}}\) and apply the arguments in Step **1** of the proof of Proposition 3.12 to obtain that there exists a constant \(C>0\) such that \[|\psi_{xx}|\leq C|(\psi,\psi_{x},\psi_{y},\psi_{xy},\psi_{yy})|\qquad\text{on }\mathcal{R}(\Gamma_{\mathrm{shock}}\cap\partial\Omega^{5}_{\bar{\varepsilon}})\,,\] where \(\Omega^{5}_{\bar{\varepsilon}}\) is from Step **1** of the proof of Proposition 3.12.
From the above estimate and Proposition 3.12, we obtain that \(\psi_{xx}(x,f_{5,\mathrm{sh}}(x))\to 0\) as \(x\to 0^{+}\), which implies that \(A_{3}(x)\to 0\) as \(x\to 0^{+}\). Moreover, \(\partial_{y}\bar{\psi}_{25}(x,f_{5,\mathrm{sh}}(x))\neq 0\) on \(\mathcal{R}(\Gamma_{\mathrm{shock}}\cap\partial\Omega^{5}_{\bar{\varepsilon}})\). Then we conclude from (5.3) that \((f_{5,\mathrm{sh}}-f_{5,0})^{\prime\prime}(0)=0\). This implies that the extension of \(f_{5,\mathrm{sh}}\) given by (5.1) is in \(C^{2}([-\bar{\varepsilon},\bar{\varepsilon}])\). Furthermore, we conclude from Proposition 3.12 that the extension of \(f_{5,\mathrm{sh}}\) given by (5.1) is in \(C^{2,\alpha}((-\bar{\varepsilon},\bar{\varepsilon}))\) for any \(\alpha\in(0,1)\). This implies that \(\overline{\Gamma_{\mathrm{shock}}\cup S^{\mathrm{seg}}_{25}}\) is \(C^{2,\alpha}\) for any \(\alpha\in(0,1)\), including at point \(P_{2}=\mathcal{R}^{-1}(0,f_{5,\mathrm{sh}}(0))\). This completes the proof of statement (i). **2**. _Proof of_ Theorem 2.2(ii)-(iii). It suffices to consider the case that \(\theta_{1}\in[0,\theta^{\rm s})\). We can apply Theorem B.3 to obtain the regularity of \(\psi\) in the neighbourhood of \(\mathcal{R}(\Gamma^{5}_{\rm sonic})\). Then the admissible solution \(\varphi\) satisfies Theorem 2.2(ii)-(iii). **3**. _Proof of_ Theorem 2.2(iv). By Propositions 3.12-3.13, \(\Gamma_{\rm shock}\cap\partial\Omega^{5}_{\varepsilon}\) can be represented as the graph of \(y=f_{5,{\rm sh}}(x)\) for \(0\leq x\leq\bar{\varepsilon}\). Let \(\{y^{(1)}_{m}\}_{m\in\mathbb{N}}\) be a sequence satisfying that \(0<y^{(1)}_{m}<f_{5,{\rm sh}}(0)\) for each \(m\in\mathbb{N}\), and \(y^{(1)}_{m}\to f_{5,{\rm sh}}(0)\) as \(m\to\infty\). By Theorem 2.2(iii), we can choose a sequence \(\{x^{(1)}_{m}\}_{m\in\mathbb{N}}\) such that \((x^{(1)}_{m},y^{(1)}_{m})\in\mathcal{R}(\Omega^{5}_{\varepsilon})\), \(0<x^{(1)}_{m}<\frac{1}{m}\), and \[\left|\psi_{xx}(x^{(1)}_{m},y^{(1)}_{m})-\frac{1}{\gamma+1}\right|\leq\frac{1}{m}\qquad\text{for each }m\in\mathbb{N}\,.\] It follows from Step **1** of the proof of Proposition 3.12 that \(0<y^{(1)}_{m}<f_{5,{\rm sh}}(0)<f_{5,{\rm sh}}(x^{(1)}_{m})\) for each \(m\in\mathbb{N}\) so that \[\lim_{m\to\infty}(x^{(1)}_{m},y^{(1)}_{m})=(0,f_{5,{\rm sh}}(0))\,,\qquad\lim_{m\to\infty}|\psi_{xx}(x^{(1)}_{m},y^{(1)}_{m})|=\frac{1}{\gamma+1}\,. \tag{5.4}\] On the other hand, there exists a constant \(\varepsilon\in(0,\bar{\varepsilon}]\) such that the boundary condition on \(\Gamma_{\rm shock}\cap\partial\Omega^{5}_{\varepsilon}\) can be written as \(\psi_{x}+b_{1}\psi_{y}+b_{0}\psi=0\) with \((b_{0},b_{1})=(b_{0},b_{1})(\psi_{x},\psi_{y},\psi,x,f_{5,{\rm sh}}(x))\) for any \(x\in(0,\varepsilon)\). Let \(\omega>0\) be the constant from Step **1** of the proof of Proposition 3.12. Then \[\{(x,f_{5,{\rm sh}}(x)-\frac{\omega}{10}x)\,:\,0<x<\varepsilon\}\subseteq\mathcal{R}(\Omega^{5}_{\bar{\varepsilon}})\,.\] Denote \(F(x)\coloneqq\psi_{x}(x,f_{5,{\rm sh}}(x)-\frac{\omega}{10}x)\).
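Before evaluating \(F\) itself, we record the elementary identity behind (5.6) below: by the chain rule, \[F^{\prime}(x)=\psi_{xx}\big{(}x,f_{5,{\rm sh}}(x)-\tfrac{\omega}{10}x\big{)}+\big{(}f^{\prime}_{5,{\rm sh}}(x)-\tfrac{\omega}{10}\big{)}\psi_{xy}\big{(}x,f_{5,{\rm sh}}(x)-\tfrac{\omega}{10}x\big{)}\qquad\text{for }x\in(0,\varepsilon)\,.\]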
By the geometric relation given above, we have \[F(x)=\psi_{x}(x,f_{5,{\rm sh}}(x))-\frac{\omega}{10}x\int_{0}^{1}\psi_{xy}(x,f_{5,{\rm sh}}(x)-\frac{\omega}{10}tx)\,{\rm d}t=-(b_{1}\psi_{y}+b_{0}\psi)(x,f_{5,{\rm sh}}(x))-\frac{\omega}{10}x\int_{0}^{1}\psi_{xy}(x,f_{5,{\rm sh}}(x)-\frac{\omega}{10}tx)\,{\rm d}t\qquad\text{for }x\in(0,\varepsilon)\,.\] Then Proposition 3.12 implies that \(F(0)=0\), \(F\in C([0,\varepsilon])\cap C^{1}((0,\varepsilon))\), and \(\lim\limits_{x\to 0+}\frac{F(x)}{x}=0\), so that, by the mean value theorem, there exists a sequence \(\{x^{(2)}_{m}\}_{m\in\mathbb{N}}\subseteq(0,\varepsilon)\) such that \[\lim_{m\to\infty}x^{(2)}_{m}=0\,,\qquad F^{\prime}(x^{(2)}_{m})=0\,. \tag{5.5}\] For each \(m\in\mathbb{N}\), define \(y^{(2)}_{m}\coloneqq f_{5,{\rm sh}}(x^{(2)}_{m})-\frac{\omega}{10}x^{(2)}_{m}\), so that \(\{(x^{(2)}_{m},y^{(2)}_{m})\}_{m\in\mathbb{N}}\subseteq\mathcal{R}(\Omega^{5}_{\bar{\varepsilon}})\). By the definition of \(F(\cdot)\) and (5.5), we have \[\lim_{m\to\infty}\psi_{xx}(x^{(2)}_{m},y^{(2)}_{m})=\lim_{m\to\infty}F^{\prime}(x^{(2)}_{m})-\lim_{m\to\infty}(f^{\prime}_{5,{\rm sh}}(x^{(2)}_{m})-\frac{\omega}{10})\psi_{xy}(x^{(2)}_{m},y^{(2)}_{m})=-\lim_{m\to\infty}(f^{\prime}_{5,{\rm sh}}(x^{(2)}_{m})-\frac{\omega}{10})\psi_{xy}(x^{(2)}_{m},y^{(2)}_{m})\,. \tag{5.6}\] Since \(\lim_{m\to\infty}(x^{(2)}_{m},y^{(2)}_{m})=(0,f_{5,{\rm sh}}(0))\), we combine (5.6) with Proposition 3.12 to obtain \[\lim_{m\to\infty}\psi_{xx}(x^{(2)}_{m},y^{(2)}_{m})=0\,.\] We have demonstrated that there are two sequences, \(\{(x^{(1)}_{m},y^{(1)}_{m})\}_{m\in\mathbb{N}}\) and \(\{(x^{(2)}_{m},y^{(2)}_{m})\}_{m\in\mathbb{N}}\), taking values in \(\mathcal{R}(\Omega^{5}_{\bar{\varepsilon}})\), such that the limits of both sequences are \((0,f_{5,{\rm sh}}(0))\), but \[\lim_{m\to\infty}\psi_{xx}(x^{(1)}_{m},y^{(1)}_{m})\neq\lim_{m\to\infty}\psi_{xx}(x^{(2)}_{m},y^{(2)}_{m})\,.\] This completes the proof of Theorem 2.2(iv).

### Proof of Theorem 2.3: Convexity of free boundaries and transonic shocks

We now discuss the geometric properties of the transonic shock as a free boundary. In Chen-Feldman-Xiang [12], a general framework was developed under which the self-similar transonic shock, as a free boundary, is proved to be uniformly convex for the potential flow equation; see Framework (A) in Appendix B.5. In this subsection, we apply this framework to prove the uniform convexity of the transonic shock in the four-shock interaction problem, so it suffices to prove that the admissible solutions satisfy the conditions in Theorems B.4-B.5.

**Lemma 5.1**.: _The following statements hold_:

1. _Any admissible solution in the sense of_ Definition 2.11 _satisfies the conditions of_ Theorems B.4-B.5_._
2. _Any global weak solution of the four-shock interaction problem in the sense of_ Definition 2.1 _--satisfying also all properties of_ Definition 2.11 _except (_2.43_) and with transonic shock_ \(\Gamma_{\rm shock}\) _being a strictly convex graph--satisfies condition (_2.43_) of_ Definition 2.11_._

Proof.: We divide the proof into six steps. We start with the proof of assertion (i). **1.**_Claim_: Region \(\Omega\) satisfies the conditions in Framework (A). Indeed, for \(\boldsymbol{\theta}\in[0,\theta^{\rm s})^{2}\), the required piecewise regularity holds, since \(\Gamma_{\rm sym}\) is a straight segment, while \(\Gamma_{\rm sonic}^{5}\) and \(\Gamma_{\rm sonic}^{6}\) are the arcs of the sonic circles, and \(\Gamma_{\rm shock}\) has the regularity stated in Theorem 2.2(i).
For any incident angles \(\boldsymbol{\theta}\in\Theta\), the fact that all angles of the corners of \(\Omega\) are less than \(\pi\) can be verified as follows: If \(\theta_{1}\in[0,\theta^{\rm s})\), since \(\overline{S_{25}^{\rm seg}\cup\Gamma_{\rm shock}}\) is a \(C^{2,\alpha}\)-curve for any \(\alpha\in(0,1)\), including at point \(P_{2}\), it follows that \(\Gamma_{\rm shock}\) is tangential to \(S_{25}^{\rm seg}\) at \(P_{2}\). Meanwhile, for any \(\boldsymbol{\theta}\in\Theta\), \(\partial B_{c_{5}}(O_{5})\) and \(S_{25}\) always have two intersection points including \(P_{2}\). Therefore, the meeting angles at the two corners \(\{P_{1},P_{2}\}\) belong to \([0,\pi)\) when \(\theta_{1}\in(0,\theta^{\rm s})\). If \(\theta_{1}\in[\theta^{\rm s},\theta^{\rm d})\), we see that \(P_{2}=P_{1}=P_{0}^{1}\). From (2.38), the meeting angle at \(P_{0}^{1}\) is \(\pi-\theta_{25}\in(0,\frac{\pi}{2})\), which satisfies \[\pi-\theta_{25}\geq\delta_{5,6}^{(\theta^{\rm s})}>0\,.\] Similarly, the meeting angles at the two corners \(\{P_{3},P_{4}\}\) also belong to \((0,\pi)\) for any \(\theta_{2}\in[0,\theta^{\rm d})\). **2.** The entropy condition (C-1) in Theorem B.4 follows directly from the properties of Definition 2.11, where state \((0)\) in Theorem B.4 is state \((2)\) in our problem. From the regularity of \(\varphi\) and \(\Gamma_{\rm shock}\) in Theorem 2.2(i), we see that conditions (C-2) and (C-4) of Theorem B.4 hold. Property (iii) of Definition 2.11 implies that condition (C-3) of Theorem B.4 holds. **3.** We use the notation for the endpoints of \(\Gamma_{\rm shock}\) from Framework (A), that is, \(A\coloneqq P_{2}\) and \(B\coloneqq P_{3}\), where \(P_{i+1}\equiv P_{0}^{i}\) when \(\theta_{i}\in[\theta^{\rm s},\theta^{\rm d})\) for \(i=1,2\). Then, by the properties of Definition 2.11, we see that \(\boldsymbol{\tau}_{A}=\boldsymbol{e}_{S_{25}}\) and \(\boldsymbol{\tau}_{B}=\boldsymbol{e}_{S_{26}}\), and it is clear that \(\boldsymbol{\tau}_{A}\neq\pm\boldsymbol{\tau}_{B}\) for all \(\boldsymbol{\theta}\neq\boldsymbol{0}\). Accordingly, let \(\mathrm{Con}\) be given by (B.14). Combining property (2.43) with the fact that \(\Gamma_{\rm shock}\) is the level set: \(\{\varphi-\varphi_{2}=0\}\), we obtain that \(\{P+\mathrm{Con}\}\cap\Omega=\varnothing\) for all \(P\in\Gamma_{\rm shock}\). Thus, condition (C-5) of Theorem B.4 is satisfied. **4.** For condition (C-6) of Theorem B.4, we show that Case (C-6c) holds with \(\phi=\varphi-\varphi_{2}\), \(\boldsymbol{e}=(0,1)\in\mathrm{Con}\), \(\Gamma_{1}\coloneqq\Gamma_{\rm sonic}^{6}\cup\Gamma_{\rm sym}\cup\Gamma_{\rm sonic}^{5}\), and \(\Gamma_{2}=\varnothing\). For \(j=5,6\), write \(w\coloneqq\partial_{\boldsymbol{e}}(\varphi-\varphi_{j})\), which satisfies a strictly elliptic equation in \(\Omega\) by taking the derivative of (3.1) in direction \(\boldsymbol{e}\), and using Definition 2.11(iii). By conditions (iii)-(iv) of Problem 2.10, \(w=0\) on \(\Gamma_{\rm sym}\cup\Gamma_{\rm sonic}^{j}\), which is a global maximum by Lemma 3.1(iii). Therefore, if \(P\in\Gamma_{\rm sym}\cup\Gamma_{\rm sonic}^{j}\) is a point of local minimum for \(w\), then \(w=0\) on \(B_{r}(P)\cap\Omega\) for some \(r>0\), and so \(w=0\) in \(\Omega\) by the strong maximum principle, which is a contradiction to Lemma 3.1(i). It follows that \(\phi_{\boldsymbol{e}}\equiv w-v_{2}\) cannot attain a local minimum on \(\Gamma_{1}\).
**5.** We now show that conditions (C-7)-(C-10) are satisfied with \(\hat{\Gamma}_{0}=\Gamma_{\rm sonic}^{5}\setminus\{P_{2}\}\) for \(\theta_{1}\in[0,\theta^{\rm s})\) (resp. \(\hat{\Gamma}_{0}=\varnothing\) for \(\theta_{1}\in[\theta^{\rm s},\theta^{\rm d})\)), \(\hat{\Gamma}_{1}=\Gamma_{\rm sym}^{0}\), \(\hat{\Gamma}_{2}=\varnothing\), and \(\hat{\Gamma}_{3}=\Gamma_{\rm sonic}^{6}\setminus\{P_{3}\}\) for \(\theta_{2}\in[0,\theta^{\rm s})\) (resp. \(\hat{\Gamma}_{3}=\varnothing\) for \(\theta_{2}\in[\theta^{\rm s},\theta^{\rm d})\)). Indeed, (C-7) clearly holds. Also, (C-8) holds since \(D\varphi=D\varphi_{5}\) on \(\Gamma_{\rm sonic}^{5}\) for \(\theta_{1}\in[0,\theta^{\rm s})\) so that \(\phi_{\boldsymbol{e}}=\partial_{\boldsymbol{e}}(\varphi_{5}-\varphi_{2})\) is a constant. Similarly, \(D\varphi=D\varphi_{6}\) on \(\Gamma_{\rm sonic}^{6}\) for \(\theta_{2}\in[0,\theta^{\rm s})\). Condition (C-9) on \(\hat{\Gamma}_{1}=\Gamma_{\rm sym}^{0}\) can be checked as follows: If \(\boldsymbol{e}\neq(0,\pm 1)\), then, as in Step **6** of the proof of [12, Lemma 7.8], \(\phi_{\mathbf{e}}\) cannot attain its local minima or maxima on \(\Gamma^{0}_{\rm sym}\). In the other case, when \(\mathbf{e}=(0,\pm 1)\), the slip boundary condition \(\partial_{\eta}\phi=0\) on \(\Gamma^{0}_{\rm sym}\) implies that \(\phi_{\mathbf{e}}\) is constant on \(\Gamma^{0}_{\rm sym}\), which verifies (C-9). Case (C-10a) of (C-10) clearly holds here since \(\hat{\Gamma}_{2}=\varnothing\). This concludes the proof of assertion (i). **6.** It remains to prove assertion (ii): Any solution satisfying all the properties of Definition 2.11 except (2.43), and with transonic shock \(\Gamma_{\rm shock}\) being a strictly convex graph in the sense of (B.15)-(B.16), satisfies condition (2.43) of Definition 2.11. We apply (B.15) in the present case with \((A,B)=(P_{2},P_{3})\) and \(\mathbf{e}=(0,1)\) such that \(\mathbf{\xi}(S,T)=(-T,S)\). Then, using the properties of Definition 2.11(i), \[\mathbf{e}_{S_{25}}=\frac{(-1,f^{\prime}(T_{A}))}{|(-1,f^{\prime}(T_{A}))|}\,,\qquad\mathbf{e}_{S_{26}}=-\frac{(-1,f^{\prime}(T_{B}))}{|(-1,f^{\prime}(T_{B}))|}\,.\] Also, from the strict concavity of \(f\) in the sense of (B.16), we obtain that \(f^{\prime}(T_{A})>f^{\prime}(T)>f^{\prime}(T_{B})\) and \(f(T)<f(T_{1})+f^{\prime}(T_{1})(T-T_{1})\) for all \(T,\,T_{1}\in(T_{A},T_{B})\). From this, we see that \(\{P+\text{\rm Con}\}\cap\Omega=\varnothing\) for any \(P\in\Gamma_{\rm shock}\). Since \(\varphi\leq\varphi_{2}\) in \(\Omega\) from Definition 2.11 and \(\varphi=\varphi_{2}\) on \(\Gamma_{\rm shock}\), we obtain that \(\partial_{\mathbf{e}}\varphi\geq\partial_{\mathbf{e}}\varphi_{2}\) for any \(\mathbf{e}\in\text{\rm Con}\), which implies (2.43).

## Appendix A Proof of Lemma 2.5 and Related Properties of Solutions

### Proof of Lemma 2.5: Monotonicity of critical angles for 2-D steady potential flow

The proof of Lemma 2.5 is given in four steps. **1.** The shock polar for 2-D steady potential flow connecting a constant supersonic upstream state \(U_{\infty}=(\rho_{\infty},u_{\infty},0)\) to a constant downstream state \(U=(\rho,u,v)\) is given by (A.1) \[u=u_{\infty}-\frac{2\big{(}h(\rho)-h(\rho_{\infty})\big{)}}{\rho+\rho_{\infty}}\frac{\rho}{u_{\infty}}\,,\quad v^{2}=u_{\infty}^{2}-u^{2}-2\big{(}h(\rho)-h(\rho_{\infty})\big{)}\,,\] where \(h(\rho)\coloneqq\frac{1}{\gamma-1}(\rho^{\gamma-1}-1)\) for \(\gamma>1\), and \(h(\rho)=\ln\rho\) for \(\gamma=1\); see _e.g._ [19, §17.2] for the derivation.
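As a consistency check on (A.1), the weak-shock limit \(\rho\to\rho_{\infty}^{+}\) recovers the upstream state, so the shock polar emanates from \((u,v)=(u_{\infty},0)\): \[\lim_{\rho\to\rho_{\infty}^{+}}u=u_{\infty}-\lim_{\rho\to\rho_{\infty}^{+}}\frac{2\big{(}h(\rho)-h(\rho_{\infty})\big{)}}{\rho+\rho_{\infty}}\frac{\rho}{u_{\infty}}=u_{\infty}\,,\qquad\lim_{\rho\to\rho_{\infty}^{+}}v^{2}=u_{\infty}^{2}-u_{\infty}^{2}-0=0\,.\]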
The downstream density \(\rho\in(\rho_{\infty},\overline{\rho})\) is a parameter along the shock polar, where the maximal density \(\overline{\rho}>\rho_{\infty}\) is determined uniquely by \(v(\overline{\rho})=0\) in (A.1), which satisfies \[M_{\infty}^{2}\Big{(}1-\big{(}\frac{\rho_{\infty}}{\overline{\rho}}\big{)}^{2}\Big{)}=\frac{2\big{(}h(\overline{\rho})-h(\rho_{\infty})\big{)}}{c_{\infty}^{2}}\,,\] for \(M_{\infty}\coloneqq\frac{u_{\infty}}{c_{\infty}}>1\), the Mach number of the upstream state that is assumed to be supersonic, and \(c_{\infty}\) the sonic speed. In the following, we write \(w\coloneqq\tan\theta_{\rm stdy}\), where \(\theta_{\rm stdy}\in(0,\frac{\pi}{2})\) is the angle between the velocities of the upstream and the downstream flows, and \(\tau\coloneqq\frac{\rho}{\rho_{\infty}}\in(1,\overline{\tau})\) for the ratio between the downstream and upstream densities, with \(\overline{\tau}\coloneqq\frac{\overline{\rho}}{\rho_{\infty}}\). Using the polytropic pressure law with \(\gamma\geq 1\), we have (A.2) \[w=\frac{v}{u}=\frac{\big{(}2h(\tau)\big{)}^{1/2}\big{(}M_{\infty}^{2}(1-\tau^{-2})-2h(\tau)\big{)}^{1/2}}{M_{\infty}^{2}(1+\tau^{-1})-2h(\tau)}\,.\] From [10, Lemma 7.3.2], there exist both a unique detachment angle \(\theta_{\rm stdy}^{\rm d}\in(0,\frac{\pi}{2})\) and a unique sonic angle \(\theta_{\rm stdy}^{\rm s}\in(0,\theta_{\rm stdy}^{\rm d})\), which are characterized by \[w^{\rm d}\coloneqq\tan\theta_{\rm stdy}^{\rm d}=\sup\left\{w(\tau)\,:\,\tau\in(1,\overline{\tau})\right\},\qquad w^{\rm s}\coloneqq\tan\theta_{\rm stdy}^{\rm s}=\frac{v}{u}\quad\text{when }u^{2}+v^{2}=\rho^{\gamma-1}\,.\] By differentiation of (A.2) with respect to \(\tau\in(1,\overline{\tau})\), it follows that \(w^{\rm d}=w(\tau_{\rm d})\), where \(\tau_{\rm d}\in(1,\overline{\tau})\) satisfies (A.3) \[M_{\infty}^{2}\big{(}2h(\tau_{\rm d})+(\tau_{\rm d}^{2}-1)h^{\prime}(\tau_{\rm d})\big{)}-2h(\tau_{\rm d})\big{(}2h(\tau_{\rm d})+\tau_{\rm d}(\tau_{\rm d}+1)h^{\prime}(\tau_{\rm d})\big{)}=0\,.\] Similarly, by routine calculation, it follows that \(w^{\rm s}=w(\tau_{\rm s})\), where \(\tau_{\rm s}\in(1,\overline{\tau})\) satisfies (A.4) \[(\gamma+1)h(\tau_{\rm s})=M_{\infty}^{2}-1\,.\] For the polytropic gas, it is clear that the detachment angle \(\theta^{\rm d}_{\rm stdy}\) and the sonic angle \(\theta^{\rm s}_{\rm stdy}\) depend only on \(\gamma\) and \(M_{\infty}\). We also observe the following useful identities: (A.5) \[\tau\,h^{\prime}(\tau)=(\gamma-1)h(\tau)+1\,,\quad\frac{h^{\prime\prime}(\tau)}{h^{\prime}(\tau)}=\frac{\gamma-2}{\tau}\qquad\text{ for all }\tau\in(0,\infty)\,.\]
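Both identities in (A.5) can be checked directly from the definition of \(h\): for \(\gamma>1\), \(h^{\prime}(\tau)=\tau^{\gamma-2}\), so that \[\tau h^{\prime}(\tau)=\tau^{\gamma-1}=(\gamma-1)h(\tau)+1\,,\qquad\frac{h^{\prime\prime}(\tau)}{h^{\prime}(\tau)}=\frac{(\gamma-2)\tau^{\gamma-3}}{\tau^{\gamma-2}}=\frac{\gamma-2}{\tau}\,,\] while, for \(\gamma=1\), \(h(\tau)=\ln\tau\) gives \(\tau h^{\prime}(\tau)=1=(\gamma-1)h(\tau)+1\) and \(\frac{h^{\prime\prime}}{h^{\prime}}=-\frac{1}{\tau}=\frac{\gamma-2}{\tau}\).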
A direct calculation shows that the signs of \((w^{\rm d})_{\tau_{\rm d}}\) and \((M_{\infty}^{2})_{\tau_{\rm d}}\) are equal to the sign of the quantity: \[f_{\rm d}(\tau_{\rm d})\coloneqq 2h^{2}\big{(}3+(\tau_{\rm d}+1)\frac{h^{\prime \prime}}{h^{\prime}}\big{)}+(3\tau_{\rm d}-5)(\tau_{\rm d}+1)hh^{\prime}+\tau_{ \rm d}(\tau_{\rm d}+1)(\tau_{\rm d}^{2}-1)(h^{\prime})^{2}\,.\] Applying (A.5) evaluated at \(\tau=\tau_{\rm d}\), along with \(\gamma\geq 1\), \(\tau_{\rm d}>1\), and \(h>0\), we obtain \[f_{\rm d}(\tau_{\rm d}) =\big{(}3(\tau_{\rm d}-\tau_{\rm d}^{-1})+\big{(}3(\gamma-1)( \tau_{\rm d}-\tau_{\rm d}^{-1})+2(2-\tau_{\rm d}^{-1})\big{)}h\big{)}h+(1+\tau _{\rm d}^{-1})\big{(}\tau_{\rm d}^{2}(\tau_{\rm d}^{2}-1)(h^{\prime})^{2}-2h \big{)}\] \[>(1+\tau_{\rm d}^{-1})\tilde{f}_{\rm d}(\tau_{\rm d})\,,\] where \(\tilde{f}_{\rm d}(\tau_{\rm d})\coloneqq\tau_{\rm d}^{2}(\tau_{\rm d}^{2}-1)(h ^{\prime})^{2}-2h\). We note that \(\tilde{f}_{\rm d}(1)=0\) and \[\tilde{f}_{\rm d}^{\prime}(\tau)=2h^{\prime}(\tau)\big{(}(\gamma-1)\tau(\tau ^{2}-1)h^{\prime}(\tau)+\tau^{\gamma+1}-1\big{)}>0\qquad\text{ for any }\tau>1\,.\] It follows that \(\tilde{f}_{\rm d}(\tau_{\rm d})>0\) and \(f_{\rm d}(\tau_{\rm d})>0\) so that \((w^{\rm d})_{\tau_{\rm d}}>0\) and \((M_{\infty}^{2})_{\tau_{\rm d}}>0\) for all \(\tau_{\rm d}>1\). Using the chain rule, we conclude that \(w^{\rm d}\) and \(\theta^{\rm d}_{\rm stdy}\) are strictly increasing with respect to \(M_{\infty}>1\). **3.** For the sonic angle, we combine (A.2) with (A.4) to give a parametric description of the sonic angle and upstream Mach number in terms of \(\tau_{\rm s}\): (A.7) \[w^{\rm s}=\frac{\big{(}2h\big{)}^{1/2}\big{(}(\tau_{\rm s}^{2}-1)\tau_{\rm s} ^{\gamma-1}-2h\big{)}^{1/2}}{(\tau_{\rm s}+1)\tau_{\rm s}^{\gamma-1}+2h}\,, \qquad M_{\infty}^{2}=1+(\gamma+1)h\,,\] where \(h\) and \(h^{\prime}\) are evaluated at \(\tau=\tau_{\rm s}\). It is clear that \((M_{\infty}^{2})_{\tau_{\rm s}}>0\) since \(h^{\prime}>0\), whilst a direct calculation shows that the sign of \((w^{\rm s})_{\tau_{\rm s}}\) is equal to the sign of the quantity \[f_{\rm s}(\tau_{\rm s})\coloneqq 2h\big{(}1+(\gamma-1)h\big{)}\big{(}1+( \gamma+1)h\big{)}+(\tau_{\rm s}+1)\big{(}(\tau_{\rm s}-1)\tau_{\rm s}^{\gamma- 1}-2h\big{)}h^{\prime}\,.\] Applying (A.5) evaluated at \(\tau=\tau_{\rm s}\), along with \(\gamma\geq 1\), \(\tau_{\rm s}>1\), and \(h^{\prime}(\tau_{\rm s})=\tau_{\rm s}^{\gamma-2}>0\), we obtain \[f_{\rm s}(\tau_{\rm s})=\big{(}2(\gamma+1)\tau_{\rm s}h^{2}+\tau_{\rm s}(\tau_{ \rm s}^{2}-1)h^{\prime}-2h\big{)}h^{\prime}>\tilde{f}_{\rm s}(\tau_{\rm s})h^{ \prime}\,,\] where \(\tilde{f}_{\rm s}(\tau_{\rm s})\coloneqq\tau_{\rm s}(\tau_{\rm s}^{2}-1)h^{ \prime}-2h\). Notice that \(\tilde{f}_{\rm s}(1)=0\) and \[\tilde{f}_{\rm s}^{\prime}(\tau)=(\gamma+1)(\tau^{2}-1)h^{\prime}(\tau)>0 \qquad\text{for any }\tau>1.\] Then it follows that \(\tilde{f}_{\rm s}(\tau_{\rm s})>0\) and \(f_{\rm s}(\tau_{\rm s})>0\) so that \((w^{\rm s})_{\tau_{\rm s}}>0\). We conclude via the chain rule that \(w^{\rm s}\) and hence \(\theta^{\rm s}_{\rm stdy}\) are strictly increasing with respect to \(M_{\infty}>1\). **4.** The limiting values stated in the lemma can be checked directly from (A.6)-(A.7), now that the monotonicity of \((w^{\rm d},M_{\infty}^{2})\) with respect to \(\tau_{\rm d}\) in (A.6) and the monotonicity of \((w^{\rm s},M_{\infty}^{2})\) with respect to \(\tau_{\rm s}\) in (A.7) have been verified. 
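Although not needed for the proof, the monotonicity just established is easy to probe numerically. The following short Python sketch (a sanity check under our own helper names `h`, `w`, and `tau_bar`, not part of the argument) evaluates the shock polar (A.2) for a sample \(\gamma=1.4\): it locates \(w^{\rm d}=\sup_{\tau}w(\tau)\) by a crude grid search, computes \(w^{\rm s}=w(\tau_{\rm s})\) with \(\tau_{\rm s}\) solved from (A.4), and confirms that \(\theta^{\rm s}_{\rm stdy}<\theta^{\rm d}_{\rm stdy}\) and that both angles increase with \(M_{\infty}\).

```python
# Numerical sanity check (not part of the proof) of Steps 2-3:
# the detachment angle arctan(w^d) and the sonic angle arctan(w^s)
# should satisfy theta^s < theta^d and both should grow with M_infty.
# All formulas are (A.2) and (A.4); the helper names are ours.
import math

GAMMA = 1.4  # sample value; any gamma > 1 works with the tau_s formula below

def h(tau):
    # h(tau) = (tau^(gamma-1) - 1)/(gamma - 1); h = log for gamma = 1
    return math.log(tau) if GAMMA == 1.0 else (tau**(GAMMA - 1) - 1) / (GAMMA - 1)

def w(tau, M):
    # Flow-deflection tangent along the shock polar, formula (A.2)
    num = math.sqrt(2 * h(tau)) * math.sqrt(M**2 * (1 - tau**-2) - 2 * h(tau))
    return num / (M**2 * (1 + 1 / tau) - 2 * h(tau))

def tau_bar(M):
    # Maximal density ratio: M^2 (1 - tau^-2) = 2 h(tau), tau > 1 (bisection)
    g = lambda t: M**2 * (1 - t**-2) - 2 * h(t)
    lo, hi = 1 + 1e-9, 2.0
    while g(hi) > 0:
        hi *= 2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return lo

for M in (1.5, 2.0, 3.0):
    tb = tau_bar(M)
    # Crude grid search for the detachment value w^d = sup w(tau)
    wd = max(w(1 + (tb - 1) * k / 20000, M) for k in range(1, 20000))
    # Sonic density ratio from (A.4): (gamma+1) h(tau_s) = M^2 - 1
    tau_s = (1 + (GAMMA - 1) * (M**2 - 1) / (GAMMA + 1)) ** (1 / (GAMMA - 1))
    ws = w(tau_s, M)
    print(f"M={M}: theta_s={math.degrees(math.atan(ws)):6.3f} deg  "
          f"theta_d={math.degrees(math.atan(wd)):6.3f} deg")
# Expected output: theta_s < theta_d in each row, both increasing in M.
```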
### Monotonicity properties with respect to the incident angles

In Lemma 2.3, we have shown that the pseudo-Mach number of state (2) at the reflection point \(P^{1}_{0}\) is a strictly decreasing function of \(\theta_{1}\in(0,\theta^{\rm cr})\). Subsequently, in Proposition 2.6, we have used Lemmas 2.3 and 2.5 to prove the existence of the unique detachment angle \(\theta^{\rm d}\) and sonic angle \(\theta^{\rm s}\). In Remark 2.1(i), we have stated the relation between \(\theta^{\rm d},\,\theta^{\rm s}\), and \(\theta^{\rm cr}\), which depends on the choice of \((\gamma,v_{2})\). The following lemma provides the proof of this statement. **Lemma A.1**.: _For any \(\gamma>1,\) there exist constants \(v_{2}^{\rm d}\) and \(v_{2}^{\rm s},\) with \(v_{\rm min}<v_{2}^{\rm s}<v_{2}^{\rm d}<0,\) uniquely determined by \(\gamma\) such that_ (A.8) \[\operatorname{sgn}\big{(}\hat{\theta}_{25}(\theta^{\rm cr};v_{2})-\theta_{\rm stdy}^{\rm d}(1,|D\varphi_{2}(P_{0,{\rm cr}}^{1})|)\big{)}=\operatorname{sgn}(v_{2}-v_{2}^{\rm d})\,,\] (A.9) \[\operatorname{sgn}\big{(}\hat{\theta}_{25}(\theta^{\rm cr};v_{2})-\theta_{\rm stdy}^{\rm s}(1,|D\varphi_{2}(P_{0,{\rm cr}}^{1})|)\big{)}=\operatorname{sgn}(v_{2}-v_{2}^{\rm s})\,,\] _where \(P_{0,{\rm cr}}^{1}\coloneqq P_{0}^{1}|_{\theta_{1}\equiv\theta^{\rm cr}},\) and we define \(\theta_{\rm stdy}^{\rm d}(1,M_{\infty})\equiv\theta_{\rm stdy}^{\rm s}(1,M_{\infty})\coloneqq 0\) for any \(0\leq M_{\infty}\leq 1.\) Furthermore, when \(\gamma=1,\) we define \(v_{\rm min}=v_{2}^{\rm s}=v_{2}^{\rm d}=-\infty.\)_ Proof.: We divide the proof into two steps. **1.** For \(\gamma>1\) and \(v_{2}\in(v_{\rm min},0),\) a direct calculation by using (2.15), (2.18), and Lemma 2.3 gives (A.10) \[M_{2,{\rm min}}\coloneqq\inf_{\theta_{1}\in(0,\theta^{\rm cr})}|D\varphi_{2}(P_{0}^{1})|=|D\varphi_{2}(P_{0,{\rm cr}}^{1})|=\frac{v_{2}v_{\rm min}}{\sqrt{v_{\rm min}^{2}-v_{2}^{2}}}\,,\qquad\hat{\theta}_{25}(\theta^{\rm cr};v_{2})=\arcsin\left(\frac{-v_{2}}{M_{2,{\rm min}}}\right)=\arccos\left(\Big{|}\frac{v_{2}}{v_{\rm min}}\Big{|}\right)>0\,.\] Then \(\hat{\theta}_{25}(\theta^{\rm cr};v_{2})\) is strictly increasing with respect to \(v_{2}\in(v_{\rm min},0)\). We also calculate (A.11) \[\lim_{v_{2}\to v_{\rm min}^{+}}M_{2,{\rm min}}=\infty\,,\quad\lim_{v_{2}\to 0^{-}}M_{2,{\rm min}}=0\,,\quad\frac{{\rm d}M_{2,{\rm min}}^{2}}{{\rm d}v_{2}}=\frac{2v_{\rm min}^{4}v_{2}}{(v_{\rm min}^{2}-v_{2}^{2})^{2}}<0\,,\] so that \(M_{2,{\rm min}}\) is strictly decreasing with respect to \(v_{2}\in(v_{\rm min},0)\). Also observe that \(M_{2,{\rm min}}>1\) if and only if \(v_{2}\in(v_{\rm min},v_{\rm mid})\), for \(v_{\rm mid}\coloneqq\frac{v_{\rm min}}{\sqrt{v_{\rm min}^{2}+1}}\). Then, by Lemma 2.5, \(\theta_{\rm stdy}^{\rm d}(1,M_{2,{\rm min}})\) is strictly decreasing with respect to \(v_{2}\in(v_{\rm min},v_{\rm mid})\).
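For completeness, the derivative formula in (A.11) is a direct quotient-rule computation from the expression for \(M_{2,{\rm min}}\) in (A.10): \[\frac{{\rm d}M_{2,{\rm min}}^{2}}{{\rm d}v_{2}}=\frac{{\rm d}}{{\rm d}v_{2}}\Big{(}\frac{v_{2}^{2}v_{\rm min}^{2}}{v_{\rm min}^{2}-v_{2}^{2}}\Big{)}=\frac{2v_{2}v_{\rm min}^{2}(v_{\rm min}^{2}-v_{2}^{2})+2v_{2}^{3}v_{\rm min}^{2}}{(v_{\rm min}^{2}-v_{2}^{2})^{2}}=\frac{2v_{\rm min}^{4}v_{2}}{(v_{\rm min}^{2}-v_{2}^{2})^{2}}<0\,,\] where the sign follows since \(v_{2}<0\).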
In particular, by (A.10)-(A.11) and [10, Lemma 7.3.3], we have \[(\hat{\theta}_{25}(\theta^{\rm cr};v_{2}),\theta_{\rm stdy}^{\rm d}(1,M_{2,{\rm min}}))\to(\arctan\left(|v_{\rm min}|\right),0)\quad\text{as}\;\;v_{2}\to v_{\rm mid}^{-}\,,\] \[(\hat{\theta}_{25}(\theta^{\rm cr};v_{2}),\theta_{\rm stdy}^{\rm d}(1,M_{2,{\rm min}}))\to(0,\tfrac{\pi}{2})\qquad\text{as}\;\;v_{2}\to v_{\rm min}^{+}\,.\] Then we conclude from the monotonicity properties of \((\hat{\theta}_{25}(\theta^{\rm cr};v_{2}),\theta_{\rm stdy}^{\rm d}(1,M_{2,{\rm min}}))\) with respect to \(v_{2}\in(v_{\rm min},v_{\rm mid})\) that there exists a unique value \(v_{2}^{\rm d}\in(v_{\rm min},v_{\rm mid})\), depending only on \(\gamma,\) such that (A.8) holds. The proof of (A.9) follows via a similar argument by using the fact that \(\theta_{\rm stdy}^{\rm s}\in(0,\theta_{\rm stdy}^{\rm d})\) and \[\theta_{\rm stdy}^{\rm s}(1,M_{2,{\rm min}})\to 0\qquad\text{as}\;\;v_{2}\to v_{\rm mid}^{-}\,,\] \[\theta_{\rm stdy}^{\rm s}(1,M_{2,{\rm min}})\to\arctan\left(\tfrac{2}{\gamma-1}\right)^{1/2}\qquad\text{as}\;\;v_{2}\to v_{\rm min}^{+}\,.\] **2.** For \(\gamma=1,\) from (2.15), we see that \(v_{\rm min}=-\infty\) and \(\theta^{\rm cr}=\frac{\pi}{2}\). Taking the limit: \(v_{\rm min}\to-\infty\) in Step **1** above, we obtain that \(M_{2,{\rm min}}=|v_{2}|\) and \(\hat{\theta}_{25}(\frac{\pi}{2};v_{2})=\frac{\pi}{2}\). From Lemma 2.5, we know that the sonic and detachment angles for 2-D steady potential flow satisfy \(0<\theta^{\rm s}(1,M_{\infty})<\theta^{\rm d}(1,M_{\infty})<\frac{\pi}{2}\) for any \(M_{\infty}>1,\) which implies that (A.8)-(A.9) hold with \(v_{2}^{\rm d}=v_{2}^{\rm s}=-\infty.\) Lemma 2.3 is also useful to determine the monotonicity of other quantities related to the four-shock interaction problem. In the following lemma, we show that the reflected shock angles \(\theta_{25}\) and \(\theta_{26}\) are monotonic with respect to the incident shock angles \(\theta_{1}\) and \(\theta_{2}\), respectively, and, subsequently, that certain intersection points of the reflected shocks are also monotonic functions of the incident angles. **Lemma A.2**.: _Fix \(\gamma\geq 1\) and \(v_{2}\in(v_{\rm min},0)\)._

1. _For any incident angle_ \(\theta_{1}\in[0,\theta^{\rm d})\)_, let the reflected shock angle_ \(\theta_{25}\in(\frac{\pi}{2},\pi]\) _be given by (_2.34_). Then_ \(\theta_{25}\) _is a continuous, strictly decreasing function of_ \(\theta_{1}\)_, and the map:_ \(\theta_{1}\mapsto\theta_{25}\) _is_ \(C^{\infty}\)_-smooth on_ \(\theta_{1}\in[0,\theta^{\rm d})\)_._
2. _For the right-most intersection_ \(P_{2}=(\xi^{P_{2}},\eta^{P_{2}})\) _of_ \(\partial B_{c_{5}}(O_{5})\) _and_ \(S_{25}\) _as given in_ Definition 2.9_, and for the_ \(\eta\)_-intercept of_ \(S_{25}\)_, which is denoted here by_ \(a_{25}\)_:_ \(\eta^{P_{2}}\) _is strictly decreasing with respect to_ \(\theta_{1}\in(0,\theta^{\rm s})\)_, and_ \(a_{25}\) _is strictly increasing with respect to_ \(\theta_{1}\in(0,\theta^{\rm d})\)_._

Proof.: For (i), the continuity and smoothness of the map: \(\theta_{1}\mapsto\theta_{25}\) follow from Proposition 2.6 and (2.34), whilst the strict monotonicity follows directly from Lemma 2.3 and [2, Lemma 2.17]. Indeed, suppose that \(\theta_{1},\tilde{\theta}_{1}\in[0,\theta^{\mathrm{d}})\) are such that \(\theta_{25}(\theta_{1})=\theta_{25}(\tilde{\theta}_{1})\).
Then, by [2, Lemma 2.17], the uniform state (5) is uniquely determined by \((v_{2},\gamma,\theta_{25})\) so that the location of the reflected shock \(S_{25}\) must coincide for both \(\theta_{1}\) and \(\tilde{\theta}_{1}\). In particular, for the pseudo-Mach number of state (2) at the reflection point \(P_{0}^{1}\), as defined by (2.18), we see that \(M_{2}^{(\theta_{1})}=M_{2}^{(\tilde{\theta}_{1})}\) so that \(\theta_{1}=\tilde{\theta}_{1}\) by Lemma 2.3. We have shown that the map: \(\theta_{1}\mapsto\theta_{25}\) is continuous and injective, and therefore strictly monotonic. In particular, by property (i) of Proposition 2.6 and (2.34), \(\theta_{25}\to\pi^{-}\) as \(\theta_{1}\to 0^{+}\); whilst, by property (ii) of Proposition 2.6 and (2.34), \(\theta_{25}<\pi\) for any \(\theta_{1}\in(0,\theta^{\mathrm{d}}]\). It follows that the map: \(\theta_{1}\mapsto\theta_{25}\) is strictly decreasing. For (ii), the first result follows directly from part (i) combined with [2, Lemma 2.22], whilst the second result follows directly from part (i) combined with [2, Eqs. (2.4.14) and (2.4.42)]. Finally, we prove that the intersection point \(P_{I}\) of the reflected shocks \(S_{25}\) and \(S_{26}\) extends continuously up to \(\boldsymbol{\theta}=\boldsymbol{0}\). **Lemma A.3**.: _For \(\boldsymbol{\theta}\in\overline{\Theta}\setminus\{\boldsymbol{0}\},\) denote by \(P_{I}=(\xi^{P_{I}},\eta^{P_{I}})\) the unique intersection point of the reflected shocks \(S_{25}\) and \(S_{26},\) as given by (4.9). Then_ (A.12) \[\lim_{\boldsymbol{\theta}\to\boldsymbol{0},\,\boldsymbol{\theta}\in\overline{\Theta}\setminus\boldsymbol{0}}(\xi^{P_{I}},\eta^{P_{I}})=(0,\eta_{0})\,,\] _where \(\eta_{0}\) is given by (2.23)._ Proof.: By Proposition 2.6 and Lemma A.2(i), limit (A.12) is equivalent to the limit: \[\lim_{\begin{subarray}{c}(\theta_{25},\theta_{26})\to(\pi,0)\\ (\theta_{25},\theta_{26})\in\overline{\Theta_{\mathrm{refl}}}\setminus\{(\pi,0)\}\end{subarray}}\quad(\xi^{P_{I}},\eta^{P_{I}})=(0,\eta_{0})\,,\] where \(\Theta_{\mathrm{refl}}\coloneqq\{(\theta_{25},\theta_{26}):\boldsymbol{\theta}\in\Theta\}\). The proof of the above limit consists of three steps. **1.** We first consider the unilateral normal reflection case: \(\boldsymbol{\theta}\in\big{(}\{0\}\times(0,\theta^{\mathrm{d}})\big{)}\cup\big{(}(0,\theta^{\mathrm{d}})\times\{0\}\big{)}\). Without loss of generality, we restrict our attention to the case: \(\boldsymbol{\theta}\in(0,\theta^{\mathrm{d}})\times\{0\}\), and denote \((v_{\infty},\beta)\coloneqq(-v_{2},\pi-\theta_{25})\), where \(\theta_{25}\) depends only on \((v_{2},\gamma,\theta_{1})\). We consider the Prandtl-Meyer reflection configuration associated with parameters \((v_{\infty},\beta)\), as defined in [2], and reflect this configuration in the vertical axis. By [2, Lemma 2.17], the location of the reflected shock \(S_{25}\) is uniquely determined by \((v_{2},\gamma,\theta_{25})\), and thus coincides with the Prandtl-Meyer oblique shock \(S_{\mathcal{O}}\). Similarly, the (normal) reflected shock \(S_{26}=S_{0}\) coincides with the Prandtl-Meyer normal shock \(S_{\mathcal{N}}\), so that the intersection point \(P_{I}\) of \(S_{25}\) and \(S_{26}\), as given by (4.9), coincides with the intersection point of \(S_{\mathcal{O}}\) and \(S_{\mathcal{N}}\), as given by [2, Eq. (4.1.33)].
That is, \[(\xi^{P_{I}}\,,\eta^{P_{I}})=\big{(}\frac{a_{25}-\eta_{0}}{\tan\theta_{25}}\,,\eta_{0}\big{)}\,,\] where \(a_{25}\coloneqq-\xi^{P_{0}^{1}}\tan\theta_{25}\) and \(\eta_{0}\) are the \(\eta\)-intercepts of \(S_{25}\) and the normal reflection \(S_{0}\), respectively, as given in §2. By Proposition 2.6(i) and Lemma A.2(ii), \(a_{25}\to\eta_{0}\) as \(\theta_{25}\to\pi^{-}\). Then we may apply L'Hopital's rule to give \[\lim_{\theta_{25}\to\pi^{-}}\xi^{P_{I}}=-\lim_{\theta_{25}\to\pi^{-}}\frac{\mathrm{d}a_{25}}{\mathrm{d}\theta_{25}}\,.\] From [2, Eq. (2.4.14)], \(a_{25}=v_{2}-q_{2}\sec\theta_{25}\), for \(q_{2}\coloneqq\mathrm{dist}(S_{25},O_{2})\). By taking the derivative with respect to \(\theta_{25}\), we have \[\frac{\mathrm{d}a_{25}}{\mathrm{d}\theta_{25}}=-\sec\theta_{25}\frac{\mathrm{d}q_{2}}{\mathrm{d}\theta_{25}}-q_{2}\sec\theta_{25}\tan\theta_{25}\,.\] Using [10, Lemma 6.1.2] and (2.23), \(1<\lim_{\theta_{25}\to\pi^{-}}q_{2}=\eta_{0}-v_{2}<\infty\) so that \[\lim_{\theta_{25}\to\pi^{-}}\frac{\mathrm{d}a_{25}}{\mathrm{d}\theta_{25}}=\lim_{\theta_{25}\to\pi^{-}}\frac{\mathrm{d}q_{2}}{\mathrm{d}\theta_{25}}\,.\] By [2, Eq. (2.4.12)], we have a relation between the pseudo-Mach numbers \((M_{2},M_{5})\coloneqq(\frac{q_{2}}{c_{2}},\frac{q_{5}}{c_{5}})\) for \(q_{5}\coloneqq\operatorname{dist}(S_{25},O_{5})\) and the given parameters \((\gamma,v_{2},\theta_{25})\), expressed as \[M_{2}^{\frac{\gamma-1}{\gamma+1}}(M_{2}^{\frac{2}{\gamma+1}}-M_{5}^{\frac{2}{\gamma+1}})=v_{2}\sec\theta_{25}\,.\] Taking the derivative: \[0=\lim_{\theta_{25}\to\pi^{-}}\frac{\mathrm{d}}{\mathrm{d}\theta_{25}}\Big{(}M_{2}^{\frac{\gamma-1}{\gamma+1}}(M_{2}^{\frac{2}{\gamma+1}}-M_{5}^{\frac{2}{\gamma+1}})\Big{)}=\lim_{\theta_{25}\to\pi^{-}}\frac{1}{\gamma+1}\Big{(}(\gamma-1)M_{2}^{-1}v_{2}\sec\theta_{25}+2M_{2}^{\frac{\gamma-1}{\gamma+1}}\big{(}M_{2}^{\frac{1-\gamma}{1+\gamma}}-M_{5}^{\frac{1-\gamma}{1+\gamma}}\frac{\mathrm{d}M_{5}}{\mathrm{d}M_{2}}\big{)}\Big{)}\frac{\mathrm{d}M_{2}}{\mathrm{d}\theta_{25}}\,.\] By [2, Eqs. (2.4.9)-(2.4.10)], we see that \(\frac{\mathrm{d}M_{5}}{\mathrm{d}M_{2}}<0\) for all \(M_{2}\in(0,\infty)\setminus\{1\}\). Therefore, the bracketed term above is uniformly positive for \(\theta_{25}\in(\frac{\pi}{2},\pi)\) so that \[\lim_{\theta_{25}\to\pi^{-}}\frac{\mathrm{d}q_{2}}{\mathrm{d}\theta_{25}}=\lim_{\theta_{25}\to\pi^{-}}\frac{\mathrm{d}M_{2}}{\mathrm{d}\theta_{25}}=0\,.\] **2.** We now consider the general case \(\boldsymbol{\theta}\in\overline{\Theta}\setminus\{\boldsymbol{0}\}\). By direct calculation (intersecting the lines \(\eta=a_{2j}+\xi\tan\theta_{2j}\) for \(j=5,6\)), we have \[\xi^{P_{I}}=\frac{a_{25}-a_{26}}{\tan\theta_{26}-\tan\theta_{25}}\,,\] where \(a_{2j}>0\) represents the \(\eta\)-intercept of \(S_{2j}\) for \(j=5,6\). Note that \(a_{25}\) depends only on \((v_{2},\gamma,\theta_{25})\), whilst \(a_{26}\) depends only on \((v_{2},\gamma,\theta_{26})\). In particular, from Step **1**, (A.13) \[0=\lim_{\theta_{25}\to\pi^{-}}\frac{a_{25}-\eta_{0}}{\tan\theta_{25}}=\lim_{\theta_{26}\to 0^{+}}\frac{a_{26}-\eta_{0}}{\tan\theta_{26}}\,.\] Let \(\{\boldsymbol{\theta}^{(k)}\}_{k\in\mathbb{N}}\subseteq\overline{\Theta}\setminus\{\boldsymbol{0}\}\) be any sequence of parameters with \(\boldsymbol{\theta}^{(k)}\to\boldsymbol{0}\) as \(k\to\infty\).
By passing to a subsequence, we may assume that one of the following cases holds: (i) \(\theta_{1}^{(k)}=0\) for all \(k\in\mathbb{N}\); (ii) \(\theta_{2}^{(k)}=0\) for all \(k\in\mathbb{N}\); (iii) both \(\theta_{1}^{(k)}>0\) and \(\theta_{2}^{(k)}>0\) for all \(k\in\mathbb{N}\). Denote by \(\xi_{(k)}^{P_{I}}\) the \(\xi\)-coordinates of the intersection points \(P_{I}\) associated with parameters \(\boldsymbol{\theta}^{(k)}\). In cases (i) and (ii), it is clear from the previous step that \(\xi_{(k)}^{P_{I}}\to 0\) as \(k\to\infty\), so we focus on case (iii). Using the triangle inequality, the fact that \(\tan\theta_{25}\) and \(\tan\theta_{26}\) have opposite signs, and (A.13), we obtain \[|\xi_{(k)}^{P_{I}}|=\Big{|}\frac{a_{25}^{(k)}-a_{26}^{(k)}}{\tan\theta_{26}^{(k)}-\tan\theta_{25}^{(k)}}\Big{|}\leq\Big{|}\frac{a_{25}^{(k)}-\eta_{0}}{\tan\theta_{25}^{(k)}}\Big{|}+\Big{|}\frac{a_{26}^{(k)}-\eta_{0}}{\tan\theta_{26}^{(k)}}\Big{|}\to 0\qquad\text{ as }k\to\infty\,.\] **3.** Finally, we show that \(\eta^{P_{I}}\to\eta_{0}\) as \(\boldsymbol{\theta}\to\boldsymbol{0}\) with \(\boldsymbol{\theta}\in\overline{\Theta}\setminus\{\boldsymbol{0}\}\). Indeed, it is clear from (A.13) that \(a_{25}\to\eta_{0}\) as \(\theta_{1}\to 0^{+}\), and \(a_{26}\to\eta_{0}\) as \(\theta_{2}\to 0^{+}\). Furthermore, it follows from Step **1** of the proof of Lemma 3.3 that \(\min\{a_{25},a_{26}\}\leq\eta^{P_{I}}\leq\max\{a_{25},a_{26}\}\), whence \(\eta^{P_{I}}\to\eta_{0}\).

## Appendix B Some Known Results Needed for the Proofs

In this appendix, we present some known results that are used in Sections 3-5 above.

### Well-posedness of the iteration boundary value problem

For a constant \(h>0\) and a function \(f_{\mathrm{bd}}:[0,h]\to\mathbb{R}_{+}\), fix a bounded domain \(\Omega\subseteq\mathbb{R}^{2}\) as (B.1) \[\Omega\coloneqq\big{\{}\boldsymbol{x}\in\mathbb{R}^{2}\,:\,0<x_{1}<h,\,0<x_{2}<f_{\mathrm{bd}}(x_{1})\big{\}}\,,\] where \(f_{\mathrm{bd}}\) satisfies that, for constants \(t_{0}\geq 0\), \(t_{h}\), \(t_{1}\), \(t_{2}\), \(t_{3}\), \(M\in(0,\infty)\), and \(\alpha\in(0,1)\), (B.2) \[\begin{split}&f_{\mathrm{bd}}\in C^{1}([0,h])\,,\quad f_{\mathrm{bd}}(0)=t_{0}\,,\quad f_{\mathrm{bd}}(h)=t_{h}\,,\\ &f_{\mathrm{bd}}(x_{1})\geq\min\{t_{0}+t_{1}x_{1},\,t_{2},\,t_{h}-t_{3}(x_{1}-h)\}\qquad\text{for all }x_{1}\in(0,h)\,,\\ &\|f_{\mathrm{bd}}\|_{2,\alpha,(0,h)}^{(-1-\alpha),\{0,h\}}\leq M\,.\end{split}\] The boundary is \(\partial\Omega\coloneqq\cup_{k=0}^{3}(\Gamma_{k}\cup\{P_{k+1}\})\), with vertices and segments given by (B.3) \[\begin{split}&P_{1}=(h,0)\,,\quad P_{2}=(h,f_{\mathrm{bd}}(h))\,,\quad P_{3}=(0,f_{\mathrm{bd}}(0))\,,\quad P_{4}=(0,0)\,,\\ &\overline{\Gamma_{0}}=\partial\Omega\cap\{x_{1}=0\}\,,\quad\overline{\Gamma_{1}}=\partial\Omega\cap\{x_{2}=f_{\mathrm{bd}}(x_{1})\}\,,\\ &\overline{\Gamma_{2}}=\partial\Omega\cap\{x_{1}=h\}\,,\quad\overline{\Gamma_{3}}=\partial\Omega\cap\{x_{2}=0\}\,,\end{split}\] where \(\Gamma_{k}\), \(k=0,1,2,3\), are the relative interiors of the segments defined above.
Let \(g_{\mathrm{so}}\in C^{\infty}(\mathbb{R}^{2}\setminus\{\frac{h}{3}<x_{1}<\frac{2h}{3}\})\) be a piecewise smooth function defined in \(\mathbb{R}^{2}\) such that (B.4) \[\begin{split}&\|g_{\mathrm{so}}\|_{C^{3}(\overline{\Omega}\setminus\{\frac{h}{3}<x_{1}<\frac{2h}{3}\})}\leq C_{\mathrm{so}}\,,\\ &g_{\mathrm{so}}(\cdot,x_{2})\quad\text{is linear in }x_{1}\text{ in }\{x_{1}\leq\frac{h}{4}\}\cup\{x_{1}\geq\frac{3h}{4}\}\,,\\ &\partial_{x_{2}}g_{\mathrm{so}}=0\quad\text{on }\Gamma_{3}\,.\end{split}\] We consider the following nonlinear problem: (B.5) \[\begin{cases}\sum_{i,j=1}^{2}\tilde{A}_{ij}(Du,\mathbf{x})D_{ij}u+\sum_{i=1}^{2}\tilde{A}_{i}(Du,\mathbf{x})D_{i}u=0&\text{in }\Omega\,,\\ \tilde{B}(Du,u,\mathbf{x})=0&\text{on }\Gamma_{1}\,,\\ u=g_{\mathrm{so}}(\mathbf{x})&\text{on }\Gamma_{2}\cup\Gamma_{0}\,,\\ \mathbf{b}^{(\mathrm{w})}(\mathbf{x})\cdot Du=0&\text{on }\Gamma_{3}\,.\end{cases}\] For constants \(\lambda\in(0,1)\), \(M<\infty\), \(\alpha\in(0,1)\), \(\beta\in[\frac{1}{2},1)\), \(\delta\in(0,1)\), \(\sigma\in(0,1)\), and \(\varepsilon\in(0,\frac{h}{10})\), assume that the following conditions are satisfied:

(i) For any \(\mathbf{x}\in\Omega\) and \(\mathbf{p},\mathbf{\mu}\in\mathbb{R}^{2}\), \(\lambda\operatorname{dist}(\mathbf{x},\Gamma_{2}\cup\Gamma_{0})|\mathbf{\mu}|^{2}\leq\sum_{i,j=1}^{2}\tilde{A}_{ij}(\mathbf{p},\mathbf{x})\mu_{i}\mu_{j}\leq\lambda^{-1}|\mathbf{\mu}|^{2}\,.\) Moreover, for any \(\mathbf{x}\in\Omega\setminus\{\frac{\varepsilon}{2}\leq x_{1}\leq h-\frac{\varepsilon}{2}\}\), \[\lambda|\mathbf{\mu}|^{2}\leq\sum_{i,j=1}^{2}\frac{\tilde{A}_{ij}(\mathbf{p},\mathbf{x})\mu_{i}\mu_{j}}{(\min\{x_{1},h-x_{1},\delta\})^{2-\frac{i+j}{2}}}\leq\lambda^{-1}|\mathbf{\mu}|^{2}\,.\]

(ii) \((\tilde{A}_{ij},\tilde{A}_{i})(\mathbf{p},\mathbf{x})\) are independent of \(\mathbf{p}\in\mathbb{R}^{2}\) on \(\Omega\cap\{\varepsilon\leq x_{1}\leq h-\varepsilon\}\) with \[\|\tilde{A}_{ij}\|_{L^{\infty}(\Omega\cap\{\varepsilon\leq x_{1}\leq h-\varepsilon\})}\leq\lambda^{-1}\,,\qquad\|(\tilde{A}_{ij},\tilde{A}_{i})\|_{1,\alpha,\Omega\cap\{\varepsilon\leq x_{1}\leq h-\varepsilon\}}\leq M\,.\]

(iii) For any \(\mathbf{p}\in\mathbb{R}^{2}\), \[\|(\tilde{A}_{ij},\tilde{A}_{i})(\mathbf{p},\cdot)\|_{C^{\beta}(\overline{\Omega\setminus\{2\varepsilon\leq x_{1}\leq h-2\varepsilon\}})}+\|D_{\mathbf{p}}(\tilde{A}_{ij},\tilde{A}_{i})(\mathbf{p},\cdot)\|_{L^{\infty}(\Omega\setminus\{2\varepsilon\leq x_{1}\leq h-2\varepsilon\})}\leq M\,.\]

(iv) \((\tilde{A}_{ij},\tilde{A}_{i})\in C^{1,\alpha}(\mathbb{R}^{2}\times(\overline{\Omega}\setminus(\overline{\Gamma}_{2}\cup\overline{\Gamma_{0}})))\) and, for any \(s\in(0,\frac{h}{3})\), \[\|(\tilde{A}_{ij},\tilde{A}_{i})\|_{1,\alpha,\mathbb{R}^{2}\times(\overline{\Omega}\cap\{s\leq x_{1}\leq h-s\})}\leq M\Big{(}\frac{h}{s}\Big{)}^{M}\,.\]

(v) For each \((\mathbf{p},\mathbf{x})\in\mathbb{R}^{2}\times(\overline{\Omega}\setminus\{\varepsilon<x_{1}<h-\varepsilon\})\) and \(i,j=1,2\), define \[\hat{\mathbf{p}}\coloneqq\mathbf{p}-Dg_{\mathrm{so}}(\mathbf{x})\,,\qquad(\tilde{a}_{ij},\tilde{a}_{i})(\hat{\mathbf{p}},\mathbf{x})\coloneqq(\tilde{A}_{ij},\tilde{A}_{i})(\hat{\mathbf{p}}+Dg_{\mathrm{so}}(\mathbf{x}),\mathbf{x})\,.\] For any \((\mathbf{p},(x_{1},0))\in\mathbb{R}^{2}\times(\Gamma_{3}\setminus\{\varepsilon\leq x_{1}\leq h-\varepsilon\})\), \[(\tilde{a}_{11},\tilde{a}_{22},\tilde{a}_{1})((\hat{p}_{1},-\hat{p}_{2}),(x_{1},0))=(\tilde{a}_{11},\tilde{a}_{22},\tilde{a}_{1})((\hat{p}_{1},\hat{p}_{2}),(x_{1},0))\,.\] For any \((\mathbf{p},\mathbf{x})\in\mathbb{R}^{2}\times(\Omega\setminus\{\varepsilon\leq x_{1}\leq h-\varepsilon\})\) and \(i=1,2\), \[|\tilde{a}_{ii}(\hat{\mathbf{p}},\mathbf{x})-\tilde{a}_{ii}(Dg_{\mathrm{so}}(\mathbf{x}_{\mathrm{e}}),\mathbf{x}_{\mathrm{e}})|\leq M|\mathbf{x}-\mathbf{x}_{\mathrm{e}}|^{\beta}\,,\qquad(\tilde{A}_{12},\tilde{A}_{21})(\mathbf{p},\mathbf{x}_{\mathrm{e}})=0\,,\] for either \(\mathbf{x}_{\mathrm{e}}\coloneqq(0,x_{2})\) or \(\mathbf{x}_{\mathrm{e}}\coloneqq(h,x_{2})\).

(vi) For any \(\mathbf{p}\in\mathbb{R}^{2}\) and \(\mathbf{x}\in\Omega\setminus\{\frac{\varepsilon}{2}\leq x_{1}\leq h-\frac{\varepsilon}{2}\}\), \(\tilde{A}_{1}(\mathbf{p},\mathbf{x})\leq-\lambda\,.\)

(vii) The nonlinear boundary condition (B.5)\({}_{2}\) satisfies the following conditions:

* For any \((\mathbf{p},z,\mathbf{x})\in\mathbb{R}^{2}\times\mathbb{R}\times\Gamma_{1}\), \(D_{\mathbf{p}}\tilde{B}(\mathbf{p},z,\mathbf{x})\cdot\mathbf{\nu}^{(1)}(\mathbf{x})\geq\lambda\), where \(\mathbf{\nu}^{(1)}\) is the unit normal vector on \(\Gamma_{1}\) pointing into \(\Omega\).
(b) For any \((\mathbf{p},z)\in\mathbb{R}^{2}\times\mathbb{R}\) and \(k=1,2,3\), \[\|\tilde{B}(Dg_{\text{so}},g_{\text{so}},\cdot)\|_{C^{3}(\overline{\Omega}\setminus\{\frac{h}{3}\leq x_{1}\leq\frac{2h}{3}\})}+\|D^{k}_{(\mathbf{p},z)}\tilde{B}(\mathbf{p},z,\cdot)\|_{C^{3}(\overline{\Omega})}\leq M\,,\] \[\|D_{\mathbf{p}}\tilde{B}(\mathbf{p},z,\cdot)\|_{C^{0}(\overline{\Omega})}\leq\lambda^{-1}\,,\] \[D_{z}\tilde{B}(\mathbf{p},z,\mathbf{x})\leq-\lambda\qquad\text{for all }\mathbf{x}\in\Gamma_{1}\,,\] \[D_{p_{1}}\tilde{B}(\mathbf{p},z,\mathbf{x})\leq-\lambda\qquad\text{for all }\mathbf{x}\in\Gamma_{1}\setminus\{\varepsilon\leq x_{1}\leq h-\varepsilon\}\,.\]

(c) There exist \(v\in C^{3}(\overline{\Gamma_{1}})\) and a nonhomogeneous linear operator \[L(\mathbf{p},z,\mathbf{x})=\mathbf{b}^{(1)}(\mathbf{x})\cdot\mathbf{p}+b^{(1)}_{0}(\mathbf{x})z+g_{1}(\mathbf{x})\] defined for \(\mathbf{x}\in\Gamma_{1}\) and \((\mathbf{p},z)\in\mathbb{R}^{2}\times\mathbb{R}\), satisfying \[\|v\|_{C^{3}(\Omega)}+\|(\mathbf{b}^{(1)},b^{(1)}_{0},g_{1})\|_{C^{3}(\overline{\Gamma_{1}})}\leq M\,,\] such that, for any \((\mathbf{p},z,\mathbf{x})\in\mathbb{R}^{2}\times\mathbb{R}\times\Gamma_{1}\), \[\big{|}\tilde{B}(\mathbf{p},z,\mathbf{x})-L(\mathbf{p},z,\mathbf{x})\big{|}\leq\sigma\big{(}|\mathbf{p}-Dv(\mathbf{x})|+|z-v(\mathbf{x})|\big{)}\,,\] \[\big{|}D_{\mathbf{p}}\tilde{B}(\mathbf{p},z,\mathbf{x})-\mathbf{b}^{(1)}(\mathbf{x})\big{|}+\big{|}D_{z}\tilde{B}(\mathbf{p},z,\mathbf{x})-b^{(1)}_{0}(\mathbf{x})\big{|}\leq\sigma\,.\]

(d) Obliqueness requirements: For the interior unit normal vector \(\mathbf{\nu}^{(\text{w})}\) on \(\Gamma_{3}\) to \(\Omega\), \[\mathbf{b}^{(\text{w})}\cdot\mathbf{\nu}^{(\text{w})}\geq\lambda\,,\quad b^{(\text{w})}_{1}\leq 0\qquad\text{on }\Gamma_{3}\,,\] \[\mathbf{b}^{(\text{w})}=(0,1)\,\text{ on }\Gamma_{3}\setminus\{\varepsilon\leq x_{1}\leq h-\varepsilon\}\,,\qquad\|\mathbf{b}^{(\text{w})}\|_{C^{3}(\overline{\Gamma_{3}})}\leq M\,.\]

(e) \(\tilde{B}(\mathbf{0},0,\cdot)\equiv 0\quad\text{on }\Gamma_{1}\setminus\{\varepsilon\leq x_{1}\leq h-\varepsilon\}\,.\)

**Proposition B.1** ([10, Proposition 4.7.2]).: _Fix constants \(h,t_{1},t_{2},t_{3}\in(0,\infty)\) and \(t_{0},t_{h}\in[0,\infty)\). For constants \(\lambda>0\), \(M<\infty\), \(\alpha\in(0,1)\), and \(C_{\text{so}}>0\), suppose that domain \(\Omega\subseteq\mathbb{R}^{2}\) satisfies conditions (B.1)-(B.4), and, for \(\beta\in[\frac{1}{2},1)\), \(\delta\in(0,1)\), \(\sigma\in(0,1)\), and \(\varepsilon\in(0,\frac{h}{10})\), that the nonlinear boundary value problem (B.5) satisfies conditions (i)-(vii). Then there exist \(\alpha_{1}\in(0,\frac{1}{2})\) depending only on \(\lambda\), and constants \(\delta_{0},\sigma\in(0,1)\) depending only on \((\lambda,M,C_{\text{so}},\alpha,\beta,\varepsilon)\) such that, under the further requirement \(\delta\in[0,\delta_{0})\), the boundary value problem (B.5) has a unique solution \(u\in C(\overline{\Omega})\cap C^{1}(\overline{\Omega}\setminus(\overline{\Gamma_{2}}\cup\overline{\Gamma_{0}}))\cap C^{2}(\Omega)\). Moreover, \(u\) satisfies_ (B.6) \[\|u\|_{0,\overline{\Omega}}\leq C\,,\qquad|u(\mathbf{x})-g_{\text{so}}(\mathbf{x})|\leq C\min\{x_{1},h-x_{1}\}\quad\text{in }\Omega\,,\] _where \(C>0\) depends only on \((\lambda,M,C_{\text{so}},\varepsilon)\)._
Furthermore, \(u\in C(\overline{\Omega})\cap C^{2,\alpha_{1}}(\overline{\Omega}\setminus \overline{\Gamma_{2}}\cup\overline{\Gamma_{0}})\) satisfies_ (B.7) \[\|u\|_{2,\alpha_{1},\overline{\Omega}\cap\{s<x_{1}<h-s\}}\leq C_{s}\] _for each \(s\in(0,\frac{h}{10}),\) where \(C_{s}>0\) depends only on \((\lambda,M,C_{\text{so}},\alpha,\beta,\varepsilon,s)\)._ **Proposition B.2** ([10, Proposition 4.8.7]).: _Fix constants \(h,t_{1},t_{2},t_{3}\in(0,\infty)\), \(t_{0}=0,\) and \(t_{h}\geq 0\). For constants \(\lambda>0\), \(M<\infty\), \(\alpha\in(0,1)\), and \(C_{\text{so}}>0\), domain \(\Omega\subseteq\mathbb{R}^{2}\) satisfies conditions (B.1)-(B.4) with changes: \(P_{3}=P_{4}=(0,0)\) and \(\overline{\Gamma_{0}}=\{(0,0)\}\). For \(\beta\in[\frac{1}{2},1)\), \(\delta\in(0,1)\), \(\sigma\in(0,1)\), and \(\varepsilon\in(0,\frac{h}{10}),\) the nonlinear boundary value problem (B.5) satisfies conditions (ii), (iv), and (vii) above, and (i*), (iii*), (v*), and (vi*) below_ * _For any_ \(\mathbf{x}\in\Omega\) _and_ \(\mathbf{p},\mathbf{\kappa}=(\kappa_{1}\kappa_{2})\in\mathbb{R}^{2},\)__ \[\min\big{\{}\lambda\operatorname{dist}(\mathbf{x},\overline{\Gamma_{0}})+\delta, \lambda\operatorname{dist}(x,\Gamma_{2})\big{\}}|\mathbf{\kappa}|^{2}\leq\sum_{i,j=1 }^{2}\tilde{A}_{ij}(\mathbf{p},\mathbf{x})\kappa_{i}\kappa_{j}\leq\lambda^{-1}|\mathbf{\kappa}|^ {2}\,,\] \[\big{\|}((\tilde{A}_{ij},\tilde{A}_{i})(Dg_{\text{so}},\cdot),D^{m}_{ \mathbf{p}}(\tilde{A}_{ij},\tilde{A}_{i})(\mathbf{p},\cdot))\big{\|}_{1,\alpha,\Omega \cap\{x_{1}<2\varepsilon\}}^{(-\alpha),\{P_{4}\}}\leq M\qquad\text{for }m=1,2\,.\] _Moreover, for any_ \(\mathbf{x}\in\Omega\cap\{h-\frac{\varepsilon}{2}<x_{1}<h\},\)__ \[\lambda|\mathbf{\mu}|^{2}\leq\sum_{i,j=1}^{2}\frac{\tilde{A}_{ij}(\mathbf{p},\mathbf{x})\mu_{ i}\mu_{j}}{\big{(}\min\{h-x_{1},\delta\}\big{)}^{2-\frac{i+j}{2}}}\leq\lambda^{-1}|\mathbf{\mu}|^{2}\,.\] * _For any_ \(\mathbf{p}\in\mathbb{R}^{2},\)__ \[\|(\tilde{A}_{ij},\tilde{A}_{i})(\mathbf{p},\cdot)\|_{C^{\beta}(\overline{\Omega \cap\{h-2\varepsilon<x_{1}<h\}})}+\|D_{\mathbf{p}}(\tilde{A}_{ij},\tilde{A}_{i})( \mathbf{p},\cdot)\|_{L^{\infty}(\Omega\cap\{h-2\varepsilon<x_{1}<h\})}\leq M\,.\] * _For each_ \((\mathbf{p},\mathbf{x})\in\mathbb{R}^{2}\times(\overline{\Omega}\cap\{h-\varepsilon< x_{1}<h\})\) _and_ \(i,j=1,2,\) _define_ \[\hat{\mathbf{p}}\coloneqq\mathbf{p}-Dg_{\mathrm{so}}(\mathbf{x})\,,\qquad(\tilde{a}_{ij}, \tilde{a}_{i})(\hat{\mathbf{p}},\mathbf{x})\coloneqq(\tilde{A}_{ij},\tilde{A}_{i})( \hat{\mathbf{p}}+Dg_{\mathrm{so}}(\mathbf{x}),\mathbf{x})\,.\] _For any_ \((\mathbf{p},(x_{1},0))\in\mathbb{R}^{2}\times(\Gamma_{3}\cap\{h-\varepsilon<x_{1 }<h\}),\)__ \[(\tilde{a}_{11},\tilde{a}_{22},\tilde{a}_{1})((\hat{p}_{1},-\hat{p}_{2}),(x_{1 },0))=(\tilde{a}_{11},\tilde{a}_{22},\tilde{a}_{1})((\hat{p}_{1},\hat{p}_{2}),(x_{1},0))\,.\] _For any_ \((\mathbf{p},\mathbf{x})\in\mathbb{R}^{2}\times(\Omega\cap\{h-\varepsilon<x_{1}<h\})\) _and_ \(i=1,2,\)__ \[|\tilde{a}_{ii}(\hat{\mathbf{p}},\mathbf{x})-\tilde{a}_{ii}(Dg_{\mathrm{so}}(\mathbf{x}_{e }),\mathbf{x}_{e})|\leq M|\mathbf{x}-\mathbf{x}_{e}|^{\beta},\quad(\tilde{A}_{12},\tilde{ A}_{21})(\mathbf{p},\mathbf{x}_{e})=0\qquad\text{for }\mathbf{x}_{\mathrm{e}}\coloneqq(h,x_{2})\,.\] * _For any_ \(\mathbf{p}\in\mathbb{R}^{2}\) _and_ \(\mathbf{x}\in\Omega\cap\{h-\frac{\varepsilon}{2}<x_{1}<h\},\)__\(\tilde{A}_{1}(\mathbf{p},\mathbf{x})\leq-\lambda\,.\)__ _Then there exist both \(\alpha_{1}\in(0,\frac{1}{2})\) depending only on \((\lambda,\delta)\) and \(\sigma\in(0,1)\) depending 
_only on \((\lambda,\delta,M,C_{\mathrm{so}},\alpha,\beta,\varepsilon)\) such that, under the further requirement \(\delta\in[0,\delta_{0})\), the boundary value problem (B.5) has a unique solution \(u\in C(\overline{\Omega})\cap C^{1}(\overline{\Omega}\setminus(\overline{\Gamma_{2}}\cup\overline{\Gamma_{0}}))\cap C^{2}(\Omega)\). Moreover, \(u\) satisfies_ (B.8) \[\|u\|_{0,\overline{\Omega}}\leq C\,,\qquad|u(\mathbf{x})-g_{\mathrm{so}}(\mathbf{x})|\leq C\min\{x_{1},h-x_{1}\}\quad\text{in }\Omega\,,\] _where \(C>0\) depends only on \((\lambda,\delta,M,C_{\mathrm{so}},\varepsilon)\). Furthermore, \(u\in C(\overline{\Omega})\cap C^{2,\alpha_{1}}(\overline{\Omega}\setminus(\overline{\Gamma_{2}}\cup\overline{\Gamma_{0}}))\) satisfies_ (B.9) \[\|u\|_{2,\alpha_{1},\overline{\Omega\cap\{s<x_{1}<h-s\}}}\leq C_{s}\] _for each \(s\in(0,\frac{h}{10})\), where \(C_{s}>0\) depends only on \((\lambda,\delta,M,C_{\mathrm{so}},\alpha,\beta,\varepsilon,s)\). In addition, \(u\) satisfies_ \[\|u\|_{2,\alpha_{1},\Omega\cap\{x_{1}<\frac{h}{4}\}}^{(-1-\alpha_{1}),\{P_{4}\}}<\hat{C}\,,\] _for constant \(\hat{C}>0\) depending only on \((\lambda,\delta,M,\alpha,\varepsilon)\)._

**Proposition B.3**.: _Fix constants \(h,t_{1},t_{2},t_{3}\in(0,\infty)\) and \(t_{0}=t_{h}=0\). For constants \(\lambda>0\), \(M<\infty\), \(\alpha\in(0,1)\), and \(C_{\mathrm{so}}>0\), suppose that domain \(\Omega\subseteq\mathbb{R}^{2}\) satisfies conditions (B.1)-(B.4) with the changes_:_ \[P_{3}=P_{4}=(0,0)\,,\quad\overline{\Gamma_{0}}=\{(0,0)\}\,;\qquad P_{2}=P_{1}=(h,0)\,,\quad\overline{\Gamma_{2}}=\{(h,0)\}\,.\] _For \(\beta\in[\frac{1}{2},1)\), \(\delta\in(0,1)\), \(\sigma\in(0,1)\), and \(\varepsilon\in(0,\frac{h}{10})\), suppose that the nonlinear boundary value problem (B.5) satisfies conditions_ (ii), (iv)_, and (vii) _above, and_ (i*) _below_:_

(i*) _For any_ \(\mathbf{x}\in\Omega\) _and_ \(\mathbf{p},\mathbf{\kappa}=(\kappa_{1},\kappa_{2})\in\mathbb{R}^{2}\), \[\min\big{\{}\lambda\operatorname{dist}(\mathbf{x},\overline{\Gamma_{0}})+\delta,\lambda\operatorname{dist}(\mathbf{x},\Gamma_{2})+\delta\big{\}}|\mathbf{\kappa}|^{2}\leq\sum_{i,j=1}^{2}\tilde{A}_{ij}(\mathbf{p},\mathbf{x})\kappa_{i}\kappa_{j}\leq\lambda^{-1}|\mathbf{\kappa}|^{2}\,,\] \[\big{\|}((\tilde{A}_{ij},\tilde{A}_{i})(Dg_{\mathrm{so}},\cdot),D_{\mathbf{p}}^{m}(\tilde{A}_{ij},\tilde{A}_{i})(\mathbf{p},\cdot))\big{\|}_{1,\alpha,\Omega\setminus\{2\varepsilon<x_{1}<h-2\varepsilon\}}^{(-\alpha),\{P_{1},P_{4}\}}\leq M\qquad\text{for }m=1,2\,.\]

_Then there exist \(\alpha_{1}\in(0,\frac{1}{2})\) depending only on \((\lambda,\delta)\), and constants \(\delta_{0},\sigma\in(0,1)\) depending only on \((\lambda,\delta,M,C_{\mathrm{so}},\alpha,\beta,\varepsilon)\) such that, under the further requirement \(\delta\in[0,\delta_{0})\), the boundary value problem (B.5) has a unique solution \(u\in C(\overline{\Omega})\cap C^{1}(\overline{\Omega}\setminus(\overline{\Gamma_{2}}\cup\overline{\Gamma_{0}}))\cap C^{2}(\Omega)\). Moreover, \(u\) satisfies_ (B.10) \[\|u\|_{0,\overline{\Omega}}\leq C\,,\qquad|u(\mathbf{x})-g_{\mathrm{so}}(\mathbf{x})|\leq C\min\{x_{1},h-x_{1}\}\quad\text{in }\Omega\,,\] _where \(C>0\) depends only on \((\lambda,\delta,M,C_{\mathrm{so}},\varepsilon)\)._
_Furthermore, \(u\in C(\overline{\Omega})\cap C^{2,\alpha_{1}}(\overline{\Omega}\setminus(\overline{\Gamma_{2}}\cup\overline{\Gamma_{0}}))\) satisfies_ (B.11) \[\|u\|_{2,\alpha_{1},\overline{\Omega\cap\{s<x_{1}<h-s\}}}\leq C_{s}\] _for each \(s\in(0,\frac{h}{10})\), where \(C_{s}>0\) depends only on \((\lambda,\delta,M,C_{\mathrm{so}},\alpha,\beta,\varepsilon,s)\). In addition, \(u\) satisfies_ \[\|u\|_{2,\alpha_{1},\Omega\setminus\{\frac{h}{4}\leq x_{1}\leq\frac{3h}{4}\}}^{(-1-\alpha_{1}),\{P_{1},P_{4}\}}<\hat{C}\,,\] _for constant \(\hat{C}>0\) depending only on \((\lambda,\delta,M,\alpha,\varepsilon)\)._

### Regularized distance function

**Lemma B.4** ([10, Lemma 13.9.1]).: _For any \(g(\cdot)\in C^{0,1}([-1,1])\) that is positive on \((-1,1)\), there exists a function \(\delta_{g}\in C^{\infty}(\overline{R_{\infty}}\setminus\overline{R_{g}})\), the regularized distance, such that_

1. \(|\delta_{g}(s,t^{\prime})-\mathrm{dist}((s,t^{\prime}),\Sigma_{g})|\leq\frac{1}{2}\mathrm{dist}((s,t^{\prime}),\Sigma_{g})\;\) _for all_ \((s,t^{\prime})\in\mathbb{R}^{2}\setminus\overline{\Sigma_{g}};\)
2. \(|D^{m}\delta_{g}(s,t^{\prime})|\leq C(m)\big{(}\mathrm{dist}((s,t^{\prime}),\Sigma_{g})\big{)}^{1-m}\) _for all_ \((s,t^{\prime})\in\mathbb{R}^{2}\setminus\overline{\Sigma_{g}}\) _and_ \(m=1,2,\cdots,\) _where_ \(C(m)>0\) _depends only on_ \(m;\)
3. \(\delta_{g}(s,t^{\prime})\geq C_{\mathrm{rd}}(t^{\prime}-g(s))\) _for all_ \((s,t^{\prime})\in\overline{R_{g}}\setminus\overline{\Sigma_{g}},\) _where_ \(C_{\mathrm{rd}}>0\) _depends only on_ \(\mathrm{Lip}[g];\)
4. _Let_ \(g_{k},g\in C^{0,1}([-1,1])\) _be uniformly bounded Lipschitz functions for all_ \(k\in\mathbb{N},\) _with_ \(g_{k}(s)\) _converging to_ \(g(s)\) _uniformly on_ \([-1,1]\)_. Then_ \(\lim\limits_{k\to\infty}\|\delta_{g_{k}}(s,t^{\prime})-\delta_{g}(s,t^{\prime})\|_{C^{m}(K)}=0\) _for any_ \(m=0,1,2,\cdots,\) _and any compact set_ \(K\subseteq\overline{R_{\infty}}\setminus\overline{R_{g}}.\)

### Leray-Schauder degree theorem

**Definition B.5** (Compact map).: _Let \(X\) and \(Y\) be Banach spaces. Fix any open subset \(G\subseteq X\). A map \(\mathbf{f}:\overline{G}\to Y\) is called compact if_

1. \(\mathbf{f}\) _is continuous_\(;\)
2. \(\mathbf{f}(U)\) _is precompact in_ \(Y\) _for any bounded subset_ \(U\subseteq\overline{G}\)_._

**Definition B.6**.: _Let \(X\) be a Banach space, and let \(G\) be a bounded open subset of \(X\). Write \(V(G,X)\) to denote the set of all maps \(\mathbf{f}:\overline{G}\to X\) satisfying_

1. \(\mathbf{f}\) _is a compact map in the sense of_ Definition B.5_;_
2. \(\mathbf{f}\) _has no fixed point on the boundary_ \(\partial G\)_._

**Definition B.7**.: _Let \(X\) be a Banach space, and let \(G\) be a bounded open subset of \(X\). Two maps \(\mathbf{f},\mathbf{g}\in V(G,X)\) are called compactly homotopic on \(\partial G\) if there exists a map \(\mathbf{H}\) with the following properties_:_

1. \(\mathbf{H}:\overline{G}\times[0,1]\to X\) _is a compact map_\(;\)
2. \(\mathbf{H}(\mathbf{x},\tau)\neq\mathbf{x}\) _for all_ \((\mathbf{x},\tau)\in\partial G\times[0,1];\)
3. \(\mathbf{H}(\mathbf{x},0)=\mathbf{f}(\mathbf{x})\) _and_ \(\mathbf{H}(\mathbf{x},1)=\mathbf{g}(\mathbf{x})\) _for all_ \(\mathbf{x}\in\overline{G}.\)

We write \(\partial G:\mathbf{f}\cong\mathbf{g}\) if \(\mathbf{f}\) and \(\mathbf{g}\) are compactly homotopic on \(\partial G\) in the sense of Definition B.7, and we call \(\mathbf{H}\) a compact homotopy.
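For orientation, consider a simple finite-dimensional example (ours, added as an illustration; it is not taken from the cited references). Let \(X=\mathbb{R}^{n}\), let \(G=\{\mathbf{x}\in X\,:\,|\mathbf{x}|<1\}\), and let \(\mathbf{f}(\mathbf{x})=\frac{1}{2}\mathbf{x}\). Since \(X\) is finite-dimensional, \(\mathbf{f}\) maps bounded sets to precompact sets, so \(\mathbf{f}\) is a compact map; its only fixed point is \(\mathbf{x}=\mathbf{0}\in G\), hence \(\mathbf{f}\in V(G,X)\). The map \[\mathbf{H}(\mathbf{x},\tau)\coloneqq\frac{1-\tau}{2}\,\mathbf{x}\] is a compact homotopy between \(\mathbf{f}\) and the constant map \(\mathbf{g}\equiv\mathbf{0}\): for \(|\mathbf{x}|=1\), the equality \(\mathbf{H}(\mathbf{x},\tau)=\mathbf{x}\) would force \(\frac{1-\tau}{2}=1\), which is impossible for \(\tau\in[0,1]\). Hence \(\partial G:\mathbf{f}\cong\mathbf{g}\), and properties (i) and (iv) of Theorem B.1 below give \(\mathbf{Ind}(\mathbf{f},G)=1\), consistent with property (ii) and the fixed point \(\mathbf{x}=\mathbf{0}\).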
**Theorem B.1** (Leray-Schauder degree theorem).: _Let \(X\) be a Banach space, and let \(G\) be a bounded open subset of \(X\). Then, for each map \(\mathbf{f}\in V(G,X),\) a unique integer \(\mathbf{Ind}(\mathbf{f},G)\) can be assigned with the following properties\(:\)_

1. _If_ \(\mathbf{f}(\cdot)\equiv\mathbf{x}_{0}\) _on_ \(\overline{G}\) _for some fixed_ \(\mathbf{x}_{0}\in G,\) _then_ \(\mathbf{Ind}(\mathbf{f},G)=1;\)
2. _If_ \(\mathbf{Ind}(\mathbf{f},G)\neq 0,\) _then there exists a fixed point_ \(\mathbf{x}\in G\) _of map_ \(\mathbf{f},\) _that is,_ \(\mathbf{f}(\mathbf{x})=\mathbf{x};\)
3. \(\mathbf{Ind}(\mathbf{f},G)=\Sigma_{j=1}^{n}\mathbf{Ind}(\mathbf{f},G_{j}),\) _whenever_ \(\mathbf{f}\in V(G,X)\cap(\cap_{j=1}^{n}V(G_{j},X)),\) _where_ \(G_{i}\cap G_{j}=\varnothing\) _for_ \(i\neq j\) _and_ \(\overline{G}=\cup_{j=1}^{n}\overline{G_{j}};\)
4. _If_ \(\partial G:\mathbf{f}\cong\mathbf{g},\) _then_ \(\mathbf{Ind}(\mathbf{f},G)=\mathbf{Ind}(\mathbf{g},G).\)

_Such a number \(\mathbf{Ind}(\mathbf{f},G)\) is called the fixed point index of \(\mathbf{f}\) over \(G\)._

A generalized homotopy invariance of the fixed point index is given by the following theorem.

**Theorem B.2** ([48, §13.6, A4*]).: _Let \(X\) be a Banach space, and let \(t_{2}>t_{1}\). Let \(U\subseteq X\times[t_{1},t_{2}],\) and let \(U_{t}=\{\mathbf{x}\,:\,(\mathbf{x},t)\in U\}\). Then, for any compact map \(\mathbf{h}:\overline{U}\to X\) with \(\mathbf{h}(\mathbf{x},t)\neq\mathbf{x}\) on \(\partial U,\)_ \[\mathbf{Ind}(\mathbf{h}(\cdot,t),U_{t})=\mathrm{const.}\qquad\text{for all }t\in[t_{1},t_{2}]\,,\] _provided that \(U\) is a bounded open set in \(X\times[t_{1},t_{2}]\)._

### Regularity theorem

**Theorem B.3** ([1, Theorem 3.1]).: _For constants \(r,\)\(R>0,\) define \(Q_{r,R}^{+}\) by_ \[Q_{r,R}^{+}\coloneqq\left\{(x,y)\in\mathbb{R}^{2}\,:\,0<x<r,\,|y|<R\right\}.\] _For positive constants \(a,\,b,\,M,\,N,\) and \(\kappa\in(0,\frac{1}{4}),\) suppose that \(\psi\in C(\overline{Q_{r,R}^{+}})\cap C^{2}(Q_{r,R}^{+})\) satisfies \(\psi>0,\,-Mx\leq\psi_{x}\leq\frac{2-\kappa}{a}x,\) and_ \[(2x-a\psi_{x}+O_{1})\psi_{xx}+O_{2}\psi_{xy}+(b+O_{3})\psi_{yy}-(1+O_{4})\psi_{x}+O_{5}\psi_{y}=0\qquad\text{in }Q_{r,R}^{+}\,,\] _and \(\psi=0\) on \(\partial Q_{r,R}^{+}\cap\{x=0\},\) where \(O_{k}(x,y),\,k=1,\cdots,5,\) are continuously differentiable and_ \[\frac{|O_{1}(x,y)|}{x^{2}}+\frac{|D_{(x,y)}O_{1}(x,y)|}{x}+\sum_{k=2}^{5}\Big{(}\frac{|O_{k}(x,y)|}{x}+|D_{(x,y)}O_{k}(x,y)|\Big{)}\leq N\qquad\text{in }Q_{r,R}^{+}\,.\] _Then \(\psi\in C^{2,\alpha}(\overline{Q_{r/2,R/2}^{+}})\) for any \(\alpha\in(0,1)\) and_ \[\psi_{xx}(0,y)=a^{-1}\,,\quad\ \psi_{xy}(0,y)=\psi_{yy}(0,y)=0\qquad\text{for all }y\in(-\tfrac{R}{2},\tfrac{R}{2})\,.\]

### General framework for the convexity of transonic shocks

We state a general framework for the transonic shock as a free boundary, developed in Chen-Feldman-Xiang [12]. Let \(\Omega\) be an open, bounded, and connected set, and \(\partial\Omega=\Gamma_{\text{shock}}\cup\Gamma_{1}\cup\Gamma_{2},\) where the closed curve segment \(\Gamma_{\text{shock}}\) is a transonic shock that separates a pseudo-supersonic constant state denoted as state (2) outside \(\Omega\) from a pseudo-subsonic (non-constant) state inside \(\Omega,\) and \(\Gamma_{1}\cup\Gamma_{2}\) is a fixed boundary. We now present a more structural framework for domain \(\Omega\) under consideration.

**Framework** (A). The structural framework for domain \(\Omega\):
1. Domain \(\Omega\) is bounded, with boundary \(\partial\Omega\) that is a continuous closed curve without self-intersections. Furthermore, \(\partial\Omega\) is piecewise \(C^{1,\alpha}\) up to the endpoints of each smooth part for some \(\alpha\in(0,1),\) and the number of smooth parts is finite.
2. At each corner point of \(\partial\Omega,\) the angle \(\theta\) between the arcs meeting at that point from the interior of \(\Omega\) satisfies \(\theta\in(0,\pi).\)
3. \(\partial\Omega=\Gamma_{\text{shock}}\cup\Gamma_{1}\cup\Gamma_{2},\) where \(\Gamma_{\text{shock}},\,\Gamma_{1},\) and \(\Gamma_{2}\) are connected and disjoint, and both \(\Gamma_{\text{shock}}^{0}\) and \(\Gamma_{1}\cup\Gamma_{2}\) are non-empty. Moreover, if \(\Gamma_{i}\neq\varnothing\) for some \(i\in\{1,2\},\) then its relative interior is nonempty, _i.e._, \(\Gamma_{i}^{0}\neq\varnothing.\)
4. \(\Gamma_{\text{shock}}\) includes its endpoints \(A\) and \(B\) with corresponding unit tangent vectors \(\boldsymbol{\tau}_{A}\) and \(\boldsymbol{\tau}_{B}\) pointing to the interior of \(\Gamma_{\text{shock}},\) respectively. If \(\Gamma_{1}\neq\varnothing,\) then \(A\) is a common endpoint of \(\Gamma_{\text{shock}}\) and \(\Gamma_{1}\). If \(\Gamma_{2}\neq\varnothing,\) then \(B\) is a common endpoint of \(\Gamma_{\text{shock}}\) and \(\Gamma_{2}.\)

Let \(\phi=\varphi-\varphi_{2}\). Then \(\phi\) satisfies the following equation: (B.12) \[(c^{2}-\varphi_{\xi_{1}}^{2})\phi_{\xi_{1}\xi_{1}}-2\varphi_{\xi_{1}}\varphi_{\xi_{2}}\phi_{\xi_{1}\xi_{2}}+(c^{2}-\varphi_{\xi_{2}}^{2})\phi_{\xi_{2}\xi_{2}}=0\qquad\text{in }\Omega\,,\] with the boundary conditions: (B.13) \[\phi=0\,,\quad\rho(|D\phi+D\varphi_{2}|^{2},\phi+\varphi_{2})D(\phi+\varphi_{2})\cdot\boldsymbol{\nu}=\rho_{2}D\varphi_{2}\cdot\boldsymbol{\nu}\qquad\text{on }\Gamma_{\text{shock}}\,.\] If \(\boldsymbol{\tau}_{A}\neq\pm\boldsymbol{\tau}_{B},\) define the cone: (B.14) \[\text{Con}\coloneqq\{r\boldsymbol{\tau}_{A}+s\boldsymbol{\tau}_{B}\,:\,r,\,s>0\}\,.\]

**Theorem B.4** ([12, Theorem 2.1]).: _Assume that domain \(\Omega\) satisfies Framework (A). Assume that \(\phi\in C^{1}(\overline{\Omega})\cap C^{2}(\Omega\cup\Gamma_{\text{shock}}^{0})\cap C^{3}(\Omega)\) is a solution of (B.12)-(B.13), which is not a constant state in \(\Omega\). Moreover, let \(\phi\) satisfy the following conditions\(:\)_

1. _The entropy condition holds across_ \(\Gamma_{\text{shock}}\)_:_ \(\rho(|D\varphi|^{2},\varphi)>\rho_{2}\) _and_ \(\phi_{\boldsymbol{\nu}}<0\) _along_ \(\Gamma_{\text{shock}},\) _where_ \(\boldsymbol{\nu}\) _is the unit normal vector on_ \(\Gamma_{\text{shock}}\) _pointing to_ \(\Omega;\)
2. _There exist constants_ \(C_{1}>0\) _and_ \(\alpha_{1}\in(0,1)\) _such that_ \(\left\|\phi\right\|_{1+\alpha_{1},\overline{\Omega}}\leq C_{1};\)
3. _Equation (B.12) is strictly elliptic in_ \(\Omega\cup\Gamma_{\text{shock}}^{0}\)_:_ \(c^{2}-|D\varphi|^{2}>0\) _in_ \(\Omega\cup\Gamma_{\text{shock}}^{0};\)
4. \(\Gamma_{\text{shock}}\) _is_ \(C^{2}\) _in its relative interior_\(;\)
5. \(\boldsymbol{\tau}_{A}\neq\pm\boldsymbol{\tau}_{B},\) _and_ \(\{P+\text{Con}\}\cap\Omega=\varnothing\) _for any point_ \(P\in\overline{\Gamma_{\text{shock}}};\)
6. 
_There exists a vector_ \(\boldsymbol{e}\in\text{Con}\) _such that one of the following conditions holds\(:\)__ * \(\Gamma_{1}\neq\varnothing,\) _and the directional derivative_ \(\phi_{\boldsymbol{e}}\) _cannot have a local maximum point on_ \(\Gamma_{1}^{0}\cup\{A\}\) _and a local minimum point on_ \(\Gamma_{2}^{0};\)__ * \(\Gamma_{2}\neq\varnothing,\) _and_ \(\phi_{\boldsymbol{e}}\) _cannot have a local minimum point on_ \(\Gamma_{1}^{0}\) _and a local maximum point on_ \(\Gamma_{2}^{0}\cup\{B\};\)__ * \(\phi_{\mathbf{e}}\) _cannot have a local minimum point on_ \(\Gamma_{1}\cup\Gamma_{2};\)__ _where all the local maximum or minimum points are relative to_ \(\overline{\Omega}.\)__ _Then the free boundary \(\Gamma_{\mathrm{shock}}\) is a convex graph in each direction \(\mathbf{e}\in\mathrm{Con}.\) That is, there exists a concave function \(f\in C^{1,\alpha}(\mathbb{R})\) in the orthogonal coordinate system \(\mathbf{\xi}\eqqcolon S\mathbf{e}+T\mathbf{e}^{\perp}\) such that_ (B.15) \[\begin{split}&\Gamma_{\mathrm{shock}}=\left\{\mathbf{\xi}(S,T)\in \mathbb{R}^{2}\,:\,S=f(T),\,T_{A}<T<T_{B}\right\},\\ &\Omega\cap\left\{\mathbf{\xi}\,:\,T_{A}<T<T_{B}\right\}\subseteq \left\{\mathbf{\xi}\,:\,S<f(T)\right\}.\end{split}\] _Moreover, the free boundary \(\Gamma_{\mathrm{shock}}\) is strictly convex in its relative interior in the sense that, if \(P=\mathbf{\xi}(S,T)\in\Gamma_{\mathrm{shock}}^{0}\) and \(f^{\prime\prime}(T)=0,\) then there exists an integer \(m>1,\) independent of the choice of \(\mathbf{e}\in\mathrm{Con}\) such that, for any \(n=2,\cdots,2m-1,\)_ (B.16) \[f^{(n)}(T)=0\,,\qquad f^{(2m)}(T)<0\,.\] _The number of the points at which \(f^{\prime\prime}(T)=0\) is at most finite on each compact subset of \(\Gamma_{\mathrm{shock}}^{0}.\) In particular, the free boundary \(\Gamma_{\mathrm{shock}}\) cannot contain any straight segment._ Furthermore, under some additional assumptions, we can show that the shock curve is uniformly convex in its relative interior in the sense defined in the following theorem: **Theorem B.5** ([12, Theorem 2.3]).: _Let \(\Omega\) and \(\phi\) be as in Theorem B.4. Furthermore, assume that, for any unit vector \(\mathbf{e}\in\mathbb{R}^{2},\) the boundary part \(\Gamma_{1}\cup\Gamma_{2}\) can be further decomposed so that_ * \(\Gamma_{1}\cup\Gamma_{2}=\hat{\Gamma}_{0}\cup\hat{\Gamma}_{1}\cup\hat{\Gamma}_ {2}\cup\hat{\Gamma}_{3},\) _where some_ \(\hat{\Gamma}_{i}\) _may be empty,_ \(\hat{\Gamma}_{i}\) _is connected for each_ \(i=0,1,2,3,\) _and all curves_ \(\hat{\Gamma}_{i}\) _are located along_ \(\partial\Omega\) _in the order of their indices, i.e., non-empty sets_ \(\hat{\Gamma}_{j}\) _and_ \(\hat{\Gamma}_{k},\,k>j,\) _have a common endpoint if and only if either_ \(k=j+1\) _or_ \(\hat{\Gamma}_{i}=\varnothing\) _for all_ \(i=j+1,\cdots,k-1\)_. Also, the non-empty set_ \(\hat{\Gamma}_{i}\) _with the smallest_ \((\)_resp. largest\()\) _index has the common endpoint_ \(A\)__\((\)_resp._ \(B)\) _with_ \(\Gamma_{\mathrm{shock}}\)_. 
Moreover, if_ \(\hat{\Gamma}_{i}\neq\varnothing\) _for some_ \(i=0,1,2,3,\) _then its relative interior is nonempty_:_ \(\hat{\Gamma}_{i}^{0}\neq\varnothing;\)__ * \(\phi_{\mathbf{e}}\) _is constant along_ \(\hat{\Gamma}_{0}\) _and_ \(\hat{\Gamma}_{3};\)__ * _For_ \(i=1,2,\) _if_ \(\phi_{\mathbf{e}}\) _attains its local minimum or maximum relative to_ \(\overline{\Omega}\) _on_ \(\hat{\Gamma}_{i}^{0},\) _then_ \(\phi_{\mathbf{e}}\) _is constant along_ \(\hat{\Gamma}_{i};\)__ * _One of the following two conditions holds_:__ * _Either_ \(\hat{\Gamma}_{1}=\varnothing\) _or_ \(\hat{\Gamma}_{2}=\varnothing;\)__ * _Both_ \(\hat{\Gamma}_{1}\) _and_ \(\hat{\Gamma}_{2}\) _are non-empty, and_ \(\hat{\Gamma}_{3}=\varnothing,\) _so that_ \(\hat{\Gamma}_{2}\) _has the common endpoint_ \(B\) _with_ \(\Gamma_{\mathrm{shock}}.\) _At point_ \(B,\) _the following conditions hold_:__ * _If_ \(\mathbf{\nu}_{\mathrm{sh}}(B)\cdot\mathbf{e}<0,\) _then_ \(\phi_{\mathbf{e}}\) _cannot attain its local maximum relative to_ \(\overline{\Omega}\) _at_ \(B;\)__ * _If_ \(\mathbf{\nu}_{\mathrm{sh}}(B)\cdot\mathbf{e}=0,\) _then_ \(\phi_{\mathbf{e}}(B)=\phi_{\mathbf{e}}(Q^{*})\) _for the common endpoint_ \(Q^{*}\) _of_ \(\hat{\Gamma}_{1}\) _and_ \(\hat{\Gamma}_{2};\)__ _where_ \(\mathbf{\nu}_{\mathrm{sh}}(B)\coloneqq\lim_{\Gamma_{\mathrm{shock}}^{0}\ni P\to B} \mathbf{\nu}(P),\) _which exists since_ \(\Gamma_{\mathrm{shock}}\) _is_ \(C^{1}\) _up to_ \(B\)_._ _Then the shock function \(f(T)\) in (B.15) satisfies \(f^{\prime\prime}(T)<0\) for all \(T\in(T_{A},T_{B}),\) that is, \(\Gamma_{\mathrm{shock}}\) is uniformly convex on closed subsets of its relative interior._ **Acknowledgements:** The research of Gui-Qiang G. Chen is supported in part by the UK Engineering and Physical Sciences Research Council Awards EP/L015811/1, EP/V008854/1, and EP/V051121/1. The research of Alex Cliffe is supported in part by the UK Engineering and Physical Sciences Research Council Awards EP/N509711/1 and EP/R513295/1. The research of Feimin Huang is supported in part by the National Key R&D Program of China No. 2021YFA1000800, and the National Natural Sciences Foundation of China No. 12288201. The research of Song Liu is supported in part by the Hong Kong Institute of Advanced Study and the GRF grant CityU 11300420. The research of Qin Wang is supported in part by the National Natural Sciences Foundation of China No. 12261100.
2307.14743
Turning Whisper into Real-Time Transcription System
Whisper is one of the recent state-of-the-art multilingual speech recognition and translation models, however, it is not designed for real time transcription. In this paper, we build on top of Whisper and create Whisper-Streaming, an implementation of real-time speech transcription and translation of Whisper-like models. Whisper-Streaming uses local agreement policy with self-adaptive latency to enable streaming transcription. We show that Whisper-Streaming achieves high quality and 3.3 seconds latency on unsegmented long-form speech transcription test set, and we demonstrate its robustness and practical usability as a component in live transcription service at a multilingual conference.
Dominik Macháček, Raj Dabre, Ondřej Bojar
2023-07-27T10:00:05Z
http://arxiv.org/abs/2307.14743v2
# Turning Whisper into Real-Time Transcription System

###### Abstract

Whisper is one of the recent state-of-the-art multilingual speech recognition and translation models; however, it is not designed for real-time transcription. In this paper, we build on top of Whisper and create **Whisper-Streaming**, an implementation of real-time speech transcription and translation of Whisper-like models. Whisper-Streaming uses a local agreement policy with self-adaptive latency to enable streaming transcription. We show that Whisper-Streaming achieves high quality and 3.3 seconds latency on an unsegmented long-form speech transcription test set, and we demonstrate its robustness and practical usability as a component in a live transcription service at a multilingual conference.

National Institute of Information and Communications Technology, Kyoto, Japan\({}^{2}\) \({}^{1}\){machacek,bojar}@ufal.mff.cuni.cz, \({}^{2}\)[email protected]

## 1 Introduction

Whisper (Radford et al., 2022) is a recent state-of-the-art system for automatic speech recognition (ASR) for 97 languages and for translation from 96 languages into English. Whisper models are publicly available under the MIT license. However, the current public implementations of Whisper inference usually allow only offline processing of audio documents that are completely available at the time of processing, without any processing time constraints.

Real-time streaming mode is useful in certain situations, e.g. for live captioning. It means that the source speech audio has to be processed at the time when it is being recorded. The transcripts or translations have to be delivered with a short additive latency, e.g. within 2 seconds. There are some implementations of Whisper for streaming, but their approach is rather naive: for example, they first record a 30-second audio segment and then process it. The latency of these methods is large, and the quality at the segment boundaries is low because a simple content-unaware segmentation can split a word in the middle.

In this work, we implement, evaluate and demonstrate Whisper in simultaneous streaming mode using the simple but effective LocalAgreement (Liu et al., 2020) algorithm. LocalAgreement is one particular streaming policy that can be used to convert any full-sequence-to-full-sequence model to operate in simultaneous streaming mode. It was used by the winning system CUNI-KIT at the IWSLT 2022 simultaneous speech translation shared task (Polak et al., 2022). We call our implementation **Whisper-Streaming**, although it is applicable to any model with an API similar to Whisper. According to our evaluation, it achieves 3.3 seconds latency on average for English ASR on the European Parliament speech test set ESIC (Machacek et al., 2021), when running on an NVIDIA A40 GPU, a fast hardware processing unit. We also test it on German and Czech ASR and present the results and suggestions for the optimal parameters.

The contribution of this work is the implementation, evaluation and demonstration of Whisper-Streaming. Given that Whisper-Streaming can be quickly and easily packaged into a product, we want to ensure that the most recent scientific results, such as the algorithm for simultaneous mode, are accessible to and can be used by industrial researchers and engineers. Furthermore, we want to reliably evaluate the performance of our implementation and share the results with the research community, to further drive research and development of real-time transcription solutions which have real-life use cases.
We expect that our results can be used as strong baselines for future comparison. We make Whisper-Streaming publicly available1 along with a demonstration video.2

Footnote 1: [https://github.com/ufal/whisper_streaming](https://github.com/ufal/whisper_streaming)

Footnote 2: [https://vimeo.com/840442741](https://vimeo.com/840442741)

## 2 Background

In this section, we describe the background for the back-end components of our work.

**Whisper** (Radford et al., 2022) is a Transformer model for speech-to-text transcription and translation trained on a massive amount of multilingual data. We use the "large-v2"3 model because it achieves the highest quality of all Whisper model size options. Since the original release of the whisper backend is rather slow, we use the faster-whisper4 reimplementation of Whisper inference using CTranslate2, a fast inference engine for Transformer models. It is approximately four times faster than the standard implementation (as reported by the authors). We use it with 16-bit float precision.

Footnote 3: [https://huggingface.co/openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2)

Although we primarily use Whisper, the underlying model in our implementation can be easily replaced by any other speech-to-text transcription or translation model (e.g. MMS, Pratap et al., 2023) if it produces word-level timestamps and punctuation.

**Streaming.** Let us assume a model \(M\) that processes a source sequence \(c_{1},\cdots,c_{n}\) into a target sequence \(t_{1},\cdots,t_{m}\), given a previous target \(s\) that can be used for inter-sentence coherence. Streaming involves receiving the source sequence consecutively, one chunk at a time, and producing the target simultaneously. A _streaming policy_ \(P\) predicts a target segment \(t_{T}\) at time \(T\) as \(t_{T}:=P_{M}(c_{i<T}|s,t_{j<T})\). It operates the model \(M\) on the available source chunks \(c_{i<T}\), the previous sequence target \(s\), and the previous target segments \(t_{j<T}\). The policy is triggered every time a new source segment is available. An empty target segment can be emitted, e.g. when waiting for context. The policy aims to minimize latency and maximize target quality. Streaming was originally proposed for simultaneous translation (Ma et al., 2019), but it is applicable to any sequence-to-sequence task, including ASR. Dong et al. (2022) give a summary of streaming speech translation.

**LocalAgreement** (Liu et al., 2020) is a streaming policy that outputs the longest common prefix of the model outputs on \(n\) consecutive source chunks, or an empty segment when fewer than \(n\) chunks are available. In the IWSLT 2022 shared task on simultaneous translation, the CUNI-KIT system compared LocalAgreement to other policies (hold-\(n\) and wait-\(k\)) with different chunk sizes, finding that LocalAgreement with \(n=2\) was the most effective policy. Therefore, we use LocalAgreement-2 for identifying stabilized target segments.
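To make the policy concrete, the following minimal sketch (ours, for illustration only; the function name and the word-level data layout are our choices, not the API of any released implementation) shows the agreement step of LocalAgreement-2 on two consecutive hypotheses:

```python
def common_prefix(hyp_prev, hyp_curr):
    """Longest common prefix of two word-level hypotheses.

    Each hypothesis is a list of (word, end_timestamp) pairs; words are
    lower-cased before comparison so that trivial casing differences
    between two consecutive model runs do not block confirmation.
    """
    prefix = []
    for (w_prev, _), (w_curr, t_curr) in zip(hyp_prev, hyp_curr):
        if w_prev.lower() != w_curr.lower():
            break
        prefix.append((w_curr, t_curr))
    return prefix

# LocalAgreement-2: confirm the prefix on which the two most recent
# updates agree; everything after it remains tentative.
prev_hyp = [("It", 0.4), ("was", 0.7), ("a", 0.9), ("bright", 1.3)]
curr_hyp = [("It", 0.4), ("was", 0.7), ("a", 0.9), ("bright", 1.3), ("cold", 1.8)]
confirmed = common_prefix(prev_hyp, curr_hyp)  # -> first four words
```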
## 3 Whisper-Streaming

We describe the core components and inner workings of Whisper-Streaming. It consists of the update loop, the audio buffer, skipping the confirmed output in the audio buffer, trimming the buffer, joining for inter-sentence context, and optional voice activity detection.

**Update loop.** The main part of Whisper-Streaming is a program that utilizes a loop to receive source audio chunks and trigger streaming policy updates. The parameter MinChunkSize controls the latency and quality, and determines the minimal duration processed per iteration. If the update computation exceeds MinChunkSize, the next update is performed immediately on the accumulated audio input. This parameter impacts both latency and quality.

Figure 1: Illustration of processing three consecutive updates. The yellow highlighted text is a "prompt", the previous context to follow. The black-bordered rectangle is an audio buffer, and the text inside is Whisper's transcript generated from that sound segment. The blue vertical line is a timestamp that splits the buffer into two parts, the left one being previously confirmed and the right one unconfirmed. The LocalAgreement-2 policy, i.e., searching for the longest common prefix, is applied to the unconfirmed (right) part in two subsequent updates. The longest common prefix is highlighted in green, and the green underline highlights the _newly_ confirmed output, whereas the green dashed underline indicates previously and subsequently confirmed output. The gray underline demonstrates an update in the confirmed part that is disregarded.

**Audio buffer.** Whisper is trained to handle sequences that are up to 30 seconds long and contain one full sentence. It provides punctuation and word-level timestamps.5 The process is illustrated in Figure 1. Each update involves storing incoming audio at the top of the audio buffer and processing the entire buffer with Whisper. We keep an invariant that the buffer always starts with a new sentence, to maintain the high quality of Whisper. LocalAgreement-2 is applied to the current and previous Whisper output. The timestamp of the last word in the "confirmed output" is saved. In subsequent updates, we always reprocess Whisper from the beginning of the buffer, including the portion preceding the last "confirmed output" timestamp (indicated by the gray background in Figure 1). Changes to the transcription in the confirmed portion are disregarded, as they are often insignificant in terms of meaning alteration.

Footnote 5: When using “faster-whisper” or another implementation that supports it.

**Skipping the confirmed part.** When determining the position of transcribed words relative to the last confirmed word from the previous update, we account for the potential inaccuracies and updates in Whisper timestamps due to new audio chunks. If a word's timestamp falls within a 1-second interval from the last confirmed word, we compare its preceding \(n\)-grams (where \(n\) ranges from 1 to 5) with the suffix of the last confirmed output. If they match, we skip those words. However, this rule can be further enhanced in future work by incorporating measures such as setting and fine-tuning a character edit distance threshold, trimming punctuation and casing from the \(n\)-grams, etc.

**Trimming the audio buffer.** To avoid unacceptably long spikes in latency, the audio buffer is limited to around 30 seconds. When the confirmed output includes a sentence-ending punctuation mark followed by a word starting a new sentence, the buffer is trimmed at the punctuation mark's timestamp. A language-specific sentence segmentation tool (e.g. Koehn et al., 2007) is used for this purpose, ensuring that the buffer always starts with a new sentence. Despite this, if the buffer length exceeds 30 seconds, we retain the last confirmed segment marked by Whisper.
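The trimming step can be sketched as follows (again a simplified illustration of ours: the punctuation tuple stands in for the real language-specific sentence segmenter, and the 30-second fallback is only indicated in the comment):

```python
SR = 16_000                      # Whisper's input sampling rate
SENTENCE_END = (".", "!", "?")   # crude stand-in for a real segmenter

def trim_buffer(audio, buffer_start, confirmed):
    """Drop completed sentences from the front of the audio buffer.

    `audio` holds samples whose first sample corresponds to the absolute
    time `buffer_start`; `confirmed` is a list of (word, end_time)
    pairs.  We cut at the end of the last confirmed word that closes a
    sentence, so that the buffer keeps starting at a sentence boundary.
    (The real system additionally falls back to Whisper's own segment
    boundaries when the buffer would still exceed ~30 seconds.)
    """
    cut_time = None
    for word, end in confirmed:
        if word.endswith(SENTENCE_END):
            cut_time = end
    if cut_time is None:
        return audio, buffer_start      # no completed sentence yet
    n_drop = int((cut_time - buffer_start) * SR)
    return audio[n_drop:], cut_time
```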
**Joining for inter-sentence context.** The Whisper transcribe function utilizes a "prompt" parameter to maintain consistency within a document (consistent style, terminology, and inter-sentence references). We extract the last 200 words from the confirmed output of previous audio buffers as the "prompt" parameter, as shown in Figure 1 (yellow backgrounded text).

**Voice activity detection.** There is a parameter to activate or deactivate Whisper's default voice activity detection (VAD) filter, impacting both quality and latency.

## 4 Benchmarking Settings

We describe the dataset for evaluation, the metrics, the settings, and the hardware we used to evaluate our model.

**Evaluation Data.** For latency and quality analysis, we utilize the dev set of the manually transcribed ESIC corpus (Machacek et al., 2021) for English, German, and Czech ASR, containing 179 documents. This corpus contains 5 hours of original English speeches from the European Parliament, including simultaneous interpreting into German and Czech. It provides audio tracks with manual transcripts and word-level timestamps.

**WER.** We use word error rate (WER) after removing punctuation and casing as the standard measure of ASR quality.

**Latency.** In our latency analysis, we implement our own method wherein we use the timestamps provided in the ESIC corpus to align the gold transcripts to the ASR output using edit distance.6 This allows us to determine the edit operations for each gold word. We calculate the ASR latency by measuring the time difference between when the ASR emitted a word and when the corresponding gold word was spoken, excluding words deleted by the ASR. We compute the average latency within each document and, when comparing different setups across multiple documents, we report the average latency along with the standard deviation.

Footnote 6: [https://pypi.org/project/edlib/](https://pypi.org/project/edlib/)
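The alignment-based metrics can be illustrated with a pure-Python sketch (ours; a didactic stand-in for the edlib-based alignment, with hypothetical toy data):

```python
def align(gold, hyp):
    """Word-level Levenshtein alignment between two (word, time) lists.

    Returns the edit distance and the (gold_idx, hyp_idx) pairs that end
    up aligned (matches and substitutions); deleted gold words have no
    partner and are excluded, as in the latency metric above.
    """
    m, n = len(gold), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = gold[i - 1][0] != hyp[j - 1][0]
            d[i][j] = min(d[i - 1][j - 1] + cost,   # match/substitution
                          d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1)          # insertion
    pairs, i, j = [], m, n
    while i > 0 and j > 0:                          # backtrace
        cost = gold[i - 1][0] != hyp[j - 1][0]
        if d[i][j] == d[i - 1][j - 1] + cost:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif d[i][j] == d[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return d[m][n], pairs

gold = [("hello", 0.5), ("world", 1.0)]   # (word, time it was spoken)
hyp = [("hello", 2.1), ("word", 2.6)]     # (word, time the ASR emitted it)
dist, pairs = align(gold, hyp)
wer = dist / len(gold)                                          # 0.5
latency = sum(hyp[j][1] - gold[i][1] for i, j in pairs) / len(pairs)
```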
**Hardware.** For benchmarking, we use NVIDIA A40 GPUs. We run Whisper on a computer in a cluster that is used by other processes at the same time, which may allocate the same resources and influence the latency. Since it is not always possible to have a dedicated server for a given service, this makes our evaluation very realistic. Since there will be variations in the latency metrics, we report means and standard deviations.

Silence and non-voice sounds do not cause Whisper to make mistakes. If reducing the latency is important, an adaptive protocol for setting VAD on and off can be implemented.

\begin{table} \begin{tabular}{c c|c c c|c c c} & & \multicolumn{3}{c|}{avg. \% WER} & \multicolumn{3}{c}{avg. latency [s]} \\ & m.ch. & off & on & diff & off & on & diff \\ \hline \multirow{4}{*}{en} & 0.1s & 8.4 & 8.3 & -0.1 & **3.30** & 3.72 & +0.41 \\ & 0.5s & 8.5 & 8.3 & -0.2 & **3.27** & 3.54 & +0.27 \\ & 1.0s & 8.1 & 8.1 & +0.1 & **3.62** & 3.88 & +0.26 \\ & 2.0s & 8.0 & 7.9 & -0.0 & **5.45** & 5.68 & +0.23 \\ \hline \multirow{4}{*}{de} & 0.1s & 12.8 & **9.7** & -3.1 & 3.83 & 3.93 & +0.10 \\ & 0.5s & 12.3 & **9.5** & -2.8 & 3.97 & 4.11 & +0.14 \\ & 1.0s & 11.4 & **9.4** & -2.0 & 4.19 & 4.37 & +0.18 \\ & 2.0s & 9.6 & **9.3** & -0.3 & 5.79 & 5.94 & +0.15 \\ \end{tabular} \end{table} Table 2: Impact of the VAD filter on WER and latency on ESIC dev, for the streaming ASR with different minimum chunk sizes (m.ch., in seconds), for the English original speech (en) and the German simultaneous interpreting (de). We highlight the remarkable benefit in bold: the original speech without pauses is processed with lower latency (by 0.23 seconds or more) and comparable quality with VAD off. On the contrary, VAD on achieves higher quality for the interpreting with frequent pauses, with a small difference in latency.

Figure 2: Impact of the VAD filter on latency and quality. The striking difference between VAD activated and deactivated for English vs. German is due to German being the speech of an interpreter.

**Performance.** Table 3 and Figure 3 summarize the WER and average latency of Whisper-Streaming on the ESIC validation set for the three language tracks. Overall, with 1-second MinChunkSize, the average latency is 3.3 seconds for English, 4.4 seconds for German and 4.8 seconds for Czech, while the WER is 2% higher than in the offline mode for English and German, and 6% higher for Czech. Both WER and latency are the lowest for English, followed by German and Czech. This is related to the amount of language-specific data used for training Whisper, as well as to the morphological complexity of these languages. The latency increases with larger uncertainty because more updates are required for an agreement. Moreover, the larger the MinChunkSize, the larger the latency, but the higher the quality, because the system has sufficient context.

**Offline mode WER.** We contrast the results with setups that serve as maximum performance estimates. One of them is the offline mode, in which the whole audio document is processed after recording, without any limitations on processing time. It is the default and most optimized setup for Whisper. The WER in offline mode and with VAD is lower than in streaming mode because the context size is not restricted. The model can even use the right (future) context that is unavailable or limited in streaming mode. Moreover, the internal segmentation of the long-form speech into processing chunks is optimized in the offline mode.

**Computationally unaware latency.** Another contrastive setup is the computationally unaware simulation. It uses the unrealistic assumption that the computation for Whisper processing any audio segment is instant, so that the latency caused by computation is not included in the latency measurement. The measurement includes the latency caused by uncertainty in the language. The gap between latency in computationally unaware and aware evaluation can be reduced by optimizing the hardware or the inference algorithm. Computationally unaware latency can be reduced by improving the model or the streaming policy. We observe that the average computationally unaware latency is approximately twice the chunk size. This is expected because we use local agreement of two consecutive updates. However, the processing of English is actually faster, a little less than twice the chunk size. We hypothesize that this could be caused by the anticipation ability of the Whisper model. The second possible reason is the inaccuracy of the gold timestamps in ESIC. The timestamps were computed by automatic forced alignment, and thus they may be less accurate in non-standard situations such as overlapping and non-transcribed speech, e.g. hesitations and foreign-language insertions.

Figure 3: Latency and quality in computationally aware and unaware simulations (solid lines and dots vs. dashed lines and crosses), together with the offline WER (stars and light vertical lines). VAD is deactivated for English, and activated for the other two.

\begin{table} \begin{tabular}{c c|c|c c|c c c} & \% WER & & \multicolumn{2}{c|}{\% WER} & \multicolumn{3}{c}{latency [s]} \\ lang. & offline & m.ch. & un. & aw. & un. & aw. & diff \\ \hline \multirow{3}{*}{en} & & 0.5s & 9.7 & 8.5 & 1.02 & 3.27 & +2.25 \\ & & 1.0s & 8.5 & 8.1 & 1.91 & 3.62 & +1.71 \\ & & 2.0s & 8.8 & 8.0 & 3.73 & 5.45 & +1.73 \\ \hline \multirow{3}{*}{de} & & 0.5s & 11.1 & 9.5 & 1.11 & 4.11 & +3.00 \\ & 9.2 & 1.0s & 10.0 & 9.4 & 2.02 & 4.37 & +2.35 \\ & & 2.0s & 10.2 & 9.3 & 3.89 & 5.94 & +2.05 \\ \hline \multirow{3}{*}{cs} & & 0.5s & 15.8 & 13.3 & 1.25 & 4.69 & +3.44 \\ & 12.3 & 1.0s & 13.8 & 12.9 & 2.24 & 4.76 & +2.51 \\ & & 2.0s & 14.0 & 12.8 & 4.29 & 6.29 & +2.00 \\ \end{tabular} \end{table} Table 3: WER and average latency of Whisper-Streaming on the ESIC dev set in three language tracks using different MinChunkSize ("m.ch."). The realistic setup is computationally aware ("aw."), put into contrast with the offline WER ("offline") and with the computationally unaware simulation ("un."). The data are the same as in Figure 3.
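The aware/unaware distinction can be simulated in a few lines (a sketch of ours with hypothetical numbers, not the evaluation code used for the tables above):

```python
def emission_times(chunk_ends, proc_times, aware=True):
    """Simulated emission time of each streaming update.

    `chunk_ends[k]` is the wall-clock time at which the audio consumed
    by update k has finished being spoken; `proc_times[k]` is how long
    the model needs for that update.  The computationally *unaware*
    simulation pretends processing is instant; the *aware* one also
    queues updates behind each other, so delays accumulate whenever
    processing is slower than real time.
    """
    emitted, free_at = [], 0.0
    for end, proc in zip(chunk_ends, proc_times):
        if aware:
            start = max(end, free_at)   # wait for audio and for the GPU
            free_at = start + proc
            emitted.append(free_at)
        else:
            emitted.append(end)         # processing assumed instant
    return emitted

# e.g. 1-second chunks, each taking a hypothetical 1.4 s to process:
aware = emission_times([1, 2, 3], [1.4, 1.4, 1.4], aware=True)
# -> [2.4, 3.8, 5.2]: the computational queue adds a growing delay
```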
## 6 System Demonstration

**Demonstration video** is available at [https://vimeo.com/840442741](https://vimeo.com/840442741). It is a screencast video of Whisper-Streaming real-time outputs, processing live ASR on one ESIC document in three parallel instances for English, German and Czech speech, the original and simultaneous interpreting. The video shows a contrast to the gold transcripts with original timing, so that the latency can be observed. The video also contains color highlighting for ASR errors.

**Integration with ELITR.** To demonstrate practical usability, we integrate Whisper-Streaming with the ELITR (European Live Translator, Bojar et al., 2020) framework for complex distributed systems for multi-source and multi-target live speech transcription and translation (Bojar et al., 2021). Within Whisper-Streaming, we implement and release a server that is connected as a worker to the Mediator server (Franceschini et al., 2020). Mediator allows a client to request a service of a worker. The client is then allowed to further process the text outputs received from the worker, e.g. translate them with another worker and present them at the web view server that delivers real-time captions to event participants during a live multilingual event.

**Evaluation event.** We evaluated Whisper-Streaming as a component in an experimental live speech translation service at a multilingual conference. For this, we built a pipeline that used five parallel Whisper-Streaming workers, three of them for ASR only (English, Czech and Ukrainian), and two for speech translation (Czech-to-English and Ukrainian-to-English). There were three parallel language streams at the conference: Czech, English and Ukrainian. One of the languages was spoken on the main floor, and the others were provided by human simultaneous interpreting. A human operator (as in Bojar et al., 2021) was controlling the technical setup and the outputs using their language knowledge and had an option to redirect the streams, if necessary. The qualitative evaluation at the event showed that Whisper-Streaming is a robust and reliable part of the service, reaching acceptable latency and unexpectedly high quality on English, Czech and Ukrainian long-form speech.

**Demonstration at AACL.** Our system demonstration at the IJCNLP-AACL 2023 conference will use the ELITR framework. We will either simulate a speech source from a recording, or allow participants to speak into a microphone in any of the 97 languages supported by Whisper, and observe the real-time outputs.

## 7 Conclusion

We implemented, evaluated and demonstrated Whisper-Streaming, a tool that effectively operates the offline ASR model Whisper with 3.3 seconds average computationally aware latency on the English ESIC corpus. We described and explained the implementation and its underlying components, including the LocalAgreement algorithm for streaming. Lastly, we demonstrated its robustness and practical usability at a real-life multilingual conference.

## 8 Limitations

The data collected in the ESIC corpus were created a relatively long time ago. This raises concerns about potential leakage into the Whisper training set, which could compromise our evaluation. Additionally, performance tests on more affordable hardware are pending, highlighting the need for further evaluation in terms of computational cost. It is worth noting that the reported latency and quality metrics obtained from ESIC may not be fully generalizable to other languages or language variants due to the nature of the corpus. Furthermore, our focus is on demonstrating the online capabilities of Whisper rather than optimizing the algorithm or implementation. It is important to recognize that the actual latency experienced may fluctuate, and the reported average latency serves as an indicative measure without providing an upper bound. The streaming policy would need certain modifications to guarantee a maximum latency, at a possible loss in quality. Lastly, we have not conducted comparison tests with other state-of-the-art systems, e.g. from IWSLT, because a common evaluation framework is pending, as well as an X-to-English long-form speech test set.

## Acknowledgements

This research was partially supported by the grants 19-26934X (NEUREM3) of the Czech Science Foundation, and SVV project number 260 698.
2310.06788
Discovery of a variable energy-dependent X-ray polarization in the accreting neutron star GX 5-1
We report on the coordinated observations of the neutron star low-mass X-ray binary (NS-LMXB) GX 5$-$1 in X-rays (IXPE, NICER, NuSTAR and INTEGRAL), optical (REM and LCO), near-infrared (REM), mid-infrared (VLT VISIR), and radio (ATCA). This Z-source was observed by IXPE twice in March-April 2023 (Obs. 1 and 2). In the radio band, the source was detected, but only upper limits to the linear polarization were obtained at a $3\sigma$ level of $6.1\%$ at 5.5 GHz and $5.9\%$ at 9 GHz in Obs.~1 and $12.5\%$ at 5.5~GHz and $20\%$ at 9~GHz in Obs.~2. The mid-IR, near-IR and optical observations suggest the presence of a compact jet which peaks in the mid- or far-IR. The X-ray polarization degree was found to be $3.7\% \pm 0.4 \%$ (at $90\%$ confidence level) during Obs.~1 when the source was in the horizontal branch of the Z-track and $1.8\% \pm 0.4 \%$ during Obs.~2 when the source was in the normal-flaring branch. These results confirm the variation of polarization degree as a function of the position of the source in the color-color diagram as for previously observed Z-track sources (Cyg~X-2 and XTE~1701$-$462). Evidence for a variation of the polarization angle $\sim 20^\circ$ with energy is found in both observations, likely related to the different, non-orthogonal polarization angles of the disk and Comptonization components which peak at different energies.
Sergio Fabiani, Fiamma Capitanio, Rosario Iaria, Juri Poutanen, Andrea Gnarini, Francesco Ursini, Ruben Farinelli, Anna Bobrikova, James F. Steiner, Jiri Svoboda, Alessio Anitra, Maria C. Baglio, Francesco Carotenuto, Melania Del Santo, Carlo Ferrigno, Fraser Lewis, David M. Russell, Thomas D. Russell, Jakob van den Eijnden, Massimo Cocchi, Alessandro Di Marco, Fabio La Monaca, Kuan Liu, John Rankin, Martin C. Weisskopf, Fei Xie, Stefano Bianchi, Luciano Burderi, Tiziana Di Salvo, Elise Egron, Giulia Illiano, Philip Kaaret, Giorgio Matt, Romana Mikušincová, Fabio Muleri, Alessandro Papitto, Iván Agudo, Lucio A. Antonelli, Matteo Bachetti, Luca Baldini, Wayne H. Baumgartner, Ronaldo Bellazzini, Stephen D. Bongiorno, Raffaella Bonino, Alessandro Brez, Niccolò Bucciantini, Simone Castellano, Elisabetta Cavazzuti, Chien-Ting Chen, Stefano Ciprini, Enrico Costa, Alessandra De Rosa, Ettore Del Monte, Laura Di Gesu, Niccolò Di Lalla, Immacolata Donnarumma, Victor Doroshenko, Michal Dovčiak, Steven R. Ehlert, Teruaki Enoto, Yuri Evangelista, Riccardo Ferrazzoli, Javier A. Garcia, Shuichi Gunji, Kiyoshi Hayashida, Jeremy Heyl, Wataru Iwakiri, Svetlana G. Jorstad, Vladimir Karas, Fabian Kislat, Takao Kitaguchi, Jeffery J. Kolodziejczak, Henric Krawczynski, Luca Latronico, Ioannis Liodakis, Simone Maldera, Alberto Manfreda, Frédéric Marin, Andrea Marinucci, Alan P. Marscher, Herman L. Marshall, Francesco Massaro, Ikuyuki Mitsuishi, Tsunefumi Mizuno, Michela Negro, Chi-Yung Ng, Stephen L. O'Dell, Nicola Omodei, Chiara Oppedisano, George G. Pavlov, Abel L. Peirson, Matteo Perri, Melissa Pesce-Rollins, Pierre-Olivier Petrucci, Maura Pilia, Andrea Possenti, Simonetta Puccetti, Brian D. Ramsey, Ajay Ratheesh, Oliver J. Roberts, Roger W. Romani, Carmelo Sgrò, Patrick Slane, Paolo Soffitta, Gloria Spandre, Douglas A. Swartz, Toru Tamagawa, Fabrizio Tavecchio, Roberto Taverna, Yuzuru Tawara, Allyn F. Tennant, Nicholas E. Thomas, Francesco Tombesi, Alessio Trois, Sergey S. Tsygankov, Roberto Turolla, Jacco Vink, Kinwah Wu, Silvia Zane
2023-10-10T17:01:08Z
http://arxiv.org/abs/2310.06788v2
Discovery of a variable energy-dependent X-ray polarization in the accreting neutron star GX 5\(-\)1

###### Abstract

We report on the coordinated observations of the neutron star low-mass X-ray binary (NS-LMXB) GX 5\(-\)1 in X-rays (_IXPE_, NICER, _NuSTAR_ and _INTEGRAL_), optical (REM and LCO), near-infrared (REM), mid-infrared (VLT VISIR), and radio (ATCA). This Z-source was observed by _IXPE_ twice in March-April 2023 (Obs. 1 and 2). In the radio band, the source was detected, but only upper limits to the linear polarization were obtained at a 3\(\sigma\) level of 6.1% at 5.5 GHz and 5.9% at 9 GHz in Obs. 1 and 12.5% at 5.5 GHz and 20% at 9 GHz in Obs. 2. The mid-IR, near-IR and optical observations suggest the presence of a compact jet which peaks in the mid- or far-IR. The X-ray polarization degree was found to be \(3.7\%\pm 0.4\%\) (at 90% confidence level) during Obs. 1, when the source was in the horizontal branch of the Z-track, and \(1.8\%\pm 0.4\%\) during Obs. 2, when the source was in the normal-flaring branch. These results confirm the variation of the polarization degree as a function of the position of the source in the color-color diagram, as for the previously observed Z-track sources (Cyg X-2 and XTE 1701\(-\)462). Evidence for a variation of the polarization angle \(\sim 20^{\circ}\) with energy is found in both observations, likely related to the different, non-orthogonal polarization angles of the disk and Comptonization components, which peak at different energies.

Key words: accretion, accretion disks - neutron stars - X-rays: general - X-rays: binaries - X-rays: individual: GX 5\(-\)1

## 1 Introduction

Persistent neutron star low-mass X-ray binaries (NS-LMXBs) are among the X-ray astronomical objects that the _Imaging X-ray Polarimetry Explorer_ (_IXPE_, Weisskopf et al. 2023, 2022; Soffitta et al. 2021) is investigating. Four of them were already observed by _IXPE_ during its first-year campaign, namely the Z-source Cyg X-2, the peculiar Z-Atoll transient XTE J1701\(-\)462, and two bright soft-state Atoll-sources, GS 1826\(-\)238 and GX 9+9. The source classification (Z or Atoll) is based on the tracks that these sources draw on the color-color diagram (CCD; see, e.g., Hasinger & van der Klis 1989; van der Klis 1995). GS 1826\(-\)238 data are compatible with a null polarization, with an upper limit on the polarization degree (PD) of 1.3% (Capitanio et al. 2023), while Cyg X-2 (Farinelli et al. 2023) and GX 9+9 (Chatterjee et al. 2023; Ursini et al. 2023) have shown a statistically significant linear polarization with a PD of \(\sim\)2% and \(\sim\)1.5%, respectively. Strong variation in PD was reported for XTE J1701\(-\)462, which was observed twice and showed a high PD of 4.5% in the first observation and a PD compatible with a null polarization in the second one (Cocchi et al., 2023).

GX 5\(-\)1 is a Galactic Z-source (Kuulkers et al., 1994; Jonker et al., 2002) located near the Galactic Center. It is a radio source, with the radio emission most likely originating from a compact jet (Fender and Hendry, 2000). The radio counterpart allowed an accurate localization that, despite the optical obscuration and the crowded field near the Galactic Center, has led to the determination of a likely infrared companion candidate (Jonker et al., 2000). Until the early 1990s, GX 5\(-\)1 X-ray data were likely contaminated by the black hole LMXB GRS 1758\(-\)258, located only 40' away. Sunyaev et al. (1991) and Gilfanov et al.
(1993) were able to resolve the two sources, showing that GX 5\(-\)1 was \(\sim\)30-50 times brighter than GRS 1758\(-\)258 below 20 keV. GX 5\(-\)1 has not shown any X-ray pulsations or X-ray bursts (Paizis et al., 2005). An X-ray halo due to scattering of X-ray photons is clearly revealed in the _Chandra_ image (Smith et al., 2006; Clark, 2018). Such a halo arises due to the presence of multiple clouds along the line of sight. In X- and \(\gamma\)-rays, Paizis et al. (2005) studied one year of _INTEGRAL_ data, which covered all the Z-track of the source (mainly the horizontal branch (HB) and the normal branch (NB)). ISGRI and JEM-X average spectra showed a clear hard X-ray emission above 20 keV, not detected previously and compatible with thermal Comptonization of soft photons from a hot, optically thin plasma in the vicinity of the NS. However, Paizis et al. (2005) were not able to constrain the temperature of the Comptonizing plasma. They assessed the compatibility of the GX 5\(-\)1 energy spectrum with the so-called 'eastern' (Mitsuda et al., 1984) and 'western' (White et al., 1986) models and found the former to be physically more meaningful, describing the spectral 'flattening' above \(\sim 20\) keV as a Comptonized hard-tail emission. Paizis et al. (2006) described the GX 5\(-\)1 energy spectrum in the 20-100 keV energy band observed by _INTEGRAL_ IBIS/ISGRI with a Comptonization component (comptt, Titarchuk 1994) plus a power law to account for the hard X-ray emission. Such a high-energy tail was first detected by Asai et al. (1994), although a possible contamination from the nearby black hole GRS 1758\(-\)258 could not be excluded in this latter case. Paizis et al. (2006) highlighted the presence of a correlation between the X-ray spectral states and the radio emission. Steady radio emission is associated with the low/hard state (typical for Atoll sources; Fender and Hendry, 2000; Migliari and Fender, 2006). These sources brighten considerably during intermediate states, often showing bright, transient flares (HB of Z-sources), before quenching during the very soft states in the NB and FB branches of Z-sources (Fender and Hendry, 2000; Di Salvo and Stella, 2002). Moreover, Paizis et al. (2006) reported that the radio flux is positively correlated with the flux of the hard tail in the 40-100 keV energy range. They suggested that this correlation is related to the acceleration of electrons along open field lines in the NS magnetosphere at the base of the jet seen in the radio. Berendsen et al. (2000) reported upper limits to the radio linear polarization of 33% at 6.3 cm and 23% at 3.5 cm, not sufficient to constrain the emission mechanism or the optical depth of the jet.

## 2 Observations and data reduction

### IXPE

The X-ray polarimeter _IXPE_ (Weisskopf et al., 2023, 2022; Soffitta et al., 2021) observed GX 5\(-\)1 twice (ObsID 02002799), from March 21 to 22 and from April 13 to 15, 2023 (see Table 1 and Fig. 1 for the light curves), with a nominal integration time of 50 ks each. _IXPE_ provides timing, imaging, spectroscopic, and polarimetric data in the 2-8 keV band. Data reduction and analysis were performed by means of the ixpeobssim software version 30.5.0 (Baldini et al., 2022) and the heasoft package version 6.31.1 (Nasa Heasarc, 2014). Data were filtered by using the ixpeobssim tool xpselect and binned1 with xpbin to produce images and \(I\), \(Q\), and \(U\) energy spectra for the spectro-polarimetric analysis performed with xspec version 12.13.0c (Arnaud, 1996).

We used the latest version 12 of the _IXPE_ response matrices, available at the ixpeobssim public repository2 (also available at the HEASARC archive). A circular source extraction region of 60'' in radius was selected from the image for each one of the three detector units (DUs). No background subtraction was applied due to the high count rate of the source (\(\sim\)20-25 cts s\({}^{-1}\) per DU) (see Di Marco et al., 2023). Only xspec currently allows a 'weighted analysis' (Di Marco et al., 2022), in which a weight is assigned to each photo-electron track recorded by the DUs depending on the shape of the charge distribution.

Footnote 1: The default energy binning is 40 eV.

Footnote 2: [https://github.com/lucabaldini/ixpeobssim](https://github.com/lucabaldini/ixpeobssim)

The normalized Stokes parameters \(q=Q/I\) and \(u=U/I\), and the PD and polarization angle (PA) with their uncertainties, can be calculated by using the model-independent PCUBE binning algorithm of ixpeobssim. On the other hand, the PD and PA obtained with xspec require the definition of a spectro-polarimetric model. Because the PD and PA are not independent, the appropriate way to report results is by means of contour plots at certain confidence levels for the joint measurement of both parameters. We report the xspec (PD, PA) contour plots as obtained by using the steppar command, while the contour plots associated with the ixpeobssim analysis are obtained from the statistics of the number of counts (Weisskopf et al., 2010; Strohmayer and Kallman, 2013; Muleri, 2022).

The spectro-polarimetric analysis was carried out by taking into account the current _IXPE_ effective area instrument response function (arf), which may not be as accurate as possible at energies above 6 keV, where there is a significant roll-off in the spectral response. Because the high-energy part of the _IXPE_ band is of special interest for our study, we estimate the _IXPE_ spectral systematic uncertainties to be 3% in Obs. 1 and 2% in Obs. 2, based on the comparison with the NICER, _NuSTAR_ and IBIS/ISGRI energy spectra. The spectral models for both _IXPE_ observations are frozen based on the spectral analysis of these other observatories. We used the xspec gain fit tool for the _IXPE_ data only, to shift the energies on which the response matrix is defined and to match the effective area curve during the fit procedure. The spectro-polarimetric fit of the _IXPE_ data was carried out freeing the three DUs' normalization constants, the gain slope, the gain offset, and the polarimetric parameters.3

Footnote 3: The normalization constants, gain slope and offset parameters are identical between the \(I\), \(Q\) and \(U\) spectra of the same DU. The PA and PD parameters of those spectra are also identical for each DU.
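As a back-of-the-envelope illustration of the PCUBE-style quantities above (this sketch is ours, assumes the modulation factor has been absorbed into \(q\) and \(u\), and uses standard Gaussian count statistics; it is not the ixpeobssim implementation):

```python
import numpy as np

def polarization(q, u, n_counts):
    """PD and PA from normalized Stokes parameters q = Q/I, u = U/I.

    Simplified treatment with an effective modulation factor of 1;
    uncertainties follow the usual Gaussian count statistics and the
    PA error is valid only when PD is well above its own error.
    """
    pd = np.hypot(q, u)                        # polarization degree
    pa = 0.5 * np.degrees(np.arctan2(u, q))    # polarization angle
    pd_err = np.sqrt((2.0 - pd**2) / (n_counts - 1))
    pa_err = np.degrees(pd_err / (2.0 * pd)) if pd > 0 else float("nan")
    mdp99 = 4.29 / np.sqrt(n_counts)           # minimum detectable pol.
    return pd, pa, pd_err, pa_err, mdp99
```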
We used the latest version 12 of the _IXPE_ response matrices available at the ixpeobssim public repository2 (also available at the HEASARC archive). A circular source extraction region of 60'' in radius was selected from the image for each one of the three detector units (DUs). No background subtraction was applied due to the high count rate of the source (\(\sim\)20-25 cts s\({}^{-1}\) per DU) (see Di Marco et al., 2023). Currently, only xspec allows a 'weighted analysis' (Di Marco et al., 2022), in which a weight is assigned to each photo-electron track recorded by the DUs depending on the shape of the charge distribution. Footnote 1: The default energy binning is 40 eV. Footnote 2: [https://github.com/lucabaldini/ixpeobssim](https://github.com/lucabaldini/ixpeobssim) The normalized Stokes parameters \(q=Q/I\) and \(u=U/I\), as well as the PD and polarization angle (PA) with their uncertainties, can be calculated by using the model-independent pcube binning algorithm of ixpeobssim. On the other hand, the PD and PA obtained with xspec require the definition of a spectro-polarimetric model. Because the PD and PA are not independent, the appropriate way to report results is by means of contour plots at certain confidence levels for the joint measurement of both parameters. We report the xspec (PD, PA) contour plots as obtained by using the steppar command, while the contour plots associated with the ixpeobssim analysis are obtained from the statistics of the number of counts (Weisskopf et al., 2010; Strohmayer and Kallman, 2013; Muleri, 2022). The spectro-polarimetric analysis was carried out by taking into account the current _IXPE_ effective area instrument response function (arf), which may not be fully accurate at energies above 6 keV, where there is a significant roll-off in the spectral response. Because the high-energy part of the _IXPE_ band is of special interest for our study, we estimate the _IXPE_ spectral systematic uncertainties to be 3% in Obs. 1 and 2% in Obs. 2, based on the comparison with the NICER, _NuSTAR_ and IBIS/ISGRI energy spectra. The spectral models for both _IXPE_ observations are frozen based on the spectral analysis of these other observatories. We used the xspec gain fit tool for the _IXPE_ data only, to shift the energies on which the response matrix is defined and to match the effective area curve during the fit procedure. The spectro-polarimetric fit of the _IXPE_ data was carried out freeing the three DUs' normalization constants, the gain slope, the gain offset and the polarimetric parameters.3 Footnote 3: The normalization constants, gain slope and offset parameters are identical between the \(I\), \(Q\) and \(U\) spectra of the same DU. The PA and PD parameters of those spectra are also identical for each DU. ### NuSTAR GX 5\(-\)1 was observed by _NuSTAR_ (Harrison et al., 2013) on March 21 and twice on April 13-14, 2023. All the relevant observation times, sequence IDs and exposure times are provided in Table 1, and the corresponding light curves are shown in Fig. 1. The unfiltered event files were processed with the _NuSTAR_ Data Analysis Software (nustardas v.2.1.2) to produce the cleaned and calibrated level 2 data using the latest calibration files (CALDB v.20221130) and the standard filtering criteria with the nupipeline task, where statusexpr="STATUS==b0000xxx00xxxx000" was set due to the source flux exceeding 100 counts s\({}^{-1}\). The spectra and light curves were extracted using the nuproducts task, selecting a circular region of 60'' in radius centered on the source.
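For context on the _IXPE_ polarimetric sensitivity discussed above, a rough back-of-the-envelope check using the standard minimum detectable polarization formula of Weisskopf et al. (2010); the modulation factor adopted here is an assumed, illustrative value:

```python
import numpy as np

# Minimum detectable polarization at 99% confidence (Weisskopf et al. 2010),
# in the background-free limit appropriate for this bright source:
#   MDP99 = 4.29 / (mu * sqrt(N))
mu = 0.3              # assumed energy-averaged modulation factor (illustrative)
rate = 22.0 * 3       # ~20-25 cts/s per DU, summed over the three DUs
exposure = 48.6e3     # Obs. 1 net exposure (s)
N = rate * exposure   # total source counts
mdp99 = 4.29 / (mu * np.sqrt(N))
print(f"MDP99 ~ {mdp99:.2%}")  # ~0.8%, well below the measured PD of ~4%
```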
### NICER NICER (Gendreau et al. 2016) observed GX 5\(-\)1 between March 21-25 and April 13-14, 2023. The observations identified with IDs 6010230101, 6010230102, and 6010230106 were included in the spectro-polarimetric analysis, because they are simultaneous with the _IXPE_ observations. The calibrated and cleaned files were extracted by using the standard nicerl2 task of the NICER Data Analysis Software (NICERDAS v10) together with the latest calibration files (CALDB v.20221001). The spectra and the light curves were then obtained with the nicerl3-spect and nicerl3-lc tasks, while the background was computed using the SCORPEON model. Light curves were obtained including the 50 NICER FPMs available during the ObsIDs included in the analysis (out of the 52 available, excluding the noisy detectors ID 14 and ID 34). Thus, no discontinuities in the NICER light curves are present (see Fig. 1). \begin{table} \begin{tabular}{l c c c c c} \hline Telescope & ObsID & Obs. Start & Obs. Stop & Net Exposure (ks) & Notes \\ \hline _IXPE_ & 02002799 & 2023-03-21, 04:16:14 & 2023-03-22, 05:02:52 & 48.6 & obs. segment 1 \\ NICER & 6010230101/2 & 2023-03-21, 03:41:20 & 2023-03-22, 04:50:20 & 13.1 & \\ _NuSTAR_ & 90902310002 & 2023-03-21, 16:41:48 & 2023-03-22, 08:04:34 & 12.6 & \\ _INTEGRAL_ & 2070006/0001 & 2023-03-21, 03:58:07 & 2023-03-22, 04:28:46 & 40.1 & \\ REM & – & 2023-03-22, 05:25:24 & 2023-03-22, 07:33:05 & – & see Sect. 2.7 for exp. details \\ LCO & – & 2023-03-22, 16:33:07 & 2023-03-22, 16:38:07 & – & ” \\ \hline \multicolumn{6}{l}{_No IXPE Obs._} \\ \hline NICER & 6010230103/4 & 2023-03-24, 18:19:20 & 2023-03-25, 00:44:20 & 5.2 & \\ ATCA & – & 2023-03-24, 15:21:20 & 2023-03-25, 01:55:50 & – & see Sect. 2.5 for exp. details \\ VISIR & 110.2448 & 2023-03-28, 08:31:00 & 2023-03-28, 09:21:00 & – & see Sect. 2.6 for exp. details \\ LCO & – & 2023-03-29, 15:50:11 & 2023-03-31, 15:43:07 & – & see Sect. 2.7 for exp. details \\ VISIR & 110.2448 & 2023-03-31, 07:46:00 & 2023-03-31, 08:42:00 & – & see Sect. 2.6 for exp. details \\ \hline \multicolumn{6}{l}{_IXPE Obs._} \\ \hline _IXPE_ & 02002799 & 2023-04-13, 23:43:42 & 2023-04-15, 00:37:32 & 47.1 & obs. segment 2 \\ NICER & 6010230105/6 & 2023-04-13, 16:39:04 & 2023-04-15, 23:58:20 & 13.1 & \\ _NuSTAR_ & 90902310004/6 & 2023-04-13, 15:57:07 & 2023-04-14, 23:23:24 & 15.7 & \\ _INTEGRAL_ & multiple & 2023-04-13, 03:51:17 & 2023-04-15, 08:53:00 & 85.0 & \\ REM & – & 2023-04-14, 03:16:55 & 2023-04-14, 06:31:56 & – & see Sect. 2.7 for exp. details \\ LCO & – & 2023-04-14, 05:35:19 & 2023-04-14, 23:23:55 & – & \\ ATCA & – & 2023-04-14, 11:32:00 & 2023-04-14, 17:57:40 & – & see Sect. 2.5 for exp. details \\ \hline \end{tabular} \end{table} Table 1: List of observations. Figure 1: Light curves of _NuSTAR_, NICER and IBIS/ISGRI during _IXPE_ Obs. 1 (left panel) and Obs. 2 (right panel). On the top row of each panel the three _IXPE_ DU light curves are shown (DU1 blue, DU2 orange, and DU3 green). In the left panel, the NICER soft-purple and dark-red points correspond to ObsIDs 6010230101 and 6010230102. In the right panel, the _NuSTAR_ red and purple points correspond to ObsIDs 90902310004 and 90902310006. The NICER gray points correspond to ObsID 6010230106. During the observations contemporaneous with _IXPE_, there were two significant increases of the count rate in the NICER data, up to double the typical value. These events were coincident with two solar flares.
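A minimal sketch of the kind of time filtering just described (the flare windows are those quoted in the following paragraph; the array names are hypothetical):

```python
import numpy as np

# Solar-flare intervals removed from the GTIs (MJD ranges quoted below):
flares = [(60048.53535, 60048.60033),  # C-class flare
          (60048.67506, 60048.73744)]  # M-class flare

def remove_flares(time_mjd, rate):
    """Drop light-curve bins falling inside the flare windows."""
    keep = np.ones(time_mjd.size, dtype=bool)
    for start, stop in flares:
        keep &= (time_mjd < start) | (time_mjd > stop)
    return time_mjd[keep], rate[keep]
```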
The first one was a C-class flare peaking at about MJD 60048.5389 and the second one was an M-class flare peaking at about MJD 60048.6806. We removed them manually from the GTIs (the C-class flare from MJD 60048.53535 to MJD 60048.60033 and the M-class flare from MJD 60048.67506 to MJD 60048.73744). We noticed some relevant features in the energy spectrum below 4 keV, most likely due to spectral features unaccounted for in the NICER ARF. Because of the source's high count rate, such features become apparent in the spectral modeling. We therefore accounted for calibration artifacts owing to imperfections in the modeling of the dead layer of the silicon detector at the Si-K edge, and of the concentrator mirror surface roughness at the Au-M edges, which affect the \(\approx\)2.2-3.5 keV range.5 We froze the energies of those edges at their best-fit values as reported in Table 4. Footnote 5: [https://heasarc.gsfc.nasa.gov/docs/nicer/analysis_threading/xspec.html](https://heasarc.gsfc.nasa.gov/docs/nicer/analysis_threading/xspec.html) ### INTEGRAL IBIS/ISGRI _INTEGRAL_ IBIS/ISGRI (Winkler et al., 2003) observed the region of GX 5\(-\)1 from March 21 to 22 and from April 13 to 15, 2023, responding to a request by the _IXPE_ team. The data for this source are publicly available and were reduced for the imager IBIS (Ubertini et al., 2003) and the ISGRI detector (Lebrun et al., 2003). We used the MMODA6 platform, which runs the Off-line Science Analysis (OSA) version 11.2 distributed by the ISDC (Courvoisier et al., 2003) with the most recent calibration files that are continuously ingested in the Instrument Characteristics repository. We first built a mosaicked image of all the individual pointings that constitute the standard dithering strategy of observation for IBIS/ISGRI in the 28-40 keV energy range. These images were used to build a catalog of detected sources with a signal-to-noise ratio larger than 7. Using this catalog, we extracted light curves with 1000 s time bins and spectra in 256 standard channels for IBIS/ISGRI; these were grouped into 10 equally spaced logarithmic channels between 28 and 150 keV. We also accounted for a systematic uncertainty of the spectra at the 1.5% level. The equivalent on-axis exposures of the IBIS/ISGRI spectra are 40 and 85 ks, respectively, after correction for dead time and vignetting. Products are available at the _INTEGRAL_ Product gallery.7 Footnote 6: [https://www.astro.unige.ch/mmoda/](https://www.astro.unige.ch/mmoda/) Footnote 7: [https://www.astro.unige.ch/mmoda/gallery/astrophysical-entity/ox-5-1](https://www.astro.unige.ch/mmoda/gallery/astrophysical-entity/ox-5-1) ### ATCA The Australia Telescope Compact Array (ATCA) observed GX 5\(-\)1 on March 24 and April 14, 2023. On March 24, the telescope observed with the array in its 750C configuration.8 The observations taken on April 14 were carried out with the array in a relatively compact H214 configuration, in combination with an isolated antenna located 6 km from the array core, which was also included in our analysis. For both observations, data were recorded simultaneously at central frequencies of 5.5 GHz and 9.0 GHz, with 2 GHz of bandwidth at each frequency. Footnote 8: [https://www.narrabri.atnf.csiro.au/operations/array_configurations/display/tutorial/atatkins_kxtulo](https://www.narrabri.atnf.csiro.au/operations/array_configurations/display/tutorial/atatkins_kxtulo) We used PKS 1934\(-\)638 for bandpass and flux density calibration.
PKS 1934\(-\)638 was also used to solve for the antenna leakages (D-terms) for the polarization calibration. The nearby source B1817\(-\)254 was used for gain calibration and to calibrate the PA, using the Common Astronomy Software Applications for radio astronomy (casa, version 5.1.2; CASA Team et al., 2022) atcapolhelpers.py task qufromgain. Calibration and imaging followed standard procedures within casa. When imaging, we used a Briggs robust parameter of 0 to balance sensitivity and resolution (Briggs, 1995), as well as to suppress the effects of some bright, diffuse emission within the field. For our March 24 observations, fitting for a point source in the image plane, we detect GX 5\(-\)1 at a flux density of \(960\pm 19\,\mu\)Jy at 5.5 GHz and \(810\pm 11\,\mu\)Jy at 9 GHz, coincident with the previously reported radio and X-ray positions (e.g., Berendsen et al., 2000; Liu et al., 2007). These detections correspond to a radio spectral index of \(-0.37\pm 0.09\). The Stokes \(Q\) and \(U\) values were measured at the position of the peak source flux density (Stokes \(I\)). No significant linearly polarized (LP) emission was detected at either frequency. Measuring the root mean square of the image noise in a \(50\arcsec\times 50\arcsec\) region around the source position (taken as \(1\sigma\)) provides \(3\sigma\) upper limits on the polarized intensity \(\sqrt{Q^{2}+U^{2}}\) of \(58\,\mu\)Jy beam\({}^{-1}\) at 5.5 GHz and \(48\,\mu\)Jy beam\({}^{-1}\) at 9 GHz. These correspond to \(3\sigma\) upper limits on the PD of 6.1% at 5.5 GHz and 5.9% at 9 GHz. Stacking the two frequencies to maximize the sensitivity also yields a non-detection of linearly polarized emission, with a \(3\sigma\) upper limit of 4.2% (centered at 7.25 GHz). On April 14, following the same calibration and imaging procedure, we measured the flux density of GX 5\(-\)1 to be \(750\pm 50\,\mu\)Jy and \(620\pm 40\,\mu\)Jy at 5.5 and 9 GHz, respectively. These detections correspond to a radio spectral index of \(-0.4\pm 0.1\). We note that, due to the more compact array configuration for this epoch and the presence of diffuse emission in the field, we imaged with a strictly uniform weighting scheme (setting the Briggs robust parameter to \(-2\)), reducing the impact of the diffuse emission on our resultant images. The compact configuration, coupled with a shorter exposure time, resulted in a higher noise level in the images. At 5.5 GHz, we do not detect any linearly polarized emission, with a \(3\sigma\) upper limit on the PD of 12.5%. Similarly, at 9 GHz we measure a \(3\sigma\) upper limit on the PD of 20%. Stacking the two frequencies places a \(3\sigma\) upper limit on the PD of 8% at 7.25 GHz. We do note that during the final \(\sim\)30 min of the observation some linear polarization was detected close to the source position, but only at 9 GHz. Due to the non-detection at 5.5 GHz and the short and sudden nature of this emission, we attribute it to radio frequency interference and not an astrophysical event.
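The radio numbers quoted above follow from simple arithmetic; a short sketch reproducing the March 24 spectral index and the 5.5 GHz polarization upper limit:

```python
import numpy as np

# Two-point radio spectral index, S_nu ∝ nu^alpha, from the March 24 fluxes:
S55, S9 = 960e-6, 810e-6   # Jy, at 5.5 and 9.0 GHz
alpha = np.log(S9 / S55) / np.log(9.0 / 5.5)
print(f"alpha ~ {alpha:+.2f}")  # ~ -0.35, consistent with the fitted -0.37 +/- 0.09

# 3-sigma fractional-polarization upper limit from the image noise at 5.5 GHz:
P3sig = 58e-6              # Jy/beam, 3-sigma limit on sqrt(Q^2 + U^2)
print(f"PD < {P3sig / S55:.1%}")  # ~6.0%, cf. the 6.1% quoted above
```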
### VLT VISIR Mid-IR observations of the field of GX 5\(-\)1 were made with the European Southern Observatory's Very Large Telescope (VLT) on March 28 and 31, 2023, under the program 110.2448 (PI: D. Russell). The VLT Imager and Spectrometer for the mid-InfraRed (VISIR; Lagage et al., 2004) was used in small-field imaging mode. Four filters (\(M\)-band, \(J8.9\), \(B10.7\) and \(B11.7\)) were used, with central wavelengths of 4.67, 8.70, 10.64, and 11.51 \(\mu\)m, respectively. For each observation, the integration time on source was composed of a number of chopping and nodding cycles between source and sky. The total observing time was usually almost twice the integration time. Observations of standard stars were made on the same nights as the target, in the same filters (the photometric standards HD137744, HD169916, HD130157 and HD145897 were observed, all at airmass 1.0-1.1). Conditions were clear on both nights. All data (target and standard stars) were reduced using the VISIR pipeline in the gasgano environment.10 Raw images from the chop/nod cycle were recombined. Photometry was performed on the combined images using PHOT in IRAF. For the B10.7 filter, two standard stars were observed on each night, just before and after each observation of GX 5\(-\)1, to check for stability. We found that the counts/flux ratio values from these standards agree to a level of 6.4%, which we adopt as the systematic error of all flux measurements. The estimated counts/flux ratio in each filter was used to convert the count rates (or upper limits) of GX 5\(-\)1 to flux densities. Footnote 10: [https://www.eso.org/sci/software/gasgano.html](https://www.eso.org/sci/software/gasgano.html) On 2023-03-28 (MJD 60031.37), only the B10.7 filter was used, and we derive a 3\(\sigma\) flux density upper limit of 3.27 mJy at 10.64 \(\mu\)m (the airmass was 1.05-1.10). On 2023-03-31 (MJD 60034.34), GX 5\(-\)1 was detected in the \(M\)-band and \(J8.9\) filters, with flux densities of 4.2 \(\pm\) 1.4 mJy at 4.67 \(\mu\)m and 10.0 \(\pm\) 2.3 mJy at 8.70 \(\mu\)m, respectively (at airmass 1.07-1.17). The significance of the detection was 5.9\(\sigma\) in both filters. The errors on the fluxes incorporate the statistical error on each detection and the systematic error from the standard stars, added in quadrature. The source was not detected in the \(B10.7\) and \(B11.7\) filters on 2023-03-31, with flux upper limits that were less constraining than on 2023-03-28. GX 5\(-\)1 lies in a crowded region of the Galactic plane, with several stars detected within 5\(\arcsec\) of the source. The near-IR counterpart was confirmed through photometry and spectroscopy (star 513; Jonker et al. 2000; Bandyopadhyay et al. 2003), and its coordinates agree with the radio and X-ray position. We are confident that the source we detect with VISIR is indeed GX 5\(-\)1, because star 503 from Jonker et al. (2000) and Bandyopadhyay et al. (2003) is also detected (at a low significance of 3.1-3.3\(\sigma\)) at the correct coordinates. This star is estimated from Fig. 1 of Jonker et al. (2000) to lie 4.3\(\arcsec\) to the south-east of GX 5\(-\)1. In the \(J8.9\) VISIR image, the detected source is measured to be 4.28\(\arcsec\) to the south-east of the position of the detected GX 5\(-\)1, as expected. The flux density of star 503 is 1.4 \(\pm\) 0.5 mJy in the \(M\)-band and 4.5 \(\pm\) 2.2 mJy in \(J8.9\).
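A small sketch of the error combination just described, adding the statistical error and the 6.4% standard-star systematic in quadrature (the statistical error used in the example is an assumed value for illustration):

```python
import numpy as np

def total_flux_error(flux_mjy, stat_err_mjy, sys_frac=0.064):
    """Statistical error and the 6.4% standard-star systematic, in quadrature."""
    return np.hypot(stat_err_mjy, sys_frac * flux_mjy)

# e.g., for the J8.9 detection (10.0 mJy), assuming a ~2.2 mJy statistical error:
print(f"{total_flux_error(10.0, 2.2):.1f} mJy")  # ~2.3 mJy, matching the quoted error
```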
### REM and LCO Optical (SDSS \(griz\) filters) and near-infrared (NIR; 2MASS \(H\)-band) observations of GX 5\(-\)1 were acquired with the robotic 60 cm Rapid Eye Mount (REM; Zerbi et al. 2001; Covino et al. 2004) telescope on March 22 and on April 14, 2023 (see Table 1). Strictly simultaneous observations were obtained in all bands: on March 22, a series of 150 30-s exposures was acquired in the \(H\)-band (dithering was applied), together with 20 300-s exposures in each of the optical bands; on April 14, a series of 225 30-s exposures was acquired in the \(H\)-band (dithering was applied), together with 30 300-s exposures in each of the optical bands. Optical observations were also acquired with the 1 m and 2 m telescopes of the Las Cumbres Observatory (LCO) network, using the \(i^{\prime}\) and \(Y\) filters. Observations with the 2 m telescope (Siding Spring - Faulkes Telescope South) were performed on 2023-03-22T16:33:07, 2023-03-29T15:50:11, 2023-03-30T15:46:39 and 2023-03-31T15:43:07 (300 s integration in all epochs and bands), and with the 1 m telescopes (at the locations of Cerro Tololo, Siding Spring and Sutherland) on 2023-03-21T07:09:39, 2023-03-22T01:01:36, 2023-03-22T08:11:35, and on 2023-04-14 at 05:35:19, 08:11:37, 15:11:30 and 23:23:55 (300 s integration in all epochs and bands). The optical images were bias- and flat-field corrected using standard procedures; the contribution of the sky in the NIR images was evaluated by performing a median of the dithered images five-by-five, and was then subtracted from each image. In all bands and epochs, all reduced images were then aligned and averaged in order to increase the signal-to-noise ratio. Aperture photometry was performed using PHOT in IRAF. Flux calibration was performed against a group of 6 stars with magnitudes tabulated in the 2MASS catalog12 and 6 stars from the Pan-STARRS catalog.13 Footnote 12: [https://irsa.ipac.caltech.edu/Missions/2mass.html](https://irsa.ipac.caltech.edu/Missions/2mass.html) Footnote 13: [https://catalogs.mast.stsci.edu/panstarrs/](https://catalogs.mast.stsci.edu/panstarrs/) Due to the very high extinction of the source (\(N_{\rm H}=4.93\times 10^{22}\,{\rm cm}^{-2}\) and \(4.88\times 10^{22}\,{\rm cm}^{-2}\), which translates14 into \(A_{V}=17.2\) mag and 17 mag in Obs. 1 and Obs. 2, respectively) and to the combination of the low spatial resolution of our images and the crowded field of the source, GX 5\(-\)1 is not detected in any of the optical and NIR images acquired. Footnote 14: See Sect. 3.5 and Foight et al. (2016a). A blend of our target with at least one of the nearby stars can be detected at very low significance in the averaged \(H\)-band image, at a position consistent with the proposed optical and NIR counterpart of the source (Jonker et al. 2000, star 513). However, this detection is not significant enough to extract a flux for the blend. We estimate the following 3\(\sigma\) upper limits (only the most constraining ones per epoch are quoted): \(H=13.84\), \(Y=19.08\), \(z=18.28\), \(i^{\prime}=21.79\) (LCO), \(r^{\prime}=20.22\), \(g^{\prime}=20.45\) for Obs. 1, and \(H=14.00\), \(Y=18.86\), \(z=18.35\), \(i^{\prime}=21.22\), \(r^{\prime}=20.46\), \(g^{\prime}=20.50\) for Obs. 2. These upper limits are consistent with the \(H\)-band magnitude of the proposed NIR counterpart reported by Jonker et al. (2000) (star 513; \(H=14.1\pm 0.2\)).
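The extinction quoted in the footnote follows from the relations given in Sect. 3.5; a minimal numerical check:

```python
# Column density to extinction, using N_H = 2.87e21 * A_V (Foight et al. 2016a)
# and A_Ks = 0.062 * A_V (Nishiyama et al. 2008); see Sect. 3.5.
NH = 4.93e22               # cm^-2, Obs. 1
AV = NH / 2.87e21          # ~17.2 mag, as quoted in the footnote
AKs = 0.062 * AV           # ~1.07 mag, later used for the distance estimate
print(f"A_V ~ {AV:.1f} mag, A_Ks ~ {AKs:.2f} mag")
```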
## 3 Results ### Polarimetric model-independent analysis The model-independent analysis of the X-ray polarization with ixpeobssim (see Table 2) in the 2-8 keV energy band gives a PD of 4.3%\(\pm\)0.3% and 2.0%\(\pm\)0.3% in Obs. 1 and Obs. 2, respectively (with 1\(\sigma\) errors). The corresponding PAs are \(-9.7^{\circ}\pm 2.0^{\circ}\) and \(-9.2^{\circ}\pm 4.0^{\circ}\). Figure 2: Polarization of GX 5\(-\)1 measured by _IXPE_ in the 2–8 keV energy range during Obs. 1 and Obs. 2 obtained with xspec and ixpeobssim. Contours are computed for two parameters of interest at 50%, 90%, and 99.9% confidence levels. We see a significantly larger PD in Obs. 1 compared to Obs. 2. However, the PA is compatible with being constant during the two observations. Contour plots of the ixpeobssim analysis in the 2-8 keV energy range are reported in Fig. 2 (dashed lines). The behavior of the polarization as a function of energy is reported in Table 2. \begin{table} \begin{tabular}{c c c c} \hline \hline Energy (keV) & Parameters & Obs. 1 & Obs. 2 \\ \hline 2–8 & PD (\%) & \(4.3\pm 0.3\) & \(2.0\pm 0.3\) \\ & PA (deg) & \(-9.7\pm 2.0\) & \(-9.2\pm 4.0\) \\ \hline 2–3 & PD (\%) & \(4.0\pm 0.5\) & \(2.0\pm 0.4\) \\ & PA (deg) & \(0.2\pm 3.4\) & \(6.0\pm 6.2\) \\ \hline 3–4 & PD (\%) & \(3.8\pm 0.4\) & \(1.7\pm 0.3\) \\ & PA (deg) & \(-14.4\pm 3.8\) & \(-3.5\pm 6.5\) \\ \hline 4–5 & PD (\%) & \(4.4\pm 0.5\) & \(2.5\pm 0.5\) \\ & PA (deg) & \(-9.8\pm 3.5\) & \(-19.2\pm 6.0\) \\ \hline 5–8 & PD (\%) & \(5.4\pm 0.7\) & \(2.6\pm 0.7\) \\ & PA (deg) & \(-14.0\pm 3.7\) & \(-21.0\pm 7.9\) \\ \hline \end{tabular} \end{table} Table 2: Polarization in the 2–8 keV band (see Fig. 2) and in four energy bins estimated with ixpeobssim for Obs. 1 and Obs. 2. The energy dependence of the PD is essentially the same in the two observations, whereas the PA varies from positive to negative values. A proper assessment of this behavior requires the use of the Stokes parameters. The left panels of Fig. 3 show the Stokes parameters as a function of energy for both observations separately. For each observation, the Stokes \(q\) parameters are consistent with being constant in energy, albeit at different values between the first and second observations. On the other hand, the Stokes \(u\) parameters are not compatible with a constant (see the Obs. 1 and Obs. 2 columns of Table 3). This requires a variation of the PA. On the basis of the assumption that the geometry and the physical process producing the polarization are the same in the two observations, we also calculated the Stokes parameters of the two _IXPE_ observations combined (Obs. 1 and Obs. 2) to improve the statistics. While the resulting normalized Stokes parameter \(q\) still remains compatible with a constant value, the Stokes \(u\) is even further from being constant, with reduced \(\chi^{2}\) values for \(\nu=3\) degrees of freedom of \(\chi_{\nu}^{2}=1.26\) and \(\chi_{\nu}^{2}=7.16\) for \(q\) and \(u\), respectively. As anticipated by the separate analysis of the two observations, this behavior of \(q\) and \(u\) implies a variation of the PA. Such a variation, by about \(20^{\circ}\), is highlighted in Fig. 4. In the top panel, the contour plots on the PD-PA plane for the whole _IXPE_ data set are shown. The polarization in the 2-3 keV energy bin is not compatible with that in the 5-8 keV bin with a probability of \(\sim 98.7\%\). In the bottom panel the variation of the
PA with energy is shown, together with the fit by a constant giving a value of \(-9.1^{\circ}\pm 1.6^{\circ}\) with \(\chi^{2}_{\nu}=6.44\) for \(\nu=3\), corresponding to a significance level \(\alpha=0.02\%\). Figure 3: Stokes parameters as a function of energy for _IXPE_ Obs. 1 and Obs. 2 (left panels) and for the combined data set (right panels) obtained with ixpeobssim. The normalized Stokes \(q\) parameter is compatible with a constant value in each one of the _IXPE_ observations, whereas the Stokes \(u\) parameter is not (see Table 3). This behavior of the Stokes parameters is consistent with a variation of the PA (see Fig. 4). \begin{table} \begin{tabular}{c c c c c} \hline \hline Stokes parameter & Best-fit parameters & Obs. 1 & Obs. 2 & Obs. 1 + Obs. 2 \\ \hline \(q\) & constant (\%) & \(3.86\pm 0.25\) & \(1.86\pm 0.24\) & \(2.83\pm 0.17\) \\ & \(\chi_{\nu}^{2}\) & 1.31 & 0.15 & 1.26 \\ & \(\alpha\) (\%) & 26.8 & 93.0 & 28.7 \\ \hline \(u\) & constant (\%) & \(-1.32\pm 0.25\) & \(-0.74\pm 0.24\) & \(-0.89\pm 0.17\) \\ & \(\chi_{\nu}^{2}\) & 4.26 & 4.00 & 7.16 \\ & \(\alpha\) (\%) & 0.52 & 0.74 & 0.01 \\ \hline \end{tabular} \end{table} Table 3: Results of the Stokes parameters fit with a constant value for Obs. 1, Obs. 2, and the combined data set. ### X-ray spectral analysis The NICER, _NuSTAR_ and IBIS/ISGRI light curves contemporaneous with the _IXPE_ observations are shown in Fig. 1. The time-resolved CCD and hardness-intensity diagram (HID) for all the observations with _IXPE_, NICER and _NuSTAR_ show GX 5\(-\)1 moving along the complete Z-track from March 21 to April 14. The _NuSTAR_ CCD and HID, obtained from the GTIs contemporaneous with the _IXPE_ observations (see Fig. 5), clearly highlight the Z shape, also disentangling the NB from the FB, thanks to the wide energy band from 3 to 20 keV used to construct the CCD colors. During _IXPE_ Obs. 1 the source was in the HB, whereas it was in the NB-FB during Obs. 2, which we also checked by constructing the CCD and HID from the _IXPE_ data of both observations. From March 24 to 25 (about 3 days after the end of Obs. 1), when ATCA was observing, NICER detected GX 5\(-\)1 moving from the HB-NB corner towards the NB. We fit the NICER, _NuSTAR_ and IBIS/ISGRI data of Obs. 1 and Obs. 2, presenting the results in Fig. 6 and Table 4. The best fits we obtained are based on the following spectral models: tbabs*(diskbb+expabs*powerlaw+thcomp*bbodyrad) for Obs. 1 and tbabs*(diskbb+thcomp*bbodyrad) for Obs. 2. We included the normalizing cross-calibration multiplicative factors for the NICER, IBIS/ISGRI and _NuSTAR_ FPMA (frozen at unity) and FPMB telescopes. A tbabs (Wilms et al., 2000) multiplicative model component was used to take into account the low-energy absorption due to the interstellar medium. We used the abundance and cross-section tables according to Wilms et al. (2000) and Verner et al. (1996) (wilm, vern in xspec). We modeled the GX 5\(-\)1 energy spectra of both observations with a multicolor disk and a harder boundary/spreading layer (BL or SL) emission (e.g., Popham & Sunyaev, 2001; Revnivtsev et al., 2013). In both observations, the convolution model component thcomp (Zdziarski et al., 2020) is used to represent the Comptonized emission from the BL/SL (e.g., Di Salvo et al., 2002; Farinelli et al., 2009), modeled with the bbodyrad component.
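A minimal PyXspec sketch of the Obs. 2 continuum setup described above; the file names are hypothetical, and this assumes a HEASoft installation with PyXspec and the thcomp model available:

```python
from xspec import AllData, Fit, Model

# Load the spectra into separate data groups (file names are hypothetical).
AllData("1:1 nicer.pha 2:2 nustar_fpma.pha 3:3 nustar_fpmb.pha")

# Obs. 2 continuum: absorbed multicolor disk plus Comptonized BL/SL emission.
m = Model("constant*tbabs*(diskbb + thcomp*bbodyrad)")
m.TBabs.nH = 4.88        # 10^22 cm^-2, starting value from Table 4
Fit.statMethod = "chi"
Fit.perform()
```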
The \(f\) parameter of thcomp represents the fraction of Comptonized seed photons. In Obs. 1, this parameter is \(>0.63\), with the best-fit value equal to 0.99: effectively, all photons from the BL/SL bbodyrad are Comptonized. In Obs. 2, only a fraction between 2.7% and 11.2% (best-fit 3.2%) of the seed photons are Comptonized. It is worth noting that a higher Comptonization fraction corresponds to a higher PD observed from the source. \begin{table} \begin{tabular}{l l c c} \hline \hline Components & Parameters & Obs. 1 & Obs. 2 \\ \hline edge & \(E\) (keV) & 1.82148 (frozen) & \\ & \(\tau\) & \(0.172^{+0.08}_{-0.08}\) & \(0.152\pm 0.017\) \\ edge & \(E\) (keV) & 1.95197 (frozen) & \\ & \(\tau\) & \(0.051\pm 0.016\) & \(0.484\pm 0.016\) \\ edge & \(E\) (keV) & 2.28003 (frozen) & \\ & \(\tau\) & \(0.037\pm 0.014\) & \(0.025\pm 0.014\) \\ edge & \(E\) (keV) & 2.44444 (frozen) & \\ & \(\tau\) & \(0.050\pm 0.013\) & \(0.038\pm 0.013\) \\ edge & \(E\) (keV) & 3.16139 (frozen) & \\ & \(\tau\) & \(0.0202^{+0.000}_{-0.000}\) & \(0.012\pm 0.007\) \\ \hline \multicolumn{4}{c}{Continuum parameters} \\ tbabs & \(N_{\rm H}\) (\(10^{22}\) cm\({}^{-2}\)) & \(4.93^{+0.12}_{-0.06}\) & \(4.88\pm 0.04\) \\ diskbb & \(kT\) (keV) & \(0.95\pm 0.07\) & \(1.20\pm 0.03\) \\ & \(R_{\rm in}\sqrt{\cos\theta}\) (km) & \(25^{+1}_{-1}\) & \(19.6^{+4.3}_{-1.2}\) \\ thcomp & \(\Gamma\) & \(2.35^{+1.3}_{-1.3}\) & \(<2.14\) \\ & \(kT_{\rm e}\) (keV) & \(2.96^{+0.08}_{-0.07}\) & \(3.08^{+0.17}_{-0.11}\) \\ & \(f\) & \(0.99^{+0.17}_{-0.12}\) & \(0.03^{+0.03}_{-0.05}\) \\ bbodyrad & \(kT_{\rm bb}\) (keV) & \(1.27^{+0.28}_{-0.12}\) & \(1.68^{+0.03}_{-0.04}\) \\ & \(R_{\rm bb}\) (km) & \(19.1^{+1.1}_{-1.2}\) & \(9.3^{+0.24}_{-0.5}\) \\ expabs & \(E_{\rm cut}\) (keV) & \(=kT_{\rm bb}\) (tied) & – \\ powerlaw & \(\Gamma\) & \(2.62^{+0.20}_{-0.20}\) & – \\ & norm & \(0.45^{+0.23}_{-0.14}\) & – \\ \hline \multicolumn{4}{c}{Cross-calibration constants} \\ const & \(C_{\rm NICER}\) & \(1.107\pm 0.003\) & \(1.034\pm 0.004\) \\ & \(C_{\rm FPMA}\) & 1 (frozen) & \\ & \(C_{\rm FPMB}\) & \(1.0136\pm 0.0013\) & \(0.998\pm 0.0014\) \\ & \(C_{\rm ISGRI}\) & \(0.66\pm 0.08\) & \(1.29\pm 0.03\) \\ \hline & \(\chi^{2}\)/d.o.f. & 421\(\pm\)/51 & 338/365 \\ \hline \end{tabular} **Notes.** The errors are at 90% confidence level. The edges reported in the top section of the table refer only to the NICER energy spectrum. Radii are estimated assuming a distance to the source of 7.6 kpc (see Sect. 3.5). The optical depth \(\tau_{\rm thcomp}\) comes from Eq. (14) in Zdziarski et al. (2020). The unabsorbed flux is measured in units of \(10^{-8}\) erg cm\({}^{-2}\) s\({}^{-1}\). \end{table} Table 4: Best-fit parameters of the GX 5\(-\)1 spectral model from the NICER, _NuSTAR_ and IBIS/ISGRI data simultaneous with the _IXPE_ observations. Figure 4: Polarization contour plot (top panel) and PA (bottom panel) as a function of energy for the combined _IXPE_ data set (Obs. 1 and Obs. 2). During Obs.
1, when the source is on the HB, the energy spectrum is harder, and a high-energy excess is seen (it was absent in Obs. 2). This excess is modeled with an additional hard tail, represented by a power-law component with a low-energy exponential roll-off. The e-folding energy for the absorption of the exponential roll-off is set equal to the bbodyrad \(kT_{\rm bb}\) energy of the SL/BL. This parameter link comes from the assumption that the power-law emission originates from the seed photon distribution of the SL/BL blackbody. The BL and the inner disk are hotter in Obs. 2 than in Obs. 1, while the BL sphere-equivalent radius and the inner disk radius are larger in Obs. 1. In contrast to Obs. 1, the additional power-law component is not needed to model the spectrum of Obs. 2. The softer disk component dominates up to \(\sim 3.5\) keV in Obs. 1 and up to \(\sim 5.5\) keV in Obs. 2 (see Fig. 6). In the 2-8 keV band, the energy flux in the Comptonized component (including the contribution of the exponentially absorbed power-law component) accounts for \(\sim 61\)% of the total in Obs. 1. In Obs. 2, the energy flux of the Comptonized component drops to \(\sim 33\)% of the total in the same energy band. ### X-ray spectro-polarimetric analysis The results of the spectro-polarimetric analysis, which includes the _IXPE_ data once the spectral model used to fit the data from the other observatories is frozen, are reported in Table 5. The PD obtained with a polconst multiplicative model component applied to the whole spectral model of GX 5\(-\)1 in Obs. 1 and Obs. 2 is \(3.7\%\pm 0.4\%\) and \(1.8\%\pm 0.4\%\), respectively. The PA is around \(-9\degr\) in both observations. The polarization contour plot in the 2-8 keV energy band obtained with xspec (polconst model applied to the whole spectral model) is shown in Fig. 2. These contours are nearly identical to those obtained using ixpeobssim. Figure 5: _NuSTAR_ CCD and HID of GX 5\(-\)1 contemporaneous with the _IXPE_ observations. During _IXPE_ Obs. 1 the source was in the HB, while during Obs. 2 it was in the NB-FB. Black, red and purple colors refer to the observing days March 21, April 13 and 14, 2023, respectively. Figure 6: Deconvolved NICER (1.5–10 keV), _NuSTAR_ (3.5–45 keV for Obs. 1 and 3.5–35 keV for Obs. 2) and IBIS/ISGRI (28–50 keV) spectra simultaneous with both _IXPE_ observations, as obtained from the best-fit model reported in Table 4, and residuals between data and model in units of \(\sigma\). The spectro-polarimetric analysis with xspec allows us to assign a polarization to the different spectral components. The result of such an analysis is reported in Table 5 and Fig. 7. For Obs. 1, we assigned the same polarization to the expabs*powerlaw and thcomp*bbodyrad components, assuming that the power law is just a continuation of the BL/SL component, with the power-law low-energy rollover \(E_{\rm cut}\) being equal to the temperature of the BL/SL bbodyrad \(kT_{\rm bb}\). The Comptonization component is well constrained at the 99.9% confidence level only in Obs. 1, while the disk component is not constrained at 99% in either observation (see Fig. 7). Moreover, in both observations the polarization of each component has a similar PA, suggesting that the geometry responsible for the polarization is similar. Figure 8 shows the fit of the \(Q\) and \(U\) Stokes parameters as a function of energy with the two polconst components for both observations. The contribution to the total flux of the polarized component is different (higher in Obs.
1), probably due to a dilution effect by unpolarized radiation connected with the low covering fraction of the Comptonization component (see Table 4). Unfortunately, even though the source was active in the radio band during the observation campaign, it is impossible to compare the direction of the PA with the direction of the jet, because the radio observations reported in this paper do not have the spatial resolution to resolve it; moreover, the literature does not report any information about the jet direction. The variation of the PA of the total emission with energy, reported in Sect. 3.1 and obtained with the ixpeobssim model-independent analysis, can be explained by the different PAs of the disk and Comptonized spectral components, which are not aligned, nor 90\({}^{\circ}\) apart. Indeed, the PA of the total emission varies from positive values at lower energies, which are dominated by the diskbb component with a positive PA, to negative values at higher energies, which are dominated by the Comptonization component with a negative PA. When assessing the polarization as a function of time, no significant variations are seen. This implies that the polarization is not sensitive to the flux variations present in the light curves (see Fig. 1). \begin{table} \begin{tabular}{l l c c} \hline \hline DU & Parameters & Obs. 1 & Obs. 2 \\ \hline DU1 & \(N_{\rm DU1}\) & \(0.796\pm 0.006\) & \(0.798\pm 0.004\) \\ & gain slope & \(0.953^{+0.001}_{-0.002}\) & \(0.961\pm 0.003\) \\ & gain offset (keV) & \(0.13\pm 0.02\) & \(0.102\pm 0.015\) \\ DU2 & \(N_{\rm DU2}\) & \(0.772\pm 0.006\) & \(0.769\pm 0.004\) \\ & gain slope & \(0.952\pm 0.003\) & \(0.960\pm 0.003\) \\ & gain offset (keV) & \(0.15\pm 0.02\) & \(0.131\pm 0.014\) \\ DU3 & \(N_{\rm DU3}\) & \(0.734\pm 0.005\) & \(0.736\pm 0.004\) \\ & gain slope & \(0.976\pm 0.004\) & \(0.966\pm 0.003\) \\ & gain offset (keV) & \(0.10\pm 0.02\) & \(0.114\pm 0.015\) \\ \hline Components & Parameters & Obs. 1 & Obs. 2 \\ \hline \multicolumn{4}{l}{polconst*(diskbb+expabs*powerlaw+thcomp*bbodyrad)} \\ & PD (\%) & \(3.7\pm 0.4\) & \(1.8\pm 0.4\) \\ & PA (deg) & \(-9\pm 3\) & \(-9\pm 6\) \\ & \(\chi^{2}\)/d.o.f. & 18007(779) & 17337171 \\ \multicolumn{4}{l}{polconst*diskbb+polconst*(expabs*powerlaw+thcomp*bbodyrad)} \\ polconst (disk) & PD (\%) & \(2.3\pm 0.9\) & \(1.8\pm 0.9\) \\ & PA (deg) & \(21\pm 11\) & \(14\pm 15\) \\ polconst (Compt.) & PD (\%) & \(5.7\pm 1.4\) & \(4.3\pm 2.0\) \\ & PA (deg) & \(-16\pm 20\) & \(-32\pm 14\) \\ & \(\chi^{2}\)/d.o.f. & 1792/1806 & 1726/1718 \\ \hline \end{tabular} \end{table} Table 5: Best-fit parameters of the polarization analysis with xspec. Figure 7: Polarization contour plots from the spectro-polarimetric analysis in the 2-8 keV energy range of Obs. 1 (top panel) and Obs. 2 (bottom panel). The polconst polarimetric model is applied separately to the diskbb component of both _IXPE_ observations and to the (expabs*powerlaw+thcomp*bbodyrad) components as a whole. The expabs*powerlaw component (labeled as pl in the plot legend) is included only in Obs. 1. Contour plots are computed for four parameters of interest, taking into account the fact that the polarizations of the disk and Comptonization components are correlated in the simultaneous fit. ### Spectral energy distribution Thanks to the multiwavelength observation campaign, it was possible to produce a broadband (radio to X-ray) spectral energy distribution (SED) of GX 5\(-\)1 for both observations (see Fig. 9). The optical and infrared fluxes have been de-reddened using the hydrogen column density reported in this work (\(N_{\rm H}=4.93\times 10^{22}\) cm\({}^{-2}\) and \(4.88\times 10^{22}\) cm\({}^{-2}\) in Obs. 1 and 2, respectively), which was converted into an estimate of the \(V\)-band extinction \(A_{V}\) using the relation reported in Foight et al. (2016b), resulting in \(A_{V}=17.18\pm 0.77\) mag and \(17.00\pm 0.72\) mag for the two observations, respectively. The different absorption coefficients at optical and near-IR wavelengths have then been evaluated using the relations reported in Cardelli et al. (1989) and Nishiyama et al. (2008), respectively. For the mid-IR, the coefficients reported in Weingartner & Draine (2001) have instead been used. GX 5\(-\)1 is detected at radio and mid-IR wavelengths, with upper limits in the near-IR and optical. As mentioned in Sect. 2.7, this is due to the large value of the dust extinction, \(A_{V}\approx 17\). This implies an extinction of \(\sim\)7 mag at 1 \(\mu\)m (\(Y\)-band) and \(\sim\)3 mag at 1.6 \(\mu\)m (\(H\)-band). The radio spectral index of \(-0.4\pm 0.1\) is too steep for the radio emission to arise from a steady, compact jet (for which we expect a spectral index from \(\sim\)0 to +0.5). As such, it is likely due to optically thin jet ejections, or a combination of a compact jet and optically thin ejections. In the mid-IR, we report the first detection of this source. The de-reddened flux densities are comparable to the near-IR values reported in the literature (Naylor et al. 1991; Jonker et al. 2000; Bandyopadhyay et al. 2003); however, the de-reddened near-IR fluxes depend sensitively on the value of the interstellar extinction. We note that different values of the neutral hydrogen column density are reported in the literature, ranging from \(N_{\rm H}=2.54\times 10^{22}\) cm\({}^{-2}\) to \(6.20\times 10^{22}\) cm\({}^{-2}\) (Christian & Swank, 1997; Zeegers et al., 2017; Homan et al., 2018; Clark, 2018; Bhulla et al., 2019). Yang et al. (2022) derived \(N_{\rm H}=(4.52\pm 0.01)\times 10^{22}\) cm\({}^{-2}\) by measuring the Si K edge due to scattering by dust using the _Chandra_ gratings. This value is in agreement with our measurements, because \(N_{\rm H}\) is slightly overestimated when fitting the continuum with the absorption model tbabs (Corrales et al., 2016). In this work, we observed GX 5\(-\)1 over a significantly larger energy range, and we are confident that we have properly constrained the absorption in the interstellar medium. However, if there is a component of the neutral hydrogen column which is intrinsic to the LMXB, this could cause varying measurements and introduce uncertainty into the relation between \(N_{\rm H}\) and the extinction \(A_{V}\). Moreover, older works using Anders & Grevesse (1989) solar abundances, rather than Wilms et al. (2000) interstellar abundances, could be affected by model systematics. The mid-IR flux measured with VLT VISIR is higher than the extrapolation of the ATCA spectral index from radio to mid-IR. The mid-IR emission is therefore unlikely to be due to optically thin synchrotron emission from the discrete jet ejections that were seen in the radio, but it could be optically thin synchrotron emission from the compact jet (from above the jet spectral break; Russell et al., 2013). We find evidence of strong mid-IR variability between the two epochs, with the 9\(-\)11 \(\mu\)m flux density changing from \(<3.27\) mJy on 2023-03-28 to \(9.95\pm 2.31\) mJy on 2023-03-31.
While this variation by a factor of \(\geq 3\) (\(\geq 1.2\) mag) is quite high for an LMXB accretion disk on these timescales, high-amplitude (spanning several magnitudes) infrared variability has been reported from a number of bright, persistent NS-LMXBs, including GX 17+2, Cir X-1, 4U 1705\(-\)440 and GX 13+1 (e.g. Glass, 1994; Callanan et al., 2002; Bandyopadhyay et al., 2002; Homan et al., 2009; Corbet et al., 2010; Harrison et al., 2011). Figure 8: Fit of the _IXPE_ Stokes parameters \(Q\) and \(U\) as a function of energy with the model comprising two polconst components applied to the disk (dotted lines) and to the Comptonization (dashed lines) for Obs. 1 (left column) and Obs. 2 (right column). Figure 9: Broad-band radio to X-ray SED of GX 5\(-\)1 during the two observations. For Obs. 2, no mid-IR flux is reported, as observations were not performed with VISIR at that time (see Sect. 2.6 for details). The flux densities at mid-IR, near-IR and optical frequencies have been de-reddened as described in Sect. 3.4. The errors are at the 68% confidence level. This variability has been generally interpreted as indicative of highly variable synchrotron emission from a compact jet. Variable near-IR polarization has also been reported from the bright, persistent NS-LMXBs Sco X-1 and Cyg X-2, as well as the NS-LMXB transient SAX J1808.4\(-\)3658 (Shahbaz et al. 2008; Russell & Fender 2008; Baglio et al. 2020). The de-reddened mid-IR 4.7-8.7 \(\mu\)m spectral index on 2023-03-31 is \(-\)2.3 to \(-\)0.4 (adopting \(A_{V}=17.18\pm 0.77\)), which is consistent with optically thin synchrotron emission from relativistic particles, or a steeper particle distribution with a thermal (Maxwellian) component, as has been seen at infrared wavelengths from jets in some black hole LMXBs (e.g., Russell et al. 2010; Shahbaz et al. 2013). Thus, this emission could originate from a compact jet which peaks in the mid- or far-IR, although follow-up observations characterizing the IR spectrum and variability would be beneficial to confirm the nature of the IR emission. If the compact jet is present, its spectrum from radio to mid-IR must be inverted, with index \(>0.33\) (and fainter than the observed radio emission). The radio to IR spectrum is similar in some ways to those of the NS-LMXBs 4U 1728\(-\)34 and 4U 0614+091 (Migliari et al. 2010; Diaz Trigo et al. 2017). ### Measurement of the distance to the source In order to derive some spectral model parameters (namely \(R_{\rm in}\sqrt{\cos\theta}\) of diskbb and \(R_{\rm bb}\) of bbodyrad), an estimate of the distance to the source is needed. Such an estimate can be obtained from the equivalent hydrogen column density \(N_{\rm H}\) which we obtain from our analysis. We find that \(N_{\rm H}=(4.93^{+0.12}_{-0.06})\times 10^{22}\) cm\({}^{-2}\) and \((4.88^{+0.03}_{-0.04})\times 10^{22}\) cm\({}^{-2}\) for the first and the second observations, respectively. Because the two values are consistent at the 90% confidence level, we use the average value with its largest uncertainty range, that is, \((4.91^{+0.14}_{-0.07})\times 10^{22}\) cm\({}^{-2}\), in the following discussion. To estimate the distance to the source we adopt the approach proposed by Gambino et al. (2016). We use the model of the infrared Galactic interstellar extinction discussed by Marshall et al. (2006).
Because the Galactic coordinates of GX 5\(-\)1 are \(l=5.08^{\circ}\) and \(b=-1.02^{\circ}\), we adopt the map that relates the infrared extinction \(A_{K_{\rm s}}\) to the source distance \(d\) valid for \(l=5^{\circ}\) and \(b=-1^{\circ}\) (see the black dots with the error bars in Fig. 10). Because the visual extinction \(A_{V}\) is related to \(N_{\rm H}\) as (Foight et al. 2016a) \[N_{\rm H}=(2.87\pm 0.12)\times 10^{21}~{}A_{V}, \tag{1}\] and the relation between \(A_{V}\) and the extinction in the \(K_{\rm s}\) band is (Nishiyama et al. 2008) \[A_{K_{\rm s}}=(0.062\pm 0.005)~{}A_{V}, \tag{2}\] we obtain \[A_{K_{\rm s}}=\frac{(0.062\pm 0.005)}{(2.87\pm 0.12)\times 10^{21}}~{}N_{\rm H}\;{\rm mag}=1.06\pm 0.10\;{\rm mag}. \tag{3}\] We fit the \(A_{K_{\rm s}}\) values between 5 and 15 kpc with a linear function (the best-fit line and the lines taking into account the associated errors are shown in blue in Fig. 10). We infer that the distance to the source is \(d=7.6\pm 1.1\) kpc. This distance is in agreement with the previously reported values (Penninx 1989; Smith et al. 2006). Figure 10: Fit of the infrared extinction \(A_{K_{\rm s}}\) between 5 and 15 kpc from Marshall et al. (2006) with a linear function (the best-fit line and the lines taking into account the associated errors are in blue). The red horizontal lines represent the estimate, with the corresponding upper and lower limits, of \(A_{K_{\rm s}}\) from Eq. (3). The distance to the source is thus \(d=7.6\pm 1.1\) kpc. ## 4 Discussion As shown in Sect. 3, the two multiwavelength observations of GX 5\(-\)1 allowed us to catch the source while it was covering the complete Z-track on its CCD/HID (see Fig. 5). In particular, during the first observation GX 5\(-\)1 was on the HB of the track, while in the second it moved across to the NB and FB. Very interestingly, we found the same behavior as in the peculiar transient XTE J1701\(-\)462 (Cocchi et al. 2023), in which the PD was correlated with the source position on the Z-track, being higher in the HB and decreasing by a factor of about two in the NB. Long et al. (2022) hypothesized the same behavior also for Sco X-1. There are at least three regions which may potentially contribute to the polarization: the BL/SL, the accretion disk, and the reflection of the BL/SL photons off the disk atmosphere or a wind. The spectro-polarimetric analysis of the _IXPE_ data (Table 5) shows that the disk polarization is about 2% in both observations, which is compatible with the classical result for a high optical depth scattering atmosphere at an inclination of 60\(\degr\) (Chandrasekhar 1960). The higher PD value of the hard component, on the other hand, cannot be explained by repeated Compton scattering in a high optical depth environment, either for a boundary layer coplanar with the disk (which would otherwise resemble a Chandrasekhar-like slab) or for a spreading layer around the neutron star, for which a maximum PD of \(\la 2\%\) is expected. Disk reflection is probably the most natural way to explain a PD of the order of 4%-5%, as shown by Lapidus & Sunyaev (1985). It is important to note that we do not find a strong reflection signature in the spectral analysis; it deserves mentioning, however, that the reflection contribution to the spectrum may be low and sometimes embedded in the continuum but may, nevertheless, make a large contribution to the net polarization signal (Schnittman & Krolik 2009). This may be particularly true if the primary spectrum is not a hard power law, but a blackbody-like spectrum with a rollover below 30 keV. Note that a similar argument has been used also for Cyg X-2 (Farinelli et al. 2023) and GX 9+9 (Ursini et al. 2023) to explain the high PD attributed to the Comptonization spectrum in a two-fold spectro-polarimetric approach. The PD of the reflected radiation is not easy to predict, because it depends on geometrical and physical factors (e.g., the disk ionization parameter); however, it is not likely to exceed \(\sim\)20% (Matt, 1993; Poutanen et al., 1996; Schnittman & Krolik, 2009). We tested the hypothesis that there is a reflection component in the GX 5\(-\)1 energy spectrum by making some assumptions: (1) a highly ionized disk, suggested by the absence of a broad emission line in the Fe-K region of the spectrum; (2) an inclination \(i=60^{\circ}\), since GX 5\(-\)1 is a Cyg-like Z-source (Homan et al., 2018); (3) a reflection amplitude \(f=\Omega/2\pi\) kept at a typical value for NS-LMXBs (namely 30%; see, e.g., Di Salvo et al., 2015; Matranga et al., 2017). We applied the following spectral models: Obs. 1: constant*tbabs(diskbb + expabs*powerlaw + rdblur*rfxconv*comptb + comptb), Obs. 2: constant*tbabs(diskbb + rdblur*rfxconv*comptb + comptb). The comptb (Farinelli et al., 2008) model component was included in place of thcomp to prevent a double convolution of the BL/SL blackbody. Details on the rfxconv and rdblur model components can be found in Kolehmainen et al. (2011) and Fabian et al. (1989), respectively. The sum rdblur*rfxconv*comptb + comptb accounts for the reflected radiation (the first term) plus the incident radiation (the second term). With this modeling, the reflection component contributes \(\sim\)22% of the flux in Obs. 1 and \(\sim\)12% in Obs. 2. Such a contribution to the total emission can easily account for the polarization detected (see Table 6 for the details of the fit parameters of the energy spectrum). The fraction of Comptonized photons in Obs. 2 is significantly smaller than in Obs. 1, as also obtained in Sect. 3 by applying only the Comptonization model thcomp. The parameter \(\log A=-1.54^{+0.54}_{-0.06}\) in comptb corresponds to \(2.8^{+6.3}_{-0.4}\)% of the photons being upscattered in energy (consistent with the \(3.2^{+8.0}_{-0.5}\)% obtained when reflection is not taken into account, as in Sect. 3). This confirms the presence of a blackbody component observed through an almost vanishing Comptonizing medium in Obs. 2, even when reflection is included. However, the fraction of Comptonized photons, albeit small (\(\sim\)3%), is still sufficient to manifest its presence at high energy. Indeed, if we neglect this small fraction of Comptonized photons by substituting comptb with a blackbody component such as bbodyrad, we obtain an unacceptable fit (\(\chi^{2}_{\nu}=3.6\), with a significant excess at high energy). Another possible mechanism for producing the polarization is related to scattering in a wind above the accretion disk: as shown by Sunyaev & Titarchuk (1985), emission scattered once in a plane (e.g., an equatorial wind) can be polarized up to 27% for an inclination \(i=60^{\circ}\). This polarization degree is only weakly dependent on the opening angle of the wind. Assuming that 20% of the source emission is scattered, we can obtain the observed PD.
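Two of the numerical statements above can be verified directly; a short sketch, under the assumption that comptb weights its Comptonized term by \(A/(1+A)\):

```python
# comptb: fraction of seed photons Comptonized for log A = -1.54,
# assuming the Comptonized term is weighted by A / (1 + A):
A = 10 ** -1.54
print(f"Comptonized fraction ~ {A / (1 + A):.1%}")  # ~2.8%, as quoted

# Single-scattering wind estimate (Sunyaev & Titarchuk 1985): ~20% of the
# emission scattered with up to ~27% polarization at i = 60 deg gives
print(f"net PD ~ {0.20 * 0.27:.1%}")  # ~5%, of the order of the observed PD
```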
Recently, a similar model was shown to explain well the presence of a constant polarized component in the X-ray pulsar RX J0440.9+4431 / LS V +44 17 (Doroshenko et al., 2023). Although it is well known that strong winds can indeed be present in the soft states of X-ray binaries (e.g., Neilsen & Lee, 2009; Ponti et al., 2012, 2014), it is worth noting that we do not see wind absorption features in the energy spectrum of GX 5\(-\)1, implying that, if present, the wind should be completely ionized. Both _IXPE_ observations show a similar behavior of the PD values: they are smaller in the 2\(-\)4 keV band and increase slightly with energy. The reduction of the PD at lower energies can be explained by the energy dependence of the disk emission: in the low-energy band, marginally polarized disk emission dominates the spectrum, while at higher energies the emission of the SL and/or the scattered/reflected component is more visible. Another feature of the observed emission that needs to be addressed is the variation of the PA with energy. In the spectro-polarimetric analysis, the disk and the Comptonization components have non-orthogonal PAs; thus, the variation of the polarization plane of the total emission with energy can be interpreted as the energy-dependent contribution of these two components to the total emission. The same applies if, for instance, the SL emission is scattered in the wind and the disk emission is polarized at a different, non-orthogonal angle: the PA of the combined emission will be energy dependent. It is well known that the disk emission exhibits a rotation of the polarization plane with energy (Connors et al., 1980; Dovciak et al., 2008; Loktev et al., 2022). Simulations of the SL emission also show a change of the PA with energy, but predict a PD of at most 1.5% (Bobrikova A., in prep.). Thus, a single-component explanation of the PA variation cannot be considered compatible with the high measured PD. Further modeling will be needed to satisfactorily address the explanations presented above. ## 5 Summary _IXPE_ observed the NS-LMXB GX 5\(-\)1 twice in the period March-April 2023. Simultaneous observations in the X-ray energy band were carried out with NICER, _NuSTAR_, and _INTEGRAL_. Multiwavelength coverage was ensured by ATCA in the radio, VLT VISIR in the mid-IR, REM in the optical and NIR, and LCO in the optical. During the observations GX 5\(-\)1 moved across the entire Z-track. _NuSTAR_ clearly disentangled the NB from the FB thanks to its extended energy band. The presence of a hard tail, reported in previous analyses, was clearly detected in Obs. 1, but not in Obs. 2, when the source had a softer energy spectrum. \begin{table} \begin{tabular}{l l c c} \hline \hline Components & Parameters & Obs. 1 & Obs. 2 \\ \hline edge & \(E\) (keV) & \(1.820\pm 0.013\) & \(1.818\pm 0.013\) \\ & \(\tau\) & \(0.15\pm 0.02\) & \(0.15\pm 0.02\) \\ edge & \(E\) (keV) & 1.95197 (fixed) & \\ & \(\tau\) & \(0.05\pm 0.02\) & \(0.05\pm 0.02\) \\ edge & \(E\) (keV) & 2.28003 (fixed) & \\ & \(\tau\) & \(0.034\pm 0.014\) & \(0.027\pm 0.013\) \\ edge & \(E\) (keV) & 2.44444 (fixed) & \\ & \(\tau\) & \(0.049\pm 0.013\) & \(0.038\pm 0.013\) \\ edge & \(E\) (keV) & 3.16139 (fixed) & \\ & \(\tau\) & \(0.021\pm 0.007\) & \(0.011\pm 0.007\) \\ tbabs & \(N_{\rm H}\) (\(10^{22}\) cm\({}^{-2}\)) & \(4.80\pm 0.05\) & \(4.87\pm 0.05\) \\ diskbb & \(kT\) (keV) & \(1.21\pm 0.05\) & \(1.20\pm 0.03\) \\ & \(R_{\rm in}\sqrt{\cos\theta}\) (km) & \(16\pm 1\) & 19… \\ \end{tabular} \end{table} Table 6: Best-fit parameters of the spectral model including the reflection component (see Sect. 4); the remainder of the table is truncated in the source.
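As an illustration of the non-orthogonal two-component interpretation discussed in Sect. 4, the following sketch mixes a disk-like and a Comptonization-like polarized component (PD/PA values loosely based on Table 5, Obs. 1; the flux fractions are assumed) and shows the resulting rotation of the PA:

```python
import numpy as np

def stokes(pd, pa_deg):
    """Normalized Stokes (q, u) for a component with given PD and PA."""
    pa = np.radians(pa_deg)
    return pd * np.cos(2 * pa), pd * np.sin(2 * pa)

q_d, u_d = stokes(0.023, 21.0)   # disk-like component (cf. Table 5, Obs. 1)
q_c, u_c = stokes(0.057, -16.0)  # Comptonization + hard tail
for f_disk in (0.8, 0.5, 0.2):   # disk flux fraction, decreasing with energy
    q = f_disk * q_d + (1 - f_disk) * q_c
    u = f_disk * u_d + (1 - f_disk) * u_c
    pa = 0.5 * np.degrees(np.arctan2(u, q))
    print(f"f_disk={f_disk:.1f}: PD={np.hypot(q, u):.1%}, PA={pa:+.0f} deg")
# The PA swings from ~+8 deg to ~-13 deg, i.e., a ~20 deg rotation with energy.
```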
The X-ray PD was \(\sim\)4% during Obs. 1, when the source was in the HB, and \(\sim\)2% during Obs. 2, with the source in the NB-FB. This result is in agreement with the findings for the other Z-sources observed by _IXPE_ (namely Cyg X-2 and XTE J1701\(-\)462). The source manifested an unexpected variation of the PA as a function of energy by \(\sim 20^{\circ}\). The magnitude of the variation, combined with the magnitude of the PD, requires further modeling. However, it is likely related to the different PAs of the disk and Comptonization components, which are non-orthogonal and, moreover, have emission peaks at different energies. In the radio band, the source was detected, but only upper limits to the polarization were obtained (\(\sim\)6% at 5.5 GHz and 9 GHz in Obs. 1, and 12.5% at 5.5 GHz and 20% at 9 GHz in Obs. 2). The mid-IR counterpart was detected in the M and J8.9 bands. This emission could originate from a compact jet which peaks in the mid- or far-IR. Follow-up observations characterizing the IR spectrum and its variability would be beneficial. Due to the very high extinction toward the source and to the crowded field, the source was not detected in any of the optical and NIR images acquired. ###### Acknowledgements. The Imaging X-ray Polarimetry Explorer (IXPE) is a joint US and Italian mission. The US contribution is supported by the National Aeronautics and Space Administration (NASA) and led and managed by its Marshall Space Flight Center (MSFC), with industry partner Ball Aerospace (contract NNM15AA18C). The Italian contribution is supported by the Italian Space Agency (Agenzia Spaziale Italiana, ASI) through contract ASI-OHBI-2017-12-I.0, agreements ASI-INAF-2017-12-H0 and ASI-INFN-2017.13-H0, and its Space Science Data Center (SSDC) with agreements ASI-INAF-2022-14-H.0 and ASI-INFN 2021-43-HH.0, and by the Istituto Nazionale di Astrofisica (INAF) and the Istituto Nazionale di Fisica Nucleare (INFN) in Italy. This research used data products provided by the IXPE Team (MSFC, SSDC, INAF, and INFN) and distributed with additional software tools by the High-Energy Astrophysics Science Archive Research Center (HEASARC), at NASA Goddard Space Flight Center (GSFC). ATCA is part of the Australia Telescope National Facility ([https://ror.org/05qajvd42](https://ror.org/05qajvd42)), which is funded by the Australian Government for operation as a National Facility managed by CSIRO. We acknowledge the Gomeroi people as the Traditional Owners of the ATCA Observatory site. Partly based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), and with the participation of the Russian Federation and the USA. Data have been reduced using the Multi-Messenger Online Data Analysis platform provided by the ISDC (University of Geneva), EPFL, APC, and KAU. J.P. acknowledges support from the Academy of Finland grant 333112. A.B. is supported by the Finnish Cultural Foundation grant 0220175. T.D.R. thanks Statevens and James Miller-Jones for helpful discussions regarding the radio observations. J.S. and M.D. acknowledge the support from the GACR project 21-08625X.
2305.03446
A construction of deformations to general algebras
One of the questions investigated in deformation theory is to determine to which algebras a given associative algebra can be deformed. In this paper we investigate a different but related question, namely: for a given associative finite-dimensional C-algebra A, find algebras N which can be deformed to A. We develop a simple method which produces associative and flat deformations to investigate this question. As an application of this method we answer a question of Michael Wemyss about deformations of contraction algebras.
Dave Bowman, Dora Puljic, Agata Smoktunowicz
2023-05-05T11:47:00Z
http://arxiv.org/abs/2305.03446v1
# A construction of deformations to general algebras

###### Abstract

One of the questions investigated in deformation theory is to determine to which algebras a given associative algebra can be deformed. In this paper we investigate a different but related question, namely: for a given associative finite-dimensional \(\mathbb{C}\)-algebra \(A\), find algebras \(N\) which can be deformed to \(A\). We develop a simple method which produces associative and flat deformations to investigate this question. As an application of this method we answer a question of Michael Wemyss about deformations of contraction algebras.

## Introduction

Michael Wemyss and Will Donovan developed a method for characterising commutative rings which appear in algebraic geometry by using methods from noncommutative ring theory. In [1] they introduced contraction algebras, which provide important insight into resolutions of noncommutative singularities and invariants of flops. Contraction algebras can be described in a purely algebraic way, by using generators and relations. They appear in many questions which can be investigated by noncommutative ring theorists who are not familiar with advanced methods of algebraic geometry. Some of these questions are related to nilpotent rings. In 2022, Gavin Brown and Michael Wemyss described all finite dimensional contraction algebras which are local [2]. Notice that the Jacobson radical of a finite dimensional algebra is nilpotent, and hence contraction algebras which are local are very close to being nilpotent. Some other questions involve the characterisation of contraction algebras which are not local. Deformations of contraction algebras give insight into invariants of flops. In this context Michael Wemyss asked questions about deformations of contraction algebras which are local. Using geometric methods he was able to conjecture which rings can be obtained as deformations of these contraction algebras. Since contraction algebras are noncommutative, the usual methods using derivations as in [1] cannot be applied. Another approach can be to use Hochschild cohomology and the Gerstenhaber bracket; however, this often leads to complicated calculations as seen in [11]. In this paper, we develop a method to calculate deformations of noncommutative local algebras. In particular we answer a question of Michael Wemyss regarding deformations of contraction algebras. For information about contraction algebras we refer the reader to [12, 13]. In [10, Conjecture 4.3] Hua and Toda conjectured that there exists an algebraic flat deformation of the contraction algebra. As an application we confirm this conjecture for one of the contraction algebras constructed in [1]. Note that our method can be applied to other contraction algebras described in [1]. Observe that Theorem 4.2 of [10] gives important information about deformations of contraction algebras which were obtained by using geometric methods. More information related to the existence of geometric deformations is given on pages 7-9 in [14]. In [11] deformations of graded Hecke algebras were constructed using novel methods. It is known that deformations of graded Hecke algebras give important information about the Hecke algebras. Our method could also be used as another method of choice for constructing deformations of graded Hecke algebras. Some other methods of constructing deformations of associative algebras were described in [15].
## 1 Preliminaries

**Definition 1.1**.: Let \(S\) be a subset of a \(k\)-algebra \(A\) for a commutative unital ring \(k\). Then \(S\) is a **generating set** if all elements of \(A\) can be written as a \(k\)-linear sum of products of elements of \(S\) using the operations in \(A\).

**Definition 1.2** ([15]).: A **formal deformation** \((A_{t},*)\) of a \(k\)-algebra \(A\) is an associative \(k\)-bilinear multiplication \(*\) on the \(k[t]\)-module \(A[t]\), such that in the specification at \(t=0\), the multiplication corresponds to that on \(A\); this multiplication is required to be determined by a multiplication on elements of \(A\) and extended to \(A[t]\) by the Cauchy product rule.

Any multiplication \(*\) as in Definition 1.2 is determined by products of pairs of elements of \(A\): for \(a,b\in A\), write \[a*b=\sum_{i\geq 0}\mu_{i}(a\otimes b)t^{i},\] where \(\mu_{0}\) denotes the usual product in \(A\) and \(\mu_{i}\) for \(i\geq 1\) are \(k\)-linear functions from \(A\otimes A\) to \(A\).

**Definition 1.3** ([14]).: Let \(A\) be a \(k\)-algebra and for \(t\in\mathbb{C}\) let \(\{A_{t}\}\) be the family of algebras arising from a deformation \(*\) of \(A\). Then \(*\) is a **flat** deformation if each \(A_{t}\) has the same dimension.

**Notation 1.4**.: We will denote by \(\mathbb{C}[x,y]\) the polynomial ring over \(\mathbb{C}\) in two non-commuting variables, \(x\) and \(y\).

**Notation 1.5**.: Consider monomials of the ring \(\mathbb{C}[x,y]\). We can order the monomials using the shortlex ordering with \(1<x<y\). We denote the monomials of \(\mathbb{C}[x,y]\) by \(p_{1},p_{2},p_{3},\dots\) where \(p_{i}<p_{j}\) for \(i<j\).

**Notation 1.6**.: We will denote by \(\mathbb{C}[x,y][t]\) the polynomial ring in the variable \(t\) with coefficients from the polynomial ring in two non-commuting variables \(x\) and \(y\), with coefficients from \(\mathbb{C}\).

**Notation 1.7**.: For \(a_{i},b_{i}\in A\) we define \(f:\mathbb{C}[x,y][t]\to A[t]\) to be a homomorphism of \(\mathbb{C}\)-algebras such that \[f:x\mapsto\sum_{i=1}^{n}a_{i}t^{i},\quad f:y\mapsto\sum_{i=1}^{n}b_{i}t^{i},\quad f:t\mapsto t.\] We will denote \(\ker f\) by \(I\).

**Notation 1.8**.: For an element \(p_{i}\) in \(\mathbb{C}[x,y]\) we will denote by \(\overline{p_{i}}\) the same element \(p_{i}\), but with all instances of \(x\) replaced by \(\sum_{i=1}^{n}a_{i}t^{i}\), and all instances of \(y\) replaced by \(\sum_{i=1}^{n}b_{i}t^{i}\). We will denote by \(\overline{\overline{p_{i}}}\) the same element as \(p_{i}\), but with all instances of \(x\) replaced by \(\sum_{i=1}^{n}a_{i}\), and all instances of \(y\) replaced by \(\sum_{i=1}^{n}b_{i}\).

**Notation 1.9**.: Let \(R\) be a ring and \(I\) be an ideal of \(R\); then elements of the factor ring \(R/I\) will be denoted as \(r+I\), where \(r\in R\). Notice that \(r+I=s+I\) for \(r,s\in R\) if and only if \(r-s\in I\).

## 2 Algorithm

In this section we shall describe the simplest case of our proposed method. This requires the data of a \(\mathbb{C}\)-algebra \(A\) and a two-element generating set \(\{a,b\}\). We shall state the steps of this algorithm and then provide notes regarding important components. Then some examples will be displayed.

**Method 1**.: We fix a finite dimensional \(\mathbb{C}\)-algebra \(A\) with two generators \(a,b\). Then:

1. We consider \(A[t]\), the polynomial algebra over \(A\). We will use \(t\) as the parameter of our deformation. We define: \[x:=ta,\qquad y:=tb.\]

2.
We calculate: \[x^{2}=t^{2}a^{2},\quad xy=t^{2}ab,\quad yx=t^{2}ba,\quad y^{2}=t^{2}b^{2},\] and continue to calculate larger products of \(x,y\). We shall then proceed with a Diamond Lemma1-like decomposition of large products of \(a,b\) in terms of smaller products of \(a,b\). This will cause all elements of sufficiently large length to have a power of \(t\) as a factor. In doing so we will obtain relations on \(x,y\) and products thereof. We terminate this process when we have enough relations to decompose any large product into only multiples of \(x,y\) and powers of \(t\). Our relations will be given by polynomials \(p_{1},\ldots,p_{m}\in\mathbb{C}[x,y][t]\) for some \(m\in\mathbb{N}\). This terminates finitely because \(A\) is finite dimensional. Footnote 1: See section 1 of [1]

3. We present the algebra: \[\mathcal{N}:=\mathbb{C}[x,y][t]/\langle p_{1},\ldots,p_{m}\rangle.\] We note that sufficiently large products of \(x,y\) will obtain a factor of \(t\). Now we evaluate \(t\) at various values in \(T=[0,\infty)\). We denote by \(N_{s}\) the algebra that arises from \(\mathcal{N}\) by evaluating \(t\) at \(s\), and in particular we write \(N=N_{0}\). By step (2) \(N\) is local; in Section 5 we consider a more general method where \(N\) is not necessarily local.

The algebra \(\mathcal{N}\) and the family of algebras \(\{N_{t}\}\) are the output of this method. By Theorem 3.8 the family \(\{N_{t}\}\) is a deformation of \(N\), and by Proposition 3.9 it holds that \(A\in\{N_{t}\}\). Changing the chosen generators \(a,b\) can result in different algebras \(N_{0}\) that have \(A\) as an associative deformation.

### Notes on the method

We give some notes before the examples:

* In step (1) we are trying to capture the behaviour of the elements in \(A\), but in a way that is controlled by \(t\). This enables us to make use of a presentation while still having access to our deformation parameter \(t\).
* We note that since \(a,b\) generate \(A\), some linear combination of products of \(a,b\) will be the identity. In step (2) it is important that we find a relation on \(x,y,t\) and \(1\in A\).
* In step (3) we discard \(A\) entirely. However, because of the relations \(p_{i}\), \(x\) and \(y\) "remember" that they came from \(A\) and that they form a generating set. This step results in something subtle: the monomials \(x,y\in N\) are no longer dependent on \(t\) (and thus do not vanish when we set \(t=0\)) but their products maintain their dependence on \(t\). We have loosened our grip on \(x,y\) just enough for the deformation to take place.
* We note that our deformation will be associative since the multiplication of \(N_{t}\) will be inherited from the associative multiplication \(*\) of the algebra \(\mathcal{N}\).
* This algorithm is easily generalised to algebras that require a larger generating set. Suppose a finite dimensional \(\mathbb{C}\)-algebra \(A\) has \(n\) generators \(\{a_{1},\dots,a_{n}\}\). Then in step (1) we would define \(x_{i}:=ta_{i}\) for \(1\leq i\leq n\) and proceed as described in Method 1. The resulting family of algebras will also be a flat and associative deformation of \(A\).

### Examples

Here we shall work through some examples. We shall start with a simple example where \(A=\mathbb{C}\oplus\mathbb{C}\) and then give a more complicated one where \(A=M_{2}(\mathbb{C})\), the algebra of \(2\times 2\) matrices over \(\mathbb{C}\). Practically, it can be useful to fix a basis for the algebra \(A\) as this helps find the decompositions in step (2).
**Example 2.1**.: We consider \(A=\mathbb{C}\oplus\mathbb{C}\) and fix \(a=(i,0),\;b=(0,1)\). Clearly \(\mathbb{C}\)-linear combinations of \(a\) and \(b\) span \(A\) so our basis is just \(\{a,b\}\). We write \(\mathbb{1}:=(1,1)\). Now we consider \(A[t]\) and define: \[x:=ta=(ti,0),\qquad y:=tb=(0,t).\] We calculate: \[x^{2}=t^{2}a^{2}=(-t^{2},0)=tix,\] \[xy=t^{2}ab=(0,0)=0,\] \[yx=t^{2}ba=(0,0)=0,\] \[y^{2}=t^{2}b^{2}=(0,t^{2})=ty,\] \[t\mathbb{1}=-ix+y.\] Thus we have relations: \[p_{1}=x^{2}-tix=0,\] \[p_{2}=xy=0,\] \[p_{3}=yx=0,\] \[p_{4}=y^{2}-ty=0,\] \[p_{5}=t\mathbb{1}-(-ix+y).\] Since \(xy=yx=0\) and squares of \(x,y\) can be reduced, we have enough relations to decompose any large product into only multiples of \(x,y\) and powers of \(t\). Thus we present: \[\mathcal{N}:=\mathbb{C}[x,y][t]/\langle p_{1},p_{2},p_{3},p_{4},p_{5}\rangle.\] We observe: \[N_{0}=\mathbb{C}[x,y]/\big{\langle}x^{2},xy,yx,y^{2},-ix+y\big{\rangle},\] and \[N_{1}=\mathbb{C}[x,y]/\big{\langle}x^{2}-ix,xy,yx,y^{2}-y,1+ix-y\big{\rangle}.\] We see that \(x,y\) behave exactly as \(a,b\in A\), and by Proposition 3.9 we have \(N_{1}\cong A\). It is easily checked that the isomorphism is given by: \[\phi:N_{1}\to A,\quad 1\mapsto\mathbb{1},\quad x\mapsto(i,0),\quad y\mapsto(0,1).\] Thus we have produced an algebra \(N_{0}\) that has \(A=\mathbb{C}\oplus\mathbb{C}\) as a deformation.

**Example 2.2**.: We consider \(A=M_{2}(\mathbb{C})\) and fix: \[a=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix},\quad b=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}.\] We denote by \(\mathbb{1}\) the multiplicative unit in \(A\). Next we compute: \[\begin{pmatrix}1&0\\ 0&0\end{pmatrix}=\frac{1}{2}(a+b^{2})=e_{1},\quad\begin{pmatrix}0&1\\ 0&0\end{pmatrix}=\frac{1}{2}(ab+b)=e_{2},\quad\begin{pmatrix}0&0\\ 1&0\end{pmatrix}=\frac{1}{2}(b-ab)=e_{3},\quad\begin{pmatrix}0&0\\ 0&1\end{pmatrix}=\frac{1}{2}(b^{2}-a)=e_{4},\] and thus \(a,b\) generate \(A\) as an algebra. Now we consider \(A[t]\) and define: \[x:=ta,\qquad y:=tb.\] We compute: \[x^{2}=t^{2}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}=t^{2}\mathbb{1},\quad xy=t^{2}\begin{pmatrix}0&1\\ -1&0\end{pmatrix}=t^{2}(e_{2}-e_{3}),\quad yx=t^{2}\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}=t^{2}(e_{3}-e_{2}),\quad y^{2}=t^{2}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}=t^{2}\mathbb{1}.\] We have obtained the relations: \[p_{1}=x^{2}-t^{2}\mathbb{1},\quad p_{2}=x^{2}-y^{2}=0,\quad p_{3}=xy+yx=0,\quad p_{4}=y^{2}-t^{2}\mathbb{1}.\] Now we consider larger products to produce more relations: \[x^{3}=t^{3}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}=t^{2}x,\qquad y^{3}=t^{3}\begin{pmatrix}0&1\\ 1&0\end{pmatrix}=t^{2}y.\] Hence we have also obtained: \[p_{5}=x^{3}-t^{2}x=0,\qquad p_{6}=y^{3}-t^{2}y=0.\] These allow us to derive the following: \[x^{2}y=yx^{2}=t^{2}y,\qquad y^{2}x=xy^{2}=t^{2}x,\] which are enough to reduce an arbitrary product of \(x,y\). Thus we present: \[\mathcal{N}:=\mathbb{C}[x,y][t]/\langle p_{1},p_{2},p_{3},p_{4},p_{5},p_{6}\rangle.\] We observe: \[N_{0}=\mathbb{C}[x,y]/\big{\langle}x^{2},y^{2},xy+yx\big{\rangle},\] and \[N_{1}=\mathbb{C}[x,y]/\big{\langle}x^{2}-1,y^{2}-1,xy+yx,x^{3}-x,y^{3}-y\big{\rangle}.\] By Proposition 3.9 we have \(N_{1}\cong A\). It is easily checked that the isomorphism is given by: \[\phi:N_{1}\to A,\quad 1\mapsto\mathbb{1},\quad x\mapsto\begin{pmatrix}1&0\\ 0&-1\end{pmatrix},\quad y\mapsto\begin{pmatrix}0&1\\ 1&0\end{pmatrix}.\] Thus we have produced an algebra \(N_{0}\) that has \(A=M_{2}(\mathbb{C})\) as a deformation.
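Since the relations produced in step (2) are matrix identities, they can be sanity-checked numerically for any fixed value of the parameter. The following is a minimal Python sketch (using NumPy, with the arbitrarily chosen value \(t=0.7\); the check is ours and not part of the original method) verifying the relations \(p_{1},\ldots,p_{6}\) of Example 2.2:

```python
import numpy as np

# Numerical sanity check of the relations in Example 2.2 at a sample
# parameter value t = 0.7 (any nonzero t behaves the same way).
t = 0.7
a = np.array([[1, 0], [0, -1]], dtype=complex)  # the generator a
b = np.array([[0, 1], [1, 0]], dtype=complex)   # the generator b
x, y = t * a, t * b                             # step (1): x = ta, y = tb
I2 = np.eye(2)

assert np.allclose(x @ x, t**2 * I2)            # p1: x^2 = t^2 * 1
assert np.allclose(y @ y, t**2 * I2)            # p4: y^2 = t^2 * 1  (so p2 holds too)
assert np.allclose(x @ y + y @ x, 0 * I2)       # p3: xy + yx = 0
assert np.allclose(x @ x @ x, t**2 * x)         # p5: x^3 = t^2 x
assert np.allclose(y @ y @ y, t**2 * y)         # p6: y^3 = t^2 y
print("relations p1-p6 of Example 2.2 hold at t =", t)
```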
## 3 Deformations to \(A\)

We now consider the general case in which \(A\) is an associative finite dimensional \(\mathbb{C}\)-algebra generated by some elements \(\sum_{i=1}^{n}a_{i},\sum_{i=1}^{n}b_{i}\) for some \(a_{i},b_{i}\in A\).

### Specification at \(t=0\)

In this section we define a multiplication on \(\mathbb{C}[x,y][t]\) which gives the algebra \(N\) at the specification \(t=0\).

**Proposition 3.1**.: _Let \(\mathbb{C}[t]\) denote the polynomial ring in the variable \(t\). Further, assume that for every \(r\in A\) there exists \(i=i(r)\) such that \(rt^{i}\in\mathrm{im}f\). There exist elements \(q_{1},q_{2},\ldots,q_{n}\in\mathbb{C}[x,y][t]\) such that for every \(k\) we have_ \[f(p_{k})\in\sum_{i=1}^{n}\mathbb{C}[t]f(q_{i}). \tag{1}\] _Moreover \(n=\dim A\)._

Proof.: We let \(\{e_{1},\ldots,e_{n}\}\) be a basis of \(A\). Let \(k_{1}\) be the smallest possible integer such that for some integer \(j_{1}\) we have \[e_{j_{1}}t^{k_{1}}+\sum_{\begin{subarray}{c}i=1\\ i\neq j_{1}\end{subarray}}^{n}\alpha_{i}(t)e_{i}\in\mathrm{im}f\] for some \(\alpha_{i}(t)\in\mathbb{C}[t]\). Let \(k_{2}\) be the smallest possible integer such that for some integer \(j_{2}\neq j_{1}\) we have \[e_{j_{2}}t^{k_{2}}+\sum_{\begin{subarray}{c}i=1\\ i\neq j_{1},j_{2}\end{subarray}}^{n}\alpha_{i}^{\prime}(t)e_{i}\in\mathrm{im}f\] for some \(\alpha_{i}^{\prime}(t)\in\mathbb{C}[t]\). Continuing in this way, we define elements \(q_{1},\ldots,q_{n}\) such that \[f(q_{m})=e_{j_{m}}t^{k_{m}}+\sum_{\begin{subarray}{c}i=1\\ i\neq j_{1},\ldots,j_{m}\end{subarray}}^{n}\alpha_{i}(t)e_{i}.\] Now notice that \(f(q_{1}),f(q_{2}),\ldots,f(q_{n})\) are linearly independent over \(\mathbb{C}\) as \(e_{1},e_{2},\ldots,e_{n}\) are linearly independent over \(\mathbb{C}\). We now prove that \(f(q_{1}),\ldots,f(q_{n})\) generate \(\mathrm{im}f\) as a \(\mathbb{C}[t]\)-module. Let \(z_{1}\in\mathrm{im}f\) and write \[z_{1}=\sum_{i=1}^{n}r_{i}(t)e_{j_{i}}t^{s_{i}}\] for some \(r_{i}\in\mathbb{C}[t]-t\mathbb{C}[t]\) and some \(j_{i},s_{i}\in\mathbb{N}\). Notice that for the smallest \(i\) such that \(r_{i}(t)\neq 0\), \(s_{i}\geq k_{i}\) for \(k_{i}\) as above. Let \[z_{2}=z_{1}-f(q_{1})t^{s_{1}-k_{1}}r_{1}(t).\] Note that we have \[z_{2}\in\sum_{i=2}^{n}\mathbb{C}[t]e_{j_{i}}\] so we can write \[z_{2}=\sum_{i=2}^{n}r_{i}^{2}(t)e_{j_{i}}t^{s_{i}^{2}}\] for some \(r_{i}^{2}(t)\in\mathbb{C}[t]-t\mathbb{C}[t]\) and \(s_{i}^{2}\in\mathbb{N}\). Notice that \(z_{2}\in\mathrm{im}f\). Now let \[z_{3}=z_{2}-f(q_{2})t^{s_{2}^{2}-k_{2}}r_{2}^{2}(t).\] Continuing in this way we eventually arrive at \(z_{n+1}=0\). Hence by summing the equations \[0=z_{n}-f(q_{n})t^{s_{n}^{n}-k_{n}}r_{n}^{n}(t),\quad\ldots,\quad z_{3}=z_{2}-f(q_{2})t^{s_{2}^{2}-k_{2}}r_{2}^{2}(t),\quad z_{2}=z_{1}-f(q_{1})t^{s_{1}-k_{1}}r_{1}(t),\] we arrive at \[z_{1}=\sum_{l=1}^{n}f(q_{l})t^{s_{l}^{\prime}-k_{l}}r_{l}^{l}(t)\in\sum_{l=1}^{n}\mathbb{C}[t]f(q_{l})\] for some \(s_{l}^{\prime}\in\mathbb{N}\), as required.
**Remark.** Notice that for \(k\in\mathbb{N}\) and \(\alpha_{j}\in\mathbb{C}[t]\) \[f(p_{k})-\sum_{j=1}^{n}\alpha_{j}f(q_{j})=0\] if and only if \[p_{k}-\sum_{j=1}^{n}\alpha_{j}q_{j}\in I.\]

**Proposition 3.2**.: _For any \(k\in\mathbb{N}\) there exist \(\zeta_{i,k}\in\mathbb{C}\), \(\xi_{i,k}(t)\in\mathbb{C}[t]\) such that_ \[p_{k}-\sum_{i=1}^{n}(\zeta_{i,k}q_{i}+t\xi_{i,k}(t)q_{i})\in I.\] _In particular, for any \(k,m\in\{1,2,\ldots,n\}\) there exist \(\zeta_{i,k,m}\in\mathbb{C}\), \(\xi_{i,k,m}(t)\in\mathbb{C}[t]\) such that_ \[q_{k}q_{m}-\sum_{i=1}^{n}(\zeta_{i,k,m}q_{i}+t\xi_{i,k,m}(t)q_{i})\in I.\]

Proof.: For each element \(p_{k}\in\mathbb{C}[x,y][t]\) we have \[p_{k}\in\sum_{i=1}^{n}\mathbb{C}[t]q_{i}+I.\] This follows by taking preimages under \(f\) in equation (1). Hence, for each \(k\in\mathbb{N}\) there exist \(\sigma_{i,k}\in\mathbb{C}[t]\) such that \[p_{k}-\sum_{i=1}^{n}\sigma_{i,k}q_{i}\in I,\] where \(I=\ker f\). Notice that we can write \(\sigma_{i,k}(t)=\zeta_{i,k}+t\xi_{i,k}(t)\) for some \(\zeta_{i,k}\in\mathbb{C}\), \(\xi_{i,k}(t)\in\mathbb{C}[t]\), so as to separate the terms dependent on \(t\). The above then becomes \[p_{k}-\sum_{i=1}^{n}(\zeta_{i,k}q_{i}+t\xi_{i,k}(t)q_{i})\in I.\]

Reasoning similarly as in the proof of Proposition 3.1 we get the following corollary.

**Corollary 3.3**.: _Suppose \(\sum_{i=1}^{n}\alpha_{i}q_{i}\in I\) for some \(\alpha_{i}\in\mathbb{C}[t]\). Then \(f(\sum_{i=1}^{n}\alpha_{i}q_{i})=0\) and consequently \(\alpha_{i}=0\) for every \(i\in\{1,2,\ldots,n\}\)._

_Moreover, all elements of \(I\) are \(\mathbb{C}[t]\)-linear combinations of elements_ \[p_{k}-\sum_{i=1}^{n}(\zeta_{i,k}q_{i}+t\xi_{i,k}(t)q_{i})\] _for some \(k\in\mathbb{N},\zeta_{i,k}\in\mathbb{C}\), \(\xi_{i,k}(t)\in\mathbb{C}[t]\)._

**Notation 3.4**.: We denote by \(J\) the set consisting of \(\mathbb{C}\)-linear combinations of elements \[p_{k}-\sum_{i=1}^{n}\zeta_{i,k}q_{i}\in\mathbb{C}[x,y], \tag{2}\] where \(\zeta_{i,k}\) are as in Proposition 3.2. Let \(\langle J\rangle\) be the ideal of \(\mathbb{C}[x,y]\) generated by elements from \(J\). Further, let \(N\) be the quotient algebra \(\mathbb{C}[x,y]/\langle J\rangle\).

**Corollary 3.5**.: _We have \(e\in J\) if and only if \(e+e^{\prime}\in I\) for some \(e^{\prime}\in t\mathbb{C}[x,y][t]\). In particular \(J\subseteq I+t\mathbb{C}[x,y][t]\), and hence \(\langle J\rangle\subseteq I+t\mathbb{C}[x,y][t]\)._

**Proposition 3.6**.: _The dimension of \(N\) equals \(n\). Moreover, the elements \(q_{k}+\langle J\rangle\in N\) are a basis of \(N\) as a \(\mathbb{C}\)-vector space._

Proof.: Notice that expression (2) implies that \[\mathbb{C}[x,y]\subseteq\sum_{k=1}^{n}\mathbb{C}q_{k}+\langle J\rangle,\] and hence \(\{q_{k}+\langle J\rangle\mid k\in\{1,2,\ldots,n\}\}\) span \(N\) as a \(\mathbb{C}\)-vector space. Therefore, the dimension of \(N\) does not exceed \(n\). We now show that the elements \(q_{k}+\langle J\rangle\in N\) for \(k\in\{1,2,\ldots,n\}\) are linearly independent over \(\mathbb{C}\). Suppose on the contrary that we have \[\sum_{i=1}^{n}\xi_{i}q_{i}\in\langle J\rangle\] for some \(\xi_{i}\in\mathbb{C}\). By Corollary 3.5 and Proposition 3.1 we have \[\langle J\rangle\subseteq I+t\mathbb{C}[x,y][t]\subseteq I+t\sum_{i=1}^{n}\mathbb{C}[t]q_{i}.\] It follows that there is \(e^{\prime}\in t\sum_{i=1}^{n}\mathbb{C}[t]q_{i}\) such that \(\sum_{i=1}^{n}\xi_{i}q_{i}-e^{\prime}\in I\), contradicting Corollary 3.3.
Observe now that the following holds:

**Proposition 3.7**.: _We can present \(N\) as a \(\mathbb{C}\)-algebra with generators \(d_{1},\ldots,d_{n}\), which span \(N\) as a \(\mathbb{C}\)-vector space, subject to the relations_ \[d_{k}d_{m}=\sum_{i=1}^{n}\zeta_{i,k,m}d_{i}\] _where \(\zeta_{i,k,m}\) are as in Proposition 3.2._

Proof.: We let \(\mathbb{C}[q_{1},\ldots,q_{n}]\) be the subalgebra of \(\mathbb{C}[x,y]\) generated by the elements \(q_{1},\ldots,q_{n}\in\mathbb{C}[x,y]\). Similarly, let \(\mathbb{C}[d_{1},\ldots,d_{n}]\) be the free \(\mathbb{C}\)-algebra generated by elements \(d_{1},\ldots,d_{n}\). Let \(\zeta:\mathbb{C}[d_{1},\ldots,d_{n}]\to\mathbb{C}[x,y]/\langle J\rangle\) be defined as \(\zeta(d_{i})=q_{i}+\langle J\rangle\). We now show that \(x+\langle J\rangle,y+\langle J\rangle\in\mathrm{im}\zeta\). Notice that since \(f(x)\in\mathrm{im}f\) we have \[f(x)\in\sum_{i=1}^{n}\mathbb{C}[t]f(q_{i}).\] Hence we have \(f(x)=f(x^{\prime})\) for some \(x^{\prime}\in\sum_{i=1}^{n}\mathbb{C}[t]q_{i}\). Then \(x-x^{\prime}\in I\), and so \(x-x^{\prime\prime}\in J\) for some \(x^{\prime\prime}\in\sum_{i=1}^{n}\mathbb{C}q_{i}\). Then \(x^{\prime\prime}+\langle J\rangle\in\mathrm{im}\zeta\), and so \(x+\langle J\rangle\in\mathrm{im}\zeta\). An analogous argument shows \(y+\langle J\rangle\in\mathrm{im}\zeta\). Let \[J^{\prime}=\ker(\zeta)\subseteq\mathbb{C}[d_{1},\ldots,d_{n}].\] Then by the First Isomorphism Theorem for rings \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime}\) is isomorphic to \(\mathbb{C}[x,y]/\langle J\rangle=N\). Let \(J^{\prime\prime}\) be the ideal of \(\mathbb{C}[d_{1},\ldots,d_{n}]\) generated by the elements \[d_{k}d_{m}-\sum_{i=1}^{n}\zeta_{i,k,m}d_{i}\] where \(\zeta_{i,k,m}\) are as in Proposition 3.2. Observe that \[\zeta\left(d_{k}d_{m}-\sum_{i=1}^{n}\zeta_{i,k,m}d_{i}\right)=q_{k}q_{m}-\sum_{i=1}^{n}\zeta_{i,k,m}q_{i}+\langle J\rangle=0+\langle J\rangle.\] Therefore \(J^{\prime\prime}\subseteq J^{\prime}\). It follows that the dimension of \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime}\) does not exceed the dimension of \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime\prime}\). We now show that \(J^{\prime}=J^{\prime\prime}\). Notice that \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime\prime}\) is at most \(n\)-dimensional, since every element can be presented as a linear combination of the elements \(d_{i}+J^{\prime\prime}\) for \(i=1,2,\ldots,n\). On the other hand \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime}\) is isomorphic to \(\mathbb{C}[x,y]/\langle J\rangle\), and hence is \(n\)-dimensional (by Proposition 3.6). Hence by comparing dimensions we get that the dimension of \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime\prime}\) does not exceed the dimension of \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime}\). Previously we showed that the dimension of \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime}\) does not exceed the dimension of \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime\prime}\). It follows that \(J^{\prime}=J^{\prime\prime}\), as required.

We now define a multiplication on \(N\) which gives a formal deformation to the algebra \(A\).

**Theorem 3.8**.: _Let \(d_{1},\ldots,d_{n}\) be free generators of the \(\mathbb{C}\)-algebra \(\mathbb{C}[d_{1},\ldots,d_{n}]\) and suppose \(\zeta_{i,k,m}\in\mathbb{C}\), \(\xi_{i,k,m}(t)\in\mathbb{C}[t]\) are as in Proposition 3.2.
Then the multiplication rule_ \[d_{k}*_{t}d_{m}=\sum_{i=1}^{n}(\zeta_{i,k,m}d_{i}+t\xi_{i,k,m}(t)d_{i})\] _gives a formal deformation such that \(*_{0}\) gives the multiplication on an algebra isomorphic to \(N\)._

Proof.: Recall from Proposition 3.2 that for any \(k,m\in\{1,2,\ldots,n\}\) there exist \(\zeta_{i,k,m}\in\mathbb{C}\), \(\xi_{i,k,m}(t)\in\mathbb{C}[t]\) such that \[q_{k}q_{m}-\sum_{i=1}^{n}(\zeta_{i,k,m}q_{i}+t\xi_{i,k,m}(t)q_{i})\in I. \tag{3}\] We introduce the notation \([q_{i}]:=q_{i}+I\) for elements of \(\mathbb{C}[x,y][t]/I\). Hence we obtain the following relations, corresponding to (3), which give the multiplication table on \(\mathbb{C}[x,y][t]/I\): \[[q_{k}][q_{m}]=\sum_{i=1}^{n}(\zeta_{i,k,m}[q_{i}]+t\xi_{i,k,m}(t)[q_{i}]).\] Notice that these relations give a multiplication \[d_{k}*_{t}d_{m}=\sum_{i=1}^{n}(\zeta_{i,k,m}d_{i}+t\xi_{i,k,m}(t)d_{i}) \tag{4}\] on the free generators \(d_{i}\). Now recall that \(N\) is isomorphic to \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime\prime}\) where \(J^{\prime\prime}\) is the ideal of \(\mathbb{C}[d_{1},\ldots,d_{n}]\) generated by the relations \[d_{k}d_{m}-\sum_{i=1}^{n}\zeta_{i,k,m}d_{i}.\] Therefore by setting \(t=0\) in (4) we obtain the multiplication rule for \(\mathbb{C}[d_{1},\ldots,d_{n}]/J^{\prime\prime}\).

### Specification at \(t=1\)

In this section we define a multiplication on \(\mathbb{C}[x,y][t]\) which gives a formal deformation of \(N\), such that at the specification \(t=1\) we get an algebra isomorphic to \(A\).

**Proposition 3.9**.: _Let \(\xi:\mathbb{C}[x,y][t]\to A\) be a homomorphism of \(\mathbb{C}\)-algebras such that_ \[\xi(x)=\sum_{i=1}^{n}a_{i},\quad\xi(y)=\sum_{i=1}^{n}b_{i},\quad\xi(t)=1.\] _Further, assume that for every \(r\in A\) there exists \(i=i(r)\) such that \(rt^{i}\in\mathrm{im}f\). Then \(\ker\xi=\langle I,t-1\rangle\)._

Proof.: We let \(e=\sum_{i}\alpha_{i}p_{i}t^{\beta_{i}}\in\ker\xi\) where \(p_{i}\) are monomials in \(\mathbb{C}[x,y]\), \(\alpha_{i}\in\mathbb{C}\) and \(\beta_{i}\in\mathbb{N}\). We will show there exist \(\gamma_{i}\in\mathbb{N}\) such that \(\hat{e}:=\sum_{i}\alpha_{i}p_{i}t^{\gamma_{i}}\in I\), so that \(e\in I+\langle t-1\rangle\) as \(e-\hat{e}\in\langle t-1\rangle\). This follows since each difference \(t^{\beta_{i}}-t^{\gamma_{i}}\) is divisible by \(t-1\), so \[e-\hat{e}=\sum_{i}\alpha_{i}p_{i}(t^{\beta_{i}}-t^{\gamma_{i}})\in\langle t-1\rangle.\] Recall Notation 1.8 and note that \(\xi(e)=\sum_{i}\alpha_{i}\overline{\overline{p_{i}}}=0\). Notice that by assumption there exist \(j_{i}\in\mathbb{N}\) such that \(\overline{\overline{p_{i}}}t^{j_{i}}\in\mathrm{im}f\). We let \(f(c_{i})=\overline{\overline{p_{i}}}t^{j_{i}}\) for some \(c_{i}\in\mathbb{C}[x,y][t]\). Hence for large enough \(k\in\mathbb{N}\) we have \[f\left(\sum_{i}\alpha_{i}c_{i}t^{k-j_{i}}\right)=\sum_{i}\alpha_{i}\overline{\overline{p_{i}}}t^{k}=0,\] so that \(\sum_{i}\alpha_{i}c_{i}t^{k-j_{i}}\in\ker f=I\), which supplies the required element of \(I\).

**Proposition 3.10**.: _Let notation be as in Proposition 3.9, and suppose that \(\sum_{i=1}^{n}a_{i}\) and \(\sum_{i=1}^{n}b_{i}\) generate \(A\) as a \(\mathbb{C}\)-algebra.
We have_ \[\frac{\mathbb{C}[x,y][t]/I}{\langle t-1+I\rangle}\cong A.\]

Proof.: Note that the ideal \(\langle t-1+I\rangle\) of \(\mathbb{C}[x,y][t]/I\) equals the set \[\frac{\mathbb{C}[x,y][t](t-1)+I}{I}=\frac{\langle I,t-1\rangle}{I}=\{g(t-1)+I\mid g\in\mathbb{C}[x,y][t]\}.\] By the Third Isomorphism Theorem we have \[\frac{\mathbb{C}[x,y][t]/I}{\langle I,t-1\rangle/I}\cong\frac{\mathbb{C}[x,y][t]}{\langle I,t-1\rangle}.\] By Proposition 3.9 we have \(\langle I,t-1\rangle=\ker\xi\) and, as \(\xi\) is onto, \[\frac{\mathbb{C}[x,y][t]}{\ker\xi}\cong A\] by the First Isomorphism Theorem.

We now define a multiplication on \(\mathbb{C}[x,y][t]/\langle t-1,I\rangle\) which gives a deformation to \(A\) at \(t=1\).

**Theorem 3.11**.: _Let notation be as in Proposition 3.9, and suppose that \(\sum_{i=1}^{n}a_{i}\) and \(\sum_{i=1}^{n}b_{i}\) generate \(A\) as a \(\mathbb{C}\)-algebra. Let \(d_{1},\dots,d_{n}\) be free generators of the \(\mathbb{C}\)-algebra \(\mathbb{C}[d_{1},\dots,d_{n}]\) and suppose \(\zeta_{i,k,m}\in\mathbb{C}\), \(\xi_{i,k,m}(t)\in\mathbb{C}[t]\) are as in Proposition 3.2. Then the multiplication rule_ \[d_{k}\ast_{t}d_{m}=\sum_{i=1}^{n}(\zeta_{i,k,m}d_{i}+t\xi_{i,k,m}(t)d_{i})\] _gives a formal deformation such that \(\ast_{1}\) gives the multiplication on an algebra generated by \(d_{1},\dots,d_{n}\) isomorphic to \(\mathbb{C}[x,y][t]/\langle I,t-1\rangle\)._

Proof.: We show that the algebra \(\mathbb{C}[x,y][t]/\langle I,t-1\rangle\) is isomorphic to the algebra \(\mathbb{C}[d_{1},d_{2},\ldots,d_{n}]/I^{\prime}\) where \(I^{\prime}\) is the ideal of \(\mathbb{C}[d_{1},d_{2},\ldots,d_{n}]\) generated by the elements \[d_{k}d_{m}-\sum_{i=1}^{n}(\zeta_{i,k,m}d_{i}+\xi_{i,k,m}(1)d_{i})\] for \(\zeta_{i,k,m}\in\mathbb{C}\), \(\xi_{i,k,m}(t)\in\mathbb{C}[t]\) as in Proposition 3.2. We let \(\delta:\mathbb{C}[d_{1},\ldots,d_{n}]\to\mathbb{C}[x,y][t]/\langle I,t-1\rangle\) be such that \[\delta(d_{i})=q_{i}+\langle I,t-1\rangle\] for \(i\in\{1,2,\ldots,n\}\). Observe that \(I^{\prime}\subseteq\ker(\delta)\) since \[\delta\left(d_{k}d_{m}-\sum_{i=1}^{n}(\zeta_{i,k,m}d_{i}+\xi_{i,k,m}(1)d_{i})\right)=q_{k}q_{m}-\sum_{i=1}^{n}(\zeta_{i,k,m}q_{i}+\xi_{i,k,m}(1)q_{i})+\langle I,t-1\rangle=\langle I,t-1\rangle,\] since \(q_{k}q_{m}-\sum_{i=1}^{n}(\zeta_{i,k,m}q_{i}+\xi_{i,k,m}(1)q_{i})\in I+\langle t-1\rangle\). Therefore the dimension of \(\mathbb{C}[d_{1},\ldots,d_{n}]/\ker(\delta)\) does not exceed the dimension of \(\mathbb{C}[d_{1},\ldots,d_{n}]/I^{\prime}\). Notice that \(\mathbb{C}[d_{1},\ldots,d_{n}]/I^{\prime}\) is spanned by the elements \(d_{i}+I^{\prime}\) as a vector space, and hence has dimension at most \(n\). On the other hand, by the First Isomorphism Theorem for rings: \[\mathbb{C}[d_{1},\ldots,d_{n}]/\ker(\delta)\cong\mathrm{im}(\delta)=\mathbb{C}[x,y][t]/\langle I,t-1\rangle, \tag{5}\] which in turn is isomorphic to \(A\) by Proposition 3.9, and hence has dimension \(n\). The equality in equation (5), i.e. the surjectivity of \(\delta\), holds by the argument in the proof of Proposition 3.7. It follows that \(I^{\prime}=\ker(\delta)\), and hence \(\mathbb{C}[d_{1},\ldots,d_{n}]/\ker(\delta)\) is isomorphic to \(\mathbb{C}[d_{1},\ldots,d_{n}]/I^{\prime}\).

**Remark 3.12**.: _Deformations at \(t\neq 0\)._ The results from this section hold, with analogous proofs, for a specialisation at any other \(t\neq 0\).
Let \(z\in\mathbb{C}\) with \(z\neq 0\); then the algebra obtained at the specialisation \(t=z\) is also isomorphic to \(A\), provided that \(\sum_{i=1}^{n}a_{i}z^{i}\) and \(\sum_{i=1}^{n}b_{i}z^{i}\) generate \(A\) as a \(\mathbb{C}\)-algebra.

**Remark 3.13**.: Since the dimension of all the algebras in \(\{N_{t}\}_{t\in[0,1]}\) remains \(n=\dim(A)\), the deformation arising from \(*\) is flat.

## 4 Application

In this section we prove a conjecture of M. Wemyss using the method of Section 3. The statement is that of the following theorem.

**Theorem 4.1**.: _Let_ \[N=\frac{\mathbb{C}[x,y]}{\langle xy+yx,x^{3}+y^{2},y^{3}\rangle}.\] _Then \(N\) deforms into \(A=M_{2}(\mathbb{C})\oplus\mathbb{C}^{\oplus 5}\)._

We prove this theorem through a series of lemmas. Firstly, we fix notation in addition to the notation of Theorem 4.1.

**Notation 4.2**.: Let \(i_{1},i_{2}\) be the roots of the polynomial \(x^{2}-x+1\).

**Notation 4.3**.: Let \(e\in\mathbb{C}\) be such that for \(c=\frac{i_{1}-i_{2}}{\sqrt{1+e^{2}}}\) the polynomial \[g(z)=(2z-1)^{2}z^{3}+c^{2}\] has 5 distinct roots \(\alpha_{1},\dots,\alpha_{5}\in\mathbb{C}\). Further, we assume that \(e\neq 0\), \(c^{2}\neq 3\) and \(\alpha_{i}\neq\frac{1}{2}\). Notice that \(g(z)\) and \(z^{3}+1\) have no common roots, so \(\alpha_{i}\neq-1\) and \(\alpha_{j}\notin\{i_{1},i_{2}\}\).

**Notation 4.4**.: Let \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5})\) and \(\beta=(\beta_{1},\beta_{2},\beta_{3},\beta_{4},\beta_{5})\) for \(\beta_{i}\in\mathbb{C}\) such that \[(\beta_{1},\beta_{2},\beta_{3},\beta_{4},\beta_{5})(2\alpha_{1}-1,2\alpha_{2}-1,2\alpha_{3}-1,2\alpha_{4}-1,2\alpha_{5}-1)=c(1,1,1,1,1),\] where the product is taken componentwise. Notice that such \(\beta_{i}\) exist since \(\alpha_{i}\neq\frac{1}{2}\).

**Lemma 4.5**.: _Let \(\xi:\mathbb{C}[x,y][t]\to A\) be a homomorphism of \(\mathbb{C}\)-algebras such that_ \[\xi(x)=\begin{pmatrix}i_{1}&0\\ 0&i_{2}\end{pmatrix}+\alpha,\quad\xi(y)=\begin{pmatrix}1&e\\ e&-1\end{pmatrix}+\beta,\quad\xi(t)=1.\] _Then \(\xi(x),\xi(y)\) and \(1\) generate \(A\) as an algebra._

Proof.: Observe that \(\xi(x^{j}(x^{2}-x+1))=\alpha^{j}(\alpha^{2}-\alpha+1)\) (where \(\alpha^{j}(\alpha^{2}-\alpha+1)=(\alpha_{1}^{j}(\alpha_{1}^{2}-\alpha_{1}+1),\dots,\alpha_{5}^{j}(\alpha_{5}^{2}-\alpha_{5}+1))\)), since \(i_{1},i_{2}\) are roots of \(x^{2}-x+1\). Now let \(W\) be the matrix with entries \(W_{ij}=\alpha_{i}^{j}(\alpha_{i}^{2}-\alpha_{i}+1)\). Let \(V\) be the Vandermonde matrix with entries \(V_{ij}=\alpha_{i}^{j}\). Let \(D\) be the diagonal matrix with entries \(D_{ij}=\delta_{ij}(\alpha_{i}^{2}-\alpha_{i}+1)\). It follows that \(W=DV\). Now since \(V\) is nonsingular (the \(\alpha_{i}\) are distinct) and \(D\) is nonsingular (\(\alpha_{i}\notin\{i_{1},i_{2}\}\)), \(W\) is nonsingular. This implies that any vector in the \(\mathbb{C}^{\oplus 5}\) component of \(A\) is in \(\mathrm{im}\xi\). Then \(\xi(x)-\alpha\in\mathrm{im}\xi\) and hence \[\begin{pmatrix}i_{1}&0\\ 0&i_{2}\end{pmatrix}\in\mathrm{im}\xi.\] Similarly, \(\xi(y)-\beta\in\mathrm{im}\xi\) and hence \[\begin{pmatrix}1&e\\ e&-1\end{pmatrix}\in\mathrm{im}\xi.\] Now recall that \(i_{1},i_{2}\) are roots of the polynomial \(z^{3}+1\), so we have \[\begin{pmatrix}i_{1}&0\\ 0&i_{2}\end{pmatrix}^{3}=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}.\] Notice that every diagonal matrix is a linear combination of \(\begin{pmatrix}i_{1}&0\\ 0&i_{2}\end{pmatrix}\) and \(\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\). Now, it can be checked that these matrices, together with \(\begin{pmatrix}1&e\\ e&-1\end{pmatrix}\), are sufficient to generate all matrices in \(M_{2}(\mathbb{C})\).
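The genericity assumptions of Notations 4.2-4.4 can be checked numerically. The following minimal Python sketch (using NumPy, with the arbitrary sample value \(e=1\); both the value and the check are ours, not the authors') verifies that \(g\) has five distinct admissible roots and that \(\alpha^{3}+\beta^{2}=0\), the identity used below in the proof of Theorem 4.1:

```python
import numpy as np

# Numerical check of Notations 4.2-4.4 with the arbitrary sample e = 1
# (the paper only requires a generic e).
e = 1.0
i1, i2 = np.roots([1, -1, 1])                 # roots of x^2 - x + 1
c = (i1 - i2) / np.sqrt(1 + e**2)

# g(z) = (2z - 1)^2 z^3 + c^2 = 4z^5 - 4z^4 + z^3 + c^2
alpha = np.roots([4, -4, 1, 0, 0, c**2])
assert len(set(np.round(alpha, 8))) == 5      # five distinct roots
assert not np.any(np.isclose(alpha, 0.5))     # alpha_i != 1/2
assert not np.any(np.isclose(alpha, -1.0))    # alpha_i != -1

beta = c / (2 * alpha - 1)                    # Notation 4.4, componentwise

# g(alpha) = 0 forces alpha^3 + beta^2 = 0, which yields the relation
# x^3 + y^2 in N in the proof of Theorem 4.1.
assert np.allclose(alpha**3 + beta**2, 0)
print("max |alpha^3 + beta^2| =", np.max(np.abs(alpha**3 + beta**2)))
```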
**Notation 4.6**.: Let \(f:\mathbb{C}[x,y][t]\to A[t]\) be a homomorphism of \(\mathbb{C}\)-algebras such that \[f(x)=t^{2}\begin{pmatrix}i_{1}&0\\ 0&i_{2}\end{pmatrix}+t^{2}\alpha,\quad f(y)=\frac{1}{\sqrt{1+e^{2}}}t^{3}\begin{pmatrix}1&e\\ e&-1\end{pmatrix}+t^{3}\beta,\quad f(t)=t\mathbb{1},\] where \(\mathbb{1}\) is the identity element in \(A\). We write \(\tilde{1}:=(1,1,1,1,1)\). As usual we denote by \(1\) the identity element in \(\mathbb{C}\) and thus in \(\mathbb{C}[x,y][t]\).

We now prove Theorem 4.1.

Proof.: Referring to Notations 4.4 and 4.6 we calculate: \[f(x^{2})=t^{4}\Big{(}\begin{pmatrix}i_{1}^{2}&0\\ 0&i_{2}^{2}\end{pmatrix}+\alpha^{2}\Big{)},\quad f(y^{2})=t^{6}\Big{(}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}+\beta^{2}\Big{)},\quad f(xy)=\frac{t^{5}}{\sqrt{1+e^{2}}}\Big{(}\begin{pmatrix}i_{1}&i_{1}e\\ i_{2}e&-i_{2}\end{pmatrix}+\alpha\beta\Big{)},\quad f(yx)=\frac{t^{5}}{\sqrt{1+e^{2}}}\Big{(}\begin{pmatrix}i_{1}&i_{2}e\\ i_{1}e&-i_{2}\end{pmatrix}+\alpha\beta\Big{)}.\] Note that \(f(x^{2}-t^{2}x+t^{4})=t^{4}(\alpha^{2}-\alpha+\tilde{1})\) and hence \[f(x(x^{2}-t^{2}x+t^{4}))=t^{6}\alpha(\alpha^{2}-\alpha+\tilde{1}),\quad f(x^{2}(x^{2}-t^{2}x+t^{4}))=t^{8}\alpha^{2}(\alpha^{2}-\alpha+\tilde{1}),\quad f(y(x^{2}-t^{2}x+t^{4}))=t^{7}\beta(\alpha^{2}-\alpha+\tilde{1}).\] Observe that \[f(xy+yx)=\frac{t^{5}}{\sqrt{1+e^{2}}}\begin{pmatrix}2i_{1}&e\\ e&-2i_{2}\end{pmatrix}+t^{5}2\alpha\beta,\quad f(t^{2}y)=\frac{t^{5}}{\sqrt{1+e^{2}}}\begin{pmatrix}1&e\\ e&-1\end{pmatrix}+t^{5}\beta,\quad f\Big{(}t^{5}\frac{i_{2}-i_{1}}{\sqrt{1+e^{2}}}\Big{)}=\frac{t^{5}}{\sqrt{1+e^{2}}}\begin{pmatrix}i_{2}-i_{1}&0\\ 0&i_{2}-i_{1}\end{pmatrix}+\frac{t^{5}(i_{2}-i_{1})}{\sqrt{1+e^{2}}}\tilde{1}.\] Note that \(2i_{1}-1+(i_{2}-i_{1})=0\) and \(-2i_{2}+1+(i_{2}-i_{1})=0\), so \[f\Big{(}xy+yx-t^{2}y+\frac{t^{5}(i_{2}-i_{1})}{\sqrt{1+e^{2}}}\Big{)}=t^{5}\Big{(}2\alpha\beta-\beta+\frac{i_{2}-i_{1}}{\sqrt{1+e^{2}}}\tilde{1}\Big{)}.\] Now notice that \(2\alpha\beta-\beta+\frac{i_{2}-i_{1}}{\sqrt{1+e^{2}}}\tilde{1}=0\) by the assumptions of Notation 4.4. It follows that \(xy+yx-t^{2}y+\frac{t^{5}(i_{2}-i_{1})}{\sqrt{1+e^{2}}}\in I\) and so \(xy+yx\in\langle J\rangle\) is a relation in \(N\) as defined in Notation 3.4. Notice that by Notation 4.4 we have \(\beta^{2}(2\alpha-1)^{2}=c^{2}\tilde{1}\). Now it follows by Notation 4.3 that \[\alpha^{3}+\beta^{2}=0, \tag{6}\] and thus \(x^{3}+y^{2}\in\langle J\rangle\) is a relation in \(N\). We now proceed to show that \(y^{3}\in\langle J\rangle\). Notice that: \[f(y^{3})=\frac{t^{9}}{\sqrt{1+e^{2}}}\begin{pmatrix}1&e\\ e&-1\end{pmatrix}+t^{9}\beta^{3}.\] Hence \(f(y^{3}-t^{6}y)=t^{9}(\beta^{3}-\beta)\). Therefore it is sufficient to show that \(y^{3}-t^{6}y+q\in I\) for some \(q\in t\mathbb{C}[x,y][t]\). We denote: \[r_{1}:=t^{5}(x^{2}-t^{2}x+t^{4}),\quad r_{2}:=t^{3}x(x^{2}-t^{2}x+t^{4}),\quad r_{3}:=tx^{2}(x^{2}-t^{2}x+t^{4}),\quad r_{4}:=t^{2}y(x^{2}-t^{2}x+t^{4}).\] Notice that the elements \(r_{1},\ldots,r_{4}\) lie in \(t\mathbb{C}[x,y][t]\) and that \[f(r_{1})=t^{9}(\alpha^{2}-\alpha+\tilde{1})=:t^{9}e_{1},\quad f(r_{2})=t^{9}\alpha(\alpha^{2}-\alpha+\tilde{1})=:t^{9}e_{2},\quad f(r_{3})=t^{9}\alpha^{2}(\alpha^{2}-\alpha+\tilde{1})=:t^{9}e_{3},\quad f(r_{4})=t^{9}\beta(\alpha^{2}-\alpha+\tilde{1})=:t^{9}e_{4}.\] Note that it suffices to show that \(y^{3}-t^{6}y+q\in I\) for some \(q\in\mathbb{C}r_{1}+\mathbb{C}r_{2}+\mathbb{C}r_{3}+\mathbb{C}r_{4}\). Therefore it suffices to show that \[\beta^{3}-\beta\in\mathbb{C}e_{1}+\mathbb{C}e_{2}+\mathbb{C}e_{3}+\mathbb{C}e_{4}.
\tag{7}\] Now relation (7) is equivalent to \[(2\alpha-1)\beta(\beta^{2}-1)\in\mathbb{C}(2\alpha-1)e_{1}+\mathbb{C}(2\alpha-1)e_{2}+\mathbb{C}(2\alpha-1)e_{3}+\mathbb{C}(2\alpha-1)e_{4},\] which in turn is equivalent to \[-c(\alpha^{3}+1)\in\mathbb{C}(2\alpha-1)e_{1}+\mathbb{C}(2\alpha-1)e_{2}+\mathbb{C}(2\alpha-1)e_{3}+\mathbb{C}(2\alpha-1)e_{4}. \tag{8}\] Now, notice that \(\alpha^{3}+1=(\alpha+1)(\alpha^{2}-\alpha+1)\) and that \(\alpha+1\) has non-zero entries, so multiplying relation (8) by \(\alpha+1\) we get an equivalent relation \[(\alpha^{3}+1)(\alpha+1)\in\mathbb{C}(2\alpha-1)(\alpha^{3}+1)+\mathbb{C}(2\alpha-1)(\alpha^{3}+1)\alpha+\mathbb{C}(2\alpha-1)(\alpha^{3}+1)\alpha^{2}+\mathbb{C}(\alpha^{3}+1).\] Notice that the right hand side can be rewritten as \(\mathbb{C}(\alpha^{3}+1)+\mathbb{C}(\alpha^{3}+1)\alpha+\mathbb{C}(\alpha^{3}+1)\alpha^{2}+\mathbb{C}(\alpha^{3}+1)\alpha^{3}\). Since \((\alpha^{3}+1)(\alpha+1)=(\alpha^{3}+1)+(\alpha^{3}+1)\alpha\), the membership holds; hence \(y^{3}\in\langle J\rangle\) is a relation in \(N\), which completes the proof.

## 5 A more general method

In this section we consider a more general method that results in algebras \(N\) that are not necessarily local and may have more generators. We now let \(A\) be a finitely generated \(\mathbb{C}\)-algebra.

1. We consider a free \(\mathbb{C}\)-algebra \(F\) with \(j\) generators \(x_{1},\ldots,x_{j}\) for some \(j\in\mathbb{N}\). We define: \[f:F[t]\to A[t],\qquad x_{i}\mapsto\sum_{l=0}^{m}a_{i,l}t^{l}\] for some \(m\in\mathbb{N}\) and some \(a_{i,l}\in A\). We also assume that for each \(z\in\mathbb{C}\), \(z\neq 0\), the elements \(\sum_{l=0}^{m}a_{i,l}z^{l}\) for \(i=1,2,\ldots,j\) generate \(A\). In other words, the images of the generators generate \(A\) when \(t\) is set to non-zero numbers.

2. We define \[\mathcal{N}:=\frac{\mathbb{C}[x_{1},\ldots,x_{j}][t]}{I},\] where \(I:=\ker(f)\).

3. We assume that for every \(r\in A\) there exists \(i=i(r)\) such that \(rt^{i}\in\mathrm{im}f\).

4. Our algebra \(N=N_{0}\) is **isomorphic to** \[\frac{\mathcal{N}}{\langle t+I\rangle}\] and deforms flatly and associatively to \(A\). Moreover \(N\) is isomorphic to \(\frac{\mathbb{C}[x_{1},\ldots,x_{j}][t]}{\langle I,t\rangle}\), where \(\langle I,t\rangle\) is the ideal of \(\mathbb{C}[x_{1},\ldots,x_{j}][t]\) generated by elements of \(I\) and by \(t\).

If \(j=2\) then this method corresponds to the method described in Section 3, provided that \(a_{i,0}=0\) for all \(i\).

**Proof.** When reasoning as in the previous sections we will use at all places \(\mathbb{C}[x_{1},\ldots,x_{j}]\) instead of \(\mathbb{C}[x,y]\), and we will use \(\mathbb{C}[x_{1},\ldots,x_{j}][t]\) instead of \(\mathbb{C}[x,y][t]\). Let \(A\) have dimension \(n\) over \(\mathbb{C}\). Recall that we defined \[\mathcal{N}:=\frac{\mathbb{C}[x_{1},\ldots,x_{j}][t]}{I},\] where \(I:=\ker(f)\).
* Reasoning analogously as in Proposition 3.1 we obtain that there are \(q_{1},\ldots,q_{n}\in F\) such that \[f(F[t])\subseteq\sum_{i=1}^{n}\mathbb{C}[t]f(q_{i}),\] and hence, since \(I=\ker f\), we get \[F[t]\subseteq\sum_{i=1}^{n}\mathbb{C}[t]q_{i}+I.\] In particular, there are \(\zeta_{i,k,m}\in\mathbb{C}\), \(\xi_{i,k,m}(t)\in\mathbb{C}[t]\) such that \[q_{k}\cdot q_{m}-\sum_{i=1}^{n}(\zeta_{i,k,m}q_{i}+t\xi_{i,k,m}(t)q_{i})\in I.\]
* Reasoning similarly as in Theorem 3.8 we obtain that if \(d_{1},\ldots,d_{n}\) are free generators then \[d_{k}*d_{m}=\sum_{i=1}^{n}(\zeta_{i,k,m}d_{i}+t\xi_{i,k,m}(t)d_{i})\] gives a formal deformation such that \(*_{0}\) gives a multiplication on \(N\), where \(N\) is the \(\mathbb{C}\)-algebra with generators \(d_{1},\ldots,d_{n}\) subject to the relations \[d_{k}*_{0}d_{m}=\sum_{i=1}^{n}\zeta_{i,k,m}d_{i}.\] Observe that \(*\) defines a multiplication on the algebra \(\mathbb{C}[d_{1},\ldots,d_{n}][t]/I^{\prime}\), where \(I^{\prime}\) is the ideal of \(\mathbb{C}[d_{1},\ldots,d_{n}][t]\) generated by the elements \[d_{k}d_{m}-\sum_{i=1}^{n}(\zeta_{i,k,m}d_{i}+t\xi_{i,k,m}(t)d_{i}).\] Notice that \(d_{1},\ldots,d_{n}\) are free generators of the free algebra \(\mathbb{C}[d_{1},\ldots,d_{n}]\). Notice that the algebra \(\mathbb{C}[d_{1},\ldots,d_{n}][t]/I^{\prime}\) is isomorphic to the algebra \(\mathcal{N}\). To see this, consider the map \(\sigma:\mathbb{C}[d_{1},\ldots,d_{n}][t]\to\mathcal{N}\) given by \[d_{i}\mapsto q_{i}+I.\] Notice that \(I^{\prime}\subseteq\ker\sigma\) since \[(q_{k}+I)*(q_{m}+I)=\sum_{i=1}^{n}\zeta_{i,k,m}(q_{i}+I)+t\xi_{i,k,m}(t)(q_{i}+I).\] Moreover \(\ker\sigma\subseteq I^{\prime}\), because if \(a\in\ker\sigma\) then by using relations from \(I^{\prime}\) we can present \(a\) as \(a^{\prime}+i^{\prime}\) where \(i^{\prime}\in I^{\prime}\) and where \(a^{\prime}=\sum_{i=1}^{n}\alpha_{i}(t)d_{i}\) for some \(\alpha_{i}(t)\in\mathbb{C}[t]\). Notice that then \(\sigma(i^{\prime})=0\) and hence \[\sigma(a)=\sigma(a^{\prime})=\sigma\Big{(}\sum_{i=1}^{n}\alpha_{i}(t)d_{i}\Big{)}=\sum_{i=1}^{n}\alpha_{i}(t)(q_{i}+I).\] Reasoning as in Corollary 3.3 we get that this implies that all \(\alpha_{i}=0\), hence \(a^{\prime}=0\), hence \(a=i^{\prime}\). Therefore, \(\ker\sigma\subseteq I^{\prime}\) and consequently \(\ker\sigma=I^{\prime}\). Therefore, by the First Isomorphism Theorem for algebras \(\mathbb{C}[d_{1},\ldots,d_{n}][t]/I^{\prime}\) is isomorphic to \(\mathrm{im}(\sigma)\), which is equal to \(\mathcal{N}\). To see that \(\sigma\) is surjective, notice that \(q_{1}+I,\ldots,q_{n}+I\) span \(\mathcal{N}\) as a \(\mathbb{C}[t]\)-module (as mentioned before, this can be proved similarly to Proposition 3.1).
* Reasoning as in Proposition 3.6 we obtain that \(N\) is an \(n\)-dimensional \(\mathbb{C}\)-algebra.
* Reasoning similarly as in Proposition 3.7 we obtain that the algebra \(N\) is isomorphic to the algebra \(\frac{\mathbb{C}[x_{1},\ldots,x_{j}][t]}{\langle I,t\rangle}\) where \(\langle I,t\rangle\) is the ideal of \(\mathbb{C}[x_{1},\ldots,x_{j}][t]\) generated by elements from \(I\) and by \(t\). By the Third Isomorphism Theorem \(N\) is isomorphic to \[\frac{\mathcal{N}}{\langle t+I\rangle}.\] This can be seen by reasoning similarly as in Proposition 3.10, but taking \(t\) instead of \(t-1\) in the proof.
* We will now consider deformations \(*_{1}\) at \(t=1\). Consider first the homomorphism of \(\mathbb{C}\)-algebras \(\xi:F[t]\to A\) given by \(\xi(x_{i})=\sum_{l=0}^{m}a_{i,l}\) for \(i=1,2,\ldots,j\). Reasoning similarly as in Proposition 3.9 we obtain that \(\ker\xi=\langle I,t-1\rangle\).
Reasoning analogously as in the last two lines of Proposition 3.10 we obtain that \(F[t]/\ker\xi\) is isomorphic to \(A\).
* Reasoning as in Theorem 3.11 we see that the algebra \(\mathbb{C}[d_{1},\ldots,d_{n}][t]/I^{\prime}\) deforms at \(t=1\) to the algebra \(F[t]/\langle I,t-1\rangle=F[t]/\ker\xi\), which is isomorphic to \(A\).
* We will now consider deformations \(*_{z}\) at \(t=z\) where \(z\in\mathbb{C}\), \(z\neq 0\), \(z\neq 1\). Consider first the homomorphism of \(\mathbb{C}\)-algebras \(\xi:F[t]\to A\) given by \(\xi(x_{i})=\sum_{l=0}^{m}a_{i,l}z^{l}\) for \(i=1,2,\ldots,j\). Reasoning similarly as in Proposition 3.9 we obtain that \(\ker\xi=\langle I,t-z\rangle\) (this can be done by using \(t^{\prime}=t/z\) in place of \(t\) in the proof of Proposition 3.9, and in the first line declaring that \(p_{i}\in F\) instead of \(p_{i}\in\mathbb{C}[x,y]\)). Next, reasoning analogously as in the last two lines of Proposition 3.10 we obtain that \(F[t]/\ker\xi\) is isomorphic to \(A\) (this follows since the elements \(\xi(x_{i})\) generate \(A\)). Next, reasoning as in Theorem 3.11 we see that the algebra \(\mathbb{C}[d_{1},\dots,d_{n}][t]/I^{\prime}\) deforms at \(t=z\) to the algebra \(F[t]/\langle I,t-z\rangle=F[t]/\ker\xi\), which is isomorphic to \(A\).

## 6 Future Work

In this section we pose some questions about Method 1 and suggest some generalisations.

**Question 6.1**.: Let \(A\) be a \(\mathbb{C}\)-algebra and suppose \(G_{1}=\{a,b\}\) and \(G_{2}=\{a^{\prime},b^{\prime}\}\) are two distinct generating sets of \(A\). Under what conditions does Method 1 using \(A,G_{1}\) and \(A,G_{2}\) result in the same algebra \(N_{0}\) that has \(A\) as a deformation?

**Question 6.2**.: Let \(A\) be a \(\mathbb{C}\)-algebra. Under what conditions does Method 1 produce only one algebra \(N\) that has \(A\) as a deformation?

**Question 6.3**.: Let \(A\) be a \(\mathbb{C}\)-algebra. Does there exist a finitely terminating algorithm that will produce all the algebras \(N\) arising from Method 1 that have \(A\) as a deformation?

**Question 6.4**.: Let \(A\) be a \(\mathbb{C}\)-algebra. Do ring-theoretic properties of the generators \(a,b\) (for example, being idempotent, being an irreducible idempotent, or being prime) determine any properties of the algebra \(N\) arising from Method 1 using \(A,\{a,b\}\)?

**Question 6.5**.: Let \(A\) be a \(\mathbb{C}\)-algebra. Do ring-theoretic properties of \(A\) (for example, being semi-simple or local) determine any properties of the algebra \(N\) arising from Method 1 using \(A\)?

**Question 6.6**.: Can Method 1 be generalised to algebras over a general commutative ring?

## Acknowledgements

We are grateful to Michael Wemyss for suggesting his interesting questions which inspired this paper, and for useful comments about contraction algebras and their applications in geometry. The third author acknowledges support from the EPSRC programme grant EP/R034826/1 and from the EPSRC research grant EP/V008129/1.
2307.12332
X-CapsNet For Fake News Detection
News consumption has significantly increased with the growing popularity and use of web-based forums and social media. This sets the stage for misinforming and confusing people. To help reduce the impact of misinformation on users' potential health-related decisions and other intents, it is desired to have machine learning models to detect and combat fake news automatically. This paper proposes a novel transformer-based model using Capsule neural Networks (CapsNet) called X-CapsNet. This model includes a CapsNet with a dynamic routing algorithm parallelized with a size-based classifier for detecting short and long fake news statements. We use two size-based classifiers, a Deep Convolutional Neural Network (DCNN) for detecting long fake news statements and a Multi-Layer Perceptron (MLP) for detecting short news statements. To resolve the problem of representing short news statements, we use indirect features of news created by concatenating the vector of news speaker profiles and a vector of polarity, sentiment, and counting words of news statements. For evaluating the proposed architecture, we use the Covid-19 and the Liar datasets. The results in terms of the F1-score for the Covid-19 dataset and accuracy for the Liar dataset show that the models perform better than the state-of-the-art baselines.
Mohammad Hadi Goldani, Reza Safabakhsh, Saeedeh Momtazi
2023-07-23T13:58:00Z
http://arxiv.org/abs/2307.12332v1
# X-CapsNet For Fake News Detection

###### Abstract

News consumption has significantly increased with the growing popularity and use of web-based forums and social media. This sets the stage for misinforming and confusing people. To help reduce the impact of misinformation on users' potential health-related decisions and other intents, it is desired to have machine learning models to detect and combat fake news automatically. This paper proposes a novel transformer-based model using Capsule neural Networks (CapsNet) called X-CapsNet. This model includes a CapsNet with a dynamic routing algorithm parallelized with a size-based classifier for detecting short and long fake news statements. We use two size-based classifiers, a Deep Convolutional Neural Network (DCNN) for detecting long fake news statements and a Multi-Layer Perceptron (MLP) for detecting short news statements. To resolve the problem of representing short news statements, we use indirect features of news created by concatenating the vector of news speaker profiles and a vector of polarity, sentiment, and counting words of news statements. For evaluating the proposed architecture, we use the Covid-19 and the Liar datasets. The results in terms of the F1-score for the Covid-19 dataset and accuracy for the Liar dataset show that the models perform better than the state-of-the-art baselines.

Fake News Detection, COVID-19, BERT, Deep Convolutional Neural Networks, Multi-Layer Perceptron, Dynamic Routing Algorithm, Capsule Neural Networks.

## 1 Introduction

In recent years, online social media have become a common platform for broadcasting news for political, commercial, and entertainment purposes. News is understood as any information intended to make the public aware of the events happening around them, which may affect them personally or socially [1]. People use social media to search for and consume news due to its ease, convenience, and rapid spread [2]. These platforms have brought both constructive and destructive impacts. Therefore, as an integral part of culture and society, social media is a double-edged sword [3]. People may manipulate and spread factual information for profit or for their entertainment in the form of fake news [4]. Fake news played a pivotal role in the 2016 United States presidential election campaign after the mass of false information leaked on Facebook over the last three months of the presidential election [5]. Misleading information can disrupt countries' economies, reduce people's trust in their governments, or promote a specific product to make huge profits. For example, this has already happened with COVID-19. Misleading information about lockdowns, vaccinations, and death statistics fueled panic over purchasing groceries, disinfectants, masks, and paper products. This led to shortages that disrupted the supply chain and exacerbated the gaps between supply and demand as well as food insecurity [6]. In addition, it caused a sharp decline in the international economy, severe losses in the value of crude oil, and the collapse of world stock markets [7, 8, 9]. Furthermore, due to the spread of COVID-19 and the shortage of medical protection products worldwide, many people lost faith in their governments, for example in Italy and Iran [10, 11]. All of this drove the world into an economic recession [12, 9, 13]. While a growing percentage of the population relies on social media platforms for news consumption, the reliability of the information shared remains an open issue.
Fake news and many types of disinformation are rampant on social media, putting audiences around the world at risk. Therefore, detecting and mitigating the effects of disinformation is a crucial concern in studies where various approaches have been proposed, from linguistic indicators to deep learning models [14, 15]. Recently, the automatic detection of fake news has been attracting a large number of researchers [16, 17, 18]. Early fake news detection methods often designed complete sets of hand-crafted features based on news content, user profiles, and news propagation paths, and then trained classifiers to discriminate the truthfulness of the news [19, 20, 21]. However, it is challenging to design all-encompassing features, as fake news is usually created with different writing styles, on different types of topics, and across social media platforms [22]. Therefore, many approaches based on deep neural networks [23, 24, 25, 26, 27, 28] have been proposed to automatically learn discriminative patterns from propagation paths and news content [29]. Recent deep learning models improve the performance of fake news detection, but the performance drops dramatically when the news content is short [30, 31]. As a solution, in this work, we propose a new model based on CapsNet and indirect features for detecting fake news. The DCNN and MLP models with different feature extraction sections can be parallelized with a CapsNet architecture that is enhanced by using margin loss as the loss function. We also compare varieties of word representation layers and finally use the Bidirectional Encoder Representations from Transformers (BERT) [32] and a robustly optimized BERT pretraining approach (RoBERTa) [33] in our proposed model. We show that the proposed models achieve better results than the state-of-the-art methods on the Covid-19 and Liar fake news datasets.

The rest of the paper is organized as follows: Section 2 reviews related work about fake news detection. Section 3 presents the model proposed in this paper. The datasets used for fake news detection and evaluation metrics are introduced in Section 4. Section 5 reports the experimental results, comparison with the baseline classification, and discussion. Finally, Section 6 summarizes the paper.

## 2 Related work

Social media have presently become the main source of information and news dissemination. This increases the challenges of spreading fake news. As a result, the identification of disinformation has been extensively studied in recent years, with the introduction of several tasks in the field. In this research, our focus is on the detection of fake news in two different domains, COVID-19 and politics, and is mainly based on supervised methods. A machine/deep learning model is trained based on the available data containing fake and real news. The model is then used to decide on new news articles to find out whether they are false or not. Most of the available studies investigating the fake news detection task have been conducted based on deep neural models, including CNN, Long Short Term Memory (LSTM), and BERT, which will be described in this section. The related works of the two domains are reviewed separately in this section.

### _COVID-19 domain_

Since the appearance of the first case of COVID-19 on December 31, 2019, the World Health Organization (WHO) has declared COVID-19 a pandemic emergency. As a source of COVID-19 information, tweets and social media news contain both information and misinformation about COVID-19.
For example, ordinary people become more eager to read more to know how to protect themselves [34]. Brennen et al. [35] examined the sources of misinformation about COVID-19. Their analysis revealed that most COVID-19 misinformation is reconfigured from real information rather than wholly invented. Therefore, the detection of COVID-19 fake news has attracted the attention of data scientists. Patwa et al. [36] provided a comprehensive dataset, the Covid-19 dataset, which includes fake and real news from Twitter. It includes 10,700 posts about the COVID-19 outbreak shared on social media. The real news was captured from 14 official Twitter accounts, and fake data were collected from social media and fact-checking websites. They performed a binary classification task (real vs. fake) and evaluated four machine learning baselines, namely Decision Tree (DT), Support Vector Machine (SVM), Logistic Regression (LR), and Gradient Boosting Decision Tree (GDBT). They achieved the best performance of 93.32% F1-score with SVM on the test set. Shifath et al. [37] used eight different pre-trained transformer-based models with additional layers to build a stacked ensemble classifier and fine-tuned them for the proposed models. Their models were evaluated on the Covid-19 dataset, and they showed that the RoBERTa-CNN model achieves a 96.49% F1-score on the test dataset. Several supervised text classification algorithms were evaluated by Wani et al. [38] on the Covid-19 fake news detection dataset. Their models are based on CNN, LSTM, and BERT. They also assessed the importance of unsupervised learning in the form of a pre-trained language model and distributed word representations using an unlabeled corpus of COVID-19 tweets. They claimed that their model improved the fake news detection accuracy. Samadi et al. [39] implemented three different neural classifiers with text representation models like BERT, RoBERTa, Funnel Transformer, and GPT2. They used a Single Layer Perceptron (SLP), an MLP, and a CNN and connected them to various contextualized text representation models. They compared the models' results and discussed their advantages and disadvantages. Finally, to corroborate the effectiveness of their approach, they selected the best model and compared their results with the most advanced models. Furthermore, they added a Gaussian noise layer to the combination of a contextualized text representation model with the CNN classifier. They claimed that the Gaussian noise layer can prevent overfitting in the learning process and, as a result, the model learns better on the Covid-19 and other datasets. A two-stage automated pipeline model was developed by Vijjali et al. [40] for COVID-19 fake news detection, using state-of-the-art machine learning models. For a given COVID-19 claim, the first stage uses a novel fact-checking algorithm that retrieves the most relevant facts regarding the claim. The second stage determines the level of truth of the claim by computing the textual entailment between the claim and the facts retrieved from a manually created Covid-19 dataset. They evaluated a set of models based on classic text-based features as well as more contextualized Transformer-based models. They found that for both stages, the model pipelines based on BERT and ALBERT yield the best results. Koloski et al. [41] leveraged several approaches and techniques for detecting COVID-19 fake news. They created several handcrafted features that capture the statistical distribution of characters and words in tweets.
They showed that useful representations can be learned by capturing potentially relevant patterns from collections of character n-grams and from word-based features of the tweets. For the assessment, they used various BERT-based representations to capture contextual information and the differences between fake and real COVID-19 news, and found that the distilBERT-based representation performs best, with an F1-score of 97.05%. ### _Politics domain_ Many approaches based on neural networks and deep learning have been used to detect fake news articles in datasets from the political domain, including CNNs, RNNs, hybrid models, and more recent models such as CapsNets. The Liar dataset was presented by Wang [42], who also proposed a model that uses statements and metadata together as inputs: a CNN for extracting features from statements and a BiLSTM (Bi-directional Long Short-Term Memory) network for extracting features from metadata. They demonstrated that this model significantly improved the accuracy. Long et al. [43] proposed a model on the Liar dataset that incorporates speaker profiles - speaker position, party affiliation, title, and credit history - into an attention-based LSTM model. They used the speaker profiles in two ways: (1) considering them in the attention model, and (2) incorporating them as additional input data. They demonstrated that this improves the performance of the classifier on the Liar dataset. The event adversarial neural network model was proposed by Wang et al. [44]. This model includes three main components: (1) the multimodal feature extractor, which uses a CNN as its main module; (2) the fake news detector, a fully connected layer with softmax activation; and (3) the event discriminator, which uses two fully connected layers and aims to classify the news into one of K events based on the representations from the first component. A model based on CapsNet was also proposed for detecting fake news by Goldani et al. [30]. They applied different levels of n-grams and different embedding models to news items of various lengths: for long news statements, four filters with kernel sizes of 2, 3, 4, and 5 and convolutional n-gram layers with non-static embedding were used; for short news, a static embedding with only two filters of kernel sizes 3 and 5 was used. They showed that their model improves the accuracy over the state-of-the-art methods. Choudhary et al. [45] used a language model to represent the news text for fake news detection. This linguistic model extracts information related to the syntax, meaning, and readability of the news text. Because this language model is time-consuming, they used a hierarchical neural network model to represent the features. For evaluation, the results obtained from sequential neural networks were compared with other machine learning methods and LSTM-based models, and it was shown that the hybrid neural network model achieves better results than the other methods. Blackledge et al. [46] used transformer-based models to investigate their ability to detect fake news and to generalize to fake news with different topics. They showed that such models cannot naturally recognize news based on opinion and speculation; therefore, they proposed a new method that first removes such news articles and then categorizes the remainder.
In this article, it is shown that generalizability via the proposed two-step method is able to improve the accuracy of the transformer-based models.

## 3 X-CapsNet for fake news detection

This section presents the models proposed for fake news detection in this paper. Depending on whether the news statements are short or long, two different structures are proposed. In these models, two parallel networks are concatenated: a CapsNet layer and a size-dependent classification layer that uses either a DCNN with pre-trained language models or an MLP layer with indirect features extracted from the input text. The concatenated representation is fed to a dense layer that finally detects fake news. Figure 1 shows the architecture of the proposed model.

Fig. 1: Proposed model for fake news detection

A CapsNet with a representation layer based on a pre-trained embedding model is used for all input text. In addition, when the news text is long, a DCNN model with three feature extractors with kernel sizes of 2, 3, and 4 is used; when the statements are short, an MLP model that takes advantage of the indirect features of the news is used. In the following, we first review the pre-trained language models, including BERT with non-static embedding, which incrementally uptrains and updates the word embeddings in the training phase. Then we describe the classifiers used for the learning process.

### _Representation layer_

Word embedding is one of the most widely used techniques in Natural Language Processing (NLP); the goal is to learn a low-dimensional vector representation for each word in a text [47]. The power of word embedding algorithms such as Word2Vec [48], FastText [49], and GloVe [50] in capturing semantic and syntactic word relationships has been proven. This capability has facilitated various NLP tasks, such as aspect extraction, part-of-speech tagging, and sentiment analysis [51]. The idea of distributional semantics states that words occurring in the same context tend to have similar meanings; word embeddings thus reveal hidden relationships between words that can be exploited during training. However, the static embedding techniques mentioned above capture only limited semantic information, since they provide a single static embedding vector for a word regardless of its context. Therefore, in recent years researchers have implemented various deep transformer-based word representation techniques that can take contextual information into account to generate different embedding vectors for the same word in different contexts.

BERT [32] is an unsupervised deep model that uses the transformer architecture [52] and has been trained on a huge text corpus using two different objectives: (1) masked language modeling, which learns the relationships between words from their adjacency in a sentence, and (2) next sentence prediction, which learns relations between sentences. RoBERTa [33] is similar to BERT, with some hyperparameter tuning and modifications to its learning process. Liu et al. [33] argued that BERT was undertrained; therefore, in addition to a larger dataset with longer sequences, they trained the model with larger batches. They also compared various alternative training approaches and, as a result, concluded that the next sentence prediction loss can be removed to enhance the performance of the learning process.
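As a rough illustration of such a representation layer, the sketch below loads a pre-trained transformer with the Hugging Face `transformers` library and keeps its weights trainable, anticipating the non-static setting described later; the checkpoint name and sequence length are assumptions for illustration, not choices reported in the paper.

```python
# A minimal sketch (not the authors' code) of a contextual representation layer:
# the pre-trained encoder stays trainable ("non-static"), so its weights are
# updated together with the classifier during fine-tuning.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

MODEL_NAME = "roberta-base"  # assumption: a comparable checkpoint (BERT, GPT2, ...) plugs in the same way

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = TFAutoModel.from_pretrained(MODEL_NAME)
encoder.trainable = True  # non-static embedding: updated in the training phase

def embed(texts, max_len=64):
    """Return one contextual vector per token: shape (batch, max_len, hidden)."""
    enc = tokenizer(texts, padding="max_length", truncation=True,
                    max_length=max_len, return_tensors="tf")
    return encoder(**enc).last_hidden_state

print(embed(["COVID-19 vaccines were in the works."]).shape)  # (1, 64, 768)
```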
GPT2 is a large transformer-based language model trained on a wide variety of texts taken from web pages on the Internet [53]. The GPT2 architecture is similar to the OpenAI GPT model [54], which is fine-tuned on four tasks: text classification, text similarity, textual entailment, and question answering. Funnel Transformer is an efficient encoder-decoder architecture that reduces the resolution (length) of the input features using a pooling operation and embeds them into a lower-dimensional vector [55]. In this model, the decoder is an additional part of the architecture: it is used for masked language modeling or, as in the ELECTRA pre-training task [56], for a newer method of self-supervised language representation learning.

In fake news detection, Goldani et al. [30] showed that when the training data is large enough, the model's performance can improve by using non-static embedding. Therefore, we also use a non-static setting in our model to update the text representation model during the training phase.

As noted earlier, recent deep learning models improve the performance of fake news detection, but performance drops dramatically when the news content is short. To resolve this problem, we extract additional features from the news beyond those derived from the word embeddings of the sentences. These new features capture surface cues and information in the sentences [57], along with the history of the speaker profile. More specifically, we use the following indirect features:

* Count of words (length of a news article)
* Count of unique words
* Count of letters
* Count of stop words
* Polarity score
* Subjectivity score
* History of the speaker profile

A sketch of how such a feature vector can be assembled is given below.
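The following sketch is one plausible way (assumed, not the paper's code) to compute the listed indirect features; TextBlob supplies the polarity and subjectivity scores, and the speaker-history counts would come from the Liar metadata. The exact 12-dimensional composition used later in the MLP is the paper's; this sketch only illustrates the features listed above.

```python
# Indirect feature extraction for a short news statement (illustrative sketch).
import string
from textblob import TextBlob          # pip install textblob
from nltk.corpus import stopwords      # requires nltk.download('stopwords')

STOP = set(stopwords.words("english"))

def indirect_features(text, credit_history=(0, 0, 0, 0, 0)):
    words = text.split()
    blob = TextBlob(text)
    return [
        len(words),                                    # count of words
        len({w.lower() for w in words}),               # count of unique words
        sum(c in string.ascii_letters for c in text),  # count of letters
        sum(w.lower() in STOP for w in words),         # count of stop words
        blob.sentiment.polarity,                       # polarity in [-1, 1]
        blob.sentiment.subjectivity,                   # subjectivity in [0, 1]
        *credit_history,                               # speaker-profile history counts
    ]

print(indirect_features("Building a wall will cut crime in half."))
```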
### _DCNN layer for long news articles_

In recent years, different variants of CNNs have been used for fake news detection [31, 58, 59, 60]. A CNN architecture with convolutional and pooling layers can accurately extract features ranging from local to global representations, which indicates the powerful representational capabilities of CNNs. To extract more robust features for the learning process, the CNN needs to be enhanced with more discriminative information; this requires maximizing the intra-cluster similarity and the inter-cluster dissimilarity of the learned features. To this end, one of the loss functions most commonly combined with softmax in CNNs for fake news tasks is the margin loss [31]. Using this loss function avoids overlapping problems and helps the model mitigate overfitting [31].

Zhong et al. [61] showed that fake news detection can be approached with a standard text classification model consisting of an embedding layer, a one-dimensional convolutional layer, a max-pooling layer, and, finally, a prediction-based output layer. Our proposed model is motivated by the concept of multiple parallel, variable-size channels, with three different filter sizes (2, 3, and 4) acting as n-gram convolutional layers for feature extraction. The proposed model for long news includes a pre-trained embedding model and two parallel classifiers, reaping the benefits of both the DCNN and the CapsNet as two different neural network architectures. Figure 2 shows the proposed model, including the computational flow of the DCNN classifier.

Fig. 2: Proposed model for fake news detection in long news statements

In this architecture, four parallel neural networks are used: three different n-gram convolutional layers for feature extraction and a CapsNet layer, which includes a primary capsule layer, a convolutional capsule layer, and a feed-forward capsule layer, as previously introduced by Yong et al. [62]. In the next layer, the outputs of the CNNs and the CapsNet go through global max-pooling and a leaky-ReLU (Rectified Linear Unit) and are concatenated. Then, after two dense layers, the final output predicts the label of the input news article. With this architecture, the model can learn more meaningful and extensive text representations at different n-gram levels according to the length of the text. A minimal sketch of the parallel n-gram extractor and the margin loss follows.
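The sketch below shows one way (assumed shapes and layer widths, not the authors' exact configuration) to wire the three parallel n-gram Conv1D branches in Keras and to train them with a capsule-style margin loss; the CapsNet branch, omitted here, would be concatenated alongside the convolutional branches.

```python
# Parallel n-gram DCNN branch with margin loss (illustrative Keras sketch).
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, HIDDEN = 64, 768   # token count / embedding width: assumptions

def margin_loss(y_true, y_pred, m_pos=0.9, m_neg=0.1, lam=0.5):
    # L_k = T_k max(0, m+ - p_k)^2 + lam * (1 - T_k) max(0, p_k - m-)^2
    pos = y_true * tf.square(tf.maximum(0.0, m_pos - y_pred))
    neg = lam * (1.0 - y_true) * tf.square(tf.maximum(0.0, y_pred - m_neg))
    return tf.reduce_mean(tf.reduce_sum(pos + neg, axis=-1))

tokens = layers.Input(shape=(MAX_LEN, HIDDEN))   # output of the representation layer
branches = []
for k in (2, 3, 4):                              # the three n-gram convolutional layers
    x = layers.Conv1D(128, k)(tokens)
    x = layers.LeakyReLU()(x)
    branches.append(layers.GlobalMaxPooling1D()(x))
merged = layers.Concatenate()(branches)          # the CapsNet output would also be concatenated here
h = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(2, activation="sigmoid")(h)   # per-class scores in [0, 1] for the margin loss

model = Model(tokens, out)
model.compile(optimizer="adam", loss=margin_loss)
model.summary()
```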
### _MLP layer for short news articles_

Owing to the MLP's ability to learn from noisy or incomplete data and to solve complex problems in real time, this method is applied in the proposed classifier for detecting short fake news [63]. Figure 3 shows the proposed model for short news instances. In this model, indirect features are added in the training phase: because the news article is short, more features are needed for detecting fake news in addition to the transformer-based representation layer.

Fig. 3: Proposed model for fake news detection in short news statements

The model is designed, implemented, and evaluated using open-source Python software and the Keras library. It consists of a fully connected 4-layer MLP neural network (NN) with an input layer that takes the indirect characteristics of the input news text, two hidden layers that process the inputs, and an output layer. Each layer consists of a fixed number of neurons, the building blocks of the NN that allow it to learn from the data and adjust its weights; neurons in different layers are interconnected through weights that define the weight matrix between the layers. The model is built sequentially, layer by layer: the input layer, the first hidden layer, the second hidden layer, and, finally, the output layer. The number of inputs and outputs in the feature dataset determines the number of neurons in the input and output layers, while the inner layers contain a number of neurons determined empirically based on standard rules. The first layer has 12 input nodes corresponding to the 12 attributes of the indirect feature vector, so the size of the input layer is 12\(\times\)1. The first hidden layer has 64 neurons in the optimized design and uses the ReLU activation function; ReLU is the most widely used activation function, as it overcomes the vanishing gradient problem during backpropagation and is suitable for large NNs. Each neuron in this layer is connected to all inputs of the input layer, so the weight matrix between the input layer and the first hidden layer has size 12\(\times\)64. The second hidden layer has 32 neurons, again with ReLU activation; each of its neurons is connected to the outputs of all neurons of the first hidden layer, giving a weight matrix of size 64\(\times\)32 between the two hidden layers. The output layer is the fourth layer, with 32 output nodes that are concatenated with the CapsNet output; finally, the resulting 64-dimensional vector is fed to the dense layer.

### _CapsNet layer_

After the success of CapsNets in various NLP tasks [64], different CapsNet-based models have been used for fake news detection in recent years [30, 65, 66]. As mentioned in subsection 3.1, there are different ways of encoding a text; in this work, the input text is encoded inside the CapsNet using internal word embeddings. We use a 100-dimensional vector to represent a word, with a batch size of 50; our evaluations show that these dimensions are sufficient to train the CapsNet effectively. For fake news detection with real and fake classes, the low-level capsules should capture the most important words in the text - the words that significantly affect the classification. Two high-level capsules (for the real and fake classes) then detect, through dynamic routing, the dependencies between significant words and recursively combine them to obtain the correct prediction. Algorithm 1 shows the dynamic routing procedure, in which \(r\) in STEP 2 is a hyperparameter of the training phase. Inside the loop, the scalar product \(a_{ij}=v_{j}\cdot\hat{u}_{j|i}\) is defined as the agreement; it is a log-likelihood term that is added to the initial logit \(b_{ij}\), and the coupling coefficients are refined iteratively. This operation measures the agreement between the output \(v_{j}\) of each capsule \(j\) in the layer above and the prediction \(\hat{u}_{j|i}\) made by capsule \(i\). Consequently, the low-level capsules detect the words that significantly affect the classification of the text, and the high-level capsules combine them so as to maximize the prediction value.

In the original CapsNet architecture [67], 2D convolution was used because the input is an image, where the surrounding pixels carry additional information. For fake news detection, and text classification in general, there are no surrounding pixels to consider, so a 1D convolution operation is used in the first and primary layers. In the original architecture, the decoder framework reconstructs the input image and acts as a regularization method [67]; in the text classification task, there is no reason to use the decoder, because the task is only to classify the input into predefined categories. Therefore, the decoder structure is removed from the proposed architecture, and the proposed models instead use a dropout layer against overfitting [51].
```
Input:  û_{j|i}, r, l
Output: v_j
STEP 1: for all capsule i in layer l and capsule j in layer (l+1): b_ij ← 0
STEP 2: iterative routing:
for iteration in range(r) do
    for all capsule i in layer l:       c_i ← softmax(b_i)
    for all capsule j in layer (l+1):   s_j ← Σ_i c_ij û_{j|i}
    for all capsule j in layer (l+1):   v_j ← squash(s_j)
    for all capsule i in layer l and capsule j in layer (l+1): b_ij ← b_ij + û_{j|i} · v_j
return v_j
```
**Algorithm 1** Dynamic Routing algorithm
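The NumPy sketch below mirrors Algorithm 1 for the two output capsules of the fake news task; it is an illustrative re-implementation, not the authors' code, and the capsule counts and dimensions are assumptions consistent with the text.

```python
# Dynamic routing between capsule layers (illustrative NumPy sketch of Algorithm 1).
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    n2 = (s ** 2).sum(axis=axis, keepdims=True)          # squared norm of s_j
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)     # shrink and normalize

def dynamic_routing(u_hat, r=2):
    """u_hat: prediction vectors û_{j|i}, shape (num_in, num_out, dim_out)."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                      # STEP 1: b_ij <- 0
    for _ in range(r):                                   # STEP 2: iterative routing
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # c_i <- softmax(b_i)
        s = (c[..., None] * u_hat).sum(axis=0)           # s_j <- sum_i c_ij * û_{j|i}
        v = squash(s)                                    # v_j <- squash(s_j)
        b = b + (u_hat * v[None]).sum(-1)                # b_ij += û_{j|i} · v_j (agreement)
    return v

v = dynamic_routing(np.random.randn(10, 2, 16), r=2)
print(v.shape)  # (2, 16): one 16-dimensional capsule per class (real / fake)
```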
The CapsNet architecture includes a standard convolutional layer, called the n-gram convolutional layer, that acts as a feature extractor. The second layer maps the scalar features into a capsule representation and is called the primary capsule layer. Its outputs are fed to the convolutional capsule layer, in which each capsule is connected only to a local area of the layer below. In the final step, the previous layer's output is flattened and fed through the feed-forward capsule layer, where each output capsule corresponds to a specific class. This architecture uses the maximum margin loss to train the model, as presented in Figure 4 [62]. A CapsNet with two capsules of dimension 16, followed by a leaky-ReLU, has been chosen as the parallelized neural network in the proposed model.

Fig. 4: The architecture of the capsule network proposed for text classification by [62]

### _Fully connected layer_

A dense layer performs a linear operation in which all inputs are connected to all outputs through weights. We use two dense layers to make the proposed model inherently dense: the first dense layer takes the output of the concatenation layer, and the second dense layer predicts the final output.

## 4 Evaluation

The proposed model is evaluated in this section using different datasets for fake news detection.

### _Datasets_

We use two datasets from different domains for the evaluation of the model: the Covid-19 dataset and the Liar dataset.

#### 4.1.1 The Covid-19 dataset

When the COVID-19 pandemic began, social media users shared more and more misinformation and unconfirmed news about the coronavirus. This motivated researchers to collect datasets from social media and to propose machine learning models evaluated on them. The Covid-19 dataset, proposed by [36], is a comprehensive dataset of fake and real news from Twitter. It includes 10,700 posts about the COVID-19 outbreak shared on social media: the fake articles were collected from fact-checking websites and social media, and the real news was obtained from 14 official Twitter accounts. Table I shows the statistics of the dataset.

TABLE I: Covid-19 dataset statistics by [36]

| Split | Real | Fake | Total |
| --- | --- | --- | --- |
| Training | 3,360 | 3,060 | 6,420 |
| Validation | 1,120 | 1,020 | 2,140 |
| Test | 1,120 | 1,020 | 2,140 |

#### 4.1.2 The Liar dataset

Another fake news dataset used for evaluating different models is the Liar dataset. It contains 12,800 short political news statements from the United States in 6 different categories and is collected from the POLITIFACT.COM website, where every statement has been validated by a human editor. The dataset is divided into six categories: true, false, mostly-true, half-true, barely-true, and pants-fire (i.e., "very false"). The label distribution contains 1,050 pants-fire statements, while the number of instances in each of the other categories lies between 2,063 and 2,638 [42]. For each news statement, metadata such as the speaker profile is taken into account in addition to the statement itself; this metadata includes valuable information about the speaker's name, the topic, job, state, party, and overall credit history. The credit history consists of the false, mostly-true, barely-true, half-true, and pants-fire counts. Table II shows the statistics of the Liar dataset.

TABLE II: The Liar dataset statistics provided by [42]

| Liar dataset statistics | |
| --- | --- |
| Training set size | 10,269 |
| Validation set size | 1,284 |
| Testing set size | 1,283 |
| Avg. statement length (tokens) | 17.9 |
| Top-3 speaker affiliations: Democrats | 4,150 |
| Republicans | 5,687 |
| None (e.g., FB posts) | 2,185 |

### _Evaluation metrics_

In our experiments, classification accuracy, precision, recall, and F1-score are used as evaluation metrics. Accuracy is the ratio of correct predictions of the news label to the total number of news samples:

\[Accuracy=\frac{TP+TN}{TP+TN+FN+FP} \tag{1}\]

Precision is the percentage of reported fake news items that are correctly detected:

\[Precision=\frac{TP}{TP+FP} \tag{2}\]

Recall estimates the ratio of the fake news items that are correctly detected:

\[Recall=\frac{TP}{TP+FN} \tag{3}\]

F1-score is the harmonic mean of precision and recall:

\[F1\text{-}score=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{4}\]

In these equations, TP, TN, FP, and FN represent the numbers of True Positive, True Negative, False Positive, and False Negative results, respectively.
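As a quick check tying Eqs. (1)-(4) to standard library calls, the following snippet computes the four metrics on toy labels; it is purely illustrative.

```python
# Evaluation metrics for binary fake news detection (1 = fake, 0 = real).
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

print(accuracy_score(y_true, y_pred))    # (TP+TN) / (TP+TN+FP+FN)
print(precision_score(y_true, y_pred))   # TP / (TP+FP)
print(recall_score(y_true, y_pred))      # TP / (TP+FN)
print(f1_score(y_true, y_pred))          # 2PR / (P+R)
```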
## 5 Results

This section evaluates the proposed models on the Covid-19 and Liar datasets with different representation layers. The results are compared with the baseline methods, and the performance of the parallel layers is evaluated separately. At the end, a series of additional experiments on the data are reported in the discussion subsection.

### _Classification results on the Covid-19 dataset_

#### 5.1.1 Classification results with different representations on the Covid-19 dataset

Table III shows the evaluation of the proposed model on the Covid-19 dataset using different representation layers and routing iterations for the dynamic routing algorithm of the CapsNet. As can be seen, the best result belongs to the model with RoBERTa as the representation layer.

TABLE III: Classification results with different representation layers (RL) on the Covid-19 dataset

| Model | RL | Acc | Prec | Rec | F1 |
| --- | --- | --- | --- | --- | --- |
| DCNN | BERT | 96.77 | 96.55 | **97.48** | 97.00 |
| | Funnel | 97.05 | 96.85 | 97.20 | 97.02 |
| | GPT2 | 97.00 | 96.63 | **97.48** | 97.05 |
| | RoBERTa | **97.34** | **97.21** | 97.38 | **97.29** |

#### 5.1.2 Classification results with different routing iterations

Figure 5 shows evaluations on the Covid-19 dataset in terms of the F1-score for different numbers of routing iterations in the dynamic routing algorithm of the CapsNet model. For long news statements on COVID-19 subjects, one iteration is sufficient to achieve the best result: combining higher hierarchies is unnecessary for data with more sentences, and the best results are obtained by repeating the dynamic routing once.

Fig. 5: Classification results for different routing iterations on the Covid-19 dataset

#### 5.1.3 Classification results on the Covid-19 dataset

After the presentation of the Covid-19 dataset by [36], different machine learning and deep learning models were evaluated for fake news detection on it. [36] used conventional machine learning models, including DT, LR, SVM, and GDBT. [37] proposed an MLP connected to RoBERTa's pooled output while utilizing additional data for training. [38] evaluated many models, including a softmax layer connected to BERT for prediction. [39] proposed a CNN connected to RoBERTa's pooled output and showed that its performance surpasses the previous models. Table IV compares our proposed model's results with the state-of-the-art models on the Covid-19 test set. In terms of the F1-score, the DCNN-CapsNet model performs better than the state-of-the-art baselines. Moreover, it should be mentioned that recall is an important factor in fake news detection, since missing any fake news article has its own negative side effects; as the tabulated results show, we achieve the best recall among the competitors.

TABLE IV: Comparison of the proposed model's results with the state-of-the-art models on the Covid-19 test set

#### 5.1.4 Performance of parallel layers

Table V shows the proposed model's performance compared with the two parallel models in isolation. The results show that using different feature extractors for the CNN and adding the CapsNet - which aims at keeping detailed information about the position and pose of features throughout the network - improves the performance of the baseline models. The best result is achieved when both models are used together.

TABLE V: Comparison of the proposed model's results with the results of the parallel layers on the Covid-19 test set

| Model | Acc | Prec | Rec | F1 |
| --- | --- | --- | --- | --- |
| RoBERTa-CapsNet | 93.22 | 93.92 | 91.91 | 96.31 |
| RoBERTa-CNN | **97.43** | **98.30** | 96.27 | 97.27 |
| Proposed model | 97.34 | 97.21 | **97.38** | **97.29** |

### _Classification results on the Liar dataset_

#### 5.2.1 Classification results with different representations on the Liar dataset

Table VI shows the evaluation of the proposed model on the Liar dataset using different representation layers; the best test accuracy belongs to the model with RoBERTa as the representation layer.

TABLE VI: Classification results with different representations on the Liar dataset

| Model | Representation | Validation | Test |
| --- | --- | --- | --- |
| MLP | BERT | 38.55 | 35.70 |
| | Funnel | 37.69 | 35.46 |
| | GPT2 | 42.91 | 39.67 |
| | RoBERTa | **41.19** | **41.77** |

#### 5.2.2 Classification results with different routing iterations

Figure 6 shows evaluations on the Liar dataset in terms of accuracy for different routing iterations in the dynamic routing algorithm of the CapsNet model. For short news statements with political subject matter, more repetitions are needed to achieve the best result: combining higher hierarchies is necessary here, and the best results are obtained by repeating the dynamic routing twice.

Fig. 6: Classification results for different routing iterations on the Liar dataset

#### 5.2.3 Classification results on the Liar dataset

Table VII compares our proposed model's results with the state-of-the-art models on the Liar validation and test sets. In terms of accuracy, the MLP-CapsNet model performs better than the state-of-the-art baselines.

TABLE VII: Comparison of the proposed model's results with the results of other models on the Liar validation and test set

#### 5.2.4 Performance of parallel layers

Table VIII shows the proposed model's performance compared with the two parallel models. The results show that using indirect features and adding the CapsNet improves the performance of the baseline models; the best test result is achieved when both models are used together.

TABLE VIII: Comparison of the proposed model's results with the results of the parallel layers on the Liar validation and test set

| Model | Validation (%) | Test (%) |
| --- | --- | --- |
| CapsNet | 40.90 | 39.50 |
| CNN with margin loss | **44.40** | 41.60 |
| Proposed model | 41.19 | **41.77** |

### _Discussion_

This section further analyzes the training set of the Covid-19 dataset with respect to the real and fake news labels. Figures 7 and 8 show the word clouds for the real and the fake news of the training set, respectively, after omitting stopwords.

Fig. 7: Word cloud of real news
Fig. 8: Word cloud of fake news

From the word clouds and most frequent words, we see an overlap of the important words across fake and real news. Therefore, for further analysis, we list the ten most frequent words in real and fake news after removing stopwords:

* **Fake news**: covid, coronavirus, people, claim, trump, virus, say, vaccine, new, and case
* **Real news**: case, covid, new, state, test, number, death, india, total, and day

If we ignore the words common to the two groups, we find that among the fake news, words about sensitive quotes and reports - such as the vaccine, the coronavirus, and the names of politicians - are more frequent, while statistics about the number of infected cases and related words in this domain, such as number, death, day, and names of countries, are repeated more frequently in the real news.

We also analyze the polarity and subjectivity of the real and fake news of the Covid-19 training set. Polarity is a float that lies in the range [-1, 1], where -1 means a negative statement and 1 a positive statement. Objective text refers to factual information, whereas subjective sentences generally convey emotion, judgment, or personal opinion; subjectivity is also a float, lying in the range [0, 1] [69].
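The frequent-word lists and sentiment scores used in this discussion can be sketched in a few lines of Python; the snippet below is illustrative, not the authors' analysis script.

```python
# Frequent-word and sentiment analysis over a class of news items (sketch).
from collections import Counter
from textblob import TextBlob        # polarity / subjectivity scores
from nltk.corpus import stopwords    # requires nltk.download('stopwords')

STOP = set(stopwords.words("english"))

def top_words(texts, k=10):
    tokens = [w for t in texts for w in t.lower().split()
              if w.isalpha() and w not in STOP]
    return Counter(tokens).most_common(k)

fake = ["The vaccine was ready before covid, Trump claims.",
        "People say the virus is a hoax."]
print(top_words(fake))
print([(TextBlob(t).sentiment.polarity,
        TextBlob(t).sentiment.subjectivity) for t in fake])
```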
Figures 9 and 10 show the polarity and the subjectivity, respectively, by frequency for real and fake news.

Fig. 9: Polarity of real and fake news

We can see that zero is the most frequent polarity value and that, in both real and fake news, positive polarity is more common than negative polarity. However, negative polarity is more frequent in fake news than in real news, while positive polarity is more pronounced in real news. In Figure 10, we can see that zero subjectivity is the most frequent value in both fake and real news, but markedly more so for fake news; it is also observed that the subjectivity distribution of fake news is higher than that of real news.

Figure 11 shows the different methods of sharing fake news on the COVID-19 subject on social media, as proposed by [70]. That study's model was developed from the U&G (uses and gratifications) theory and previous studies, and includes six sharing methods: entertainment, socialization, pass time, altruism, information seeking, and information sharing. To further examine the subjectivity of fake news, the fake news items with a subjectivity of one in Figure 10 were extracted and classified. As Table IX shows, most of the fake news with high subjectivity is shared for information-sharing purposes on social media; socialization is another frequent method of fake news sharing. As a result, one of the methods of spreading fake news is sharing interesting information and then relying on members to re-share that news on social networks.

## 6 Conclusion

This paper proposes X-CapsNet for detecting long and short fake news statements: a DCNN-CapsNet with margin loss for long statements and an MLP-CapsNet with indirect features for short statements. The DCNN-CapsNet uses four parallel neural networks: three n-gram convolutional layers of different sizes for feature extraction and a CapsNet layer. The MLP-CapsNet addresses short statements by using, in addition to the representation layer, an indirect feature vector created by concatenating the news speaker's profile information with the sentiment, polarity, and sentence-level statistics of the news article. Different pre-trained representation models and different iteration counts of the dynamic routing algorithm of the CapsNet were used to evaluate the proposed models. Finally, the models were tested on two recent, well-known datasets in the field: the Covid-19 dataset, with long fake news statements, and the Liar dataset, with short news statements. Our results show that these models improve on the performance of the state-of-the-art baselines.
2305.04329
FACTIFY-5WQA: 5W Aspect-based Fact Verification through Question Answering
Automatic fact verification has received significant attention recently. Contemporary automatic fact-checking systems focus on estimating truthfulness using numerical scores which are not human-interpretable. A human fact-checker generally follows several logical steps to verify a verisimilitude claim and conclude whether it's truthful or a mere masquerade. Popular fact-checking websites follow a common structure for fact categorization such as half true, half false, false, pants on fire, etc. Therefore, it is necessary to have an aspect-based (delineating which part(s) are true and which are false) explainable system that can assist human fact-checkers in asking relevant questions related to a fact, which can then be validated separately to reach a final verdict. In this paper, we propose a 5W framework (who, what, when, where, and why) for question-answer-based fact explainability. To that end, we present a semi-automatically generated dataset called FACTIFY-5WQA, which consists of 391,041 facts along with relevant 5W QAs - underscoring our major contribution to this paper. A semantic role labeling system has been utilized to locate 5Ws, which generates QA pairs for claims using a masked language model. Finally, we report a baseline QA system to automatically locate those answers from evidence documents, which can serve as a baseline for future research in the field. Lastly, we propose a robust fact verification system that takes paraphrased claims and automatically validates them. The dataset and the baseline model are available at https://github.com/ankuranii/acl-5W-QA
Anku Rani, S. M Towhidul Islam Tonmoy, Dwip Dalal, Shreya Gautam, Megha Chakraborty, Aman Chadha, Amit Sheth, Amitava Das
2023-05-07T16:52:21Z
http://arxiv.org/abs/2305.04329v2
# FACTIFY-5WQA: 5W Aspect-based Fact Verification through Question Answering

###### Abstract

Automatic fact verification has received significant attention recently. Contemporary automatic fact-checking systems focus on estimating truthfulness using numerical scores which are not human-interpretable. A human fact-checker generally follows several logical steps to verify a verisimilitude claim and conclude whether it's truthful or a mere masquerade. Popular fact-checking websites follow a common structure for fact categorization such as _half true, half false, false, pants on fire_, etc. Therefore, it is necessary to have an aspect-based (_delineating which part(s) are true and which are false_) explainable system that can assist human fact-checkers in asking relevant questions related to a fact, which can then be validated separately to reach a final verdict. In this paper, we propose a 5W framework (_who, what, when, where, and why_) for question-answer-based fact explainability. To that end, we present a semi-automatically generated dataset called FACTIFY-5WQA, which consists of \(391,041\) facts along with relevant 5W QAs - underscoring our major contribution to this paper. A semantic role labeling system has been utilized to locate 5Ws, which generates QA pairs for claims using a masked language model. Finally, we report a baseline QA system to automatically locate those answers from evidence documents, which can serve as a baseline for future research in the field. Lastly, we propose a robust fact verification system that takes paraphrased claims and automatically validates them. The dataset and the baseline model are available at [https://github.com/ankuranii/acl-5W-QA](https://github.com/ankuranii/acl-5W-QA)

## 1 Fact checking demands aspect-based explainability

Manual fact-checking is a time-consuming task. To assess the truthfulness of a claim, a journalist would need to search online, offline, or both, browsing through a multitude of sources while also accounting for the perceived reliability of each source. The final verdict can then be obtained via assimilation and/or comparison of the facts derived from said sources. This process can take professional fact-checkers several hours or days (Hassan et al., 2019), (Adair et al., 2017), depending on the inherent complexity of the claim. There are several contemporary practices that journalists use for the manual verification of a claim. These methods can be categorized into four broad categories (Posetti et al., 2018):

1. **Research and fact-checking**: This involves carefully researching the claim and verifying its accuracy using reliable and credible sources such as news services, academic studies, and government data.
2. **Interviews and expert opinions**: This involves speaking with experts in the relevant field and asking for their opinions on the claim to see if it is supported by evidence and expertise.
3. **Cross-checking with multiple sources**: This involves comparing the claim with information from multiple sources to see if it is consistent, or triangulating the facts obtained via multiple sources.
4. **Verifying the credibility of sources**: This involves checking the credibility of the sources used to support the claim, such as ensuring that they are reliable and unbiased.

Overall, these methods can help journalists to carefully verify claims and ensure that they are accurate and supported by evidence. However, this process is tedious and hence time-consuming.
A system that can generate relevant question-answer sets by dissecting a given verisimilitude claim into its constituent components could be a great catalyst in the fact-checking process. Research on automatic fact-checking has recently received intense attention (Yang et al., 2022a), (Park et al., 2021), (Atanasova et al., 2019), (Guo et al., 2022), (Trokhymovych and Saez-Trumper, 2021). Several datasets to evaluate automatic fact verification are also available, such as FEVER (Thorne et al., 2018), LIAR (Wang, 2017), PolitiFact (Garg and Sharma, 2020), FaVIQ (Kwiatkowski et al., 2019), HoVer (Jiang et al., 2020), X-Fact (Gupta and Srikumar, 2021), CREAK (Onoe et al., 2021), and FEVEROUS (Aly et al., 2021). Contemporary automatic fact-checking systems focus on estimating truthfulness using numerical scores, which are not human-interpretable (Nakov et al., 2021; Guo et al., 2021). Others extract explicit mentions of the candidate facts in the text as evidence for those facts, which can be hard to spot directly. Moreover, in the case of false information, it is commonplace that the whole claim isn't false: some parts of it are, while others could still be true. A claim is either opinion-based or knowledge-based (Kumar and Shah, 2018). For the same reason, the popular website PolitiFact, based on the work by (Garg and Sharma, 2020), categorizes fact-checking verdicts into forms such as half-true, half-false, etc.

Table 1: A top-level view of FACTIFY-5WQA: (i) entailment classes and their respective textual support, (ii) number of claims, (iii) number of paraphrased claims, (iv) 5W QA pairs, and (v) number of evidence documents.

| Entailment class | Textual support | No. of claims | No. of paraphrased claims | 5W QA pairs | No. of evidence documents |
| --- | --- | --- | --- | --- | --- |
| Support | Texts support each other (~ similar news) | 217,856 | 992,503 | 464,766 | 217,635 |
| Neutral | Texts neither support nor refute each other (~ may have common words) | 79,318 | 365,593 | 194,635 | 45,715 |
| Refute | Fake claim | 93,867 | 383,035 | 243,904 | 93,766 |
| Total | | 391,041 | 1,741,131 | 903,305 | 357,116 |

Table 2: An example of 5W QA-based explainability for the claim _"Moderna's lawsuits against Pfizer-BioNTech show COVID-19 vaccines were in the works before the pandemic started"_: the who-aspect (Moderna's lawsuits against Pfizer-BioNTech), the what-aspect (the lawsuit shows COVID-19 vaccines were in the works), and the when-aspect (the vaccines were in the works before the pandemic) each yield a question-answer pair, while the where- and why-aspects carry no claim.

We propose 5W (who, what, when, where, and why) aspect-based, question-answer-pairwise explainability. Including these 5W elements within a statement can provide crucial information regarding the entities and events being discussed, thus facilitating a better understanding of the text. For instance, in the statement _"Moderna's lawsuits against Pfizer-BioNTech show COVID-19 vaccines were in the works before the pandemic started,"_ the _who_ highlights the individuals or entities involved in the action of filing lawsuits, the _what_ pertains to the content of the lawsuit - specifically, the revelation that COVID-19 vaccines were in the works - and the _when_ refers to the timing of this revelation, i.e., before the pandemic. Overall, incorporating "who," "what," "when," "where," and "why" in a text provides crucial context and helps make the text clearer and more comprehensible.

Automatic question and answering (Q&A) systems can provide valuable support for claims by supplying evidence and supporting information. They can also help to identify potential flaws or weaknesses in a claim, allowing for further analysis and discussion. Only two recent works (Yang et al., 2022; Kwiatkowski et al., 2019) propose question answering as a proxy for fact-verification explanation, breaking automated fact-checking down into several steps and providing a more detailed analysis of the decision-making process. Question-answering-based fact explainability is indeed a very promising direction; however, open-ended QA for a fact can be hard to summarize. Therefore, we refine the QA-based explanation using the 5W framework (_who, what, when, where, and why_). Journalists follow an established practice for fact-checking: verifying the so-called 5Ws (Mott, 1942), (Stofer et al., 2009), (Silverman, 2020), (Su et al., 2019), (Smarts, 2017). This directs the verification search and, moreover, identifies missing content in the claim that bears on its validity. One consequence of journalistic practice is that claim rejection is not a matter of degree (_as conveyed by popular representations such as a number of Pinocchios or crows, or true, false, half true, half false, pants on fire_), but rather a specific, substantive explanation that recipients can themselves evaluate (Dobbs, 2012).

## 2 Data sources and compilation

Data collection was done by sorting 121 publicly available, prevalent fact verification datasets based on modalities (111), languages (83), and tasks (51). By filtering these 121 publicly available datasets, we found ten of them to be suitable for the text-based fact verification task. We considered only claims presented in textual format in English, because of which DanFEVER (Norregaard and Derczynski, 2021) and X-Fact (Gupta and Srikumar, 2021) were also excluded, as they are Danish and multilingual, respectively. We discovered that "Evidence-based Factual Error Correction" and FEVEROUS (Aly et al., 2021) were subsets of the FEVER dataset, so we decided to use FEVER (Thorne et al., 2018), HoVer (Jiang et al., 2020), VITC (Schuster et al., 2021), FaVIQ (Park et al., 2021), Factify 1.0 (Patwa et al., 2022), and Factify 2.0 (Mishra et al., 2022) for our analysis.
We verified that the claims in these datasets were unique, but found that 64 claims from VITC (Schuster et al., 2021) overlapped with those in FEVER (Thorne et al., 2018); these were counted only once, giving a total of \(391,041\) data points, with the distribution represented in Figure 2.

Figure 2: Distribution of the FACTIFY 5WQA fact verification dataset.

We only used a specific number of claims from each of the six datasets after manually inspecting quality aspects - length of the claim and evidence, grammatical correctness, etc. For the FEVER and VITC datasets, only the claims belonging to the train split were used. For Factify 1.0 and Factify 2.0, the multimodal part of the dataset was discarded and only the text-based part was used. FaVIQ has two sets: the _A set_, which consists of ambiguous questions and their disambiguations, and the _R set_, which is made from unambiguous question-answer pairs. As discussed in earlier paragraphs, the _A set_ is the more challenging one, so we took the _A set_ of FaVIQ for our dataset. In the case of the HoVer dataset, \(22,036\) claims were used. Altogether, we propose an amalgamated dataset with \(391,041\) unique claims; around \(\sim 85\%\) of them are from VITC, FEVER, and HoVer, and \(\sim 15\%\) from Factify 1.0, Factify 2.0, and FaVIQ, as evident from Figure 2. Figure 1 offers a snapshot of the topics in these datasets through a word cloud.

Figure 1: Word cloud offers a glance view of topic distributions over the chosen datasets: (i) VITC (Schuster et al., 2021), (ii) FEVER (Thorne et al., 2018), (iii) Factify 1.0 (Patwa et al., 2022), (iv) Factify 2.0 (Mishra et al., 2022), (v) HoVer (Jiang et al., 2020), and (vi) FaVIQ (Park et al., 2021). Darker color shades in the cloud represent a higher frequency of the particular word in the dataset.

## 3 Paraphrasing textual claims

The motivation behind paraphrasing textual claims is as follows: a given textual claim may appear in various textual forms in real life, owing to variations in the writing styles of different news publishing houses. Incorporating such variations is essential to developing a strong benchmark that ensures a holistic evaluation (see examples in Figure 3).

Figure 3: Claims and paraphrases obtained using text-davinci-003 (Brown et al., 2020).

Manual generation of possible paraphrases is undoubtedly ideal, but that process is time-consuming and labor-intensive. On the other hand, automatic paraphrasing has received significant attention in recent times (Niu et al., 2020), (Nicula et al., 2021), (Witteveen and Andrews, 2019), (Nighojkar and Licato, 2021). For a given claim, we generate multiple paraphrases using various SoTA models. In choosing the appropriate paraphrase model from the available options, the primary question we asked is how to make sure the generated paraphrases are rich in diversity while still being linguistically correct. We delineate the process as follows: say we have a claim \(c\); we generate \(n\) paraphrases using a paraphrasing model, which yields a set \(p_{1}^{c}, ..., p_{n}^{c}\). Next, we make pair-wise comparisons of these paraphrases with \(c\), resulting in \(c-p_{1}^{c}, ...,\) and \(c-p_{n}^{c}\). At this step, we identify the examples that are entailed, and only those are chosen. For the entailment task, we utilized RoBERTa-Large (Liu et al., 2019), a SoTA model trained on the SNLI task (Bowman et al., 2015). A minimal sketch of this generate-then-filter loop is given below.
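The sketch below substitutes publicly available checkpoints for the models used in the paper: a Pegasus paraphraser stands in for text-davinci-003 (whose API differs), and a RoBERTa-Large model fine-tuned on MNLI stands in for the SNLI-tuned entailment model. Both substitutions are assumptions for illustration.

```python
# Generate paraphrase candidates, keep only those entailed by the claim (sketch).
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

paraphraser = pipeline("text2text-generation", model="tuner007/pegasus_paraphrase")
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entailed(premise, hypothesis):
    enc = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    label_id = nli(**enc).logits.argmax(-1).item()
    return nli.config.id2label[label_id] == "ENTAILMENT"

claim = "COVID-19 vaccines were in the works before the pandemic started."
cands = {o["generated_text"]
         for o in paraphraser(claim, num_beams=5, num_return_sequences=5)}
kept = [p for p in cands if entailed(claim, p)]
print(kept)
```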
(2019) - a SoTA model trained on the SNLI task Bowman et al. (2015). However, there are many other secondary factors, for e.g., a model may only be able to generate a limited number of paraphrase variations compared to others, but others can be more correct and/or consistent. As such, we considered three major dimensions in our evaluation: _(i) a number of considerable paraphrase generations, (ii) correctness in those generations, and (iii) linguistic diversity in those generations_. We conducted experiments with three available models: (a) Pegasus Zhang et al. (2020), (b) T5 (T5-Large) Raffel et al. (2020), and (c) GPT-3 (text-davinci-003 variant) Brown et al. (2020). Based on empirical observations, we concluded that GPT-3 outperformed all the other models. To offer transparency around our experiment process, we detail the aforementioned evaluation dimensions as follows. **Coverage - a number of considerable paraphrase generations:** We intend to generate up to \(5\) paraphrases per given claim. Given all the generated claims, we perform a minimum edit distance (MED) Wagner and Fischer (1974) - units are words instead of alphabets). If MED is greater than \(\pm 2\) for any given paraphrase candidate (for e.g., \(c-p_{1}^{c}\)) with the claim, then we further consider that paraphrase, otherwise discarded. We evaluated all three models based on this setup that what model is generating the maximum number of considerable paraphrases. **Correctness - correctness in those generations:** After the first level of filtration we have performed pairwise entailment and kept only those paraphrase candidates, are marked as entailed by the Liu et al. (2019) (Roberta Large), SoTA trained on SNLI Bowman et al. (2015). **Diversity - linguistic diversity in those generations:** We were interested in choosing that model can produce linguistically more diverse paraphrases. Therefore we are interested in the dissimilarities check between generated paraphrase claims. For e.g., \(c-p_{n}^{c}\), \(p_{1}^{c}-p_{n}^{c}\), \(p_{2}^{c}-p_{n}^{c}\),..., \(p_{n-1}^{c}-p_{n}^{c}\) and repeat this process for all the other paraphrases and average out the dissimilarity score. There is no such metric to measure dissimilarity, therefore we use the inverse of the BLEU score Papineni et al. (2002). This gives us an understanding of how linguistic diversity is produced by a given model. Based on these experiments, we found that text-davinci-003 performed the best. The results of the experiment are reported in the following table. Furthermore, we were more interested to choose a model that can maximize the linguistic variations, and text-davinci-003 performs on this parameter of choice as well. A plot on diversity vs. all the chosen models is reported in Figure 4. ## 4 5W Semantic Role Labelling Identification of the functional semantic roles played by various words or phrases in a given sentence is known as semantic role labelling (SRL). SRL is a well-explored area within the NLP community. There are quite a few off-the-shelf tools available: (i) Stanford SRL Manning et al. (2014), (ii) AllenNLP AllenNLP (2020), etc. A typical SRL system first identifies verbs in a given sentence and then marks all the related words/phrases haven relational projection with the verb and assigns appropriate roles. Figure 4: A higher diversity score depicts an increase in the number of generated paraphrases and linguistic variations in those generated paraphrases. 
## 4 5W Semantic Role Labelling

Identification of the functional semantic roles played by various words or phrases in a given sentence is known as semantic role labelling (SRL). SRL is a well-explored area within the NLP community, with quite a few off-the-shelf tools available, e.g., (i) Stanford SRL (Manning et al., 2014) and (ii) AllenNLP (AllenNLP, 2020). A typical SRL system first identifies the verbs in a given sentence, then marks all the words/phrases having a relational projection with each verb and assigns them appropriate roles, generally marked by the standard roles defined by the Proposition Bank (generally referred to as PropBank) (Palmer et al., 2005), such as _Arg0, Arg1, Arg2,_ and so on. We propose a mapping mechanism to map these PropBank arguments to 5W semantic roles (see the conversion in Table 4).

SRL helps to determine the meaning of a sentence by revealing the relationships between the entities in it. For example, in the sentence _"Moderna's lawsuits against Pfizer-BioNTech show COVID-19 vaccines were in the works before the pandemic started,"_ _Moderna_ would be labeled as the _agent_ and _Pfizer-BioNTech_ as the _patient_. The five "W"s (what, when, where, why, who) are often used to refer to the key questions that need to be answered in order to fully understand a sentence or piece of text, and SRL can be seen as a way of providing answers to these questions by identifying the roles that words and phrases play within a sentence - for example, the subject (who or what the sentence is about), the object (who or what is being acted upon), and the verb (the action being performed). In this way, semantic role labeling provides the necessary context for answering the five "W"s and is an important tool in natural language processing and understanding.

In this study, we use the mapping displayed in Table 4 and replace the roles assigned with respect to each verb in the SRL output with the 5Ws. As is evident from Table 4, each of the 5Ws can be mapped to semantic roles; the highest mapping percentage is taken into consideration, as concluded in the table. After the mapping is done, a detailed analysis of the presence of each of the 5Ws is conducted, summarized in Figure 6.

Table 4: A mapping table from PropBank (Palmer et al., 2005) roles (_Arg0, Arg1, ..._) to the 5Ws (_who, what, when, where, and why_).

| PropBank Role | Who | What | When | Where | Why | How |
| --- | --- | --- | --- | --- | --- | --- |
| ARG0 | **84.48** | 0.00 | 3.33 | 0.00 | 0.00 | 0.00 |
| ARG1 | 10.34 | **53.85** | 0.00 | 0.00 | 0.00 | 0.00 |
| ARG2 | 0.00 | 9.89 | 0.00 | 0.00 | 0.00 | 0.00 |
| ARG3 | 0.00 | 0.00 | 0.00 | 22.86 | 0.00 | 0.00 |
| ARG4 | 0.00 | 3.29 | 0.00 | 34.29 | 0.00 | 0.00 |
| ARGM-TMP | 0.00 | 1.09 | **0.00** | 0.00 | 0.00 | 0.00 |
| ARGM-LOC | 0.00 | 1.09 | 1.00 | **25.71** | 0.00 | 0.00 |
| ARGM-CAU | 0.00 | 0.00 | 0.00 | 0.00 | **100.00** | 0.00 |
| ARGM-ADV | 0.00 | 4.39 | 2.00 | 0.00 | 0.00 | 0.06 |
| ARGM-MNR | 0.00 | 3.85 | 0.00 | 8.57 | 0.00 | **99.91** |
| ARGM-MOD | 0.00 | 4.39 | 0.00 | 0.00 | 0.00 | 0.00 |
| ARGM-DIR | 0.00 | 0.01 | 0.00 | 5.71 | 0.00 | 3.03 |
| ARGM-DIS | 0.00 | 1.65 | 0.00 | 0.00 | 0.00 | 0.00 |
| ARGM-NEG | 0.00 | 1.09 | 0.00 | 0.00 | 0.00 | 0.00 |

Figure 6: Percentage of W's present across the dataset.

In this study, experimentation for finding semantic roles was conducted using the **AllenNLP SRL** demo (AllenNLP, 2020). Developed by Shi and Lin (2019), it is a BERT-based (Devlin et al., 2018) model with a modification that introduces a linear classification layer with no additional parameters, and it is currently the best single model for English PropBank SRL on newswire sentences, with a test F1 of 86.49 on the OntoNotes 5.0 dataset (Palmer et al., 2005). Newswire instances correlate well with the fact verification setting, as true news is also fact. As indicated in Figure 5, the pipeline for generating 5W aspect-based semantic role labels is to pass a claim through an SRL model and map the output onto the 5Ws; an example of a claim with the output of AllenNLP's SRL model is shown in Figure 5.

Figure 5: Examples of the 5W semantic role labels.

A minimal sketch of this SRL-to-5W step is given below.
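The sketch below uses the public AllenNLP BERT-SRL checkpoint; the condensed `ROLE_TO_W` dictionary is an assumed simplification of the dominant mappings in Table 4, not the full conversion.

```python
# Run SRL on a claim and collect the spans that fill each W-role (sketch).
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "structured-prediction-srl-bert.2020.12.15.tar.gz")

ROLE_TO_W = {"ARG0": "who", "ARG1": "what", "ARGM-TMP": "when",
             "ARGM-LOC": "where", "ARGM-CAU": "why"}

claim = ("Moderna's lawsuits against Pfizer-BioNTech show COVID-19 "
         "vaccines were in the works before the pandemic started.")

result = predictor.predict(sentence=claim)
for verb in result["verbs"]:
    spans = {}
    for word, tag in zip(result["words"], verb["tags"]):  # BIO tags, e.g. B-ARG0
        role = tag.split("-", 1)[-1] if tag != "O" else None
        if role in ROLE_TO_W:
            spans.setdefault(ROLE_TO_W[role], []).append(word)
    print(verb["verb"], {w: " ".join(ws) for w, ws in spans.items()})
```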
### Human Evaluation of the 5W SRL

In this work, the evaluation of the 5W aspect-based semantic role labeling is conducted using _mapping accuracy_, i.e., the accuracy of the SRL output mapped onto the 5Ws. To determine how good the mapping of the 5Ws onto semantic roles (and the generation of the semantic roles themselves) is, human annotation of \(3,000\) data points was conducted: \(500\) random data points each from FEVER, FaVIQ, HoVer, VITC, Factify 1.0, and Factify 2.0 were annotated, and the results are described in Table 6.

Table 6: Human evaluation of 5W SRL; % represents human agreement on the 5W mapping of the SRL output.

| | FaVIQ | FEVER | HoVer | VitaminC | Factify 1.0 | Factify 2.0 |
| --- | --- | --- | --- | --- | --- | --- |
| Who | 89% | 85% | 90% | 87% | 86% | 82% |
| What | 85% | 50% | 68% | 78% | 81% | 93% |
| When | 86% | 90% | 98% | 98% | 83% | 75% |
| Where | 93% | 100% | 90% | 97% | 93% | 86% |
| Why | 90% | - | 100% | 92% | 87% | 93% |

## 5 5W aspect-based QA pair generation

A false claim is very likely to have some truth in it, some correct information. In fact, most fake news
Manual fact-checking can be labor-intensive, consuming several hours or days (Hassan et al., 2015; Adair et al., 2017). For the 5W question generation task, we experimented with two models: (i) BART (Lewis et al., 2019) and (ii) ProphetNet (Qi et al., 2020), and found that ProphetNet outperforms the former. ProphetNet (Qi et al., 2020) is a generative model that uses multi-lingual pre-training with masked span generation. It is optimized through _n-step_ ahead prediction, which predicts the next \(n\) tokens based on previous context tokens at each time step, encouraging the model to explicitly plan for future tokens. In this work, we employed a context-based question generation approach to generate relevant and specific questions for the task of fact verification. This approach utilizes the claim information to ensure that the generated questions are appropriate for fact-checking.

### Human evaluation of QA generation

For evaluation purposes, a random sample of \(3000\) data points was selected for annotation. The questions generated using the ProphetNet model were utilized for this purpose. The annotators were instructed to evaluate the question-answer pairs along three dimensions: the question is well formed, meaning it is syntactically correct; the question is correct, meaning it is semantically correct with respect to the given claim; and the answer extracted by the model is correct. The evaluation results for the datasets are presented in the following analysis.

## 6 The 5W QA validation system

Finally, we propose a QA validation system, in which the questions generated by the QG system and the evidence are passed through SoTA question answering models (T5-3B [14], T5-Large [14], BERT-Large [15]), as demonstrated in Figure 9. This helps to determine whether the evidence supports or refutes the claim, or whether the system lacks enough information to reach a conclusion. Figure 9 shows two example claims for which answers are generated based on the evidence: the questions are generated using ProphetNet, and the answers are produced from the claims' evidence using the T5-3B model, following the framework described in Figure 10. To design the 5W QA validation system, we utilized the claims, evidence documents, and 5W questions generated by the question generation system as input. The answer generated by the 5W QG model is treated as the gold standard for the comparison between claim and evidence. We experimented with three models: T5-3B [14], T5-Large [14], and BERT-Large [15]. T5 is an encoder-decoder language model that treats this task as text-to-text conversion, taking multiple input sequences and producing text output. The model is pre-trained on the C4 corpus [14] and fine-tuned on a variety of tasks. T5-Large employs the same encoder-decoder architecture as T5-3B [14], but with a reduced number of parameters. The third model that we experimented with is BERT-Large [15], which utilizes masked language modelling for pre-training, enabling it to handle various downstream tasks.
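For illustration, a minimal sketch of the validation step is given below: each 5W question is answered once from the claim and once from the evidence, and the two answers are compared. The `t5-large` checkpoint (assumed here to stand in for a QA-fine-tuned T5), the `question: ... context: ...` prompt format, and the 0.8 similarity threshold are all simplifying assumptions rather than the exact setup described above.

```python
# A minimal sketch of 5W QA validation: answer each question from the claim
# and from the evidence, then compare the answers. The checkpoint, prompt
# format, and 0.8 similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL = "t5-large"  # assumed to be fine-tuned for extractive QA in practice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

def answer(question: str, context: str) -> str:
    inputs = tokenizer(f"question: {question} context: {context}",
                       return_tensors="pt", truncation=True)
    ids = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

def verdict(question: str, claim: str, evidence: str) -> str:
    """Support/refute a single 5W aspect by comparing claim and evidence answers."""
    gold = answer(question, claim)       # answer implied by the claim itself
    found = answer(question, evidence)   # answer recovered from the evidence
    if not found.strip():
        return "NOT ENOUGH INFO"
    similarity = SequenceMatcher(None, gold.lower(), found.lower()).ratio()
    return "SUPPORTED" if similarity >= 0.8 else "REFUTED"

claim = ("Moderna's lawsuits against Pfizer-BioNTech show COVID-19 vaccines "
         "were in the works before the pandemic started.")
evidence = "Moderna filed patent-infringement lawsuits against Pfizer and BioNTech in 2022."
print(verdict("Who filed lawsuits against whom?", claim, evidence))
```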
## 7 Selecting the best combination - 5W QAG vs. 5W QA validation

We have utilised off-the-shelf models both for 5W question-answer generation and 5W question-answer validation. Given that the datasets used for training these models bear an obvious discrepancy in their distribution characteristics compared to our data (world news), which would probably lead to a generalization gap, it was essential to experimentally judge which system offered the best performance for our use case. Instead of choosing the best system for generation and validation separately, we opted for pair-wise validation to ensure we chose the best combination. Table 5 details our evaluation results: the rows denote the QA models, while the columns denote the QAG models. From the results in the table, we can see that the best combination in terms of QAG and QA validation models was identified as T5-3B and ProphetNet, respectively.

## 8 Conclusion and future avenues

It has been realized by the community that, given the complexity of fact-checking, it possibly cannot be automated completely; human-in-the-loop is the solution. The proposed 5W QA-based fact verification can be the best aid for human fact-checkers. To the best of our knowledge, we are the first to introduce 5W QA-based fact verification, and we additionally propose techniques to generate the QA pairs automatically, so that they can be readily produced for any incoming claim on the spot. Furthermore, the QA validation component can aid in providing evidence support, and paraphrasing claims provides a holistic approach to fact-checking. The generated datasets and resources, containing 3.91 million claims, will be made public for research purposes.

Figure 10: T5-based question answering framework.

Figure 9: Examples of QA pairs generated from evidence by the QA system.

## 9 Discussion and limitations

In this section, we self-criticize a few aspects that could be improved and detail how we tentatively plan to improve upon those specific aspects.

### Paraphrasing claims

Manual generation of possible paraphrases is undoubtedly ideal but is time-consuming and labor-intensive. Automatic paraphrasing is a good way to scale quickly, but there can be more complex variations of meaning-preserving paraphrases that are hard to generate automatically. For example - "_It's all about business - a patent infringement case against Pfizer by a rival corporate reveals they knew about COVID in one way!_" and "_Oh my god COVID is not enough now we have to deal with HIV blood in the name of charity!_". An ideal remedy for this shortcoming would be to manually generate a few thousand paraphrase samples and then fine-tune language models on them. On the other hand, a new paradigm, in-context learning, is gaining momentum (Xun et al., 2017). In-context learning has proven remarkably effective in adapting a language model to new tasks through just a few demonstration examples, without performing gradient descent. Quite a few recent studies demonstrate new abilities of language models that learn from a handful of examples in the context (in-context learning, ICL for short). Many studies have shown that LLMs can perform a series of complex tasks with ICL, such as solving mathematical reasoning problems (Wei et al., 2022). These strong abilities have been widely verified as emerging abilities of large language models (Wei et al., 2022). From prompt engineering to chain of thought, we are excited to run more experiments with the new paradigm of in-context learning for automatically paraphrasing claims.

### 5W SRL

Semantic role labeling is a well-studied sub-discipline, and the mapping mechanism we proposed works well in most cases, except in elliptic situations such as anaphora and cataphora.
In the future, we would like to explore how anaphora and coreference resolution (Joshi et al., 2019) could aid an improvement.

### 5W QA pair generation

5W semantic-role-based question generation is one of the major contributions of this paper. While automatic generation aided in scaling up the QA pair generation, it also comes with the limitation of not covering more complex questions that span multiple Ws, as well as _how_-type questions; for example, "_How Moderna is going to get benefited if this Pfizer COVID news turns out to be a rumor?_". To improve the FACTIFY benchmark, we would like to manually generate a few thousand abstract QA pairs and then proceed towards in-context learning (Xun et al., 2017). Abstractive question answering has gained momentum recently (Zhao et al., 2022; Pal et al., 2022). We want to explore how we can generate more abstract QA pairs for the multimodal fact-verification task.

### QA system for the 5W questions

The reported performance measures attest that the proposed QA model needs considerable improvement. This is due to the complexity of the problem, and we believe it will attract future researchers to try this benchmark and conduct research on multimodal fact verification. It has been realized by the community that relevant document retrieval is the major bottleneck for fact verification. Recent work introduced a fresh perspective on the problem, named Hypothetical Document Embeddings (HyDE) (Gao et al., 2022), and applied a clever trick: even a hypothetical, possibly wrong, answer tends to be more semantically similar to the correct answer than the question is. This could be an interesting direction to explore and examine how it could aid in retrieving relevant documents and answers.
2302.09872
A novel dual-decomposition method for non-convex mixed integer quadratically constrained quadratic problems
We propose the novel p-branch-and-bound method for solving two-stage stochastic programming problems whose deterministic equivalents are represented by non-convex mixed-integer quadratically constrained quadratic programming (MIQCQP) models. The precision of the solution generated by the p-branch-and-bound method can be arbitrarily adjusted by altering the value of the precision factor p. The proposed method combines two key techniques. The first one, named p-Lagrangian decomposition, generates a mixed-integer relaxation of a dual problem with a separable structure for a primal non-convex MIQCQP problem. The second one is a version of the classical dual decomposition approach that is applied to solve the Lagrangian dual problem and ensures that integrality and non-anticipativity conditions are met once the optimal solution is obtained. This paper also presents a comparative analysis of the p-branch-and-bound method efficiency considering two alternative solution methods for the dual problems as a subroutine. These are the proximal bundle method and Frank-Wolfe progressive hedging. The latter algorithm relies on the interpolation of linearisation steps similar to those taken in the Frank-Wolfe method as an inner loop in the classic progressive hedging. The p-branch-and-bound method's efficiency was tested on randomly generated instances and demonstrated superior performance over commercial solver Gurobi.
Nikita Belyak, Fabricio Oliveira
2023-02-20T10:03:13Z
http://arxiv.org/abs/2302.09872v4
# A novel dual-decomposition method based on \(p\)-Lagrangian relaxation ###### Abstract In this paper, we propose the novel \(p\)-branch-and-bound method for solving two-stage stochastic programming problems whose deterministic equivalents are represented by mixed-integer quadratically constrained quadratic programming (MIQCQP) models. The precision of the solution generated by the \(p\)-branch-and-bound method can be arbitrarily adjusted by altering the value of the precision factor \(p\). The proposed method combines two key techniques. The first one, named \(p\)-Lagrangian decomposition, generates a mixed-integer relaxation of a dual problem with a separable structure for a primal MIQCQP problem. The second one is a version of the classical dual decomposition approach that is applied to solve the Lagrangian dual problem and ensures that integrality and non-anticipativity conditions are met in the optimal solution. The \(p\)-branch-and-bound method's efficiency has been tested on randomly generated instances and demonstrated superior performance over the commercial solver Gurobi. This paper also presents a comparative analysis of the \(p\)-branch-and-bound method's efficiency considering two alternative solution methods for the dual problems as a subroutine. These are the proximal bundle method and Frank-Wolfe progressive hedging. The latter algorithm relies on the interpolation of linearisation steps similar to those taken in the Frank-Wolfe method as an inner loop in the classic progressive hedging. two-stage stochastic programming, normalized multiparametric disaggregation, Lagrangian relaxation, branch-and-bound ## 1 Introduction Presently, the vast majority of engineering sectors utilise mathematical optimisation as a modelling framework to represent the behaviour of various processes. Areas such as electrical and process engineering are arguably the most prominent adopters of mathematical optimisation techniques to improve their operational performance. For instance, [4] emphasises the efficiency of mathematical optimisation as an approach for designing energy systems. The author highlights the superior performance of mathematical optimisation in terms of comprehensiveness and the ability to explicitly determine the topology of energy systems compared to its alternatives, e.g., heuristic and insight-based approaches. [23] discuss the importance of mathematical optimisation in chemical and petrochemical operations, enabling, in some cases, up to 30% energy savings. Mathematical optimisation approaches closely rely on solving to optimality a mathematical optimisation problem - the set of mathematical relationships representing the real-world problem [26]. [26] classifies mathematical optimisation problems into four groups: linear programming problems, mixed-integer linear programming (MILP) problems, nonlinear programming problems, and mixed-integer nonlinear programming (MINLP) problems. Mixed-integer problems involve decision variables that can have both continuous and discrete domains. The linearity or nonlinearity of the problem refers to the type of constraints and objective function. [31] highlight that MINLP problems are particularly challenging due to the difficulties arising from optimising over integer variables combined with the presence of nonlinear functions. At the same time, both mixed-integer linear programming and nonlinear programming problems are known to be NP-hard [21; 35]. Nevertheless, the range of applications of MINLP is noticeably diverse [31].
It includes modelling block layout design problems with unequal areas [15], the structural flow sheet problem [29], and finding the optimal design of water distribution networks [11], to mention only a few relevant applications. In this paper, we focus on a subclass of MINLP problems that represent deterministic equivalents of two-stage stochastic mixed-integer programming (2SSMIP) problems. Such problems involve two decision-variable sets that are separated by an intermediate probabilistic event. These two distinct decision-variable sets represent decisions made at different stages, i.e., before and after the intermediate probabilistic event has occurred and its outcome is acknowledged. The modelling of probabilistic events for the most part involves the consideration of mutually exclusive and exhaustive alternatives (scenarios) and the definition of the probabilities associated with them [12]. Despite their vast applicability, 2SSMIP problems pose serious conceptual and computational challenges [30]. For instance, in [33], the authors exploited a multi-step mixed-integer nonlinear programming problem to optimise the recovery process for network and load during power system restoration. In [41], a 2SSMIP model has been used to formulate a container slot allocation problem for a liner shipping service. The authors used the sample-average approximation to approximate the expected value function, rendering a nonlinear integer programming model. Our interest resides in 2SSMIP problems whose deterministic equivalent representations are non-convex mixed-integer quadratically constrained quadratic programming (MIQCQP) models. These models arise in several practically relevant applications, such as the pooling problem, which is an MIQCQP under the assumption of linearly blending qualities [34], and the equivalent single-level reformulation of some bi-level optimisation problems [19; 40]. Examples of pooling problem applications include the design of water networks [25], the modelling of refinery processes [3], and transportation systems [13], to name some relevant examples. The formulation of a general 2SSMIP is \[z^{\text{SMIP}}=\min_{x}\left\{c^{\top}x+\mathcal{Q}(x):x\in X\right\}, \tag{1}\] where the vector \(c\in\mathbb{R}^{n_{x}}\) is known and \(X\) is a mixed-integer linear set consisting of linear constraints and integrality restrictions on some components of \(x\). The recourse function \(\mathcal{Q}:\mathbb{R}^{n_{x}}\mapsto\mathbb{R}\) is the expected recourse value function \[\mathcal{Q}(x)=\mathbb{E}\left[\min_{y}\left\{f(y,\xi):g(x,y,\xi)=0,y\in Y( \xi)\right\}\right], \tag{2}\] where, for any realisation of the random variable \(\xi\), \(f:\mathbb{R}^{n_{y}}\mapsto\mathbb{R}\) is defined as \[f(y,\xi)=q(\xi)^{\top}y+\sum_{(i,j)\in B_{Q}}Q(\xi)_{i,j}y_{i}y_{j},\] \(g=[g_{1},\ldots,g_{|M|}]^{\top}\), where \(g_{m}:\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{y}}\mapsto\mathbb{R},\forall m\in\{1,\ldots, |M|\}=M\), is defined as \[g_{m}(x,y,\xi)=T(\xi)_{m}x+W(\xi)_{m}y+\sum_{(i,j)\in B_{U}}U(\xi)_{m,i,j}y_{ i}y_{j}-h(\xi)_{m},\] and \(B_{Q}\) (\(B_{U}\)) comprises the index pairs \((i,j)\) for which the entry \(Q_{i,j}>0\) (\(U_{i,j}>0\)), implying the presence of the bi-linear terms \(y_{i}y_{j}\); \(Y(\xi)\) is a mixed-integer set containing both linear constraints and integrality requirements on some of the variables \(y(\xi)\); and \(\mathbb{E}\left[\,\cdot\,\right]\) denotes the expectation with respect to the random variable \(\xi\).
As is standard practice in the stochastic programming literature, we assume that the random variable \(\xi\) is represented by a finite set \(S\) of realisations \(\xi_{1},\ldots,\xi_{|S|}\), each with an associated probability value \(p_{1},\ldots,p_{|S|}\). In particular, each realisation \(\xi_{s}\) of \(\xi\) encodes the realisation observed for each of the random elements \((q(\xi_{s}),Q(\xi_{s}))\) and \((T(\xi_{s})_{m},W(\xi_{s})_{m},U(\xi_{s})_{m},h(\xi_{s})_{m})\), \(\forall m\in M\). For the sake of notational compactness, we refer to these collections as \((q^{s},Q^{s})\) and \((T^{s}_{m},W^{s}_{m},U^{s}_{m},h^{s}_{m})\), \(\forall m\in M\), respectively. Problem (1) can then be posed as the deterministic equivalent \[z^{\rm{SMIP}}=\min_{x,y}\ \ c^{\top}x+\sum_{s\in S}p^{s}(q^{s\top}y^{s}+\sum_{(i, j)\in B_{Q}}Q_{i,j}^{s}y_{i}^{s}y_{j}^{s}) \tag{3}\] subject to: \(x\in X\) \[\begin{split}& T_{m}^{s}x+W_{m}^{s}y^{s}+\sum_{(i,j)\in B_{U}}U_{m,i,j} ^{s}y_{i}^{s}y_{j}^{s}=h_{m}^{s},\ \forall m\in M,\forall s\in S\\ & y^{s}\in Y^{s},\ \forall s\in S.\end{split} \tag{4}\] Due to the challenging nature of MIQCQP problems, open-source and commercial global solvers, such as Gurobi [24], Couenne [8], or Baron [39], still struggle to meet performance requirements for large-scale instances. Several solution approaches have been developed for MIQCQP problems, which can be categorised into three groups. The first one involves the approximation of problem (4) with a continuous or mixed-integer relaxation [5; 18]. Another group is formed by those employing variants of the branch-and-bound (BnB) method. In particular, for non-convex problems, spatial BnB is typically used, which involves the convexification of non-convex terms as a sub-routine [9; 16; 19]. The last group involves methods relying on the introduction of non-anticipativity conditions (NAC) and the decomposition of the problem into more tractable sub-problems. The block-angular structure of problem (4) allows for the construction of an almost-decomposable equivalent by making the NAC of the first-stage variables \(x\) explicit. The reformulated deterministic equivalent model (RDEM) with an almost-separable structure can be represented as \[\begin{split} z^{\rm{SMIP}}=\min_{x,y}&\ \sum_{s\in S}p^{s}(c^{\top}x^{s}+q^{s\top}y^{s}+\sum_{(i,j)\in B_{Q}}Q_{i,j}^{s}y _{i}^{s}y_{j}^{s})\\ \text{s.t.:}& y^{s}\in Y^{s},\ \forall s\in S,\\ & x^{s}\in X,\ \forall s\in S\\ & T_{m}^{s}x^{s}+W_{m}^{s}y^{s}+\sum_{(i,j)\in B_{U}}U_{m,i,j}^{ s}y_{i}^{s}y_{j}^{s}=h_{m}^{s},\ \forall m\in M,\ \forall s\in S\\ & x^{s}-\overline{x}=0,\ \forall s\in S,\end{split} \tag{5}\] where the constraint \(x^{s}-\overline{x}=0,\ \forall s\in S\), enforces non-anticipativity of the first-stage decisions. The RDEM problem (5) could be fully decomposed into \(|S|\) MIQCQP problems if one could remove the set of linear constraints \(x^{s}-\overline{x}=0,\ \forall s\in S\), which relates variables from distinct sub-problems - a structure commonly known as complicating constraints. To tackle problem (5), [6] developed an algorithm named \(p\)-Lagrangian decomposition. The \(p\)-Lagrangian decomposition method involves exploiting Lagrangian relaxation to decompose the primal problem (5) into \(|S|\) independent sub-problems and employing the reformulated normalised multiparametric disaggregation technique (RNMDT) [5] to construct mixed-integer-based relaxations.
As a subroutine, the algorithm employs a dynamic precision-based method developed in [5], ensuring the tightening of the relaxation bounds, and a bundle method approach for updating the dual multipliers. Additionally, the decomposable structure of the Lagrangian dual problem is amenable to parallelisation, which can significantly enhance computational performance. As suggested by the numerical results in [6], the \(p\)-Lagrangian decomposition demonstrated superior performance compared to the commercial solver Gurobi [24]. Nevertheless, the \(p\)-Lagrangian decomposition algorithm has an important shortcoming related to the duality gap arising from the mixed-integer nature of the primal problem combined with the imprecision of the RNMDT relaxation. Our main contribution is a solution method for problem (4) that, for the first time, incorporates \(p\)-Lagrangian relaxation within a duality-based branch-and-bound method, named \(p\)-branch-and-bound (\(p\)-BnB) and inspired by the decomposition method for two-stage stochastic integer programs proposed in [14]. The technically challenging synchronisation of these two methods relies on repeatedly solving the \(p\)-Lagrangian relaxation of problem (4) within \(p\)-BnB and iteratively restricting the feasible region via the branch-and-bound framework whenever the solution of the \(p\)-Lagrangian relaxation violates integrality or non-anticipativity conditions. Consequently, \(p\)-BnB provides an upper bound for problem (4) that can be made arbitrarily precise, relative to the value of the Lagrangian relaxation bound, by decreasing the value of the precision factor \(p\). We also evaluate the numerical efficiency of \(p\)-BnB on randomly generated instances for two different solution methods for the \(p\)-Lagrangian relaxation. The first one is the Frank-Wolfe Progressive Hedging (FWPH) method, originally presented in [10]. FWPH is an enhancement of the classic progressive hedging method [38] with convergence guarantees to the optimal dual value of the \(p\)-Lagrangian relaxation. The other solution method for the dual problems tested in \(p\)-BnB is the proximal bundle method [27; 36]. The proximal bundle method relies on the classic bundle method [32] but involves a proximal term restricting the space of candidate solutions [22]. The first step of the proposed \(p\)-BnB method involves the construction of the mixed-integer relaxation of the primal RDEM problem by employing the RNMDT technique described in Section 2.1. Next, we apply a Lagrangian duality-based branch-and-bound method reviewed in Section 3. To solve the sub-problems within the branch-and-bound search, we consider the FWPH method discussed in Section 2.3.2 and the proximal bundle method presented in Section 2.3.1. It is worth mentioning that this is the first time the efficiency of the FWPH method is assessed within a Lagrangian duality-based branch-and-bound framework. The proposed method was tested on randomly generated instances, and the results of the numerical experiments are presented in Section 4. Finally, in Section 5, we provide conclusions and directions for further research. ## 2 Technical background In what follows, we present the technical elements that form our proposed method. In essence, \(p\)-BnB is formed by the combination of three main techniques, namely \(p\)-Lagrangian decomposition, of which RNMDT is a key concept, solution methods for dual Lagrangian problems, and a branch-and-bound coordination algorithm.
### Reformulated normalized multiparametric disaggregation technique (RNMDT)

The RNMDT relaxation of the primal RDEM problem can be constructed by employing RNMDT to discretise the second-stage variables \(y_{j}^{s}\) in the primal RDEM. Therefore, for a fixed value of the precision factor \(p\), the mixed-integer relaxation RNMDT\({}_{p}\) can be stated as

\[z^{\text{RNMDT}}=\min_{x,\overline{x},y,w}\ \sum_{s\in S}p^{s}\Big{(}c^{\top}x^{s}+q^{s\top}y^{s}+\sum_{(i,j)\in B_{Q}}Q_{i,j}^{s}w_{i,j}^{s}\Big{)} \tag{5}\]

subject to:

\[y^{s}\in Y^{s},\ x^{s}\in X,\ \forall s\in S\]
\[T_{m}^{s}x^{s}+W_{m}^{s}y^{s}+\sum_{(i,j)\in B_{U}}U_{m,i,j}^{s}w_{i,j}^{s}=h_{m}^{s},\ \forall m\in M,\ \forall s\in S\]
\[y_{j}^{s}=(N_{j}^{U,s}-N_{j}^{L,s})\Big{(}\sum_{l\in P}2^{l}z_{j,l}^{s}+\Delta y_{j}^{s}\Big{)}+N_{j}^{L,s},\ \forall j\in\{j\mid(i,j)\in B_{Q}\cup B_{U}\},\ \forall s\in S\]
\[0\leq\Delta y_{j}^{s}\leq 2^{p},\ \forall j\in\{j\mid(i,j)\in B_{Q}\cup B_{U}\},\ \forall s\in S\]
\[w_{i,j}^{s}=(N_{j}^{U,s}-N_{j}^{L,s})\Big{(}\sum_{l\in P}2^{l}\hat{y}_{i,j,l}^{s}+\Delta w_{i,j}^{s}\Big{)}+y_{i}^{s}N_{j}^{L,s},\ \forall(i,j)\in B_{Q}\cup B_{U},\ \forall s\in S\]
\[2^{p}(y_{i}^{s}-N_{i}^{U,s})+N_{i}^{U,s}\Delta y_{j}^{s}\leq\Delta w_{i,j}^{s}\leq 2^{p}(y_{i}^{s}-N_{i}^{L,s})+N_{i}^{L,s}\Delta y_{j}^{s},\ \forall(i,j)\in B_{Q}\cup B_{U},\ \forall s\in S\]
\[N_{i}^{L,s}\Delta y_{j}^{s}\leq\Delta w_{i,j}^{s}\leq N_{i}^{U,s}\Delta y_{j}^{s},\ \forall(i,j)\in B_{Q}\cup B_{U},\ \forall s\in S\]
\[N_{i}^{L,s}z_{j,l}^{s}\leq\hat{y}_{i,j,l}^{s}\leq N_{i}^{U,s}z_{j,l}^{s},\ \forall(i,j)\in B_{Q}\cup B_{U},\ \forall l\in P,\ \forall s\in S\]
\[N_{i}^{L,s}(1-z_{j,l}^{s})\leq y_{i}^{s}-\hat{y}_{i,j,l}^{s}\leq N_{i}^{U,s}(1-z_{j,l}^{s}),\ \forall(i,j)\in B_{Q}\cup B_{U},\ \forall l\in P,\ \forall s\in S\]
\[z_{j,l}^{s}\in\{0,1\},\ \forall j\in\{j\mid(i,j)\in B_{Q}\cup B_{U}\},\ \forall l\in P,\ \forall s\in S\]
\[x^{s}-\overline{x}=0,\ \forall s\in S,\]

where \(P=\{p,p+1,\ldots,-1\}\) is the set of discretisation levels, \(N_{j}^{L,s}\) and \(N_{j}^{U,s}\) are the lower and upper bounds of \(y_{j}^{s}\), and the auxiliary variables \(w_{i,j}^{s}\) take the role of the bi-linear terms \(y_{i}^{s}y_{j}^{s}\).
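As a quick numerical illustration of the discretisation underlying RNMDT (a toy sketch, assuming known variable bounds and \(P=\{p,\ldots,-1\}\)), the snippet below decomposes a value of \(y_{j}^{s}\) into binary digits \(z_{j,l}^{s}\) and a residual \(\Delta y_{j}^{s}\), and verifies that \(0\leq\Delta y_{j}^{s}\leq 2^{p}\) and that the original value is recovered.

```python
# A toy check of the RNMDT discretisation y = (N_U - N_L) * (sum_l 2^l z_l + dy) + N_L,
# with P = {p, ..., -1} and 0 <= dy <= 2**p (an illustrative sketch, not solver code).
def discretise(y: float, n_lo: float, n_up: float, p: int):
    scaled = (y - n_lo) / (n_up - n_lo)       # normalise y to [0, 1]
    z, remainder = {}, scaled
    for l in range(-1, p - 1, -1):            # levels -1, -2, ..., p
        z[l] = 1 if remainder >= 2.0 ** l else 0
        remainder -= z[l] * 2.0 ** l
    return z, remainder                       # the remainder plays the role of dy

z, dy = discretise(y=7.3, n_lo=0.0, n_up=10.0, p=-3)
assert 0.0 <= dy <= 2.0 ** -3
recovered = (10.0 - 0.0) * (sum(2.0 ** l * zl for l, zl in z.items()) + dy) + 0.0
print(z, dy, recovered)  # recovered equals the original 7.3 up to rounding
```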
### \(p\)-Lagrangian relaxation

Let us consider the RNMDT\({}_{p}\) problem (5) defined in Section 2.1, where the precision factor \(p\) is fixed to some negative integer value. The \(p\)-Lagrangian decomposition of RNMDT\({}_{p}\) can be obtained by applying Lagrangian relaxation to relax the NAC \[x^{s}-\overline{x}=0,\ \forall s\in S.\] Let \(\lambda=(\lambda^{1},\ldots,\lambda^{|S|})\in\mathbb{R}^{n_{x}\times|S|}\) be the vector of dual multipliers associated with the relaxed NAC. By setting \(\mu^{s}=\frac{1}{p^{s}}\lambda^{s}\), \(\forall s\in S\), the \(p\)-Lagrangian dual function can be defined as \[L_{p}(\mu)=\left\{\begin{aligned} &\min_{x,\overline{x},y,w}\sum_{s \in S}\ p^{s}\big{(}c^{\top}x^{s}+{q^{s}}^{\top}y^{s}+\sum _{(i,j)\in B_{Q}}Q^{s}_{i,j}w^{s}_{i,j}+{\mu^{s}}^{\top}(x^{s}-\overline{x})\big{)} \\ &\hskip 142.26378pt:(x^{s},y^{s},\Gamma^{s})\in G^{s},\forall s \in S\end{aligned}\right\}, \tag{6}\] where \(\Gamma^{s}=\{w^{s},\Delta y^{s},\Delta w^{s},\hat{y}^{s},z^{s}\}\) and \(G^{s}\) is defined by the following set of constraints \[G^{s}=\left\{\begin{aligned} & x^{s}\in X\\ & y^{s}\in Y^{s}\\ & T^{s}_{m}x^{s}+W^{s}_{m}y^{s}+\sum_{(i,j)\in B_{U}}U^{s}_{m,i,j}w^{s }_{i,j}=h^{s}_{m},\ \forall m\in M\\ & y^{s}_{j}=(N^{U,s}_{j}-N^{L,s}_{j})\bigg{(}\sum_{l\in P}2^{l}z^ {s}_{j,l}+\Delta y^{s}_{j}\bigg{)}+N^{L,s}_{j},\\ &\hskip 142.26378pt\forall j\in\{j\mid(i,j)\in B_{Q}\cup B_{U}\} \\ & 0\leq\Delta y^{s}_{j}\leq 2^{p},\ \forall j\in\{j\mid(i,j)\in B_{Q}\cup B_{U}\}\\ & w^{s}_{i,j}=(N^{U,s}_{j}-N^{L,s}_{j})\bigg{(}\sum_{l\in P}2 ^{l}\hat{y}^{s}_{i,j,l}+\Delta w^{s}_{i,j}\bigg{)}+y^{s}_{i}N^{L,s}_{j}, \\ &\hskip 142.26378pt\forall(i,j)\in B_{Q}\cup B_{U}\\ & 2^{p}(y^{s}_{i}-N^{U,s}_{i})+N^{U,s}_{i}\Delta y^{s}_{j}\leq\Delta w^{s}_{i,j}\leq 2^{p}(y^{s}_{i}-N^{L,s}_{i})+N^{L,s}_{i}\Delta y^{s}_{j},\\ &\hskip 142.26378pt\forall(i,j)\in B_{Q}\cup B_{U}\\ & N^{L,s}_{i}\Delta y^{s}_{j}\leq\Delta w^{s}_{i,j}\leq N^{U,s}_{i }\Delta y^{s}_{j},\ \forall(i,j)\in B_{Q}\cup B_{U}\\ & N^{L,s}_{i}z^{s}_{j,l}\leq\hat{y}^{s}_{i,j,l}\leq N^{U,s}_{i}z^ {s}_{j,l},\ \forall(i,j)\in B_{Q}\cup B_{U},\ \forall l\in P\\ & N^{L,s}_{i}(1-z^{s}_{j,l})\leq y^{s}_{i}-\hat{y}^{s}_{i,j,l}\leq N ^{U,s}_{i}(1-z^{s}_{j,l}),\\ &\hskip 142.26378pt\forall(i,j)\in B_{Q}\cup B_{U},\ \forall l\in P\\ & z^{s}_{j,l}\in\{0,1\},\ \forall j\in\{j\mid(i,j)\in B_{Q}\cup B_{U}\},\ \forall l\in P.\end{aligned}\right.\] The variable \(\overline{x}\) in (6) is unconstrained. Therefore, in order for the \(p\)-Lagrangian dual function \(L_{p}(\mu)\) to be bounded, we must impose the dual feasibility condition \(\sum_{s\in S}p^{s}\mu^{s}=0\). Under this condition, the \(p\)-Lagrangian dual function (6) can be explicitly decomposed for each scenario \(s\in S\) as \[L_{p}(\mu)=\sum_{s\in S}\ p^{s}L_{p}^{s}(\mu^{s}), \tag{7}\] where \[L_{p}^{s}(\mu^{s})=\left\{\begin{aligned} \min_{x,y,w}&\left(c+\mu^{s} \right)^{\top}\!x^{s}+q^{s\top}y^{s}+\sum_{(i,j)\in B_{Q}}Q_{i,j}^{s}w_{i,j} ^{s}\\ &\qquad\qquad\qquad\qquad\qquad\qquad:(x^{s},y^{s},\Gamma^{s}) \in G^{s}\end{aligned}\right\}. \tag{8}\] For any fixed value of \(\mu=(\mu^{1},\ldots,\mu^{|S|})\), the \(p\)-Lagrangian dual function (7) provides a lower bound for the primal RNMDT\({}_{p}\) problem (5) [6]. Our objective is to find the tightest (i.e., the largest) lower bound.
Therefore, the dual bound can be obtained by solving the \(p\)-Lagrangian dual problem: \[z_{LD}=\max_{\mu}\left\{L_{p}(\mu):\sum_{s\in S}p^{s}\mu^{s}=0\right\} \tag{9}\]

### Solution methods for the \(p\)-Lagrangian dual problem

In this section, we present adaptations of the proximal bundle method [27] and FWPH [10] for solving the \(p\)-Lagrangian dual problem (9). One should bear in mind that other nonsmooth (convex) optimisation algorithms could potentially be applied to maximise the \(p\)-Lagrangian dual function \(L_{p}(\mu)\). The choice of the proximal bundle method and FWPH was motivated by the literature on dual Lagrangian-based methods, including their reported efficiency (see, for example, [7; 10; 20; 37]), and our own experience with preliminary experiments.

#### Proximal bundle method

We propose an adaptation of the proximal bundle method from [27] to solve the \(p\)-Lagrangian dual problem (9). The proposed adaptation relies on iteratively approximating the \(p\)-Lagrangian dual function \(L_{p}(\mu)\) with piecewise linear functions via cutting planes. The pseudo-code of our proposed method is presented in Algorithm 1. Suppose that at the \(k^{\text{th}}\) iteration of the proximal bundle method we have computed the candidates for the Lagrangian multipliers \(\mu_{l}\) and centres of mass \(\overline{\mu}_{l}\), \(l=1,\ldots,k-1\). In what follows, we present how these parameters are updated. The Lagrangian multiplier \(\mu_{k}\) is computed as \[\mu_{k}=\operatorname*{arg\,max}_{\mu}\left\{m_{k}(\mu)-\frac{u_{k}}{2}\left\|\mu-\overline{\mu}_{k-1}\right\|^{2}\right\}, \tag{10}\] where \(m_{k}(\mu)\) is the piecewise linear approximation of \(L_{p}(\mu)\) at iteration \(k\), given by \[m_{k}(\mu)= \max_{\theta^{s}}\sum_{s\in S}\theta^{s} \tag{11}\] \[\text{s.t.:}\;\theta^{s}\leq L_{p}^{s}(\mu_{l})+\left(\frac{ \partial L_{p}^{s}(\mu_{l})}{\partial\mu_{l}}\right)^{\top}(\mu-\mu_{l}),\; \forall s\in S,\;l=1,\ldots,k-1. \tag{12}\] The convergence of the proximal bundle method strongly relies on the updates of the proximal parameter \(u_{k}\) and of the centre of mass \(\overline{\mu}_{k}\). In line with the procedure developed in [27], the centre of mass \(\overline{\mu}_{k}\) is updated as \[\overline{\mu}_{k}=\begin{cases}\mu_{k},&\text{ if }L_{p}(\mu_{k})\geq L_{p}( \overline{\mu}_{k-1})+m_{L}v_{k}\quad\text{(serious step)}\\ \overline{\mu}_{k-1},\;\text{otherwise}&\text{(null step)},\end{cases} \tag{13}\] where we typically have \(m_{L}\in(0,0.5)\) and \[v_{k}=m_{k}(\mu_{k})-L_{p}(\overline{\mu}_{k-1}) \tag{14}\] represents the predicted increase of the \(p\)-Lagrangian dual function \(L_{p}(\mu)\). The proximal term \(u_{k}\) must be chosen carefully. To prevent the proximal bundle method from taking a serious step too frequently (i.e., after too little improvement in \(L_{p}(\mu)\)), \(u_{k}\) cannot be too large. On the other hand, if the value of \(u_{k}\) is too small, the method will take many null steps before it finds a good candidate for the new centre of mass. To accelerate the performance of the proximal bundle method, tests identifying whether the value of the proximal parameter \(u_{k}\) is too small or too large can be employed. The case when \(u_{k}\) is too large can be identified by testing whether \[L_{p}(\mu_{k})\geq L_{p}(\overline{\mu}_{k-1})+m_{R}v_{k}, \tag{15}\] where \(m_{R}\in(m_{L},1)\).
If (15) holds, the proximal term \(u_{k}\) is updated as \[u_{k+1}=\max\{h_{k},C_{min}^{u}u_{k},u_{min}\}, \tag{16}\] with \[h_{k}=2u_{k}\left(1-\frac{L_{p}(\mu_{k})-L_{p}(\overline{\mu}_{k-1})}{v_{k}} \right), \tag{17}\] and \(C_{min}^{u}\in\mathbb{R}\). On the other hand, whether the proximal term \(u_{k}\) is too small is identified by the test \[\overline{\delta}_{k}>\max\{\delta_{k}(\overline{\mu}_{k-1})+|g_{k}|,C^{v}v_{ k}\}, \tag{18}\] where \(C^{v}\in\mathbb{R}\), \[\overline{\delta}_{k}=L_{p}(\mu_{k})-\left(\sum_{s\in S}\frac{\partial L _{p}^{s}(\mu_{k})}{\partial\mu_{k}}(x_{k}^{s})\right)^{\top}(\mu_{k}-\overline {\mu}_{k-1})-L_{p}(\overline{\mu}_{k-1}), \tag{19}\] in which \(x_{k}^{s}\), \(\forall s\in S\), is the optimal solution of the \(p\)-Lagrangian sub-problem \(L_{p}^{s}(\mu)\) with \(\mu=\mu_{k}\), \[g_{k}\in\partial m_{k}(\mu_{k}), \tag{20}\] and \[\delta_{k}(\mu)=m_{k}(\mu_{k})+(g_{k})^{\intercal}(\mu-\mu_{k})- L_{p}(\mu), \tag{21}\] where \(\partial\) denotes the subdifferential of \(m_{k}\) at \(\mu_{k}\), thus making \(g_{k}\) a subgradient of \(m_{k}\) at \(\mu_{k}\). If (18) holds, the proximal term \(u_{k}\) is updated as \[u_{k+1}=\min\{h_{k},C_{max}^{u}u_{k}\}, \tag{22}\] where \(C_{max}^{u}\in\mathbb{R}\). Algorithm 1 summarises the detailed steps of the proximal bundle method, starting with step \(k=0\) and the initialisation of the parameters. Following the developments in [27], the algorithm includes an additional parameter \(i_{k}^{u}\) that counts consecutive serious or null steps and enforces the tuning of the proximal term \(u_{k}\), hoping to speed up the algorithm's convergence. The algorithm terminates when the predicted increase \(v_{k}\) is within an arbitrary tolerance \(\epsilon\). For a proof of convergence of the bundle method adaptation presented in Algorithm 1, one can refer to, for instance, [28].
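For illustration, the cutting-plane master problem (10)-(12) can be written in a few lines with a generic QP modelling tool; the sketch below uses `cvxpy` (an assumption; any convex QP solver would do) with made-up cut data for a single scenario.

```python
# A minimal sketch of the proximal bundle master problem (10)-(12) in cvxpy,
# for a single scenario and toy cut data (function values L_vals, subgradients).
import cvxpy as cp
import numpy as np

n = 2                                              # dimension of the multiplier mu
mu_points = [np.zeros(n), np.array([1.0, -0.5])]   # past iterates mu_l
L_vals = [4.0, 4.6]                                # L_p^s(mu_l) at those iterates
subgrads = [np.array([0.8, -0.2]), np.array([0.1, 0.3])]
mu_bar = np.zeros(n)                               # current centre of mass
u = 1.0                                            # proximal parameter u_k

mu = cp.Variable(n)
theta = cp.Variable()
cuts = [theta <= L + g @ (mu - m)                  # linearisations of the concave dual
        for L, g, m in zip(L_vals, subgrads, mu_points)]
problem = cp.Problem(cp.Maximize(theta - (u / 2) * cp.sum_squares(mu - mu_bar)), cuts)
problem.solve()
print(mu.value, theta.value)                       # next multiplier candidate mu_k
```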
#### Frank-Wolfe Progressive-Hedging method

Alternatively, one can apply the Frank-Wolfe Progressive-Hedging (FWPH) method [10] to solve the Lagrangian dual problem (9). FWPH is applied to the primal characterisation of (9): \[z_{LD}=\min_{x,\overline{x},y,w}\left\{\sum_{s\in S}\ p^{s}\left( c^{\top}x^{s}+q^{s\top}y^{s}+\sum_{(i,j)\in B_{Q}}Q_{i,j}^{s}w_{i,j}^{s} \right):(x^{s},y^{s},\Gamma^{s})\in\mathrm{conv}(G^{s}),\ x^{s}=\overline {x},\ \forall s\in S\right\}, \tag{23}\] where \(\mathrm{conv}(G^{s})\) denotes the convex hull of \(G^{s}\) for each \(s\in S\). The FWPH method primarily relies on the classical progressive hedging method while integrating an extension of the Frank-Wolfe method, called the simplicial decomposition method (SDM), to iteratively construct an inner approximation of \(\mathrm{conv}(G^{s})\) for each \(s\in S\). Using the original progressive hedging method as proposed in [38] might result in suboptimal bounds, cycling, and poor convergence of the Lagrangian dual bound for problem (9), as the presence of integer variables hinders its convergence guarantees. As a result, the classic progressive hedging method has typically been employed as a heuristic (see, for example, [42]). The combination of SDM and the progressive hedging method allows for overcoming the aforementioned convergence issues. Additionally, it allows replacing the step of solving mixed-integer linear sub-problems with solving convex continuous quadratic sub-problems when calculating the Lagrangian dual bound. This, in turn, improves the computational performance of the FWPH method [10].

The FWPH method uses the augmented Lagrangian dual problem, i.e., a modified Lagrangian dual problem in which the Lagrangian dual function is augmented by a penalty term that acts as a regularisation. The augmented Lagrangian dual function based on relaxing the NAC constraints \(x^{s}=\overline{x},\ \forall s\in S\), in the RNMDT\({}_{p}\) problem (5) is \[AL_{p,\tau}(x,y,w,\overline{x},\mu)=\sum_{s\in S}\ p^{s}AL_{\tau}^{s}( x^{s},y^{s},w^{s},\overline{x},\mu^{s}), \tag{24}\] where \[AL_{\tau}^{s}(x^{s},y^{s},w^{s},\overline{x},\mu^{s})=c^{\top}x^{s}+q^{s\top}y^{s}+\sum_{(i,j)\in B_{Q}}Q_{i,j} ^{s}w_{i,j}^{s}+\mu^{s\top}(x^{s}-\overline{x})+\frac{\tau}{2}\|x^{s}- \overline{x}\|_{2}^{2} \tag{25}\] and \(\tau>0\) is a penalty parameter. The FWPH pseudo-code is stated in Algorithm 2. The parameter \(k_{max}\) defines the maximum number of iterations for the FWPH method, and \(\epsilon\) is an arbitrary convergence tolerance. The termination criterion involves the term \(\sqrt{\sum_{s\in S}p^{s}\|x_{k}^{s}-\overline{x}_{k-1}\|^{2}}\), which aggregates the squared norms of the primal and dual residuals associated with (23). These residuals evaluate how close the solution candidate \(((x^{s},y^{s},w^{s}),\overline{x})\) is to satisfying the necessary and sufficient optimality conditions for (23). ``` initialise:\((V_{0}^{s})_{s\in S},(x_{0}^{s})_{s\in S},\mu_{0},\tau,\alpha,\epsilon,k_{ max}\) and \(t_{max}\). Compute \(\overline{x}_{0}=\sum_{s\in S}p^{s}x_{0}^{s}\) and \(\mu_{1}^{s}=\mu_{0}^{s}+\tau(x_{0}^{s}-\overline{x}_{0})\). for\(k=1,\dots,k_{max}\)do for\(s\in S\)do \(\tilde{x}^{s}=(1-\alpha)\overline{x}_{k-1}+\alpha x_{k-1}^{s}\), \([x_{k}^{s},y_{k}^{s},w_{k}^{s},V_{k}^{s},L_{p,k}^{s}]=SDM(V_{k-1}^{s},\tilde{ x}^{s},\mu_{k}^{s},\overline{x}_{k-1},t_{max},0)\) endfor Compute \(L_{p,k}=\sum_{s\in S}p^{s}L_{p,k}^{s}\) and \(\overline{x}_{k}=\sum_{s\in S}p^{s}x_{k}^{s}\). if\(\sqrt{\sum_{s\in S}p^{s}\|x_{k}^{s}-\overline{x}_{k-1}\|^{2}}\leq\epsilon\)then return\(((x_{k}^{s},y_{k}^{s},w_{k}^{s})_{s\in S},\ \overline{x}_{k},\ \mu_{k},\ L_{p,k})\) endif Compute \(\mu_{k+1}^{s}=\mu_{k}^{s}+\tau(x_{k}^{s}-\overline{x}_{k})\) for each \(s\in S\). endfor return\((x_{k_{max}}^{s},y_{k_{max}}^{s},w_{k_{max}}^{s})_{s\in S},\overline{x}_{k_{ max}},\mu_{k_{max}},L_{p,k_{max}}\). ``` **Algorithm 2** Frank-Wolfe progressive hedging (FWPH) method As a subroutine, Algorithm 2 employs SDM to minimise \(AL_{\tau}^{s}(x,y,w,\overline{x},\mu^{s})\) over \((x,y,w)\in\text{conv}(G^{s})\) for a given \(s\in S\). The pseudo-code for SDM is stated in Algorithm 3. The precondition for the SDM algorithm is that \(V_{0}^{s}\subset\mathrm{conv}(G^{s})\) and \(\overline{x}=\sum_{s\in S}p^{s}x_{0}^{s}\), where the \(V_{t}^{s}\) are discrete sets of points such that \(V_{t}^{s}\subset\mathrm{conv}(G^{s})\). The parameter \(t_{max}\) defines the maximum number of iterations for SDM, and \(\gamma>0\) is the convergence tolerance. The parameter \(\alpha\) affects the initial linearisation point \(\tilde{x}^{s}\) of the SDM method.
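Before stating the SDM pseudo-code, note that the progressive-hedging ingredients of Algorithm 2 - averaging the scenario solutions, the multiplier update \(\mu_{k+1}^{s}=\mu_{k}^{s}+\tau(x_{k}^{s}-\overline{x}_{k})\), and the residual-based stopping test - reduce to a few array operations, as in the toy sketch below (the data are made up for illustration).

```python
# A sketch of one FWPH outer step on toy data: average the scenario solutions,
# update the dual multipliers, and evaluate the termination residual.
import numpy as np

probs = np.array([0.5, 0.3, 0.2])                    # scenario probabilities p^s
x = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]])   # first-stage solutions x_k^s
mu = np.zeros_like(x)                                # multipliers mu_k^s
tau = 2.0                                            # penalty / step parameter

x_bar = probs @ x                                    # implementable point x_bar_k
mu = mu + tau * (x - x_bar)                          # mu_{k+1}^s = mu_k^s + tau (x_k^s - x_bar_k)
residual = np.sqrt(np.sum(probs * np.sum((x - x_bar) ** 2, axis=1)))
print(x_bar, residual)                               # stop when residual <= epsilon
```

Note that the update preserves dual feasibility: since \(\overline{x}_{k}\) is the probability-weighted average of the \(x_{k}^{s}\), the condition \(\sum_{s\in S}p^{s}\mu^{s}=0\) continues to hold after every update.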
for\(t=1,\ldots,t_{max}\)do \(\hat{\mu}_{t}^{s}=\mu^{s}+\tau(x_{t-1}^{s}-\overline{x})\), \((\hat{x}^{s},\hat{y}^{s},\hat{w}^{s})\in\arg\min_{x,y,w}\Bigl{\{}(c+\hat{\mu}_{t }^{s})^{\top}x+q^{s\top}y+\sum_{(i,j)\in B_{Q}}Q_{i,j}^{s}w_{i,j}:\) \((x,y,w)\in G^{s}\Bigr{\}}\) if\(t=1\)then \(L_{p}^{s}=(c+\hat{\mu}_{t}^{s})^{\top}\hat{x}^{s}+q^{s\top}\hat{y}^{s}+\sum_{(i,j)\in B_{Q}}Q_{i,j}^{s}\hat{w}_{i,j}^{s}\) endif Compute \(\Gamma^{t}=-\Bigl{[}(c+\hat{\mu}_{t}^{s})^{\top}(\hat{x}^{s}-x_{t-1}^{s})+{q^{s \top}}(\hat{y}^{s}-y_{t-1}^{s})\) \(+\sum_{(i,j)\in B_{Q}}Q_{i,j}^{s}(\hat{w}_{i,j}^{s}-w_{t-1,i,j}^{s})\Bigr{]}\), \(V_{t}^{s}=V_{t-1}^{s}\cup\{(\hat{x}^{s},\hat{y}^{s},\hat{w}^{s})\}\) and \((x_{t}^{s},y_{t}^{s},w_{t}^{s})\in\arg\min_{x,y,w}\{AL_{\tau}^{s}(x,y,w,\overline{x},\mu^ {s}):(x,y,w)\in\mathrm{conv}(V_{t}^{s})\}\). if\(\Gamma^{t}\leq\gamma\)then return\((x_{t}^{s},y_{t}^{s},w_{t}^{s},V_{t}^{s},L_{p}^{s})\) endif endfor return\((x_{t_{max}}^{s},y_{t_{max}}^{s},w_{t_{max}}^{s},V_{t_{max}}^{s},L_{p}^{s})\). ``` **Algorithm 3** Simplicial decomposition method (SDM)

## 3 Dual decomposition

In this section, we present a branching approach inspired by the dual decomposition method proposed in [14]. The authors proposed a solution method for linear stochastic multi-stage problems that may involve integrality requirements at each stage. The solution method relies on dual decomposition combined with branch-and-bound strategies to ensure convergence. In what follows, we discuss our adaptation of the solution method proposed in [14] to the mixed-integer RNMDT relaxations of RDEM problems. Let \(T\) be the branch-and-bound set of unexplored nodes, in which each node is denoted by \(N\). The key idea behind our approach is to extend the branch-and-bound procedure proposed in [14] to the RNMDT\({}_{p}\) problem (5). Specifically, we perform branching on the first-stage variables and use the solution of the \(p\)-Lagrangian dual problem (9) as the bounding procedure. To form candidates for a feasible first-stage solution, the method uses the average \(\overline{x}_{N}=\sum_{s\in S}p^{s}x_{N}^{*,s}\), combined with a rounding heuristic to fulfil the integrality requirements, where \(x_{N}^{*,s},\forall s\in S\), is obtained from solving the dual problem (9) corresponding to node \(N\). If \(\overline{x}_{N}\) violates the integrality conditions for some integer index \(i\), i.e., \(\lfloor\overline{x}_{N,i}\rfloor<\overline{x}_{N,i}<\lceil\overline{x}_{N,i}\rceil\), two nodes \(N^{L}\) and \(N^{R}\) with corresponding sub-problems (9) are formed from the parent node \(N\), where the feasibility sets \(G_{N^{L}}^{s}\) and \(G_{N^{R}}^{s}\), \(\forall s\in S\), are formed respectively as \[G_{N^{L}}^{s} =G_{N}^{s}\cap\left\{x_{i}^{s}\leq\lfloor\overline{x}_{N,i} \rfloor\right\}, \tag{26}\] \[G_{N^{R}}^{s} =G_{N}^{s}\cap\left\{x_{i}^{s}\geq\lceil\overline{x}_{N,i}\rceil \right\}. \tag{27}\] If \(\overline{x}_{N}\) satisfies the integrality conditions but \(x_{N}^{*,s},\ \forall s\in S\), violates the non-anticipativity conditions, two nodes \(N^{L}\) and \(N^{R}\) with corresponding sub-problems (9) are formed from the parent node \(N\), where the feasibility sets \(G_{N^{L}}^{s}\) and \(G_{N^{R}}^{s}\), \(\forall s\in S\), are formed respectively as \[G_{N^{L}}^{s} =G_{N}^{s}\cap\left\{x_{i}^{s}\leq\overline{x}_{N,i}-\epsilon_{ BB}\right\}, \tag{28}\] \[G_{N^{R}}^{s} =G_{N}^{s}\cap\left\{x_{i}^{s}\geq\overline{x}_{N,i}+\epsilon_{ BB}\right\}, \tag{29}\] where \(\epsilon_{BB}>0\).
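The branching logic of (26)-(29) can be sketched on toy data as follows (an illustrative sketch, not the actual implementation): compute the probability-weighted average \(\overline{x}_{N}\), branch on a fractional integer component if integrality fails, and otherwise branch on the most dispersed component if the NAC are violated; the dispersion-based choice of the index is detailed next.

```python
# A toy sketch of the node-branching decision used in Algorithm 4.
import numpy as np

EPS_NAC, EPS_INT = 1e-6, 1e-6
probs = np.array([0.5, 0.5])
x = np.array([[2.0, 0.4], [3.0, 0.6]])    # per-scenario first-stage solutions x_N^{*,s}
integer_idx = [0]                          # indices of integer first-stage variables

x_bar = probs @ x                          # candidate first-stage solution
sigma = x.max(axis=0) - x.min(axis=0)      # dispersion of each component

frac = {i: min(x_bar[i] - np.floor(x_bar[i]), np.ceil(x_bar[i]) - x_bar[i])
        for i in integer_idx}
most_fractional = max(frac, key=frac.get)

if frac[most_fractional] > EPS_INT:
    # branch as in (26)-(27): x_i <= floor(x_bar_i) vs x_i >= ceil(x_bar_i)
    print("integer branch on component", most_fractional)
elif sigma.max() > EPS_NAC:
    # branch as in (28)-(29) on the most dispersed component
    print("NAC branch on component", int(sigma.argmax()))
else:
    print("feasible: update incumbent with x_bar =", x_bar)
```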
The branching index \(i\) is chosen based on a measure of the dispersion in the first-stage scenario solutions: if the dispersion of component \(i\), \(\sigma_{i}=\max_{s\in S}x_{N,i}^{*,s}-\min_{s\in S}x_{N,i}^{*,s}\), is zero, this implies the non-anticipativity of this component, \[x_{N,i}^{*,1}=\cdots=x_{N,i}^{*,|S|}.\] Therefore, in case of violated non-anticipativity constraints, branching is performed on the index \(i\) with the largest dispersion. Algorithm 4 summarises the adaptation of the branch-and-bound method presented in [14; 27], which hereafter we refer to as \(p\)-BnB. For each branch-and-bound node \(N\in T\), we generate the node sub-problem (9) and compute its dual bound value \(z_{N}^{*}\), as well as the corresponding optimal dual and primal variable values \((\mu_{N}^{*,s})_{s\in S}\) and \((x_{N}^{*,s},y_{N}^{*,s},w_{N}^{*,s})_{s\in S}\), by applying Algorithm 1 or 2. If the dual bound value \(z_{N}^{*}>z_{UB}\), the node \(N\) is fathomed. Otherwise, we check whether the solution \(x_{N}^{*,s}\) violates the non-anticipativity or integrality conditions. If so, following [27], we branch as described in (26)-(27) on the most fractional variable \(\overline{x}_{N,i}\) if \(\overline{x}_{N}\) violates the integrality conditions; otherwise, we branch as described in (28)-(29) on the variable with the largest dispersion \(\sigma_{i}\) if \(x_{N}^{*,s}\) violates the non-anticipativity conditions. If \(x_{N}^{*,s}\) satisfies both the non-anticipativity and integrality conditions, we update the best upper bound value \(z_{UB}=z_{N}^{*}\) and the best solution \(x^{*}=\overline{x}_{N}\). Lastly, we update the best lower bound value \(z_{LB}\) by setting it to the smallest dual bound value \(z_{N}^{*}\) among the nodes \(N\) that are yet to be fathomed. The algorithm continues until the set \(T\) is empty. ``` initialise: \(T=\emptyset\), \(z_{UB}=\infty\), \(z_{LB}=-\infty\), \(x^{*}=\emptyset\), \(\epsilon_{BB}>0\) and \(\epsilon_{NAC}\geq 0\). Create root node \(N_{0}\) sub-problem (9), \(T=T\cup\{N_{0}\}\). repeat Choose a node \(N\in T\). \(T=T\setminus\{N\}\). Apply Algorithm 1 or 2 to the node \(N\) sub-problem (9) to obtain \(z_{N}^{*}\), \((\mu_{N}^{*,s})_{s\in S}\) and \((x_{N}^{*,s},y_{N}^{*,s},w_{N}^{*,s})_{s\in S}\). Compute \(\overline{x}_{N}=\sum_{s\in\mathcal{S}}p^{s}x_{N}^{*,s}\). Compute \(\sigma_{i}=\max_{s\in S}\left\{x_{N,i}^{*,s}\right\}-\min_{s\in S}\left\{x_{N,i}^{*,s}\right\}\) for \(i\in\{1,\ldots,n_{x}\}\). if\(\max_{i\in 1,\ldots,n_{x}}\left\{\sigma_{i}\right\}\leq\epsilon_{NAC}\)then if\(\overline{x}_{N,i}\) is fractional for some integer index \(i\in\{1,\ldots,n_{x}\}\)then Choose an integer variable index \(i\in\{1,\ldots,n_{x}\}\) such that \(\left\lfloor\overline{x}_{N,i}\right\rfloor<\overline{x}_{N,i}<\left\lceil \overline{x}_{N,i}\right\rceil\). Create two new nodes \(N^{L}\) and \(N^{R}\) via (26) and (27), respectively. elseif\(z_{UB}>z_{N}^{*}\)then \(z_{UB}=z_{N}^{*}\), \(x^{*}=\overline{x}_{N}\) endif elseif\(\max_{i\in 1,\ldots,n_{x}}\{\sigma_{i}\}>\epsilon_{NAC}\) and \(z_{UB}>z_{N}^{*}\)then if\(\overline{x}_{N,i}\) is fractional for some integer index \(i\in 1,\ldots,n_{x}\)then Choose an integer variable index \(i\in 1,\ldots,n_{x}\) such that \(\left\lfloor\overline{x}_{N,i}\right\rfloor<\overline{x}_{N,i}<\left\lceil \overline{x}_{N,i}\right\rceil\). Create two new nodes \(N^{L}\) and \(N^{R}\) via (26) and (27), respectively. else Choose a continuous variable index \(i\in\arg\max_{i}\sigma_{i}\).
Create two nodes \(N^{L}\) and \(N^{R}\) via (28) and (29), respectively. endif \(T=T\cup\{N^{L},N^{R}\}\). endif Update \(z_{LB}\). until\(T=\emptyset\) ``` **Algorithm 4**\(p\)-branch-and-bound method (\(p\)-BnB) In what follows, we provide a theoretical justification of the convergence of Algorithm 4 to the optimal solution of the RNMDT\({}_{p}\) relaxation (problem (5)). The convergence of Algorithm 4 for problem (5), considering any fixed integer value of \(p\in\{\ldots,-2,-1\}\), is stated in Theorem 1. Consequently, problem (5) converges to the primal RDEM (problem (4)) as the precision factor \(p\) approaches \(-\infty\). Formally, the justification for the convergence of the RNMDT relaxation (problem (5)) is stated in Theorem 2. **Theorem 1**: _Suppose we consider the RNMDT relaxation (problem (5)) with an arbitrary fixed integer value of the precision factor \(p\in\{\ldots,-2,-1\}\). Then Algorithm 4 converges to a solution \((x_{N}^{*,s},y_{N}^{*,s},w_{N}^{*,s})_{s\in S}\) that is optimal for problem (5)._ In [14], the authors demonstrate the termination in finitely many steps and the convergence of Algorithm 4 to the optimal solution of problem (5), assuming that the node \(p\)-Lagrangian dual sub-problems (9) are solved to optimality and, hence, yield optimal dual bounds. Employing either Algorithm 1 or 2 ensures convergence to the optimal solution of the \(p\)-Lagrangian dual sub-problem (9). For the convergence of Algorithms 1 and 2 to the optimal solution of problem (6), please refer to [27] and [10], respectively. **Theorem 2**: _Consider the RNMDT\({}_{p}\) relaxation problem (5) for fixed integer values of the precision factor \(p\in\{\ldots,-2,-1\}\). Then, for any pair \((p_{1},p_{2})\) such that \(p_{1}<p_{2}\leq 0\), RNMDT\({}_{p_{1}}\) is a tighter (or equal) relaxation of the original RDEM problem than RNMDT\({}_{p_{2}}\)._ See [5, Theorem 6]. ## 4 Computational experiments This section presents numerical results for experiments performed on randomly generated 2SSMIP problems in the form of (4), which we refer to as problems (4) hereinafter. All code and instances generated are available in the GitHub repository [https://github.com/gamma-opt/p-BnB](https://github.com/gamma-opt/p-BnB). The experiments were implemented in the Julia language (version 1.7.3). The code was run on Triton, Aalto University's high-performance computing cluster [1]. ### Design of experiments We tested the efficiency of Algorithm 4 considering two alternative methods to solve the node sub-problems (9): the proximal bundle method (BM) presented in Section 2.3.1 and the Frank-Wolfe progressive hedging (FWPH) method presented in Section 2.3.2. Algorithm 4 was implemented using parallel computing, meaning that the scenario sub-problems (8) are solved in parallel. For each instance, the number of processes utilised for parallel computing was 30. The computational efficiency of Algorithm 4 was compared with Gurobi's [24] branch-and-cut algorithm with standard parametrisation. We tested Algorithm 4 on 5 sets of randomly generated instances. Each set contained problems (4) with 50, 100 and 150 scenarios represented at two scales (small and large), as described in Table 1. Additionally, we considered two different values of the precision factor \(p\in\{-2,-1\}\). Hence, we considered 60 instances in total. For the sake of simplicity, for each instance, all the first-stage variables were assumed to be integer, and all the second-stage variables were assumed to be continuous.
To make the test instances similar to those available in [2] (which are not MIQCQPs) in terms of the number of non-zero coefficients in the constraints and objective function, we assumed the quadratic matrices \(Q^{s}\) and \(U_{m}^{s}\), \(\forall s\in S\), \(\forall m\in M\), to be randomly generated with 1% density. Therefore, the problems (4) with 50, 100 and 150 scenarios would have in total 5100, 10100 and 15100 variables, respectively, in the case of small (S) instances, and 10200, 20200 and 30200 variables, respectively, in the case of large (L) instances. Table 2 presents the parameter values used in the experiments for the proximal BM (Algorithm 1) and the FWPH (Algorithm 2). In addition to the parameters stated in Table 2, the maximum number of iterations for the proximal BM (\(k_{max}\)) was set to 1000. The maximum numbers of iterations for the FWPH algorithm (\(k_{max}\)) and the simplicial decomposition method (\(t_{max}\)) were set to 1000 and 1, respectively. The tolerances \(\epsilon_{BB}\) and \(\epsilon_{NAC}\) for \(p\)-BnB (Algorithm 4) were set to \(10^{-6}\). As a time limit for solving each distinct instance, we considered 1 hour. \begin{table} \begin{tabular}{c c} \hline \hline \multicolumn{2}{c}{**proximal bundle method**} \\ \hline \(u_{min}\) & \(10^{-3}\) \\ \(m_{R}\) & 0.7 \\ \(m_{L}\) & 0.3 \\ \(i_{max}\) & 3 \\ \(i_{min}\) & -3 \\ \(C_{min}^{u}\) & 0.1 \\ \(C_{avg}^{u}\) & 0.5 \\ \(C_{max}^{u}\) & 10 \\ \(C^{v}\) & 10 \\ \(\epsilon_{BM}\) & \(10^{-3}\) \\ \hline \multicolumn{2}{c}{**Frank-Wolfe progressive hedging**} \\ \hline \(\tau\) & 2 \\ \(\alpha\) & 1 \\ \(\epsilon_{FWPH}\) & \(10^{-3}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Algorithm parameters \begin{table} \begin{tabular}{c c c c} \hline \hline **Instance size** & **\# of 1st-stage variables** & **\# of 2nd-stage variables** & **\# of constraints** \\ \hline Small (S) & 100 & 100 & 100 \\ Large (L) & 200 & 200 & 200 \\ \hline \hline \end{tabular} \end{table} Table 1: Instance problem dimensions (per scenario) The starting dual multipliers \(\mu_{0}\) for Algorithms 1 and 2 were set to \(\mu_{0}=0\). To set the first-stage variables \((x_{0}^{s})_{s\in S}\) for Algorithm 2, we considered the solution of the \(p\)-Lagrangian dual function (6) for a fixed value of the dual variable \(\mu=\mu_{0}\). Following [10], to initialise \((V_{0}^{s})_{s\in S}\), we took one arbitrary scenario (in our case, the first scenario in \(S\), i.e., \(s=1\)) and set \(V_{0}^{1}=\{(x_{0}^{1},y_{0}^{1},w_{0}^{1})\}\). Further, for each \(s\in S,s\neq 1\), we initialised \(V_{0}^{s}=\{(x_{0}^{s},y_{0}^{s},w_{0}^{s}),(x_{0}^{1},\overline{y}^{s}, \overline{w}^{s})\}\), where \((x_{0}^{s},y_{0}^{s},w_{0}^{s})\) solves \(L_{p}^{s}(\mu_{0}^{s})\) and \((\overline{y}^{s},\overline{w}^{s})\) solves \[\min_{y,w}\left\{q^{s\top}y+\sum_{(i,j)\in B_{Q}}Q_{i,j}^{s}w_{i,j}:(x_{0}^{1}, y,w)\in G^{s}\right\},\text{ for each }\ s\in S.\] ### Numerical results Table 3 presents the averaged results of solving the small (S) and large (L) scale instances with the parameters defined in Table 1 and the quadratic matrices' nonzero densities set to 1%. We compared the time required to solve the instances with the proposed \(p\)-BnB method against solving them directly with the Gurobi solver (Full scale).
The columns "_p_-BnB (FWPH)" and "_p_-BnB (BM)" report the solution times for the _p_-BnB method when employing FWPH and the proximal bundle method as the solution method for the node sub-problems, respectively. Each cell in the "Solution time" section represents the average solution time over 5 instances generated using 5 different random seeds but with an identical number of scenarios, as well as the same number of first- and second-stage variables and constraints per scenario. It is worth mentioning that, when calculating the average value for the column "Full scale", we only considered the instances for which the Gurobi solver could generate a solution within one hour. \begin{table} \begin{tabular}{c|c|c|c|c|c} \multicolumn{3}{c|}{**Instance parameters**} & \multicolumn{3}{c}{**Solution time (s)**} \\ \hline Size & \(|S|\) & \(p\) & Full scale & _p_-BnB (FWPH) & _p_-BnB (BM) \\ \hline \multirow{4}{*}{S} & 50 & -1 & 83.88 & 22.61 & 15.68 \\ & 50 & -2 & 131.22 & 119.18 & 10.18 \\ & 100 & -1 & 185.56 & 172.05 & 41.01 \\ & 100 & -2 & 358.34 & 208.40 & 55.36 \\ & 150 & -1 & 316.23 & 226.49 & 50.28 \\ & 150 & -2 & 535.56 & 381.71 & 92.61 \\ \hline \multirow{4}{*}{L} & 50 & -1 & 687.88 & 630.49 & 119.81 \\ & 50 & -2 & 866.44 & 420.63 & 122.50 \\ \cline{1-1} & 100 & -1 & 1505.92 & 1637.15 & 367.48 \\ \cline{1-1} & 100 & -2 & 2490.45 & 1708.13 & 284.36 \\ \cline{1-1} & 150 & -1 & 2463.96 & 1372.53 & 523.98 \\ \cline{1-1} & 150 & -2 & 3412.82 & 1031.00 & 369.48 \\ \hline \end{tabular} \end{table} Table 3: Numerical results for the instances with low-density quadratic matrices As the numerical results in Table 3 suggest, for the small-scale instances, the proposed _p_-BnB method outperformed the commercial solver Gurobi in terms of solution time regardless of the method employed to solve the dual sub-problems. This conclusion also applies to the large-scale instances, except for the instance with 100 scenarios and precision factor \(p=-1\). On average, applying _p_-BnB with Frank-Wolfe progressive hedging saved 31.41% and 32.76% of the solution time for the small- and large-scale instances, respectively, compared to solving the full-scale instances with Gurobi. The best improvement for the small-scale instances was achieved for the instance with 50 scenarios and RNMDT precision factor \(p=-1\), demonstrating a decrease in computational time of 73.05% compared to solving the instance directly with Gurobi. For the large-scale instances, the largest reduction in solution time was observed for the instance with 150 scenarios and RNMDT precision factor \(p=-2\), reducing the solution time required by Gurobi by 69.79%. However, using _p_-BnB with the proximal BM instead demonstrated even greater improvements in solution time. Compared to solving the full-scale instances with Gurobi, _p_-BnB with the proximal BM demonstrated, on average, a decrease of 83.80% and 83.42% in solution time for the small- and large-scale instances, respectively. Moreover, the results suggest that the solution time improvement reached up to 92.24% for the small-scale instances, as in the case of the instances with 50 scenarios and an RNMDT precision factor of \(p=-2\). For the large-scale instances, the maximum improvement in solution time was achieved for the instance with 150 scenarios and RNMDT precision factor \(p=-2\), allowing a reduction of 89.17% in the time Gurobi required to solve that instance.
Notably, in all 60 instances, the _p_-BnB explored only one (root) node to identify the optimal solution. This effectively means that all of these instances were such that there was no duality gap when solving the _p_-Lagrangian duals, and that the bounds obtained by both methods were tight enough to find the optimal solution at the root node. This effect was also observed in [10], where the authors reported convergence of the FWPH method to the optimal solution for most of the stochastic mixed-integer problem instances. Additionally, the usage of _p_-Lagrangian relaxation exploits the block-angular structure of the primal RDEM problem, allowing one to obtain tighter bounds at the root node compared to a linear-programming (LP) relaxation. Such phenomena have been reported in [14], where the authors obtained a duality gap at the root node of only 0.2-0.3% when Lagrangian relaxation was employed, while the LP relaxation provided a duality gap of 2.0-2.1%. To demonstrate the convergence of the method in cases when the solution at the root node violates integrality or non-anticipativity conditions, we conducted another batch of experiments on somewhat less realistic instances in which the densities of the matrices \(Q\) and \(U\) are 90%. However, to ensure convergence of _p_-BnB within one hour, we tested _p_-BnB on 5 instances with RNMDT precision factor \(p=-1\) only and the remaining parameters as before. Table 5 demonstrates the results of solving instances 1-5 with the proposed _p_-BnB method employing the FWPH (_p_-BnB (FWPH)) and the proximal BM (_p_-BnB (BM)) as a subroutine. The column "sol. time" reports the time required by Algorithm 4 to converge to an optimal solution with a 0.00% gap, calculated as the relative difference between the upper bound (UB) and lower bound (LB) for the objective function generated by the corresponding method, i.e., as \(100\%\cdot\frac{\text{UB}-\text{LB}}{\text{LB}}\). It is worth highlighting that solving the full-scale instances with Gurobi resulted in convergence within one hour only for Instance 1, taking a total of 2064.55 seconds. As can be seen in Table 5, the maximum number of nodes explored by _p_-BnB was only 11, for Instance 5 using the proximal BM, while the average number of nodes explored was five. This is due to the fact that, despite the very high density of the quadratic matrices (90%) in these instances, at the very first node _p_-BnB was able to generate a solution with a tight dual bound, on average 0.01%. In comparison, the average dual bound generated within one hour by solving Instances 1-5 with Gurobi was 4.98%. ## 5 Conclusions In this paper, we proposed a novel method for solving two-stage stochastic programming problems whose deterministic equivalents are represented by MIQCQP models. The proposed method is named _p_-branch-and-bound (_p_-BnB) and combines a branch-and-bound-based algorithm inspired by [14] with the _p_-Lagrangian decomposition proposed in [6].
\begin{table} \begin{tabular}{c c c c c} \hline \hline **\# of instance** & **\# of scenarios** & **\# of 1st-stage variables** & **\# of 2nd-stage variables** & **\# of constraints** \\ \hline **1** & 15 & 30 & 25 & 25 \\ **2** & 20 & 30 & 30 & 20 \\ **3** & 20 & 40 & 15 & 15 \\ **4** & 30 & 30 & 20 & 15 \\ **5** & 40 & 20 & 10 & 15 \\ \hline \hline \end{tabular} \end{table} Table 4: Dimensions of the instances with high-density \(Q\) matrices \begin{table} \begin{tabular}{c|c c c|c c c} \hline \multirow{2}{*}{**Instance**} & \multicolumn{3}{c|}{_p_**-BnB (FWPH)**} & \multicolumn{3}{c}{_p_**-BnB (BM)**} \\ & sol. time (s) & \# nodes & \# iter. & sol. time (s) & \# nodes & \# iter. \\ \hline 1 & 459.90 & 5 & 68 & 114.42 & 1 & 30 \\ 2 & 410.63 & 3 & 21 & 170.53 & 1 & 26 \\ 3 & 520.47 & 5 & 145 & 925.73 & 3 & 312 \\ 4 & 374.79 & 3 & 56 & 1439.22 & 5 & 268 \\ 5 & 427.94 & 9 & 191 & 3001.86 & 11 & 1525 \\ \hline \hline \end{tabular} \end{table} Table 5: Numerical results for the instances with high-density \(Q\) matrices The _p_-Lagrangian decomposition method relies on the composition of the mixed-integer-based relaxation of the MIQCQP problem using the reformulated normalized multiparametric disaggregation technique (RNMDT) [5] and classic Lagrangian relaxation. The _p_-Lagrangian decomposition has been demonstrated to outperform the commercial solver Gurobi in terms of the computational time required to generate dual bounds for a primal MIQCQP problem, whose precision can be controlled by the choice of parameters in the RNMDT relaxation. However, _p_-Lagrangian decomposition could not tackle the duality gap arising from the mixed-integer nature of the primal MIQCQP problems. In contrast, the proposed _p_-BnB mitigates this issue by ensuring the integrality conditions of the optimal solution via a classic branch-and-bound approach. Additionally, following [14], the branch-and-bound procedure takes place whenever the first-stage variable candidates violate the non-anticipativity constraints. The efficiency of _p_-BnB has been tested on a set of RNMDT relaxations of randomly generated MIQCQP instances. Numerical experiments demonstrated the superior performance of the proposed _p_-BnB method over attempts to solve the full-scale RNMDT problems with the commercial solver Gurobi. Depending on the method utilised to solve the dual subproblems, the use of _p_-BnB allowed for saving on average about 32% of the time required by Gurobi to solve the RNMDT problem when _p_-BnB used Frank-Wolfe progressive hedging (FWPH) [10] as a subroutine, or about 84% of the time when the proximal bundle method (BM) [27; 36] was used. It is worth highlighting that the _p_-BnB method implementation involves intricate computational decisions that can greatly influence its performance. Nevertheless, our implementation still serves as a proof of concept. Additionally, the _p_-BnB method considers rudimentary heuristics to generate feasible solutions for the primal RNMDT relaxation, and the implementation of more sophisticated heuristics would likely improve the performance of _p_-BnB, in a similar fashion as they are beneficial in mixed-integer programming solvers. Hence, one could further enhance the computational efficiency of _p_-BnB.
In particular, one potential path for improvement involves enhancing the branching strategies. As an example, one could refer to [17; 43], which suggest enhancements to the procedure of searching for paths in the decision tree that cannot lead to a better optimal solution and can thus be eliminated. Another possible direction could be an improvement of the FWPH method implementation. Additionally, we observed that, when utilising FWPH in the context of _p_-BnB, a significant amount of computational time is spent by FWPH on generating the sets \((V_{0}^{s})_{s\in S}\) at the beginning of Algorithm 2 for the instances with a high number of scenarios. Hence, improving this procedure could bring new insight into _p_-BnB performance and convergence rate when using FWPH as a subroutine.
2303.06868
Deep Learning-based Eye-Tracking Analysis for Diagnosis of Alzheimer's Disease Using 3D Comprehensive Visual Stimuli
Alzheimer's Disease (AD) causes a continuous decline in memory, thinking, and judgment. Traditional diagnoses are usually based on clinical experience, which is limited by some realistic factors. In this paper, we focus on exploiting deep learning techniques to diagnose AD based on eye-tracking behaviors. Visual attention, as typical eye-tracking behavior, is of great clinical value to detect cognitive abnormalities in AD patients. To better analyze the differences in visual attention between AD patients and normals, we first conduct a 3D comprehensive visual task on a non-invasive eye-tracking system to collect visual attention heatmaps. We then propose a multi-layered comparison convolution neural network (MC-CNN) to distinguish the visual attention differences between AD patients and normals. In MC-CNN, the multi-layered representations of heatmaps are obtained by hierarchical convolution to better encode eye-movement behaviors, which are further integrated into a distance vector to benefit the comprehensive visual task. Extensive experimental results on the collected dataset demonstrate that MC-CNN achieves consistent validity in classifying AD patients and normals with eye-tracking data.
Fangyu Zuo, Peiguang Jing, Jinglin Sun, Jizhong Duan, Yong Ji, Yu Liu
2023-03-13T05:33:28Z
http://arxiv.org/abs/2303.06868v1
Deep Learning-based Eye-Tracking Analysis for Diagnosis of Alzheimer's Disease Using 3D Comprehensive Visual Stimuli ###### Abstract Alzheimer's Disease (AD) causes a continuous decline in memory, thinking, and judgment. Traditional diagnoses are usually based on clinical experience, which is limited by some realistic factors. In this paper, we focus on exploiting deep learning techniques to diagnose AD based on eye-tracking behaviors. Visual attention, as typical eye-tracking behavior, is of great clinical value to detect cognitive abnormalities in AD patients. To better analyze the differences in visual attention between AD patients and normals, we first conduct a 3D comprehensive visual task on a non-invasive eye-tracking system to collect visual attention heatmaps. We then propose a multi-layered comparison convolution neural network (MC-CNN) to distinguish the visual attention differences between AD patients and normals. In MC-CNN, the multi-layered representations of heatmaps are obtained by hierarchical convolution to better encode eye-movement behaviors, which are further integrated into a distance vector to benefit the comprehensive visual task. Extensive experimental results on the collected dataset demonstrate that MC-CNN achieves consistent validity in classifying AD patients and normals with eye-tracking data. Alzheimer's disease (AD), eye-tracking, visual attention, convolutional neural network. ## I Introduction Recently, the prevalence of Alzheimer's Disease (AD) has been increasing rapidly, with an estimated 106.8 million worldwide cases by 2050 [11]. AD is an irreversible and chronic neurodegenerative brain disorder characterized by abnormal cognitive symptoms, including speech and language impairments and memory decline [40]. As the most common cause of dementia, AD is estimated to be the fifth leading cause of death in elderly people. To date, treatments for AD only delay the disease onset or slow its progression. Thus, it is crucial to diagnose AD in the early stages by suitable biomarkers for a more aggressive therapy to overcome the symptoms and prevent its deterioration to dementia [47]. Currently, the clinical diagnoses of AD are usually made by experienced clinicians through neuropsychological tests [23], blood detections [43], and medical images [45]. Although these methods have become mainstream diagnostic methods for AD, several limitations still hinder their wider application. For example, neuropsychological tests generally take 10-15 min to complete under the guidance of professionally trained medical staff, which consumes significant time and manpower. Moreover, the collection of blood samples is invasive and causes some degree of harm to the patients. Medical imaging requires sophisticated instrumentation and manual analysis that is relatively inefficient. Therefore, finding new biomarkers that can be noninvasively detected in the early stage of AD, as well as analyzed by artificial intelligence techniques like deep learning, is highly desirable. To alleviate these issues, we propose a novel deep-learning-based approach to diagnose AD using eye-movement data. As biomarkers that can be detected in the early stage of AD, eye movements have shown great potential for AD diagnosis, as demonstrated in extensive literature [1, 6, 13, 28].
Since eye movements are easily collected by eye-tracking tools, a growing body of research provides evidence that eye-tracking is a simple and non-invasive method for AD diagnosis. For example, numerous studies have shown that abnormal saccades are associated with AD [25, 30, 32]. Various eye-movement tasks have been designed to capture the abnormal saccades in AD patients, including prosaccade tasks that require participants' eyes to move toward a visual stimulus, and antisaccade tasks that require subjects' eyes to move away from the presented stimulus [44]. It is noted that AD patients usually have longer delays in initiating saccades and decreased saccade amplitudes compared with normals [25]. In addition, since cognitive impairment in AD patients is often accompanied by visual attention deficits, attention abnormalities have been well characterized in detecting AD. Previous research also finds that eye-movement behavior reveals multiple visual attention processes, including emotional attention [9], visual exploration strategies [22, 39, 42], and color preferences [48, 52]. To explore the visual attention behaviors in AD patients, several visual tasks with eye-tracking have been designed in current studies, most of which require participants to respond to instructions verbally or behaviorally [33, 8, 41]. For example, Kawagoe et al. [33] designed a visual memory task on 18 patients with mild cognitive impairment (MCI), a prodromal stage of Alzheimer's disease, and 18 normals. Participants were asked to view a study stimulus for 3s followed by a 3-5s gap. Then a test stimulus pair was presented to them. Participants were required to respond whether either (left or right) or neither test stimulus was the same as the study stimulus by pressing one of three buttons. Mosimann et al. [41] conducted a clock reading task on 24 AD patients and 24 normals by presenting participants with clocks of different times and asking them to read and state the times. They decided whether to read the next clock by pressing the mouse buttons. In these tasks, results mainly focused on two indices of eye-movement data: fixation duration and the number of fixations in the divided Areas of Interest (AOI). By analyzing the experimental results, the researchers found that patients show different attention to the same stimulus. For instance, patients focused less on the mouth area of a face during the memory test. In the clock reading test, patients paid less attention to the clock hand areas. As indicated in the review literature, such tasks combined with eye-tracking technology show particularly promising results for the distinction between AD patients and normals. However, instructions can easily cause the subjects to be nervous and uneasy, resulting in less objective evaluation scores. Daffner et al. [15] conducted a viewing task without instructions by presenting participants with well-designed slides, and found that AD patients focus less on novel elements in a photograph. Deep-learning-based models have been shown to play a crucial role in identifying AD patients with high sensitivity [29, 35, 51]. These models are commonly used to deal with medical imaging such as positron emission computed tomography (PET) or magnetic resonance imaging (MRI), mainly because the feature representations produced by these models can be helpful even if the data is partially missing.
In addition to medical images, deep-learning models combined with eye-tracking technology have also shown good performance in detecting cognitive impairments in AD patients [46, 53]. Biondi et al. [6] built a deep neural network using trained autoencoders and a Softmax classifier that allows identifying AD patients with 89.78\(\%\) accuracy based on eye-tracking data. However, deep-learning models for AD recognition or classification based on eye-movement data are rare in existing studies due to the lack of large-scale eye-tracking datasets. Sun et al. [49] proposed a nested autoencoder model to identify AD patients and normals with an eye-movement dataset collected on a designed visual memory task, which showed 85\(\%\) average accuracy in AD recognition. The above encouraging results indicate that deep-learning-based models are promising for understanding the dynamics of eye-movement behavior and its relationship to the underlying cognitive impairments of AD patients. However, several limitations have hindered the application of such research. For example, due to the insufficiency of existing eye-tracking system functions and algorithms for 3D displays, most of the current studies concentrate on visual stimuli in 2D display form, and few studies have used a 3D display form, even though it has been pointed out that, for the same content, 3D stimuli can elicit more abundant eye movements and brain activity than 2D stimuli [2]. Moreover, further research is limited by a lack of large-scale eye-tracking datasets, since the performance of deep-learning-based approaches depends heavily on the training dataset's size. Motivated by the above limitations, we propose a multi-layered comparison convolution neural network (MC-CNN) to differentiate AD patients from normals with an eye-movement dataset collected on a designed 3D comprehensive visual task. The hypothesis is that using deep learning to identify the key characteristics of eye behavior during the 3D comprehensive visual task may lead to an accurate classification between AD patients and normals, which can provide meaningful information on the cognitive declination of the patients for clinicians. Our main contributions are summarized as follows: * We explore a novel approach to detect the visual attention deficits of AD patients by analyzing eye-tracking data recorded in a 3D comprehensive visual task, by which eye-movement data is promising to be used as a biomarker for early AD diagnosis; * A deep-learning-based MC-CNN model with a multi-layered feature extractor is proposed to capture the visual attention features efficiently. Using the integration of feature representations from different layers and global average pooling (GAP), the proposed model can transfer the diagnosis of AD to an automatic classification problem via artificial intelligence; * By comparing eye-movement heatmap pairs of subjects, an augmented dataset is constructed based on the collected eye-tracking data. The similarity information of visual attention features between individuals is learned by MC-CNN for better classification performance. The rest of this paper is organized as follows. Section II introduces the overall process of the experiment and its mathematical principles. Section III reports the experimental evaluation and analysis of our proposed method. Section IV gives the conclusion and future work.
## II Materials and Methods In this section, we present the materials and methods used in the study, including participants, eye-movement data collection and preprocessing, dataset augmentation, the framework of MC-CNN, classification models for comparison, and evaluation metrics. ### _Participants_ Eye-tracking data were obtained from a total of 106 participants, including 68 normals and 38 AD patients. The AD patients (23 female, 15 male) were 54\(-\)80 years old (68\(\pm\)7 years), and were recruited from the cognitive impairment clinics, Tianjin Huanhu Hospital, Tianjin, China ([http://www.tnsi.org/](http://www.tnsi.org/)). The diagnoses were based on NINCDS-ADRDA and included patient history, clinical impression, and brain pathology as detected by mini-mental state evaluation and structural imaging. Patients with uncorrected visual dysfunction, hearing loss, mental disorders, or other symptoms that made them unable to complete the proposed visual task or scale assessment were excluded. The normals were recruited from friends and relatives of patients who had no subjective or informant-based complaints of cognitive decline. A subgroup of normals was selected for comparison with AD patients and other normals. All eye-tracking data were presented in the form of heatmaps. The use of eye-tracking data for the analysis of AD patients' cognitive decline was conducted in accordance with the World Medical Association Declaration of Helsinki [3]. ### _Eye-Movement data collection and preprocessing_ We performed a preliminary analysis of eye-tracking data under the 3D comprehensive visual task to compare characteristic differences in visual attention between AD patients and normals, and then determined whether these differences could serve as a detecting tool to identify AD. #### Iii-B1 Eye-tracking system with 3D video stimuli Eye movements were recorded using a non-invasive eye-tracking system with stereo stimuli designed by Sun et al. [50], School of Microelectronics, Tianjin University, Tianjin, China. This system estimates gaze positions with an average error of 1.85cm/0.15m over the workspace volume 2.4m\(\times\)4.0m\(\times\)7.9m. It displays stereo stimuli at a resolution of 1920\(\times\)1080 pixels within a limited viewing zone, without requiring the viewer to wear any accessories, which gives a friendly and immersive 3D visual experience. The system collects eye-movement data binocularly with an eye-tracking module that can be freely adjusted in a 360-degree direction to meet the needs of different users. A calibration procedure is applied to all participants. The structure of the system and the demonstration of the eye-movement data collection process are presented in Fig. 1. #### Iii-B2 3D comprehensive visual task The 3D comprehensive visual task was applied based on the above-mentioned eye-tracking system. During the task, participants were seated in a chair in front of the device. As illustrated in Fig. 1, they had to rest their heads on the forehead and chin rest in order to minimize head movements. A repeated calibration procedure was used before each personal experimental block to track the gaze accurately. The calibration stimulus was a grid containing nine red dots at random locations that were displayed one at a time on a black background. During calibration, the participants were instructed to fixate on the dot on the screen and to move their eyes to the next dot.
Using these reference points, the system creates a mapping function that relates all eye positions to points in the calibration area. Each mark was displayed for 3s to ensure stability. Eye-movement recordings were conducted in a quiet room in order to keep viewers in a natural state. Following the calibration, participants performed the 3D comprehensive visual task. They were presented with a series of 3D images for a duration of 5s each, with a 1s black background between every two photographs, as shown in Fig. 2. A total of 9 images were used for testing, which contained different scenes, including natural scenes, human bodies, cartoon characters, etc. Participants were not given any instructions but were only required to freely view the images. It takes approximately 2 minutes on average to complete the entire visual task. #### Iii-B3 Heatmaps generation To intuitively represent the eye-movement data, a fixation map is constructed using Matlab. All measured fixation points for each image are overlapped into a map. Then, the generated fixation map is smoothed and normalized with a Gaussian kernel to generate a color-coded visualization, i.e., a heatmap. Every participant gets 9 heatmaps corresponding to the 9 photographs presented. These nine heatmaps are then stacked to form a final heatmap for the participant. All the heatmaps are used as the source input for the MC-CNN to extract discriminative feature representations for classifying AD patients and normals. The overall procedure of eye-movement data collection and heatmap generation is presented in Fig. 2. Since AD patients and normals pay different attention to the same visual stimuli, their heatmaps show obvious differences. Several sample heatmaps of 3 AD patients and 3 normals that were randomly selected from the participants are illustrated in Fig. 3. As can be observed, the visual attention represented by heatmaps of normal participants shows greater similarity; the differences between AD patients and normals, however, are apparent. For example, the normal participants are usually good at capturing the main bodies in a picture, especially the human bodies and cartoon characters, while the patient participants pay more attention to the background areas. The preferences for color tones in a picture are also different between AD patients and normals. This particular visual attention phenomenon is further evidence of AD patients' visual attention deficits. Therefore, it is a reasonable hypothesis that eye-movement data in the comprehensive visual task can reflect the attention processes occurring in the participant's brain. Eye movements of normals in the task are roughly similar, and eye movements of AD patients show obvious differences. This is of great value in supporting clinical practice in detecting and diagnosing AD through visual attention measured by eye tracking. Fig. 1: The structure of the non-invasive eye-tracking system with stereo stimuli and the eye-movement data collection process based on the system. Fig. 2: The workflow of eye-tracking data collection and heatmap generation. ### _Dataset augmentation_ The training dataset size is essential to deep-learning-based approaches. A larger training dataset can provide more details to a deep-learning-based classification model for generalizing patterns of the samples. In this experiment, however, the size of the eye-movement dataset is relatively limited due to patients' privacy as well as organizational challenges.
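As an aside, the heatmap-generation step (Iii-B3) above can be sketched in a few lines. The following minimal Python sketch (ours; the paper uses Matlab and does not report the kernel bandwidth, so the \(\sigma\) value and stimulus resolution are illustrative assumptions) accumulates fixation points into a map and smooths them with a Gaussian kernel:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heatmap(fixations, height=1080, width=1920, sigma=40):
    """Accumulate fixation points into a map, then smooth and normalize.

    `fixations` is an (N, 2) array of (x, y) gaze coordinates in pixels.
    `sigma` (pixels) is a hypothetical smoothing bandwidth.
    """
    fmap = np.zeros((height, width))
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            fmap[yi, xi] += 1.0  # overlap all fixation points into one map
    smooth = gaussian_filter(fmap, sigma=sigma)
    return smooth / smooth.max() if smooth.max() > 0 else smooth
```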
Compared to patients, eye-movement data of normals is easier to collect, which usually causes an unbalanced distribution of the samples in the dataset. To address the problem presented above, we have augmented the training dataset based on the finite eye-tracking data of the recruited participants, which provides a larger and more balanced training dataset for the model. Of the 68 normal participants we recruited, heatmaps of 30 individuals are selected to form a normal group. The remaining heatmaps of 38 AD patients and 38 normals respectively form the AD group and the other normal group. Every subject in the AD group and the normal group is combined with a normal subject in the other normal group to form a combination, which expands the size of the training dataset to 2280. By pairwise organizing the heatmaps, the number of combinations is roughly twenty times the number of original heatmaps. As shown in Fig. 4, the similarities of heatmaps in normal\(\&\)normal combinations are higher than in AD\(\&\)normal combinations. Therefore, AD\(\&\)normal combinations are labeled as 0 while normal\(\&\)normal combinations are labeled as 1. The augmented heatmap dataset is then fed into MC-CNN to explore the discriminative features between the AD\(\&\)normal combinations and the normal\(\&\)normal combinations. The labeled augmented heatmap dataset is divided into three subsets in the ratio of 6:2:2, namely a training set, a validation set, and a test set, as shown in TABLE I. The training set is learned by MC-CNN, allowing the model to update its parameters with a backpropagation procedure, which is called the convergence process. The validation set is used to determine the current converging state of the model, while the test set is used to confirm its final performance. ### _Framework of MC-CNN_ This article proposes a novel classification mechanism by which visual attention in heatmaps can be classified in an end-to-end manner. As shown in Fig. 5, MC-CNN consists of a convolutional feature extractor and a predictor. After data augmentation (see Section II-C), the heatmaps are fed into MC-CNN. The proposed model learns the similarity information between heatmap pairs for classification. More specifically, the feature extractor aims to capture distinguishing feature representations in heatmaps. Then the feature representations are used to calculate the similarity between the heatmap pairs that come, respectively, from a normal and an unknown class. To utilize more diverse features, from shallow to deep convolutional features, multi-layered feature representation is introduced based on hierarchical residual blocks to form the feature extractor. While the feature extractor extracts feature maps of the input heatmaps, the global average pooling (GAP) layer transforms the feature maps into feature vectors. In this manner, the distance between feature vectors extracted from heatmap pairs can represent the similarity information. MC-CNN is designed with the following two improvements: 1) Residual blocks and the integration of multi-layered representations are exploited in the feature extractor so that low-level features can be reused, and the vanishing gradient problem can be prevented; 2) GAP is used to generate feature vectors without adding parameters, thus avoiding overfitting in this layer. #### Ii-D1 Multi-layered feature representation and integration The feature extractor is designed to explore multi-layered feature representations of heatmap combinations from shallow to deep layers.
Shallower layers extract basic features such as intensity, color, and shapes, while deeper layers learn semantically stronger feature representations. All the features are then fused to represent the visual attention information of a heatmap combination. We take as an example a heatmap combination \(\textbf{H}=[\textbf{X}_{0},\textbf{X}_{1}]\), where the matrices \(\textbf{X}_{0},\textbf{X}_{1}\in R^{C\times H\times W}\). \(\textbf{X}_{0}\) and \(\textbf{X}_{1}\) encode the stacked heatmaps of the unknown viewer (AD patient or normal) and the normal in a combination, respectively. \(C\) represents the color channel dimension of the heatmap matrices, while \(H\) and \(W\) respectively represent the height and width dimensions of the heatmaps. The function of the feature-extracting layers can be defined as \[\{\textbf{F}_{i}^{0},\textbf{F}_{i}^{1},\dots,\textbf{F}_{i}^{j}\}=f_{ext}(\textbf{X}_{i}),i\in\{0,1\}, \tag{1}\] where \(f_{ext}(\cdot)\) is the function of the feature extractor, and \(\textbf{F}_{i}^{j}\) is the feature map obtained from the \(i\)th viewer in the combination on the \(j\)th layer. GAP layers are connected at the end of each layer of the feature extractor to encode the feature maps into feature vectors, \[\textbf{v}_{i}^{j}=f_{GAP}(\textbf{F}_{i}^{j}),i\in\{0,1\}, \tag{2}\] where \(f_{GAP}(\cdot)\) is the function of the GAP layer and \(\textbf{v}_{i}^{j}\) encodes the feature vector of the \(i\)th viewer in the combination on the \(j\)th layer. The distances between the feature vectors of the two viewers in a combination are collected into a distance vector **d**, which represents the similarity between the heatmaps of the two viewers, \[\textbf{d}=[d^{0},d^{1},\dots,d^{j},\dots], \tag{3}\] \[d^{j}=f_{dis}(\textbf{v}_{0}^{j},\textbf{v}_{1}^{j}), \tag{4}\] where \(f_{dis}(\cdot)\) is the function calculating the relevance of two feature vectors and \(d^{j}\) encodes the relation of the two heatmaps in the combination on the \(j\)th layer. In the distance vector **d**, the relevances are arranged by the depth of the convolution layers. Vector **d** is fed to the predictor for feature integration. #### Ii-D2 Classification for diagnosis The distance vector **d** obtained from the feature extractor is then fed into an MLP that performs dimensionality reduction to learn the similarity information. A Softmax regression layer is connected at the end of the MLP for binary classification. The 2-layered MLP consists of two perceptron layers that compress the distance vector **d** into a 2-dimensional output vector **s**, \[\textbf{s}=[s_{0},s_{1}]=\textbf{W}_{2}ReLU(\textbf{W}_{1}\textbf{d}+\textbf{b}_{1})+\textbf{b}_{2}, \tag{5}\] where \(\textbf{W}_{1}\), \(\textbf{W}_{2}\) and \(\textbf{b}_{1}\), \(\textbf{b}_{2}\) represent the weight matrices and bias vectors of the first and second perceptron layers, and \(ReLU(\cdot)\) is the activation function of the first perceptron layer. The vector **s** is sent to the Softmax logistic regression layer. The output of the Softmax layer is defined as \(\hat{\textbf{l}}\): \[\hat{\textbf{l}}=[\hat{l_{0}},\hat{l_{1}}]=Softmax(\textbf{s}) \tag{6}\] \[\hat{l_{i}}=P(Y=i|\textbf{s})=\frac{e^{s_{i}}}{e^{s_{0}}+e^{s_{1}}},i\in\{0,1\} \tag{7}\] where \(P(Y=i|\textbf{s})\) is the probability that the combination is classified as \(Y\) given the input vector **s**.
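A minimal PyTorch sketch of this pipeline may help fix ideas (ours, for illustration only: the backbone, the Euclidean distance for \(f_{dis}\), the hidden width of the MLP, and the input channel handling are our assumptions; the paper only specifies a five-stage residual extractor, GAP, a 2-layered MLP, and a Softmax):

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MCCNN(nn.Module):
    """Sketch of MC-CNN: shared multi-layered extractor + distance predictor."""

    def __init__(self, n_layers=5):
        super().__init__()
        resnet = models.resnet34(weights=None)
        # Five feature-extracting stages, from shallow to deep.
        self.stages = nn.ModuleList([
            nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool),
            resnet.layer1, resnet.layer2, resnet.layer3, resnet.layer4,
        ])
        self.gap = nn.AdaptiveAvgPool2d(1)  # GAP: adds no trainable parameters
        self.mlp = nn.Sequential(nn.Linear(n_layers, 16), nn.ReLU(), nn.Linear(16, 2))

    def features(self, x):
        vecs = []
        for stage in self.stages:
            x = stage(x)
            vecs.append(self.gap(x).flatten(1))  # feature vector v^j per layer
        return vecs

    def forward(self, x0, x1):
        v0, v1 = self.features(x0), self.features(x1)
        # Distance vector d: one relevance value d^j per extractor layer.
        d = torch.stack([torch.norm(a - b, dim=1) for a, b in zip(v0, v1)], dim=1)
        return torch.softmax(self.mlp(d), dim=1)  # (l0_hat, l1_hat)
```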
\(Y=0\) means that the unknown viewer in the input combination is an AD patient, while \(Y=1\) means he/she is a normal. Since the higher \(\hat{l_{1}}\) is, the greater the probability that the unknown viewer is a normal, we define \(\hat{l_{1}}\) as the similarity between the two heatmaps of the input combination **H**. The label of a combination is set as \(l\in\{0,1\}\), which also serves as the target numerical value that the similarity \(\hat{l_{1}}\) needs to fit. The fitting loss function is the binary cross-entropy: \[loss=-\left[l\log\hat{l_{1}}+(1-l)\log(1-\hat{l_{1}})\right] \tag{8}\] In the convergence procedure, the parameters of MC-CNN are updated by backward gradient descent, which propagates the gradient information of each parameter through the entire MC-CNN and optimizes all the parameters according to the fitting loss. Parameters are updated in each training pass over all items in the training dataset: \[\mathbb{W}=\{w_{0},w_{1},\dots,w_{i},\dots\}=\mathbb{W}-\alpha\frac{\partial loss}{\partial\mathbb{W}}, \tag{9}\] where \(\mathbb{W}\) refers to all parameters that can be updated and \(\alpha\) decides the rate of parameter updating, namely the learning rate. With enough epochs, all parameters will be updated until the loss function reaches its minimum value, which indicates that the proposed model is at its optimal state, best fitting the predictions to the labels. Fig. 5: An illustration of the MC-CNN structure, consisting of two main components: feature extractor and predictor. ### _Classification models for comparison_ We compared our proposed scheme with several existing state-of-the-art classification methods based on the original/augmented heatmap dataset, as shown in Fig. 6. For the original heatmap dataset, we employed several traditional convolutional neural networks for classification. Feature fusion and classification are carried out by the inner structure of each network. As for the augmented heatmap dataset, we adopted the convolutional feature-extracting module inside the networks to form the feature extractor. We then compared the feature representations on different layers to obtain distances for the input heatmap combinations, just as in MC-CNN. The final classification was also performed by the same predictor as in MC-CNN. The networks used for comparison included AlexNet, GoogLeNet, VGG11, and ResNet34: * _AlexNet_. AlexNet attempts to capture new and unusual features of the input images for classification using an architecture of 5 convolutional and 3 fully connected layers, which adopts several effective techniques such as Rectified Linear Units (ReLUs), local response normalization, and dropout for reducing training time and preventing overfitting; * _GoogLeNet_. GoogLeNet is a deep convolutional neural network architecture designed to address general image classification and detection problems, in which an efficient deep neural network architecture for computer vision, codenamed Inception, is utilized to maintain performance within limited computational resources; * _VGG11_. VGG11 is a deep hierarchical configuration of stacked convolutional layers followed by three fully connected layers, which has developed the depth of convolutional architectures via very small convolution filters in all layers; * _ResNet34_. ResNet34 contains a stack of residual network architectures that have addressed the common degeneration problem of deep convolutional neural networks.
In ResNet34, a deep residual learning framework is introduced to ease the training of networks with considerably increased depth and gain accuracy. ### _Evaluation metrics_ In this experiment, we introduced four assessment criteria, i.e., accuracy, recall, precision, and F1-score, to evaluate the performance of MC-CNN. The metrics are calculated according to Eqs. (10)-(13) below: \[Accuracy=\frac{TP+TN}{TP+TN+FP+FN} \tag{10}\] \[Recall=\frac{TP}{TP+FN} \tag{11}\] \[Precision=\frac{TP}{TP+FP} \tag{12}\] \[F1\text{-}score=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{13}\] where TP, FP, TN, and FN are the calculated true positives, false positives, true negatives, and false negatives, respectively. In this paper, we took AD patients as the positive samples and normals as the negative samples. ## III Results ### _Convergence analysis_ The values of the loss function and accuracy in the training phase and validation phase are logged in diagrams, presenting the process of model convergence. Training and validation loss descend gradually with the number of epochs; meanwhile, training and validation accuracy increase gradually and then stabilize around a certain value, as illustrated in Fig. 7. ### _Adjustment of parameters_ #### Iii-B1 Learning Rate The learning rate operates as the speed of gradient descent, to which the performance of the model is sensitive. We tried different learning rate values ranging in magnitude from \(10^{-5}\) to \(10^{-2}\) in the network. We evaluated the accuracy under different learning rates on several heatmaps for testing, and the result is shown in Fig. 8. Considering the characteristics of the learning rate value, the statistical data was logged on a logarithmically increasing scale. From Fig. 8, the best results are distributed around the value \(10^{-3}\). This gives the appropriate range within which to fine-tune the learning rate. Fig. 6: The main framework of the networks separately based on the original heatmap dataset and the augmented heatmap dataset for comparisons. Fig. 7: The procedures of model convergence and validation. #### Iii-B2 Number of convolutional feature-extracting layers As the convolutional feature-extracting layers go deeper, their feature maps represent information of different semantic strengths. An appropriate number of convolution layers can bring the most accurate classification results. In this experiment, the feature extractor of MC-CNN generates feature representations on five layers, which are coded from 0 to 4. The performance of different combinations of feature-extracting layers is shown in TABLE II, from which we can see that the more feature representations are integrated, the better performance MC-CNN achieves. Therefore, the MC-CNN model is finally set to integrate feature representations from 5 feature-extracting layers. To ensure the robustness of the model, we used three-fold cross-validation to check the performance of MC-CNN. The accuracy of the convergence, validation and test phases of the three folds is presented in TABLE III. ### _Classification results of different models_ We tested the performance of the networks for comparison (see Section II-E) on the same test dataset by applying three-fold cross-validation. Table IV reports the classification performances of our proposed method and the other classification models separately based on the original/augmented heatmap dataset.
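As a side note, the four criteria of Eqs. (10)-(13) are straightforward to compute from the confusion-matrix counts; a minimal Python sketch (ours, with hypothetical counts for illustration):

```python
def classification_metrics(tp, fp, tn, fn):
    """Eqs. (10)-(13); AD patients are the positive class."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1

# Hypothetical counts, for illustration only:
print(classification_metrics(tp=30, fp=6, tn=32, fn=8))
```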
Fig. 9 shows the classification results of 16 participants (8 AD patients and 8 normals) that were randomly selected from the test dataset, calculated by three different models. The 8 AD patients and 8 normals are numbered sequentially from 1 to 8 on the abscissa axis, and are represented by spots of two shapes (as shown in Fig. 9). The ordinate value of each individual represents the similarity value (for the augmented dataset) or the output value before the classifier (for the original dataset). From Fig. 9, it can be seen that the classification models perform better on the augmented dataset compared to the original dataset, which is manifested as larger distances between the two kinds of spots. It is worth noting that AlexNet (with a simple convolutional feature extractor) performs the worst, while ResNet34 (with a residual convolutional feature-extracting structure) shows the best classification result. Fig. 8: Test accuracy with different learning rate values. Fig. 9: The classification performances of three comparison models on 16 subjects (8 patients and 8 normals) respectively based on the original/augmented dataset. The following observations can be made from TABLE IV: 1) Our proposed MC-CNN performs the best among all the comparative models. 2) AlexNet, as the network with the fewest convolutional layers, performs the worst on the original dataset, indicating that feature representations from shallow feature-extracting layers alone are insufficient to accurately separate AD patients from normals. 3) After employing the dataset augmentation, AlexNet, VGG11, and ResNet34 provide a significant improvement in classification accuracy. 4) Although GoogLeNet shows no improvement in accuracy, its recall value improves on the augmented dataset, which is also an important indicator in disease detection. 5) The AD patients and normals can be better discriminated on the augmented dataset, which further verifies the effectiveness of data augmentation and the proposed MC-CNN model. ### _Ablation experiment for the modules in MC-CNN_ To more comprehensively evaluate the performance of MC-CNN, we designed an ablation experiment to verify the importance of each module of the MC-CNN. Since the MC-CNN extracts multi-layered feature representations using multiple residual blocks and GAP (as shown in Fig. 5) based on the augmented dataset, we design the ablation experiment by replacing them with other modules. For the augmented dataset, we replace it with the original dataset. As for the multiple representations, we change them to a single representation of the last residual block. Moreover, we change the GAP layers into fully-connected layers. Thus, there is a total of four models for comparison: Model1 (model based on the original dataset), Model2 (model with feature representations from a single residual layer), Model3 (model with fully-connected layers rather than GAP layers), and Model4 (the proposed MC-CNN model). The design of the ablation experiment is presented in Fig. 10, and TABLE V indicates the performances of the four models. As shown in TABLE V, the four models show different performances in classifying AD patients and normals. Among them, Model4 shows the best performance, achieving 0.827\(\pm\)0.001 accuracy. Model1, which is based on the original dataset rather than the augmented dataset, shows the worst classification accuracy of 0.723\(\pm\)0.014.
By replacing the multi-layered feature extractor and the GAP layers in turn, the classification accuracy of the two models (Model2 and Model3) is decreased by 0.017 and 0.089, respectively. The reasons for the differences in the performance of these models can be attributed to the following points. First, for the multi-layered feature extractor: by comparing the multiple feature representations of heatmaps, the model learns the shallower representations as supplementary information, which can provide the classifier with more comprehensive features and thus achieve higher accuracy. Second, using the GAP layers rather than traditional fully-connected layers for feature integration introduces no parameters to optimize, so the overfitting problem is alleviated. Most importantly, with the augmented dataset, the size of the training dataset is significantly enlarged, which allows the proposed model to learn the similarity features between the heatmaps in combinations more thoroughly. The original heatmap dataset, however, is not large enough to support a model with sophisticated structures. ## IV Conclusion and Future Works In this paper, we have proposed a novel deep-learning-based classification framework to alleviate the low efficiency, invasiveness, and manpower requirements in the early diagnosis of AD. By taking advantage of a 3D comprehensive visual task and a multi-layered convolutional neural network, we efficiently integrated the characteristic eye-tracking features extracted from AD patients and normals into a distance representation and achieved enhanced, robust classification results for analyzing visual attention deficits in AD patients. We also designed a dataset augmentation method to enlarge the size of the training dataset for more sufficient model learning. Experimental results on the recruited subjects demonstrated that our proposed scheme obtains superior performance over the state-of-the-art methods. In the future, we will build a larger and more comprehensive eye-movement dataset by including AD patients with different degrees of cognitive impairment. Besides, different models will be designed to analyze more eye-movement features with the updated dataset. ## Acknowledgement The authors would like to thank all of the study participants and their families who participated in the research on Alzheimer's disease.
2305.02980
Equivariant derived category of a reductive group as a categorical center
We prove that the adjoint equivariant derived category of a reductive group $G$ is equivalent to the appropriately defined monoidal center of the torus-equivariant version of the Hecke category. We use this to give new proofs, independent of sheaf-theoretic set up, of the fact that the Drinfeld center of the abelian Hecke category is equivalent to the abelian category of unipotent character sheaves; and of a characterization of strongly-central sheaves on the torus.
Roman Bezrukavnikov, Andrei Ionov, Kostiantyn Tolmachov, Yakov Varshavsky
2023-05-04T16:44:37Z
http://arxiv.org/abs/2305.02980v3
# Equivariant derived category of a reductive group as a categorical center ###### Abstract We prove that the adjoint equivariant derived category of a reductive group \(G\) is equivalent to the appropriately defined monoidal center of the torus-equivariant version of the Hecke category. We use this to give new proofs, independent of sheaf-theoretic set up, of the fact that the Drinfeld center of the abelian Hecke category is equivalent to the abelian category of unipotent character sheaves; and of a characterization of strongly-central sheaves on the torus. ###### Contents * 1 Introduction * 1.1 Organization of the paper. * 1.2 Acknowledgments. * 2 Equivariant derived category of \(G\) as a categorical center. * 2.1 Notations. * 2.2 Hecke categories. * 3 Sheaves on a group as comodules over the Springer comonad * 3.1 Comonads in triangulated categories. * 3.2 Springer comonad. * 3.3 Springer comonad and bimodule structure. * 4 Proof of the main Theorem. * 4.1 Centralizer category as a category of comodules. * 4.2 Proof of Proposition 4.1.1 * 4.3 Proof of Proposition 4.1.2. * 5 Character sheaves as a categorical center * 5.1 Some facts about monodromic Hecke categories. * 5.2 Monodromic categories and character sheaves. * 5.3 DG models for monodromic categories. * 5.4 Abelian category of character sheaves as a categorical center. * 5.5 Vanishing result for central sheaves on a torus. * A Sheaves on Vinberg semigroup * A.1 General facts about convolution on semigroups. * A.2 Vinberg semigroup and the horocycle space. * A.3 Radon transform and convolution on monodromic categories. ## 1 Introduction Let \(G\) be a connected reductive group, either over \(\mathbb{C}\) or split over a finite field. In this paper, we study the constructible derived category of sheaves on \(G\), equivariant with respect to the adjoint action of \(G\) on itself, which we denote \(D^{b}(G/_{\mathrm{Ad}}G)\), not specifying the precise sheaf-theoretic setting for now. The category \(D^{b}(G/_{\mathrm{Ad}}G)\) is well-known to be a braided monoidal category. We relate this category to the appropriately defined center of another triangulated monoidal category \(\mathcal{H}^{(1)}\), which we call the \(T\)-equivariant Hecke category (for the adjoint action of a maximal torus \(T\subset G\)). Similarly, we relate the derived category \(D^{b}_{\mathfrak{C}}(G)\subset D^{b}(G/_{\operatorname{Ad}}G)\) of unipotent character sheaves on \(G\) to the appropriately defined center of the monodromic \(T\)-equivariant Hecke category \(\mathcal{H}^{(1)}_{mon}\). The idea to relate the category of character sheaves to the categorical center of the monoidal Hecke category is not new. In [1], the first named author, Finkelberg and Ostrik proved that the abelian category \(\mathfrak{C}\) of unipotent character sheaves is equivalent to the Drinfeld center of the abelian category of Harish-Chandra D-modules. We reprove their result and extend it to other sheaf-theoretic settings. In [1], the authors prove that the appropriately defined \(\infty\)-category of unipotent character sheaves is the categorical center of both equivariant and monodromic versions of the \(\infty\)-Hecke category in the setting of \(D\)-modules. While this work was in final stages of preparation, preprint [10] appeared, where the Drinfeld center of the graded, ungraded or mixed finite Hecke category is computed in the \(\ell\)-adic case in an \(\infty\)-categorical setting.
Our approach is different in that we don't leave the world of triangulated categories (except when considering the statement about the abelian Drinfeld center). Our results are also valid uniformly in multiple sheaf-theoretic contexts, in particular for modular derived categories. We deduce our results from a statement valid for all equivariant sheaves on the group (Theorem 2.2.1). In an upcoming work, the second and third named authors are planning to use this fact to define a mixed Koszul self-duality for the derived category of character sheaves. This has applications, in particular, to symmetries in Khovanov-Rozansky homology theories, and the generalizations of those theories, as in [1]. A two-periodic version of such a Koszul duality equivalence in characteristic \(0\) appears in [1]. Finally, we use our results to give a new proof of a variant of the vanishing result proved in [1], eliminating a restriction on the characteristic. We also mention that the description of the asymptotic abelian category of character sheaves as a categorical center was obtained in [12], [13]. While this paper works with unipotent monodromy when treating monodromic objects, the results of [1] are stated in terms of monodromic \(D\)-modules with arbitrary monodromy. We are planning to treat this case in the next version of the manuscript. ### Organization of the paper. In Section 2 we define the Hecke categories we are working with, the variant of the categorical center we are using, and state our main result for triangulated categories. In Section 3 we recall the notion of a separable comonad and relevant results regarding comodules over comonads in triangulated categories, following [1]. In Section 4 we prove our main result for triangulated categories. In Section 5 we recall some results about monodromic categories and character sheaves, state the variant of the main result for the derived category of character sheaves and prove the result about the Drinfeld center of the abelian Hecke category. We also study vanishing properties for central sheaves on a torus. In Appendix A, following an idea of V. Drinfeld, we give a proof of the t-exactness of convolution on the horocycle space using the properties of convolution on the Vinberg semigroup attached to \(G\). ### Acknowledgments. We would like to thank Grigory Papayanov and Xin Jin for helpful discussions. R.B. and Y.V. were partially supported by the BSF grant 2020189. A.I. was funded by RFBR, project number 20-01-00579. K.T. is supported by the EPSRC programme grant EP/R034826/1. Y.V. was partially supported by the ISF grant 2019/21. ## 2 Equivariant derived category of \(G\) as a categorical center. ### Notations. Fix primes \(\ell\neq p\). Let \(q=p^{n}\) for \(n\in\mathbb{Z}_{>0}\), and write \(\mathbb{F}_{q}\) for the finite field with \(q\) elements. We will work with the following sheaf-theoretic settings. 1. For a stack \(X\) defined over \(\mathbb{C}\), let \(D^{b}(X)\) be the bounded algebraically constructible derived category of sheaves on \(X\) with analytic topology, with coefficients in \(\mathbb{k}=\mathbb{C}\) (more generally, any field of characteristic \(0\)). 2. For a stack \(X\) defined over \(\mathbb{C}\), let \(D^{b}(X)\) be the bounded derived category of holonomic \(D\)-modules on \(X\). 3. 
For a stack \(X\) defined over \(\mathbb{C}\), let \(D^{b}(X)\) be the bounded constructible derived category of sheaves on \(X\) with coefficients in \(\mathbb{k}=\mathbb{F}_{q}\) (more generally, any field \(\mathbb{k}\) of characteristic \(p\)). In settings (I), (II), (III) we use the definition of sheaves on stacks as in [1]. 4. For a stack \(X\) defined over an algebraically closed field of characteristic \(p>0\), let \(D^{b}(X)\) be the bounded constructible derived category of \(\overline{\mathbb{Q}}_{\ell}\)-sheaves on \(X\). 5. For a stack \(X\) defined over \(\mathbb{F}_{q}\), let \(D^{b}(X)\) be the bounded constructible mixed derived category of \(\overline{\mathbb{Q}}_{\ell}\)-sheaves on \(X\). In settings (IV), (V), we use the definition of sheaves on stacks as in [11]. We define \(\Bbbk\) to be the field \(\mathbb{C}\) in the settings (I) and (II), any field of characteristic \(p\) in the setting (III) and \(\overline{\mathbb{Q}}_{\ell}\) in the settings (IV), (V). We write pt for the spectrum of the field of definition of our stacks in each of the cases. We write \(\underline{\Bbbk}_{X}\) for the constant sheaf on \(X\) (the D-module \(\mathcal{O}_{X}\) in (II)). ### Hecke categories. Let \(G\) be a split connected reductive group defined over \(\mathbb{F}_{q}\) (in cases (IV), (V)) or a connected reductive group over \(\mathbb{C}\) (in all other cases). Fix a Borel subgroup \(B\subset G\) and a split maximal torus \(T\subset B\). Let \(U\) be the unipotent radical of \(B\). Let \(X=G/B\) be the flag variety of \(G\) and \(Y=G/U\) be the basic affine space of \(G\). The natural projection \(Y\to X\) is a \(T\)-torsor with respect to the right multiplication action of \(T\) on \(G/U\). Consider the right diagonal action of \(T\) on \(Y^{2}\). Let \[\mathcal{Y}=(Y\times Y)/T\] and let \(G\) act on \(\mathcal{Y}\) by the diagonal left multiplication. The categories \(D^{b}(\mathcal{Y})\), \(D^{b}(G\backslash\mathcal{Y})\) are equipped with a monoidal structure via the following formula. Consider the diagram (1) Here \(p_{ij}\) stands for the projection \(Y^{3}\to Y^{2}\) along the factors \(i,j\), and \(T\) acts on \(Y^{3}\) diagonally on the right, while \(G\) acts diagonally on the left. We define the convolution product \(-\star_{1}-\) on \(D^{b}(\mathcal{Y})\) or \(D^{b}(G\backslash\mathcal{Y})\) as \[\mathcal{A}\star_{1}\mathcal{B}:=p_{13!}(p_{12}^{*}\mathcal{A}\otimes p_{23}^{*}\mathcal{B}).\] Denote \(\mathcal{H}^{(1)}:=D^{b}(G\backslash\mathcal{Y})\). The category \(\mathcal{H}^{(1)}\) has an alternative description as a (mixed, in the relevant settings) derived category of \(\operatorname{Ad}T\)-equivariant sheaves on \(U\backslash G/U\). Let \(\mathbf{1}\) stand for the monoidal unit of this category. Consider the space \(\mathcal{Y}^{(2)}=Y^{4}/T^{2}\), where the right \(T^{2}\)-action on \(Y^{4}\) is defined by \[(x_{1}U,x_{2}U,x_{3}U,x_{4}U)\cdot(t,z)=(x_{1}tU,x_{2}zU,x_{3}zU,x_{4}tU). \tag{2}\] Let \(G^{2}\) act on \(\mathcal{Y}^{(2)}\) via the formula \[(g,h)\cdot(x_{1}U,x_{2}U,x_{3}U,x_{4}U)=(gx_{1}U,gx_{2}U,hx_{3}U,hx_{4}U).\] The categories \(D^{b}(\mathcal{Y}^{(2)}),D^{b}(G^{2}\backslash\mathcal{Y}^{(2)})\) are equipped with monoidal structures via the following formula.
Consider the diagram Here \(p_{ijkl}\) stands for the projection \(Y^{6}\to Y^{4}\) along factors \(i,j,k,l\) and \(T^{3}\) acts on \(Y^{6}\) on the right according to the formula \[(x_{1}U,x_{2}U,x_{3}U,x_{4}U,x_{5}U,x_{6}U)\cdot(u,v,w)=(x_{1}uU,x_{2}vU,x_{3}wU,x_{4}wU,x_{5}vU,x_{6}uU),\] while \(G^{2}\) acts on \(Y^{6}\) on the left according to the formula \[(g,h)\cdot(x_{1}U,x_{2}U,x_{3}U,x_{4}U,x_{5}U,x_{6}U)=(gx_{1}U,gx_{2}U,gx_{3}U,hx_{4}U,hx_{5}U,hx_{6}U).\] Define the convolution product \(-\star_{2}-\) on \(D^{b}(\mathcal{Y}^{(2)})\) or \(D^{b}(G^{2}\backslash\mathcal{Y}^{(2)})\) as \[\mathcal{A}\star_{2}\mathcal{B}:=p_{1346!}(p_{1256}^{*}\mathcal{A}\otimes p_{2345}^{*}\mathcal{B}). \tag{3}\] Denote \(\mathcal{H}^{(2)}:=D^{b}(G^{2}\backslash\mathcal{Y}^{(2)})\). The category \(\mathcal{H}^{(1)}\) is a module category for the monoidal category \(\mathcal{H}^{(2)}\). The action is given by the "two-sided convolution" defined via the following formula. Consider the diagram For \(\mathcal{B}\in\mathcal{H}^{(2)},\mathcal{A}\in\mathcal{H}^{(1)}\), we define \[\mathcal{B}\bowtie\mathcal{A}=p_{14!}(\operatorname{For}_{G}^{G^{2}}(\mathcal{B})\otimes p_{23}^{*}(\mathcal{A})).\] Here \(\operatorname{For}_{G}^{G^{2}}\) stands for the functor forgetting the equivariance with respect to the diagonal embedding \(G\to G^{2}\). Recall from [1, Chapter 7] that for a monoidal category \(\mathcal{A}\) and a category \(\mathcal{M}\) we have a notion of \(\mathcal{M}\) being a (left or right) module category over \(\mathcal{A}\). If \(\mathcal{M},\mathcal{N}\) are two module categories over \(\mathcal{A}\), let \(\operatorname{Fun}_{\mathcal{A}}(\mathcal{M},\mathcal{N})\) stand for the category of \(\mathcal{A}\)-module functors from \(\mathcal{M}\) to \(\mathcal{N}\), see Definitions 7.2.1 and 7.2.2 from [1]. Write \(\mathcal{ZH}^{(1)}:=\operatorname{Fun}_{\mathcal{H}^{(2)}}(\mathcal{H}^{(1)},\mathcal{H}^{(1)})\) for the category of module-endofunctors of \(\mathcal{H}^{(1)}\) over \(\mathcal{H}^{(2)}\). Recall that, by definition, an object of \(\mathcal{ZH}^{(1)}\) is a functor \(F:\mathcal{H}^{(1)}\to\mathcal{H}^{(1)}\) together with natural isomorphisms \[s^{F}_{\mathcal{B},\mathcal{A}}:\mathcal{B}\bowtie F(\mathcal{A})\xrightarrow{\ \sim\ }F(\mathcal{B}\bowtie\mathcal{A}),\] satisfying certain compatibility conditions. A morphism between functors \(F,G\) in \(\mathcal{ZH}^{(1)}\) is a natural transformation \(\tau:F\to G\) such that the diagrams of the form (4) are commutative. Let \(G/_{\operatorname{Ad}}G\) stand for the quotient stack of \(G\) with respect to the adjoint action. The category \(D^{b}(G/_{\operatorname{Ad}}G)\) is monoidal with respect to the convolution operation defined by the following formula. Consider the diagram For \(\mathcal{A},\mathcal{B}\in D^{b}(G/_{\operatorname{Ad}}G)\), we define \[\mathcal{A}\star\mathcal{B}=m_{!}(p_{1}^{*}\mathcal{A}\otimes p_{2}^{*}(\mathcal{B})).\] Our main result is **Theorem 2.2.1**.: _In all of the settings (I)-(V), there is an equivalence of monoidal categories_ \[\tilde{a}:D^{b}(G/_{\operatorname{Ad}}G)\xrightarrow{\ \sim\ }\mathcal{ZH}^{(1)}. \tag{5}\] Note that a priori \(\mathcal{ZH}^{(1)}\) is not a triangulated category. Define the evaluation functor \[\varepsilon:\mathcal{ZH}^{(1)}\to\mathcal{H}^{(1)},F\mapsto F(\mathbf{1}),\] where \(\mathbf{1}\) stands for the monoidal unit in \(\mathcal{H}^{(1)}\).
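For concreteness: writing \(\alpha_{\mathcal{B},\mathcal{B}^{\prime},M}:(\mathcal{B}\star_{2}\mathcal{B}^{\prime})\bowtie M\xrightarrow{\ \sim\ }\mathcal{B}\bowtie(\mathcal{B}^{\prime}\bowtie M)\) for the associativity constraint of the \(\mathcal{H}^{(2)}\)-action and suppressing the unit constraint, the compatibility conditions mentioned above amount, following the conventions of [1, Definition 7.2.1], to the identity \[s^{F}_{\mathcal{B}\star_{2}\mathcal{B}^{\prime},\,\mathcal{A}}=F(\alpha_{\mathcal{B},\mathcal{B}^{\prime},\mathcal{A}})^{-1}\circ s^{F}_{\mathcal{B},\,\mathcal{B}^{\prime}\bowtie\mathcal{A}}\circ\big(\mathrm{id}_{\mathcal{B}}\bowtie s^{F}_{\mathcal{B}^{\prime},\mathcal{A}}\big)\circ\alpha_{\mathcal{B},\mathcal{B}^{\prime},F(\mathcal{A})}\] for all \(\mathcal{B},\mathcal{B}^{\prime}\in\mathcal{H}^{(2)}\) and \(\mathcal{A}\in\mathcal{H}^{(1)}\), together with the compatibility of \(s^{F}\) with the unit of \(\mathcal{H}^{(2)}\).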
During the course of the proof of Theorem 2.2.1 we will deduce the following

**Corollary 2.2.2**.: _Say that a triangle in \(\mathcal{Z}\mathcal{H}^{(1)}\) is distinguished if it becomes a distinguished triangle in \(\mathcal{H}^{(1)}\) after the application of \(\varepsilon\). The category \(\mathcal{Z}\mathcal{H}^{(1)}\), equipped with the set of distinguished triangles defined in this way and the obvious shift functor \([1]\), is a triangulated category, and \(\tilde{a}\) is an exact equivalence._

**Remark 2.2.3**.: The category \(\mathcal{Z}\mathcal{H}^{(1)}\) is not, a priori, braided monoidal. The equivariant derived category \(D^{b}(G/_{\mathrm{Ad}}G)\), on the other hand, is canonically braided, see [1]. It would be interesting to see if the category \(\mathcal{Z}\mathcal{H}^{(1)}\) can also be equipped with the braiding without reference to Theorem 2.2.1. Note that the equivalence involving the abelian Drinfeld center, proved below as Theorem 5.4.2, is compatible with the braiding.

## 3 Sheaves on a group as comodules over the Springer comonad

### 3.1. Comonads in triangulated categories.

All categories we are working with are assumed to be additive (in fact, \(\Bbbk\)-linear). For a pair of functors \(F:\mathcal{C}\rightleftarrows\mathcal{D}:G\) we write \(F\dashv G\) to indicate that \(F\) is left adjoint to \(G\). For a comonad \(S\) on a category \(\mathcal{D}\), let \(S-\mathrm{comod}_{\mathcal{D}}\) stand for the category of \(S\)-comodules in \(\mathcal{D}\). We have an adjoint pair \[F_{S}:S-\mathrm{comod}_{\mathcal{D}}\rightleftarrows\mathcal{D}:G_{S},\qquad F_{S}\dashv G_{S},\] with \(F_{S}\) being the functor forgetting the comodule structure.

**Definition 3.1.1**.: _A comonad \(S\) is called separable if the comultiplication map \(\Delta:S\to SS\) admits a natural retraction_ \[s:SS\to S,\;\;s\Delta=\mathrm{id},\] _as a map of \(S\)-bicomodules, that is, such that_ \[\Delta\circ s=(Ss)\circ(\Delta S)=(sS)\circ(S\Delta)\colon SS\to SS.\]

Let \(F:\mathcal{C}\rightleftarrows\mathcal{D}:G\) be a pair of functors, \(F\dashv G\). We say that the unit transformation \(\eta:\mathrm{Id}_{\mathcal{C}}\to GF\) is naturally split if there exists a transformation \(\sigma:GF\to\mathrm{Id}_{\mathcal{C}}\) with \(\sigma\circ\eta=\mathrm{id}\).

**Proposition 3.1.2**.: _Let \(F:\mathcal{C}\rightleftarrows\mathcal{D}:G\) be a pair of functors, \(F\dashv G\)._
1. _If the unit transformation_ \[\eta:\operatorname{Id}_{\mathcal{C}}\to GF\] _is naturally split, then the comonad_ \(S=FG\) _is separable._
2. _If the comonad_ \(S\) _is separable, then the unit transformation_ \[\eta_{S}:\operatorname{Id}_{S-\operatorname{comod}_{\mathcal{D}}}\to G_{S}F_{S}\] _is naturally split._

Proof.: For the fact that \(S\) is separable if and only if \(\eta_{S}\) is naturally split see, for example, [11], Proposition 3.11 and references therein. We have that \(s:FGFG\to FG\) is given by the formula \(s=F\sigma G\), where \(\sigma\) is the splitting of \(\eta\).

**Remark 3.1.3**.: Note that the separability of \(S=FG\) does not imply the splitting of \(\eta\). For example, for any separable \(S\), \(F\) and \(G\) can be chosen such that \(F\) is not faithful.

**Proposition 3.1.4**.: _Assume that \(F:\mathcal{C}\rightleftarrows\mathcal{D}:G\) is a pair of exact functors, \(F\dashv G\), between idempotent complete pre-triangulated categories \(\mathcal{C},\mathcal{D}\) such that the unit transformation \(\eta:\operatorname{Id}_{\mathcal{C}}\to GF\) is naturally split. Define the comonad \(S=FG\). Then_
(a) _A triangle_ \(X\to Y\to Z\to X[1]\) _is distinguished in_ \(\mathcal{C}\) _if and only if the triangle_ \(FX\to FY\to FZ\to FX[1]\) _is distinguished in_ \(\mathcal{D}\)_._
(b) _The natural functor_ \(\mathcal{C}\to S-\operatorname{comod}_{\mathcal{D}}\)_, induced by the functor_ \(F\)_, is an equivalence of categories._

To prove this we will need the following

**Proposition 3.1.5**.: _Let \(F:\mathcal{C}\rightleftarrows\mathcal{D}:G\) be a pair of exact functors, \(F\dashv G\), between pre-triangulated categories \(\mathcal{C},\mathcal{D}\). Assume that \(F\) is a faithful functor. Then any object \(X\in\mathcal{C}\) is a direct summand of the object \(GFX\)._

Proof.: (cf. [11], Proposition 2.10) Consider the completion of the unit map \(\eta_{X}:X\to GFX\) to a distinguished triangle \[X\xrightarrow{\eta_{X}}GFX\to C\xrightarrow{\sigma}X[1].\] By definition, the composition \[FX\xrightarrow{F\eta_{X}}FGFX\xrightarrow{\theta_{FX}}FX\] is the identity, where \(\theta:FG\to\operatorname{Id}_{\mathcal{D}}\) is the counit transformation, so that the map \(F\eta_{X}\) is split and \(F(\sigma)=0\). Since \(F\) is faithful, \(\sigma=0\) and so \(\eta_{X}\) is split.

We will also use the following statements.

**Theorem 3.1.6** ([1]).: _Let \(\mathcal{D}\) be a pre-triangulated category and let \(\mathcal{C}\) be an idempotent-complete suspended category. Let_ \[F:\mathcal{C}\rightleftarrows\mathcal{D}:G\] _be a pair of functors commuting with suspension, \(F\dashv G\). Assume that the unit of \(GF\) is naturally split, and that the comonad \(S=FG\) on \(\mathcal{D}\) is exact. Then \(\mathcal{C}\) is pre-triangulated, with the distinguished triangles \(\Delta\) being exactly the ones such that \(F(\Delta)\) is distinguished in \(\mathcal{D}\). Moreover, with this pre-triangulation both functors \(F\) and \(G\) become exact._

Proof.: This is [1], Theorem 4.1, stated for comonads instead of monads.

**Corollary 3.1.7** ([1]).: _Let \(\mathcal{D}\) be an idempotent-complete pre-triangulated category and let \(S:\mathcal{D}\to\mathcal{D}\) be an exact separable comonad. Then the category \(S-\operatorname{comod}_{\mathcal{D}}\) is pre-triangulated, so that the functors \(F_{S},G_{S}\) are exact. The pre-triangulation is characterized by exactness of either \(F_{S}\) or \(G_{S}\)._

Proof.: This is [1], Corollary 4.3, stated for comonads instead of monads.

Proof of Proposition 3.1.4.: The "only if" statement of (a) is just a consequence of \(F\) being exact, so we only need to prove the other direction. Consider any triangle \[X\to Y\to Z\to X[1]\] such that the triangle \[FX\to FY\to FZ\to FX[1]\] is distinguished. Since the unit map \(\eta\) is split, we conclude that \(X\to Y\to Z\to X[1]\) is a direct summand of the distinguished triangle \[GFX\to GFY\to GFZ\to GFX[1],\] and hence is distinguished itself.

We now prove (b). For (an arbitrary) comonad \(S\) on a category \(\mathcal{D}\), let \(S-\operatorname{cofree}_{\mathcal{D}}\) stand for its cofree category. Recall that this category has the same objects as \(\mathcal{D}\), and \[\operatorname{Hom}_{S-\operatorname{cofree}_{\mathcal{D}}}(A,B):=\operatorname{Hom}_{\mathcal{D}}(SA,B),\] with the composition \[\operatorname{Hom}_{S-\operatorname{cofree}_{\mathcal{D}}}(B,C)\times\operatorname{Hom}_{S-\operatorname{cofree}_{\mathcal{D}}}(A,B)\to\operatorname{Hom}_{S-\operatorname{cofree}_{\mathcal{D}}}(A,C)\] given by \[(g,f)\mapsto\Big(SA\xrightarrow{\ \Delta\ }SSA\xrightarrow{S(f)}SB\xrightarrow{\ g\ }C\Big),\] with the first map coming from the comultiplication.
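As an aside, the comonad/comodule/cofree formalism used in this subsection has a well-known operational shadow in functional programming, where the cofree-category composition above is "co-Kleisli" composition. The following Haskell sketch is purely our illustration (all names in it are ours), and it of course ignores the additive and triangulated structure that is essential here; it only records the shape of the data:

```haskell
-- The data of a comonad S: a counit (theta : S -> Id) and a
-- comultiplication (Delta : S -> SS), subject to the usual counit and
-- coassociativity axioms.
class Functor s => Comonad s where
  counit :: s a -> a
  comult :: s a -> s (s a)

-- An S-comodule: an object x together with a coaction x -> S x,
-- required to be compatible with counit and comultiplication.
newtype Coaction s x = Coaction { coact :: x -> s x }

-- The cofree-category composition recalled above,
--   Hom(SB, C) x Hom(SA, B) -> Hom(SA, C),
-- is exactly co-Kleisli composition: (g, f) |-> g . S(f) . Delta.
cokleisli :: Comonad s => (s b -> c) -> (s a -> b) -> (s a -> c)
cokleisli g f = g . fmap f . comult

-- A toy example: the "environment" comonad S a = (e, a).
instance Comonad ((,) e) where
  counit (_, a) = a
  comult (e, a) = (e, (e, a))
```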
For \(S=FG\), the functor \(G\) defines a fully faithful functor \[G_{\mathrm{cofree}}:S-\mathrm{cofree}_{\mathcal{D}}\to\mathcal{C}.\] Since the unit transformation \(\eta\) is split, \(F\) is faithful, so by Proposition 3.1.5, the fully faithful functor \[G_{\mathrm{cofree}}^{\natural}:(S-\mathrm{cofree}_{\mathcal{D}})^{\natural}\to\mathcal{C}^{\natural}\simeq\mathcal{C},\] where \((-)^{\natural}\) stands for the idempotent completion of a category, is essentially surjective, and so is an equivalence.

By Proposition 3.1.2, the comonad \(S\) is separable, so by Corollary 3.1.7, the category \(S-\mathrm{comod}_{\mathcal{D}}\) is pre-triangulated and the functors \(F_{S},G_{S}\) are exact. The functor \(F_{S}\) is always faithful, and the category \(S-\mathrm{comod}_{\mathcal{D}}\) is idempotent-complete, because \(\mathcal{D}\) is such, so by the same argument as above we have an equivalence \[(S-\mathrm{cofree}_{\mathcal{D}})^{\natural}\simeq S-\mathrm{comod}_{\mathcal{D}},\] induced by the functor \(G_{S,\mathrm{cofree}}:S-\mathrm{cofree}_{\mathcal{D}}\to S-\mathrm{comod}_{\mathcal{D}}\,.\) Let \(\bar{F}:\mathcal{C}\to S-\mathrm{comod}_{\mathcal{D}}\) be the canonical functor induced by \(F\). One has \(\bar{F}\circ G_{\mathrm{cofree}}\simeq G_{S,\mathrm{cofree}}\), with \(G_{\mathrm{cofree}}^{\natural},G_{S,\mathrm{cofree}}^{\natural}\) being equivalences. It follows that \(\bar{F}^{\natural}\simeq\bar{F}\) is an equivalence as well.

**Remark 3.1.8**.: Note that the category \(S-\mathrm{comod}_{\mathcal{D}}\) has no reason to be (pre-)triangulated for an arbitrary triangulated comonad \(S\) on a (pre-)triangulated category \(\mathcal{D}\). It follows that, under the assumptions of Proposition 3.1.4, we can equip \(S-\mathrm{comod}_{\mathcal{D}}\) with a triangulated structure, saying that a triangle in the latter category is distinguished if its image under the forgetful functor to \(\mathcal{D}\) is distinguished.

**Remark 3.1.9**.: The proof of Proposition 3.1.4 mostly follows [1] (see also [11]). More specifically, it is proved in [1] that for any stably-separable exact (co-)monad \(M\) on a pre-triangulated (respectively, an \(N\)-triangulated, \(N\geq 2\)) category, the category of (co-)modules over it is pre-triangulated (respectively, \(N\)-triangulated), so that an (\(N\)-)triangle is distinguished in \(M-(\mathrm{co}\text{-})\mathrm{mod}_{\mathcal{D}}\) if and only if it is distinguished after the application of the forgetful functor to \(\mathcal{D}\). In [11] it is proved that, for a monad \(M\) on \(\mathcal{C}\), if the forgetful functor from a triangulated category \(M-\mathrm{mod}_{\mathcal{C}}\) to \(\mathcal{C}\) is exact, then, for any triangulated realization \(F:\mathcal{C}\rightleftarrows\mathcal{D}:G\) of the monad \(M\) with \(\mathcal{D}\) idempotent-complete and such that \(G\) is conservative, we have a triangulated equivalence \(\mathcal{D}\simeq M-\mathrm{mod}_{\mathcal{C}}\). We refer to the citation above for the definitions of the terms used in this remark.

### 3.2. Springer comonad.

We now apply the results of Subsection 3.1 to our geometric setup. Consider the diagram \[G\xleftarrow{\;p\;}(G\times G/U)/T\xrightarrow{\;q\;}\mathcal{Y},\] where \(p\) is induced by the projection to the first factor and \(q\) is the quotient of the map \[q^{\prime}:G\times G/U\to G/U\times G/U,\quad q^{\prime}(g,xU)=(xU,gxU)\] by the free right \(T\)-action (with respect to which it is easy to see that \(q^{\prime}\) is equivariant). Consider the adjoint action of \(G\) on itself and the left diagonal action of \(G\) on the other corners of the diagram above. This makes the diagram \(G\)-equivariant.
The functor \[\mathfrak{hc}=q_{!}p^{*}:D^{b}(G/_{\mathrm{Ad}}G)\to D^{b}(G\backslash\mathcal{Y})\] is called the Harish-Chandra transform. It is well-known (see, for example, [10]) to be monoidal with respect to the convolutions \(\star\) and \(\star_{1}\). It has a right adjoint functor \[\chi=p_{*}q^{!}.\] We have a "Springer comonad" \(\mathcal{S}=\mathfrak{hc}\circ\chi\) on \(\mathcal{H}^{(1)}\). The terminology is justified by the following discussion.

Write \(x\mapsto{}^{g}x:=gxg^{-1}\) and define \[\tilde{\mathcal{N}}:=\{(x,gB)\in G\times G/B:x\in{}^{g}U\}.\] Let \(\pi:\tilde{\mathcal{N}}\to G\) be the natural projection, and let \[\Sigma=\pi_{*}\underline{\Bbbk}_{\tilde{\mathcal{N}}}[2\dim U]\] be the Springer sheaf. We will use the following well-known

**Proposition 3.2.1**.: _The sheaf \(\delta_{e}:=\iota_{e*}\underline{\Bbbk}_{\mathrm{pt}}\), where \(\iota_{e}:\mathrm{pt}\to G\) is the unit map of \(G\), is a direct summand of \(\Sigma\)._

Proof.: In all settings except (III) this is classical, see [1]. In the setting (III), this can be deduced, for example, from [1].

We now recall the following results, proved in [10] in the setting (II), with proofs transported verbatim to the other settings.

**Lemma 3.2.2**.: _[_10_, Lemma 8.5.4]_ _There is a natural isomorphism_ \[\chi\circ\mathfrak{hc}(-)\simeq\Sigma\star(-).\]

Proof.: Follows from the base change isomorphism and a diagram chase.

**Proposition 3.2.3**.: _([2, Theorem 8.5.1]) The unit transformation \(\eta:\operatorname{Id}_{D^{b}(G/_{\operatorname{Ad}}G)}\to\chi\circ\mathfrak{hc}\) for the comonad \(\mathcal{S}\) is naturally split._

Proof.: Apply Lemma 3.2.2 and note that \(\delta_{e}\) is the monoidal unit in \(D^{b}(G/_{\operatorname{Ad}}G)\). By Proposition 3.2.1 we get the result.

Since \(D^{b}(G/_{\operatorname{Ad}}G)\) and \(\mathcal{H}^{(1)}\) are idempotent complete, we now have the following consequence of Propositions 3.2.3 and 3.1.4.

**Corollary 3.2.4**.: _The Harish-Chandra functor \(\mathfrak{hc}\) induces an equivalence of categories_ \[\mathfrak{F}:D^{b}(G/_{\operatorname{Ad}}G)\simeq\mathcal{S}-\operatorname{comod}_{\mathcal{H}^{(1)}}.\]

### 3.3. Springer comonad and bimodule structure.

We record the following simple observation.

**Proposition 3.3.1**.: _Let \(\mathcal{A}\) be a monoidal category acting on a category \(\mathcal{M}\), and let \(\bar{S}\) be a coalgebra object in \(\mathcal{A}\). The action of \(\bar{S}\) defines a comonad on \(\mathcal{M}\), denoted by \(S\). Any functor \(F\in\operatorname{Fun}_{\mathcal{A}}(\mathcal{M},\mathcal{M})\) sends \(S\)-comodules to \(S\)-comodules._

We now show that the Springer comonad fits into the setting of Proposition 3.3.1.

**Proposition 3.3.2**.: _There is a coalgebra object \(\bar{\mathcal{S}}\in\mathcal{H}^{(2)}\) such that there is an isomorphism of comonads_ \[\bar{\mathcal{S}}\bowtie(-)\simeq\mathcal{S}(-). \tag{6}\]

Proof.: Write \[\alpha:G\times G/B\times G/B\to\mathcal{Y}^{(2)},\quad\alpha(g,xB,yB)=(xU,yU,gyU,gxU),\] and define the action of \(G^{2}\) on \(G\times G/B\times G/B\) using the formula \[(h_{1},h_{2})\cdot(g,xB,yB)=(h_{2}gh_{1}^{-1},h_{1}xB,h_{1}yB),\] so that \(\alpha\) is a \(G^{2}\)-equivariant map. Let \[\bar{\mathcal{S}}:=\alpha_{!}\underline{\Bbbk}_{(G\times G/B\times G/B)/G^{2}}[2\dim U]\in\mathcal{H}^{(2)}.\] From the projection formula and proper base change, it is easy to see that we have an isomorphism of functors as in (6).
To show that \(\bar{\mathcal{S}}\) is a coalgebra object, consider the diagram \[G\times\mathcal{Y}\xleftarrow{\;p^{\prime}\;}G\times G/B\times\mathcal{Y}\xrightarrow{\;q^{\prime}\;}\mathcal{Y}^{(2)},\] where the maps are given by \[p^{\prime}(g,xB,x^{\prime}U,y^{\prime}U)=(g,x^{\prime}U,y^{\prime}U),\] \[q^{\prime}(g,xB,x^{\prime}U,y^{\prime}U)=(xU,x^{\prime}U,y^{\prime}U,gxU),\] which are well defined since they are compatible with the right actions of \(T\) and \(T^{2}\) on \(Y^{2}\) and \(Y^{4}\). We define an action of \(G^{2}\) on \(G\times G/B\times\mathcal{Y}\) using the formula \[(h_{1},h_{2})\cdot(g,xB,x^{\prime}U,y^{\prime}U)=(h_{2}gh_{1}^{-1},h_{1}xB,h_{1}x^{\prime}U,h_{2}y^{\prime}U)\] and on \(G\times\mathcal{Y}\) in an evident way to make the map \(p^{\prime}\) equivariant.

We now observe that the map \(p^{\prime}\) is proper and the map \(q^{\prime}\) is smooth, so that the functor \(q^{\prime}_{!}p^{\prime*}\) is left adjoint to the functor \(p^{\prime}_{*}q^{\prime^{!}}\simeq p^{\prime}_{!}q^{\prime*}[2\dim U]\). Moreover, we have an isomorphism of functors \[\bar{\mathcal{S}}\star_{2}(-)\simeq q^{\prime}_{!}p^{\prime*}p^{\prime}_{!}q^{\prime*}[2\dim U], \tag{7}\] so that convolution with \(\bar{\mathcal{S}}\) is indeed a comonad, and \(\bar{\mathcal{S}}\) is a coalgebra object. A diagram chase shows that the isomorphism \(\bar{\mathcal{S}}\bowtie(-)\simeq\mathcal{S}(-)\) is compatible with the comultiplication.

## 4 Proof of the main Theorem.

### 4.1. Centralizer category as a category of comodules.

Recall the notation \[\mathcal{Z}\mathcal{H}^{(1)}:=\operatorname{Fun}_{\mathcal{H}^{(2)}}(\mathcal{H}^{(1)},\mathcal{H}^{(1)})\] for the category of module-endofunctors of \(\mathcal{H}^{(1)}\) over \(\mathcal{H}^{(2)}\). Recall the evaluation functor \[\varepsilon:\mathcal{Z}\mathcal{H}^{(1)}\to\mathcal{H}^{(1)},\ F\mapsto F(\mathbf{1}).\] We will deduce Theorem 2.2.1 from the existence of the following diagram, satisfying the properties stated in Propositions 4.1.1, 4.1.2: \[\begin{CD}D^{b}(G/_{\operatorname{Ad}}G)@>{\tilde{a}}>>\mathcal{Z}\mathcal{H}^{(1)}@>{\tilde{b}}>>\mathcal{S}-\operatorname{comod}_{\mathcal{H}^{(1)}}.\end{CD}\tag{8}\]

**Proposition 4.1.1**.:
(a) _There is a monoidal functor_ \[\tilde{a}:D^{b}(G/_{\mathrm{Ad}}G)\to\mathcal{Z}\mathcal{H}^{(1)},\] _satisfying_ \(\varepsilon\circ\tilde{a}\simeq\mathfrak{hc}\)_._
(b) _There is a functor_ \[\tilde{b}:\mathcal{Z}\mathcal{H}^{(1)}\to\mathcal{S}-\mathrm{comod}_{\mathcal{H}^{(1)}},\] _satisfying_ \(\mathfrak{hc}\circ\mathfrak{F}^{-1}\circ\tilde{b}\simeq\varepsilon\)_._
(c) _The functors_ \(\tilde{a},\tilde{b}\) _satisfy_ \[\tilde{b}\circ\tilde{a}\simeq\mathfrak{F}.\]
(d) _The functor_ \(\tilde{a}\) _is fully faithful._

**Proposition 4.1.2**.: _There is a natural isomorphism of functors_ \[\tilde{a}\circ\chi\circ\varepsilon(-)\simeq(-)\circ\tilde{a}(\Sigma),\] _where \(\circ\) denotes the monoidal structure on \(\mathcal{Z}\mathcal{H}^{(1)}\) coming from the composition of functors._

Proof of Theorem 2.2.1.: We claim that the monoidal functor \(\tilde{a}\) of Proposition 4.1.1 (a) is an equivalence of monoidal categories. For this it suffices to show that \(\tilde{a}\) is an equivalence of plain categories. By Proposition 4.1.2 every object in \(\mathcal{Z}\mathcal{H}^{(1)}\) is a direct summand of an object in the image of \(\tilde{a}\).
Since both the source and the target of \(\tilde{a}\) are idempotent complete, and \(\tilde{a}\) is fully faithful by Proposition 4.1.1 (d), we get that \(\tilde{a}\) is essentially surjective, hence the result.

Note that by Proposition 4.1.1 (b), (c), the functor \(\chi\circ\varepsilon\) is isomorphic to the functor \(\chi\circ\mathfrak{hc}\circ\mathfrak{F}^{-1}\circ\tilde{b}\), and so is equipped with a \(W\)-action, coming from the action of \(W\) on \(\chi\circ\mathfrak{hc}\simeq\Sigma\star(-)\).

**Corollary 4.1.3**.: _We can express the functor inverse to \(\tilde{a}\) as \((\chi\circ\varepsilon)^{W}\), where \((-)^{W}\) stands for the functor of taking invariants with respect to \(W\)._

### 4.2. Proof of Proposition 4.1.1.

We will need the following

**Lemma 4.2.1**.: _There are monoidal functors \(L,R:\mathcal{H}^{(1)}\to\mathcal{H}^{(2)}\) such that there are isomorphisms_ \[\mathcal{A}\star_{1}\mathcal{B}\simeq L(\mathcal{A})\bowtie\mathcal{B},\] \[\mathcal{B}\star_{1}\mathcal{A}\simeq R(\mathcal{A})\bowtie\mathcal{B},\] _functorial in \(\mathcal{A},\mathcal{B}\in\mathcal{H}^{(1)}\)._

Proof.: We construct the functor \(L\); the functor \(R\) is constructed completely analogously. Consider the space \(Y^{3}\) together with the action of \(G^{2}\) on the left given by \[(g_{1},g_{2})\cdot(x_{1}U,x_{2}U,x_{3}U)=(g_{1}x_{1}U,g_{1}x_{2}U,g_{2}x_{3}U)\] and the diagonal action of \(T\) on the right. Then the closed embedding \(Y^{3}\to Y^{4}\) given by \[(x_{1}U,x_{2}U,x_{3}U)\mapsto(x_{1}U,x_{2}U,x_{3}U,x_{3}U)\] induces a \(G^{2}\)-equivariant map \(\iota\colon Y^{3}/T\to\mathcal{Y}^{(2)}\). There is also the projection on the first two coordinates \(p_{12}\colon Y^{3}\to Y^{2}\), which induces a map \(p_{12}\colon Y^{3}/T\to\mathcal{Y}\). This map is \(G^{2}\)-equivariant, with the action of the first copy of \(G\) on \(\mathcal{Y}\) being the usual diagonal action and the action of the second copy being trivial. Abusing notation, write \(p_{12}\) also for the composition \[p_{12}:G^{2}\backslash Y^{3}/T\to G^{2}\backslash\mathcal{Y}\to G\backslash\mathcal{Y},\] where the second map is associated to the first projection \(G^{2}\to G\). We define \[L:=\iota_{*}p_{12}^{*}:\mathcal{H}^{(1)}\to\mathcal{H}^{(2)}.\]

To prove the required property, observe that the projection \(p_{14}\) in the definition of \(\bowtie\), restricted to the image of \(\iota\), is exactly the projection \(p_{13}\colon Y^{3}/T\to\mathcal{Y}\) used in the definition of \(\star_{1}\). The sheaves being pushed forward are identified by \(T\)-equivariance.

Finally, to observe the monoidality of the functor \(L\), notice that we have \[p_{1256}^{*}L(\mathcal{A})\otimes p_{2345}^{*}L(\mathcal{B})\simeq\iota_{*}^{(2)}(p_{12}^{*}\mathcal{A}\otimes p_{23}^{*}\mathcal{B}),\] where \(\iota^{(2)}\colon Y^{4}/T\to Y^{6}/T^{3}\) is induced by \[(x_{1}U,x_{2}U,x_{3}U,x_{4}U)\mapsto(x_{1}U,x_{2}U,x_{3}U,x_{4}U,x_{4}U,x_{4}U)\] and \(p_{ij}\colon Y^{4}/T\to\mathcal{Y}\) are the projections.
Moreover, the diagram \[\begin{CD}Y^{4}/T@>{\iota^{(2)}}>>Y^{6}/T^{3}\\ @V{p_{134}}VV@VV{p_{1346}}V\\ Y^{3}/T@>{\iota}>>\mathcal{Y}^{(2)}\end{CD}\] is commutative, implying that \[L(\mathcal{A})\star_{2}L(\mathcal{B})\simeq\iota_{*}p_{134!}(p_{12}^{*}\mathcal{A}\otimes p_{23}^{*}\mathcal{B}).\] It remains to see that the diagram \[\begin{CD}Y^{4}/T@>{p_{123}}>>Y^{3}/T\\ @V{p_{134}}VV@VV{p_{13}}V\\ Y^{3}/T@>{p_{12}}>>\mathcal{Y}\end{CD}\] is Cartesian, and base change will give \[L(\mathcal{A})\star_{2}L(\mathcal{B})\simeq L(\mathcal{A}\star_{1}\mathcal{B})\] as desired.

Proof of Proposition 4.1.1.: We first prove (a). We define \(\tilde{a}\) as \[\tilde{a}\colon\mathcal{F}\mapsto-\star_{1}\mathfrak{hc}(\mathcal{F}),\] together with the central structure provided by an argument similar to [10, Proposition 9.2.1(ii)]. In more detail, we need to define the canonical isomorphism \[s^{\tilde{a}(\mathcal{F})}_{\mathcal{B},\mathcal{A}}\colon\mathcal{B}\bowtie(\mathcal{A}\star_{1}\mathfrak{hc}(\mathcal{F}))\xrightarrow{\sim}(\mathcal{B}\bowtie\mathcal{A})\star_{1}\mathfrak{hc}(\mathcal{F}) \tag{9}\] for each \(\mathcal{A}\) and \(\mathcal{B}\). Combining the diagrams used to define \(-\star_{1}-\) and \(\mathfrak{hc}\), one sees that \(-\star_{1}\mathfrak{hc}(\mathcal{F})\) is given by the push forward of the sheaf \(\mathcal{F}\boxtimes-\) along the map \[f\colon G\times\mathcal{Y}\to\mathcal{Y},\ \left(g,x_{1}U,x_{2}U\right)\mapsto\left(x_{1}U,gx_{2}U\right).\] Therefore, we can write the right hand side of (9) as the push forward along the map \(f\circ\left(\mathrm{id}\times p_{14}\right)\colon G\times\mathcal{Y}^{(2)}\to\mathcal{Y}\): \[\left(\mathcal{B}\bowtie\mathcal{A}\right)\star_{1}\mathfrak{hc}(\mathcal{F})\simeq\left(f\circ\left(\mathrm{id}\times p_{14}\right)\right)_{*}\left(\mathcal{F}\boxtimes\left(\mathrm{For}_{G}^{G^{2}}(\mathcal{B})\otimes p_{23}^{*}(\mathcal{A})\right)\right).\] On the other hand, to compute the left hand side of (9), by base change and the projection formula we take the push forward along the map \(p_{14}\colon G\times\mathcal{Y}^{(2)}\to\mathcal{Y}\): \[\mathcal{B}\bowtie(\mathcal{A}\star_{1}\mathfrak{hc}(\mathcal{F}))\simeq p_{14,*}\left(pr_{G}^{*}\mathcal{F}\otimes\mathrm{For}_{G}^{G^{2}}(f_{3}^{*}\mathrm{inv}_{G}^{*}\mathcal{B})\otimes p_{23}^{*}(\mathcal{A})\right),\] where \(pr_{G}\) is the projection on the \(G\) factor, \(f_{3}\) is the action map of \(G\) on the third coordinate, and \(\mathrm{inv}_{G}\) is the inversion map \(g\mapsto g^{-1}\) on the \(G\) factor. Using the \(G\)-equivariance of \(\mathcal{B}\) with respect to the second copy of \(G\) we get the canonical isomorphism \(f_{3}^{*}\mathrm{inv}_{G}^{*}\mathcal{B}\simeq f_{4}^{*}\mathcal{B}\), where \(f_{4}\) is the action map of \(G\) on the fourth coordinate. We finally get the desired isomorphism by the projection formula and the identification \(f\circ\left(\mathrm{id}\times p_{14}\right)=p_{14}\circ f_{4}\).

We now prove (b). First recall that, by Proposition 3.3.2, there is a coalgebra object \(\bar{\mathcal{S}}\in\mathcal{H}^{(2)}\) such that there is an isomorphism of comonads
\[\bar{\mathcal{S}}\bowtie(-)\simeq\mathcal{S}(-). \tag{10}\] Define \(\tilde{b}(F):=F(\mathbf{1})\), an \(\mathcal{S}\)-comodule by Proposition 3.3.1, together with the observation that \(\mathbf{1}\simeq\mathfrak{hc}(\delta_{e})\) has a canonical \(\mathcal{S}\)-comodule structure as the image of an object under \(\mathfrak{hc}\), by the general discussion of Subsection 3.1. The isomorphism \(\mathfrak{hc}\circ\mathfrak{F}^{-1}\circ\tilde{b}\simeq\varepsilon\) is again immediate from the definition. This completes the proof of (b).

For (c), note that we have the following isomorphism of plain objects in \(\mathcal{H}^{(1)}\): \[\tilde{b}\circ\tilde{a}(\mathcal{F})\simeq\mathfrak{hc}(\mathcal{F})\star_{1}\mathfrak{hc}(\delta_{e})\simeq\mathfrak{hc}(\mathcal{F}),\] where the last equivalence is by the monoidality of \(\mathfrak{hc}\). The \(\mathcal{S}\)-comodule structure on \(\mathfrak{hc}(\mathcal{F})\) comes from the \(\mathcal{S}\)-comodule structure on the second factor \(\mathfrak{hc}(\delta_{e})\). In other words, it is given by the composition \[\mathfrak{hc}(\mathcal{F})\simeq\mathfrak{hc}(\mathcal{F})\star_{1}\mathfrak{hc}(\delta_{e})\to\\ \to\mathfrak{hc}(\mathcal{F})\star_{1}\mathcal{S}(\mathfrak{hc}(\delta_{e}))\simeq\mathfrak{hc}(\mathcal{F})\star_{1}\mathfrak{hc}(\Sigma)\simeq\mathcal{S}(\mathfrak{hc}(\mathcal{F})).\] The functor \(\mathfrak{F}\) is essentially defined by the same map, i.e. \(\mathfrak{hc}(\delta_{e})\to\mathfrak{hc}(\Sigma)\) convolved with the identity map on \(\mathfrak{hc}(\mathcal{F})\).

To prove (d) we first note that by (a) we have \(\mathfrak{hc}\simeq\varepsilon\circ\tilde{a}\), and \(\mathfrak{hc}\) is a faithful functor by Proposition 3.2.3, so \(\tilde{a}\) is also faithful. We now show that the functor \(\varepsilon\) is faithful. Recall that a morphism between \(F,G\in\mathcal{Z}\mathcal{H}^{(1)}\) is a natural transformation \(\tau:F\to G\) making diagrams of the form (4) commutative. It follows from Lemma 4.2.1 and the commutativity of the diagram \[\begin{CD}F(\mathcal{A})\simeq F(L(\mathcal{A})\bowtie\mathbf{1})@<{s_{L(\mathcal{A}),\mathbf{1}}^{F}}<<L(\mathcal{A})\bowtie F(\mathbf{1})\\ @V{\tau_{\mathcal{A}}}VV@VV{L(\mathcal{A})\bowtie\tau_{\mathbf{1}}}V\\ G(\mathcal{A})\simeq G(L(\mathcal{A})\bowtie\mathbf{1})@<{s_{L(\mathcal{A}),\mathbf{1}}^{G}}<<L(\mathcal{A})\bowtie G(\mathbf{1})\end{CD}\] that such a transformation is determined by its value \(\tau_{\mathbf{1}}\) on \(\mathbf{1}\), and so \(\varepsilon\) is indeed faithful. By (b), \(\mathfrak{hc}\circ\mathfrak{F}^{-1}\circ\tilde{b}\simeq\varepsilon\), and so \(\tilde{b}\) is also faithful. Since \(\tilde{b}\circ\tilde{a}\) is an equivalence by (c), it follows that \(\tilde{a}\) is full, which finishes the proof of (d).

### 4.3. Proof of Proposition 4.1.2.

We will need the following

**Lemma 4.3.1**.: _For any \(\mathcal{A}\in\mathcal{H}^{(1)},\mathcal{B}\in\mathcal{H}^{(2)}\) there is an isomorphism, natural in \(\mathcal{A}\) and \(\mathcal{B}\),_ \[c_{\mathcal{B},\mathcal{A}}\colon\mathcal{B}\star_{2}L(\mathcal{A})\star_{2}\bar{\mathcal{S}}\xrightarrow{\ \sim\ }L(\mathcal{B}\bowtie\mathcal{A})\star_{2}\bar{\mathcal{S}},\]
_such that, for any \(\mathcal{F}\in\mathcal{H}^{(1)}\), the diagram_ \[\begin{CD}(\mathcal{B}\star_{2}L(\mathcal{A})\star_{2}\bar{\mathcal{S}})\bowtie\mathcal{F}@>{}>>\mathcal{B}\bowtie(\mathcal{A}\star_{1}\mathcal{S}\mathcal{F})\\ @V{c_{\mathcal{B},\mathcal{A}}\bowtie\mathcal{F}}VV@VV{s_{\mathcal{B},\mathcal{A}}^{\tilde{a}(\chi(\mathcal{F}))}}V\\ (L(\mathcal{B}\bowtie\mathcal{A})\star_{2}\bar{\mathcal{S}})\bowtie\mathcal{F}@>{}>>(\mathcal{B}\bowtie\mathcal{A})\star_{1}\mathcal{S}\mathcal{F}\end{CD}\] _is commutative. In the diagram, the horizontal arrows are defined using the isomorphisms from Lemma 4.2.1 and \(\bar{\mathcal{S}}\bowtie\mathcal{F}\simeq\mathcal{S}\mathcal{F}\)._

Proof.: We freely use the notations from the proof of Lemma 4.2.1. It is sufficient to verify the required condition for \(\mathcal{A}=\mathbf{1}\). Indeed, if this is done we can set \(\mathcal{B}^{\prime}:=\mathcal{B}\star_{2}L(\mathcal{A})\), so that \(\mathcal{B}\star_{2}L(\mathcal{A})\star_{2}\bar{\mathcal{S}}=\mathcal{B}^{\prime}\star_{2}\bar{\mathcal{S}}\) and \(\mathcal{B}\bowtie\mathcal{A}=\mathcal{B}^{\prime}\bowtie\mathbf{1}\), and we can set \(c_{\mathcal{B},\mathcal{A}}:=c_{\mathcal{B}^{\prime},\mathbf{1}}\) as long as we know the latter (note that \(L(\mathbf{1})\) is the monoidal unit of \(\mathcal{H}^{(2)}\), as \(L\) is a monoidal functor).

We now have to construct an isomorphism \[c_{\mathcal{B},\mathbf{1}}\colon\mathcal{B}\star_{2}\bar{\mathcal{S}}\to L(\mathcal{B}\bowtie\mathbf{1})\star_{2}\bar{\mathcal{S}},\] satisfying the required compatibility. We have a description of the functor \(-\star_{2}\bar{\mathcal{S}}\) similar to (7), with \(q^{\prime}\) replaced by \[q^{\prime}(g,xB,x^{\prime}U,y^{\prime}U)=(x^{\prime}U,xU,gxU,y^{\prime}U).\] It is, therefore, sufficient to construct a canonical isomorphism \[p^{\prime}_{!}q^{\prime*}\mathcal{B}\simeq p^{\prime}_{!}q^{\prime*}(L(\mathcal{B}\bowtie\mathbf{1})).\] By definition, we can further unwrap the right hand side as \[p^{\prime}_{!}q^{\prime*}(L(\mathcal{B}\bowtie\mathbf{1}))\simeq p^{\prime}_{!}q^{\prime*}\iota_{*}p^{*}_{12}p_{13,!}\Delta^{*}_{23}\mathcal{B},\] where \(\Delta_{23}\) is the map induced by the embedding \(Y^{3}\to Y^{4}\) sending the middle copy to the diagonal between the second and the third copies. The square \[\begin{CD}G\times\mathcal{Y}@>{\iota^{\prime}}>>G\times G/B\times\mathcal{Y}\\ @V{q^{\prime\prime}}VV@VV{q^{\prime}}V\\ Y^{3}/T@>{\iota}>>\mathcal{Y}^{(2)}\end{CD}\] is Cartesian, where \[\iota^{\prime}(g,x^{\prime}U,y^{\prime}U)=(g,g^{-1}y^{\prime}B,x^{\prime}U,y^{\prime}U)\] and \[q^{\prime\prime}(g,x^{\prime}U,y^{\prime}U)=(g^{-1}y^{\prime}U,x^{\prime}U,y^{\prime}U).\] Indeed, restricting \(q^{\prime}\) to the image of \(\iota\), we see that the Borel subgroup in the preimage is defined uniquely by the factors in \(G\times\mathcal{Y}\) as \(g^{-1}y^{\prime}B\).
Moreover, the composition \(p^{\prime}\circ\iota^{\prime}\) is the identity map, which allows us to further rewrite \[p^{\prime}_{!}q^{\prime*}\iota_{*}p^{*}_{12}p_{13,!}\Delta^{*}_{23}\mathcal{B}\simeq(p_{12}\circ q^{\prime\prime})^{*}p_{13,!}\Delta^{*}_{23}\mathcal{B}.\] Note that \[p_{12}\circ q^{\prime\prime}(g,x^{\prime}U,y^{\prime}U)=(g^{-1}y^{\prime}U,x^{\prime}U)\] and the square \[\begin{CD}G\times\mathcal{Y}\times G/B@>{(p_{12}\circ q^{\prime\prime})\times\mathrm{id}_{G/B}}>>\mathcal{Y}\times G/B\\ @VVV@VV{p_{13}}V\\ G\times\mathcal{Y}@>{p_{12}\circ q^{\prime\prime}}>>\mathcal{Y}\end{CD}\] is also Cartesian. Here we identify the image of the map \(\Delta_{23}:Y^{3}/T^{2}\to\mathcal{Y}^{(2)}\) with \(\mathcal{Y}\times G/B\). Now we can further rewrite \[(p_{12}\circ q^{\prime\prime})^{*}p_{13,!}\Delta^{*}_{23}\mathcal{B}\simeq p^{\prime}_{!}(\Delta_{23}\circ((p_{12}\circ q^{\prime\prime})\times\mathrm{id}_{G/B}))^{*}\mathcal{B}.\] It remains to notice that the composition \(\Delta_{23}\circ((p_{12}\circ q^{\prime\prime})\times\mathrm{id}_{G/B})\) equals the composition of \(q^{\prime}\times\mathrm{id}_{G}\) with the \(G\)-action map on the last two coordinates of \(\mathcal{Y}^{(2)}\). The \(G^{2}\)-equivariance of \(\mathcal{B}\) then provides the desired isomorphism. Moreover, the resulting composition is identified with the one constructed in Lemma 4.2.1, proving the compatibility with the central structure. Note that both maps come from the \(G\)-equivariance of \(\mathcal{B}\) with respect to the action on the last two coordinates.

Proof of Proposition 4.1.2.: First note that \[\tilde{a}\circ\chi\circ\varepsilon(F)(\mathcal{A})\simeq\tilde{a}(\chi(F(\mathbf{1})))(\mathcal{A})\simeq\mathcal{A}\star_{1}\mathcal{S}F(\mathbf{1}).\] We define a natural transformation \[\tau:\tilde{a}\circ\chi\circ\varepsilon(F)\to F\circ\tilde{a}(\Sigma) \tag{11}\] whose value on an object \(\mathcal{A}\) is the composition \(\tau_{\mathcal{A}}\) of isomorphisms \[\mathcal{A}\star_{1}\mathcal{S}F(\mathbf{1})\xrightarrow{s^{F}_{\bar{\mathcal{S}},\mathbf{1}}}\mathcal{A}\star_{1}F(\mathfrak{hc}(\Sigma))\xrightarrow{s^{F}_{L(\mathcal{A}),\mathfrak{hc}(\Sigma)}}F(\mathcal{A}\star_{1}\mathfrak{hc}(\Sigma)),\] where \(s^{F}\) is the structure of an object in \(\mathcal{ZH}^{(1)}\) on \(F\). To unburden the notation, let us denote the left and right hand sides of (11) by \(\Phi\) and \(\Psi\), respectively. To show that \(\tau:\Phi\to\Psi\) is an isomorphism of functors in \(\mathcal{Z}\mathcal{H}^{(1)}\), we need to show that the diagram \[\begin{CD}\mathcal{B}\bowtie\Phi(\mathcal{A})@>{s^{\Phi}_{\mathcal{B},\mathcal{A}}}>>\Phi(\mathcal{B}\bowtie\mathcal{A})\\ @V{\mathcal{B}\bowtie\tau_{\mathcal{A}}}VV@VV{\tau_{\mathcal{B}\bowtie\mathcal{A}}}V\\ \mathcal{B}\bowtie\Psi(\mathcal{A})@>{s^{\Psi}_{\mathcal{B},\mathcal{A}}}>>\Psi(\mathcal{B}\bowtie\mathcal{A})\end{CD}\tag{12}\] is commutative for all \(\mathcal{B}\in\mathcal{H}^{(2)}\). To do this, we decompose (12) into four squares \(I\), \(II\), \(III\), \(IV\) (the resulting large diagram is omitted here). Its unlabeled vertical morphisms are defined using the isomorphisms from Lemma 4.2.1 and \(\bar{\mathcal{S}}\bowtie\mathcal{F}\simeq\mathcal{SF}\). The morphisms \(c,c^{\prime}\) are defined from the morphism of Lemma 4.3.1 as follows: \[c=c_{\mathcal{B},\mathcal{A}}\bowtie F(\mathbf{1}),\quad c^{\prime}=F(c_{\mathcal{B},\mathcal{A}}\bowtie\mathbf{1}).\] A diagram chase using the definition of module endofunctors shows that the compositions of the vertical arrows are the vertical arrows in (12). The squares \(I\) and \(III\) are commutative by Lemma 4.3.1, and the squares \(II\) and \(IV\) by the naturality of the structural isomorphisms \(s\) for \(F\in\mathcal{Z}\mathcal{H}^{(1)}\). This finishes the proof.

## 5 Character sheaves as a categorical center

In this section we reprove a result of [1] concerning the Drinfeld center of the abelian Hecke category, and extend it to other sheaf-theoretic situations. We also consider some applications mentioned in the introduction.

For a scheme \(X\) with an action of a torus \(A\) we write \(D^{b}(X\nearrow A)\) for the unipotently \(A\)-monodromic category, i.e.
the full triangulated subcategory of \(D^{b}(X)\) generated, as a triangulated category, by the objects of the form \(\pi^{*}\mathcal{F}\), where \(\mathcal{F}\in D^{b}(X/A)\), and \(\pi:X\to X/A\) is the standard projection. When \(X\) is equipped also with a commuting action of an algebraic group \(G\), we write \(D^{b}(G\backslash X\nearrow A)\) for the full triangulated subcategory of objects that are in \(D^{b}(X\nearrow A)\) after forgetting the \(G\)-equivariance.

### 5.1. Some facts about monodromic Hecke categories.

Recall that in [1] a completed unipotently monodromic Hecke category \(\hat{\mathscr{M}}=\hat{D}^{b}(G\backslash((G/U)^{2}\nearrow T^{2}))\) is defined in cases (IV), (V), and in the other cases in [1]. The case (II) is obtained by the Riemann-Hilbert correspondence from (I), since the \(D\)-modules that appear are regular holonomic, see [10]. It comes with two collections of objects indexed by the elements of the Weyl group, denoted by \(\hat{\Delta}_{w},\hat{\nabla}_{w}\) with \(w\in W\), called, respectively, standard and costandard free-monodromic perverse sheaves. These are pro-objects in the category \[\mathscr{M}=D^{b}(G\backslash((G/U)^{2}\nearrow T^{2})).\] There is a perverse t-structure on the categories \(\mathscr{M}\) and \(\hat{\mathscr{M}}\), whose hearts we denote by \(\mathcal{P}\) and \(\hat{\mathcal{P}}\), respectively.

The categories \(\mathscr{M},\hat{\mathscr{M}}\) are equipped with a monoidal structure via the diagram (1), without the quotient by the right diagonal torus action. We denote this monoidal structure by \(-\star_{1}^{\prime}-\). It is convenient to shift this monoidal structure in the monodromic case, so that the monodromic monoidal pro-unit becomes perverse, cf. Proposition 5.1.1 below. We write \(-\star_{1}^{mon}-:=(-\star_{1}^{\prime}-)\left[\dim T\right]\). We will keep the simplified notation \(-\star_{1}-\) for this shifted monoidal structure; this should not cause any confusion.

Standard and costandard pro-objects satisfy the following properties with respect to convolution:

**Proposition 5.1.1**.:
(a) \(\hat{\delta}:=\hat{\Delta}_{e}\simeq\hat{\nabla}_{e}\) _is the unit of the monoidal structure_ \(\star_{1}\)_._
(b) \(\hat{\Delta}_{v}\star_{1}\hat{\Delta}_{w}\simeq\hat{\Delta}_{vw},\hat{\nabla}_{v}\star_{1}\hat{\nabla}_{w}\simeq\hat{\nabla}_{vw}\) _if_ \(l(vw)=l(v)+l(w)\)_._
(c) \(\hat{\Delta}_{v}\star_{1}\hat{\nabla}_{v^{-1}}\simeq\hat{\delta}\)_._

Proof.: (a) and (b) are Lemma 4.3.3 of [1], and Lemma 6.7 of [1] in the setting (III); (c) is Lemma 7.7 of [1].

Recall that an object in \(\hat{\mathscr{M}}\) is called a free-monodromic tilting object if it can be obtained both by successive extensions (of twists, in the mixed settings) of standard objects and by successive extensions (of twists) of costandard objects. We have the following standard corollary of Proposition 5.1.1.

**Corollary 5.1.2**.: _Let \(T\) be a free-monodromic tilting object._
* _The convolution functors_ \(T\star_{1}-,-\star_{1}T\) _are t-exact, i.e._
_preserve the categories_ \(\mathcal{P},\hat{\mathcal{P}}\)_._
* _The objects_ \(\hat{\Delta}_{w_{0}}\star_{1}T\star_{1}\hat{\nabla}_{w_{0}},\hat{\nabla}_{w_{0}}\star_{1}T\star_{1}\hat{\Delta}_{w_{0}}\) _are free-monodromic tilting._

Define the shift of the perverse t-structure on \(\mathscr{M},\hat{\mathscr{M}}\), and its heart, by \[D_{w_{0}}^{\leq 0}=\hat{\Delta}_{w_{0}}\star_{1}{}^{p}D^{\leq 0},\quad D_{w_{0}}^{\geq 0}=\hat{\Delta}_{w_{0}}\star_{1}{}^{p}D^{\geq 0},\quad\mathcal{P}_{w_{0}}:=\hat{\Delta}_{w_{0}}\star_{1}\mathcal{P}.\]

**Proposition 5.1.3**.: _The convolution \(\star_{1}\) is left t-exact with respect to the shifted t-structure \((D_{w_{0}}^{\leq 0},D_{w_{0}}^{\geq 0})\)._

Proof.: This follows by a standard argument from the proof of Proposition 4.3.4 in [1], using the exactness properties of convolution with the standard and costandard sheaves, cf. also Appendix 5.1 in [1].

For completeness, in Appendix A we give another proof of Proposition 5.1.3, following an idea of V. Drinfeld, which is more explicit and does not use the exactness properties of convolution with the standard and costandard sheaves.

Denote by \(\mathcal{H}_{w_{0}}^{\bullet}(-)\) the cohomology functor with respect to the t-structure \((D_{w_{0}}^{\leq 0},D_{w_{0}}^{\geq 0})\). By Proposition 5.1.3 we can equip the abelian category \(\mathcal{P}_{w_{0}}\) with the truncated monoidal structure using the formula \[X_{1}\star_{1}^{0}X_{2}:=\mathcal{H}_{w_{0}}^{0}(X_{1}\star_{1}X_{2}),\quad X_{1},X_{2}\in\mathcal{P}_{w_{0}}.\] (Left t-exactness guarantees that \(\star_{1}^{0}\) is associative: for \(X_{1},X_{2},X_{3}\in\mathcal{P}_{w_{0}}\), both iterated truncated products are identified with \(\mathcal{H}_{w_{0}}^{0}(X_{1}\star_{1}X_{2}\star_{1}X_{3})\).)

### 5.2. Monodromic categories and character sheaves.

Let \(\mathcal{H}_{mon}^{(1)}\) and \(\mathcal{H}_{mon}^{(2)}\) be the full unipotently monodromic subcategories of \(\mathcal{H}^{(1)}\) and \(\mathcal{H}^{(2)}\) with respect to the projections \(\mathcal{Y}\to(G/B)^{2}\) and \(\mathcal{Y}^{(2)}\to(G/B)^{4}\), respectively. Note that \(\mathcal{H}_{mon}^{(1)}\) is an equivariant version of the category \(\mathscr{M}\).

Let \(D_{\mathfrak{C}}^{b}(G)\subset D^{b}(G/_{\mathrm{Ad}}G)\) be the full triangulated subcategory with objects \(\mathcal{F}\) satisfying \(\mathfrak{hc}(\mathcal{F})\in\mathcal{H}_{mon}^{(1)}\). Since the functor \(\mathfrak{hc}\) is monoidal and \(\mathcal{H}_{mon}^{(1)}\) is closed under convolution, we conclude that \(D_{\mathfrak{C}}^{b}(G)\) is closed under convolution. The triangulated category \(D_{\mathfrak{C}}^{b}(G)\) is known as the derived category of unipotent character sheaves, and its abelian heart with respect to the perverse t-structure is the category of unipotent character sheaves (see [10], [11]). Let \(\mathfrak{C}\) stand for the abelian subcategory of perverse objects in \(D_{\mathfrak{C}}^{b}(G)\).

We shift the monoidal structure \(\star_{2}\) and the action \(\bowtie\) by a homological shift \([2\dim T]\), and the monoidal structure \(\star\) by \([\dim T]\) (cf. the discussion before Proposition 5.1.1), keeping the notation the same. The pro-unit \(\hat{\delta}\) of \(\hat{\mathscr{M}}\) is perverse and \(T\)-equivariant, so defines a pro-unit in \(\mathcal{H}^{(1)}_{mon}\), which we denote in the same way.
**Definition 5.2.1**.: _Write_ \[\mathcal{Z}\mathcal{H}^{(1)}_{mon}:=\operatorname{Fun}_{\mathcal{H}^{(2)}_{mon}}^{fd}(\mathcal{H}^{(1)}_{mon},\mathcal{H}^{(1)}_{mon}),\] _where_ \[\operatorname{Fun}_{\mathcal{H}^{(2)}_{mon}}^{fd}(\mathcal{H}^{(1)}_{mon},\mathcal{H}^{(1)}_{mon})\] _stands for the full subcategory of \(\operatorname{Fun}_{\mathcal{H}^{(2)}_{mon}}(\mathcal{H}^{(1)}_{mon},\mathcal{H}^{(1)}_{mon})\) consisting of functors \(F\) such that the limit \(F(\hat{\delta})\) exists in \(\mathcal{H}^{(1)}_{mon}\)._

We have the following variant of Theorem 2.2.1. Recall that a semigroupal category is a category with a product satisfying all the axioms of a monoidal category except those involving the monoidal unit.

**Theorem 5.2.2**.: _In all cases (I)-(V), the functor \(\tilde{a}\) induces an equivalence of semigroupal categories_ \[\tilde{a}_{\mathfrak{C}}:D^{b}_{\mathfrak{C}}(G)\to\mathcal{Z}\mathcal{H}^{(1)}_{mon}.\]

**Remark 5.2.3**.: Note that it is not obvious that the category on the right is closed under composition. This will become apparent after the proof.

The Theorem will be deduced from the following:

**Proposition 5.2.4**.: _Let \(\mathfrak{Z}\) be an object in \(\mathcal{H}^{(1)}_{mon}\) such that the convolution functor \(-\star_{1}\mathfrak{Z}\) can be given a structure of a module endofunctor in \(\mathcal{Z}\mathcal{H}^{(1)}_{mon}\). Then the functor \(-\star_{1}\mathfrak{Z}\), considered as a functor on \(\mathcal{H}^{(1)}\) with values in \(\mathcal{H}^{(1)}_{mon}\), can be given a structure of a module endofunctor in \(\mathcal{Z}\mathcal{H}^{(1)}\), compatible with its structure of a functor in \(\mathcal{Z}\mathcal{H}^{(1)}_{mon}\) on monodromic objects in \(\mathcal{H}^{(2)}\)._

This will be in turn deduced using the following simple statement.

**Lemma 5.2.5**.: _Let \(\mathcal{C},\mathcal{D}\) be categories and let \(\mathcal{C}^{\prime}\) be a full subcategory of \(\mathcal{C}\). Let \(F_{1},F_{2}:\mathcal{C}\to\mathcal{D}\) be two functors whose restrictions to \(\mathcal{C}^{\prime}\) are isomorphic. Assume that \(G_{1},G_{2}\) are right adjoint to \(F_{1},F_{2}\), respectively. If \(G_{1},G_{2}\) take values in \(\mathcal{C}^{\prime}\), then \(F_{1}\simeq F_{2}\), with an isomorphism compatible with the chosen isomorphism of their restrictions to \(\mathcal{C}^{\prime}\)._

Proof.: Indeed, since \(G_{1},G_{2}\) take values in \(\mathcal{C}^{\prime}\), they are right adjoint to the restrictions of \(F_{1},F_{2}\) to \(\mathcal{C}^{\prime}\), respectively. Since the two restrictions are isomorphic, it follows that \(G_{1}\simeq G_{2}\), and so \(F_{1}\simeq F_{2}\).

Proof of Proposition 5.2.4.: We need to construct, for all objects \(\mathcal{A}\in\mathcal{H}^{(1)},\mathcal{B}\in\mathcal{H}^{(2)}\), a natural isomorphism \[\mathcal{B}\bowtie(\mathcal{A}\star_{1}\mathfrak{Z})\stackrel{{\sim}}{{\longrightarrow}}(\mathcal{B}\bowtie\mathcal{A})\star_{1}\mathfrak{Z}.\] Define two functors \(\mathcal{H}^{(2)}\to\mathcal{H}^{(1)}_{mon}\) as \[F_{1}(\mathcal{B})=\mathcal{B}\bowtie(\mathcal{A}\star_{1}\mathfrak{Z}),\quad F_{2}(\mathcal{B})=(\mathcal{B}\bowtie\mathcal{A})\star_{1}\mathfrak{Z}.\] Since \(-\star_{1}\mathfrak{Z}\) has a structure of a functor in \(\mathcal{Z}\mathcal{H}^{(1)}_{mon}\), the functor \(F_{1}\) is isomorphic to \(F_{2}\) when restricted to \(\mathcal{H}^{(2)}_{mon}\).
Hence, by Lemma 5.2.5, it is enough to show that the functors right adjoint to \(F_{1},F_{2}\) take values in \(\mathcal{H}^{(2)}_{mon}\).

It is easy to see that the functor right adjoint to \(-\star_{1}\mathfrak{X},\mathfrak{X}\in\mathcal{H}^{(1)}\), preserves the category \(\mathcal{H}^{(1)}_{mon}\) (this functor is given by the dually defined "\(*\)-convolution" with \(\mathbb{D}\mathfrak{X}\)). It is thus enough to show that the functor right adjoint to \(\mathcal{B}\mapsto\mathcal{B}\bowtie\mathcal{A}\) takes values in \(\mathcal{H}^{(2)}_{mon}\) for \(\mathcal{A}\in\mathcal{H}^{(1)}_{mon}\).

Recall the definition of the functor \(-\bowtie-\). We have the diagram \[\mathcal{Y}\xleftarrow{\;p_{23}\;}\mathcal{Y}^{(2)}\xrightarrow{\;p_{14}\;}\mathcal{Y},\] and for \(\mathcal{B}\in\mathcal{H}^{(2)},\mathcal{A}\in\mathcal{H}^{(1)}\) we defined \[\mathcal{B}\bowtie\mathcal{A}=p_{14!}(\operatorname{For}_{G}^{G^{2}}(\mathcal{B})\otimes p_{23}^{*}(\mathcal{A})).\] It follows that the right adjoint is given by the following expression, up to the omitted shift: \[\mathcal{F}\mapsto\operatorname{Av}_{G\times G*}^{G}p_{14}^{*}\mathcal{F}\stackrel{{!}}{{\otimes}}p_{23}^{*}\mathbb{D}\mathcal{A},\] where \(\operatorname{Av}_{G\times G*}^{G}\) stands for the operator of \(*\)-averaging from the \(G\)-equivariant derived category to the \(G\times G\)-equivariant derived category. Since \(\mathcal{A}\) is assumed to be monodromic, the sheaf \(p_{14}^{*}\mathcal{F}\stackrel{{!}}{{\otimes}}p_{23}^{*}\mathbb{D}\mathcal{A}\) is monodromic with respect to the projection \(G/U\to G/B\) along the second and third factors. Since the left action of \(G\times G\) commutes with the right \(T\)-action along every factor, it follows that \(\operatorname{Av}_{G\times G*}^{G}p_{14}^{*}\mathcal{F}\stackrel{{!}}{{\otimes}}p_{23}^{*}\mathbb{D}\mathcal{A}\) is also monodromic with respect to the corresponding projection. The claim now follows, since a complex in \(D^{b}(G\backslash(G/U\times G/U))\) that is monodromic with respect to the projection along one factor is also monodromic with respect to the projection along the other.

Proof of Theorem 5.2.2.: Let \((\mathcal{ZH}^{(1)})_{mon}\) be the full subcategory of \(\mathcal{ZH}^{(1)}\) with objects \(F\) satisfying \(\varepsilon(F)\in\mathcal{H}^{(1)}_{mon}\). Since \(\varepsilon\) is monoidal, the subcategory \((\mathcal{ZH}^{(1)})_{mon}\) is semigroupal. Now it follows from Theorem 2.2.1 and Proposition 4.1.1 that \(\tilde{a}\) induces an equivalence of semigroupal categories \[D^{b}_{\mathfrak{C}}(G)\to(\mathcal{ZH}^{(1)})_{mon}.\] It suffices to show that there exists a natural equivalence of semigroupal categories \[(\mathcal{ZH}^{(1)})_{mon}\stackrel{{\sim}}{{\longrightarrow}}\mathcal{ZH}^{(1)}_{mon}.\] The functor \((\mathcal{ZH}^{(1)})_{mon}\to\mathcal{ZH}^{(1)}_{mon}\) is given by forgetting the module structure for the action of non-monodromic objects. For \(F\in(\mathcal{ZH}^{(1)})_{mon}\) the condition that the limit \(F(\hat{\delta})\) exists follows from \(\hat{\delta}\) being a monoidal pro-unit, see Proposition 5.1.1 (a).

To define the inverse functor, we need to show that any functor \(F_{mon}\in\mathcal{ZH}^{(1)}_{mon}\) can be extended to a functor \(F\in\mathcal{ZH}^{(1)}\), with \[F_{mon}\simeq F\circ\iota_{mon},\] where \(\iota_{mon}\) is the canonical embedding \(\mathcal{H}^{(1)}_{mon}\to\mathcal{H}^{(1)}\).
First note that, since \(F_{mon}(\hat{\delta})\) exists in \(\mathcal{H}^{(1)}_{mon}\), we have \[F_{mon}(\mathcal{A})\simeq\mathcal{A}\star_{1}F_{mon}(\hat{\delta}).\] Indeed, since in the monodromic category the \(*\)-convolution is isomorphic, up to a homological shift, to the \(!\)-convolution (see, for example, Lemma 4.1.1 of [1]), the functor \(\mathcal{A}\star_{1}-\) is a right adjoint and so preserves limits. We thus have, using Lemma 4.2.1 and the fact that \(\hat{\delta}\) is a monoidal unit, \[\mathcal{A}\star_{1}F_{mon}(\hat{\delta})\,\simeq\,F_{mon}(\mathcal{A}\star_{1}\hat{\delta})\,\simeq\,F_{mon}(\mathcal{A})\star_{1}\hat{\delta}\,\simeq\,F_{mon}(\mathcal{A}). \tag{13}\] The result now follows from Proposition 5.2.4 applied to \(\mathfrak{Z}=F_{mon}(\hat{\delta})\).

We will need some explicit DG models for the derived categories \(\mathcal{H}^{(1)}_{mon}\) and \(\mathcal{H}^{(2)}_{mon}\), which we now describe.

### 5.3. DG models for monodromic categories.

Let \(X\) be a scheme with a left action of an algebraic group \(G\) and a commuting right action of an algebraic torus \(A\). Let \(S_{A}=\Bbbk[X_{*}(A)]^{\wedge}\) be the group algebra of the cocharacter lattice \(X_{*}(A)\) of \(A\) completed at the augmentation ideal. Let \(K_{A}\) be the Koszul complex of the augmentation ideal in \(S_{A}\). We may identify \(S_{A}\) with the symmetric algebra of the \(\Bbbk\)-vector space \(V_{A}=X_{*}(A)\otimes_{\mathbb{Z}}\Bbbk\) completed at the origin. (For instance, for \(A=\mathbb{G}_{m}\) one has \(S_{A}\simeq\Bbbk[[u]]\), and \(K_{A}\) is the two-term complex \(S_{A}\xrightarrow{\,u\,}S_{A}\).) In the settings where it is relevant, we make \(S_{A}\) and \(K_{A}\) into DG Fr-algebras by making Fr act on \(V_{A}\) by \(q^{-1}\).

Consider the following DG categories.

Let \(\hat{\mathcal{T}}^{(1)}_{DG}\) be the category of complexes of free-monodromic tilting sheaves in \(\hat{\mathscr{M}}\) together with the action of the DG Fr-algebra \(K_{T}\) for the right diagonal action of \(T\) on \(Y^{2}\), with the action of \(S_{T}\) compatible with the monodromy action, and such that the underlying complex represents an object in \(\mathscr{M}\).

Let \(\mathcal{P}^{(1)}_{DG}\) be the category of complexes of perverse sheaves in \(\mathscr{M}\) together with the action of the DG Fr-algebra \(K_{T}\) for the right diagonal action of \(T\) on \(Y^{2}\), with the action of \(S_{T}\) compatible with the monodromy action.

Let \(\mathcal{P}^{(1)}_{w_{0},DG}\) be the category of complexes of objects in \(\mathcal{P}_{w_{0}}\) together with the action of the DG Fr-algebra \(K_{T}\) for the right diagonal action of \(T\) on \(Y^{2}\), with the action of \(S_{T}\) compatible with the monodromy action.

Let \(\hat{\mathcal{T}}^{(2)}_{DG}\) be the category of complexes of free-monodromic tilting sheaves in \(\hat{\mathscr{M}}^{(2)}:=\hat{D}^{b}(G^{2}\backslash((G/U)^{4}\nearrow T^{4}))\) together with the action of the DG Fr-algebra \(K_{T^{2}}\) for the action of \(T^{2}\) on \(Y^{4}\) given by the formula (2), with the action of \(S_{T^{2}}\) compatible with the monodromy action, and such that the underlying complex represents an object in \(\mathscr{M}^{(2)}:=D^{b}(G^{2}\backslash((G/U)^{4}\nearrow T^{4}))\).

We define the convolutions \(-\star_{1}^{DG}-\) between \(\hat{\mathcal{T}}^{(1)}_{DG}\) and either \(\hat{\mathcal{T}}^{(1)}_{DG},\mathcal{P}^{(1)}_{DG}\) or \(\mathcal{P}^{(1)}_{w_{0},DG}\); \(-\star_{2}^{DG}-\) between \(\hat{\mathcal{T}}^{(2)}_{DG}\) and \(\hat{\mathcal{T}}^{(2)}_{DG}\); and \(-\bowtie^{DG}-\) between \(\hat{\mathcal{T}}^{(2)}_{DG}\) and either \(\hat{\mathcal{T}}^{(1)}_{DG}\) or \(\mathcal{P}^{(1)}_{DG}\) as follows.
The convolution \(\mathcal{T}_{1}\star_{1}\mathcal{T}_{2}=p_{13!}(p_{12}^{*}\mathcal{T}_{1}\otimes p_{23}^{*}\mathcal{T}_{2})\) between two free-monodromic tilting objects is defined via the diagram (1), with the quotient by the right diagonal torus action omitted, and is again a tilting object, as follows from Proposition 5.1.1. Considering the sheaves on \(Y^{3}\) equipped with the action of \(K_{T}\) corresponding to the diagonal action of \(T\) makes the operations compatible with the \(K_{T}\)-action, thus providing an action of \(K_{T}\) on \(\mathcal{T}_{1}\star_{1}\mathcal{T}_{2}\). This gives a convolution of two objects in \(\hat{\mathcal{T}}^{(1)}_{DG}\). We define \(P_{1}\star_{1}^{DG}\mathcal{T}_{2}\), with \(P_{1}\) in \(\mathcal{P}\) or \(\mathcal{P}_{w_{0}}\) and \(\mathcal{T}_{2}\) free-monodromic tilting, as the object in \(\mathcal{P}^{(1)}_{DG}\) or \(\mathcal{P}^{(1)}_{w_{0},DG}\) in the same way, by applying Corollary 5.1.2.

To define \(\mathcal{T}_{1}\star_{2}^{DG}\mathcal{T}_{2}\) in \(\hat{\mathcal{T}}^{(2)}_{DG}\) we start by taking \[\mathcal{T}_{1}\star_{2}^{\prime}\mathcal{T}_{2}=p_{1346!}(p_{1256}^{*}\mathcal{T}_{1}\otimes p_{2345}^{*}\mathcal{T}_{2}),\] defined via the diagram used for \(\star_{2}\), again with the torus quotients omitted. This complex is equipped with the action of \(K_{T^{2}}\) compatible with the action of the monodromy, where one copy of \(K_{T}\) acts via its action on the first and the fourth coordinates on \(\mathcal{T}_{1}\) and the other via its action on the second and the third coordinates on \(\mathcal{T}_{2}\). Furthermore, note that the action of \(K_{T}\) via the action on the second and the third coordinates on \(\mathcal{T}_{1}\) and the action of \(K_{T}\) via the action on the first and the fourth coordinates on \(\mathcal{T}_{2}\) induce the same action on \(\mathcal{T}_{1}\star_{2}^{\prime}\mathcal{T}_{2}\). The diagonal \(K_{T}\)-action kills the augmentation ideal of its degree zero component \(S_{T}=K_{T}^{0}\) and, hence, factors through the action of the exterior algebra \(\Lambda(V_{T}[1])\) of the Koszul generators. We define \[\mathcal{T}_{1}\star_{2}^{DG}\mathcal{T}_{2}:=(\mathcal{T}_{1}\star_{2}^{\prime}\mathcal{T}_{2})\otimes_{\Lambda(V_{T}[1])}^{L}\Bbbk,\] where \(\Bbbk\) is considered to be the \(1\)-dimensional module over \(\Lambda(V_{T}[1])\) concentrated in homological degree \(0\). Together with the action of \(K_{T^{2}}\) from above this provides an object of \(\hat{\mathcal{T}}_{DG}^{(2)}\).

Similarly, to define \(\mathcal{T}_{1}\bowtie^{DG}\mathcal{T}_{2}\) we start with \(\mathcal{T}_{1}\bowtie^{\prime}\mathcal{T}_{2}=p_{14!}(\mathcal{T}_{1}\otimes p_{23}^{*}(\mathcal{T}_{2}))\), given via the diagram used for \(\bowtie\). The action of \(K_{T}\) on \(\mathcal{T}_{1}\bowtie^{\prime}\mathcal{T}_{2}\) comes from the action on \(\mathcal{T}_{1}\) via the action on the first and the fourth coordinates. Meanwhile, the action of \(K_{T}\) on \(\mathcal{T}_{1}\) via the action on the second and the third coordinates and the action of \(K_{T}\) on \(\mathcal{T}_{2}\) induce the same action on \(\mathcal{T}_{1}\bowtie^{\prime}\mathcal{T}_{2}\). The diagonal \(K_{T}\)-action kills the augmentation ideal of its degree zero component \(S_{T}=K_{T}^{0}\) and, hence, factors through the exterior algebra \(\Lambda(V_{T}[1])\). We define \[\mathcal{T}_{1}\bowtie^{DG}\mathcal{T}_{2}:=(\mathcal{T}_{1}\bowtie^{\prime}\mathcal{T}_{2})\otimes_{\Lambda(V_{T}[1])}^{L}\Bbbk.\] Together with the action of \(K_{T}\) from above this provides an object of \(\hat{\mathcal{T}}_{DG}^{(1)}\). We have the following

**Proposition 5.3.1**.:
(a) _The DG categories_ \(\hat{\mathcal{T}}_{DG}^{(1)},\mathcal{P}_{DG}^{(1)},\mathcal{P}_{w_{0},DG}^{(1)}\) _are DG models for the category_ \(\mathcal{H}_{mon}^{(1)}\)_._
(b) _The DG category_ \(\hat{\mathcal{T}}^{(2)}_{DG}\) _is a DG model for the category_ \(\mathcal{H}^{(2)}_{mon}\)_._
(c) _The convolutions_ \(-\star_{1}^{DG}-,-\star_{2}^{DG}-\) _and_ \(-\bowtie^{DG}-\) _define monoidal structures on the respective categories or actions of the monoidal categories. Moreover, the DG enhancements_ \(\hat{\mathcal{T}}^{(1)}_{DG},\mathcal{P}^{(1)}_{DG},\mathcal{P}^{(1)}_{w_{0},DG}\) _of_ \(\mathcal{H}^{(1)}_{mon}\) _and_ \(\hat{\mathcal{T}}^{(2)}_{DG}\) _of_ \(\mathcal{H}^{(2)}_{mon}\) _are compatible with the convolution structures._

Proof.: Parts (a) and (b) are a straightforward generalization of Lemma 44 and Corollary 45 of [1]. Part (c) follows from [1, Proposition 51], similarly to [1, Proposition 50].

### 5.4. Abelian category of character sheaves as a categorical center.

Since the \(\star\)-convolution on \(D^{b}(G/_{Ad}G)\) is left t-exact with respect to the perverse t-structure, the abelian category \(\mathfrak{C}\) becomes a semigroupal category with the product \[X_{1}\star^{0}X_{2}:={}^{p}\mathcal{H}^{0}(X_{1}\star X_{2}).\]

We will use the following result, proved in the characteristic-\(0\) setting in [1] and in the general geometric setting in [1]. Let \[\operatorname{For}_{T}:\mathcal{H}^{(1)}\to\mathscr{M}\] stand for the functor forgetting the \(T\)-equivariance.

**Proposition 5.4.1**.: _The functor_ \[\mathcal{F}\mapsto\hat{\nabla}_{w_{0}}\star_{1}\operatorname{For}_{T}\mathfrak{hc}(\mathcal{F}),\quad D^{b}_{\mathfrak{C}}(G)\to\mathscr{M},\] _is t-exact with respect to the perverse t-structure._

It follows that for \(\mathcal{F}\in\mathfrak{C}\) we have \(\operatorname{For}_{T}\mathfrak{hc}(\mathcal{F})\in\mathcal{P}_{w_{0}}\).

Let \(\mathcal{Z}_{D}(\mathcal{A})\) stand for the Drinfeld center of a monoidal category \(\mathcal{A}\). We now give a new proof of Theorem 3.6 of [1] that works in all sheaf-theoretic settings considered, except the setting (V).

**Theorem 5.4.2**.: _In cases (I)-(IV), the restriction of the functor \(\operatorname{For}_{T}\mathfrak{hc}\) to \(\mathfrak{C}\) can be lifted to an equivalence of braided semigroupal categories_ \[\mathfrak{C}\xrightarrow{\ \sim\ }\mathcal{Z}_{D}(\mathcal{P}_{w_{0}}).\]

Proof of Theorem 5.4.2.: Let \(\mathcal{Z}\mathcal{H}^{(1)}_{ab}\) be the full subcategory of functors \(F\in\mathcal{Z}\mathcal{H}^{(1)}_{mon}\) satisfying \(\operatorname{For}_{T}\circ\varepsilon(F)\in\mathcal{P}_{w_{0}}\). It follows from Theorem 5.2.2 and Proposition 5.4.1 that we have an equivalence of semigroupal categories \(\mathfrak{C}\xrightarrow{\ \sim\ }\mathcal{Z}\mathcal{H}^{(1)}_{ab}\), induced by the functor \(\tilde{a}\). It suffices to show that \(\operatorname{For}_{T}\circ\varepsilon\) induces an equivalence \[\mathcal{Z}\mathcal{H}^{(1)}_{ab}\xrightarrow{\ \sim\ }\mathcal{Z}_{D}(\mathcal{P}_{w_{0}}).\] Indeed, the fact that \(\operatorname{For}_{T}\circ\varepsilon\circ\tilde{a}(\mathcal{F})\), \(\mathcal{F}\in\mathfrak{C}\), has a structure of an object in \(\mathcal{Z}_{D}(\mathcal{P}_{w_{0}})\) follows from an argument completely analogous to the one in Proposition 4.1.1 (a). We now construct the inverse functor \(g:\mathcal{Z}_{D}(\mathcal{P}_{w_{0}})\to\mathcal{Z}\mathcal{H}_{ab}^{(1)}\). Take \(Z\in\mathcal{Z}_{D}(\mathcal{P}_{w_{0}})\).
Looking at the central isomorphism \(Z\star_{1}\hat{\delta}\to\hat{\delta}\star_{1}Z\), we see that the left monodromy action on \(Z\) coincides with the right one, so that \(Z\) can be considered as an object \(\bar{Z}\) of \(\mathcal{P}_{w_{0},DG}^{(1)}\) concentrated in the \(0\)th homological degree. Let \(T_{1},T_{2},T_{3}\) be three complexes of free-monodromic tilting sheaves in \(\hat{\mathscr{M}}\). The central structure on \(Z\) gives an isomorphism \[T_{1}\star_{1}T_{2}\star_{1}Z\star_{1}T_{3}\to T_{1}\star_{1}T_{2}\star_{1}T_{3}\star_{1}Z \tag{14}\] of complexes of objects in \(\mathcal{P}_{w_{0}}\), by Corollary 5.1.2. Since any indecomposable free-monodromic tilting sheaf in the monodromic Hecke category for \(G^{2}\) has the form \(T_{1}\boxtimes T_{3}\) for free-monodromic tilting sheaves \(T_{1},T_{3}\) in \(\hat{\mathscr{M}}\), the isomorphism (14) defines an isomorphism \[\bar{A}\bowtie(\bar{X}\star_{1}\bar{Z})\to(\bar{A}\bowtie\bar{X})\star_{1}\bar{Z}\] in \(\mathcal{P}_{w_{0},DG}^{(1)}\) for objects \(\bar{A}\) in \(\hat{\mathcal{T}}_{DG}^{(2)}\) and \(\bar{X}\) in \(\hat{\mathcal{T}}_{DG}^{(1)}\), natural in \(\bar{A},\bar{X}\). It follows, by Proposition 5.3.1, that convolution with \(Z\) defines an object in \(\mathcal{Z}\mathcal{H}_{ab}^{(1)}\), which completes the proof. The compatibility with the braiding follows by the construction of the structure of the module endofunctor on \(-\star_{1}(\mathfrak{h}\circ\tilde{a}^{-1}\circ g(Z))\) in Theorem 2.2.1 (cf. Proposition 4.1.1 (a)).

### Vanishing result for central sheaves on a torus. We return to an arbitrary setting (I) - (V). Consider the closed embedding \(i\colon T\hookrightarrow\mathcal{Y}\) given by \(t\mapsto(U,tU)\), which passes to \(i\colon T/_{\operatorname{Ad}}T\hookrightarrow G\backslash\mathcal{Y}\) after taking quotients. Note that the category of unipotent free-monodromic sheaves on \(T\) is equivalent to the category of \(S_{T}\)-modules. The action of \(W\) on \(T\) induces an action of \(W\) on \(S_{T}\).

**Proposition 5.5.1**.: _Let \(M\) be an \(S_{T}\)-module and let \(\mathcal{F}\) be the corresponding sheaf on \(T\). Then \(i_{*}\mathcal{F}\) is in \(\mathcal{P}_{w_{0}}\) and admits a central structure in \(\mathcal{Z}_{D}(\mathcal{P}_{w_{0}})\) if and only if \(M\) descends to an \(S_{T}^{W}\)-module, i.e. there exists an \(S_{T}^{W}\)-module \(M_{0}\) such that \(M\cong M_{0}\otimes_{S_{T}^{W}}S_{T}\)._

Proof.: The fact that \(i_{*}\mathcal{F}\) is in \(\mathcal{P}_{w_{0}}\) follows from base change and the t-exactness of the \(*\)-pushforward from an affine open subset. By [1, Proposition 4.5.7] or [1, Theorem 11.8], depending on the setting, there is a fully faithful monoidal functor from the category of tilting objects in \(\hat{\mathscr{M}}\) to the category of \(S_{T}\otimes_{S_{T}^{W}}S_{T}\)-modules with the monoidal structure given by \(-\otimes_{S_{T}^{W}}-\). The essential image of this functor is the category of Soergel bimodules. By construction, objects supported on the image of \(i\) correspond under this functor to the \(S_{T}\otimes_{S_{T}^{W}}S_{T}\)-modules supported on the diagonal. More precisely, \(i_{*}\mathcal{F}\) goes to \(M\) with \(S_{T}\otimes_{S_{T}^{W}}S_{T}\) acting via the multiplication map. Since each object in \(\mathcal{P}_{w_{0}}\) admits a tilting resolution (see, for example, [1, Lemma B.2.6]), it is sufficient to study central Soergel bimodules.
If \(M\) descends to an \(S_{T}^{W}\)-module \(M_{0}\) and \(N\) is any \(S_{T}\otimes_{S_{T}^{W}}S_{T}\)-module, then there are canonical isomorphisms \[M\otimes_{S_{T}}N\simeq M_{0}\otimes_{S_{T}^{W}}N\simeq N\otimes_{S_{T}^{W}}M_{0}\simeq N\otimes_{S_{T}}M,\] satisfying all required compatibilities and, thus, endowing \(i_{*}\mathcal{F}\) with the desired central structure. Conversely, if \(i_{*}\mathcal{F}\) admits a central structure, then, since \(S_{T}\otimes_{S_{T}^{W}}S_{T}\) is a Soergel bimodule (it is known to be the image of the big tilting, i.e. the unique indecomposable tilting object with full support, see [1, Lemma 4.6.3] or [1, Theorem 9.1]), we have an isomorphism of \(S_{T}\otimes_{S_{T}^{W}}S_{T}\otimes_{S_{T}^{W}}S_{T}\)-modules \[M\otimes_{S_{T}^{W}}S_{T}\cong M\otimes_{S_{T}}(S_{T}\otimes_{S_{T}^{W}}S_{T})\xrightarrow{\sim}(S_{T}\otimes_{S_{T}^{W}}S_{T})\otimes_{S_{T}}M\cong S_{T}\otimes_{S_{T}^{W}}M.\] Moreover, this map satisfies the cocycle condition for the composition \[M\otimes_{S_{T}^{W}}S_{T}\otimes_{S_{T}^{W}}S_{T}\xrightarrow{\sim}S_{T}\otimes_{S_{T}^{W}}M\otimes_{S_{T}^{W}}S_{T}\xrightarrow{\sim}S_{T}\otimes_{S_{T}^{W}}S_{T}\otimes_{S_{T}^{W}}M.\] This ensures the descent of \(M\) to an \(S_{T}^{W}\)-module.

Assume that \(\mathcal{F}\in D^{b}(T)\) satisfies one of the equivalent conditions of Proposition 5.5.1. Then there is a \(W\)-equivariant structure on \(\mathcal{F}\) coming from the \(W\)-action on \(S_{T}\). This gives a \(W\)-action on \(\chi(i_{*}\mathcal{F})=\operatorname{Ind}_{B}^{G}(\mathcal{F})\), where \(\operatorname{Ind}_{B}^{G}(-)\) stands for the functor of parabolic induction. We have the following corollary of the results in this section, proved in [11] (see Theorem 6.1) with a restriction on the characteristic.

**Corollary 5.5.2**.: _The sheaf \(\mathfrak{hc}(\operatorname{Ind}_{B}^{G}(\mathcal{F})^{W})\), where \((-)^{W}\) stands for the functor of \(W\)-invariants, is supported on the preimage of the diagonal under the quotient map \(Y^{2}\to(G/B)^{2}\)._

Proof.: It is enough to prove the statement in settings (I)-(IV). By Proposition 5.5.1 and Theorem 5.4.2, we see that there is an object \(\mathcal{E}\) in \(\mathfrak{C}\) with \(\mathfrak{hc}(\mathcal{E})\simeq i_{*}\mathcal{F}\). We have \[\operatorname{Ind}_{B}^{G}\mathcal{F}\simeq\chi(i_{*}\mathcal{F})\simeq\chi\circ\mathfrak{hc}(\mathcal{E})\simeq\Sigma\star\mathcal{E},\] where the first isomorphism holds because \(i_{*}\mathcal{F}\) is supported over the diagonal in \((G/B)^{2}\), and the last one follows from Lemma 3.2.2. We get that \(\mathcal{E}\simeq(\operatorname{Ind}_{B}^{G}(\mathcal{F}))^{W}\), where the invariants are taken with respect to the \(W\)-action on the Springer sheaf, the action being chosen so that \(\delta_{e}\simeq\Sigma^{W}\). The compatibility of this action and the action used in the formulation of the Corollary is proved in Proposition 5.4.6 of [1].

## Appendix A. Sheaves on the Vinberg semigroup The aim of this appendix is to give a different proof of Proposition 5.1.3, namely of the fact that the convolution \(\star_{1}\) is left t-exact with respect to the t-structure \((D^{\leq 0}_{w_{0}},D^{\geq 0}_{w_{0}})\), following an idea of Drinfeld. ### General facts about convolution on semigroups. Let \(V\) be an algebraic semigroup, and let \(Z\subset V\) be a closed two-sided ideal (i.e. \(Z\) is a closed algebraic subset of \(V\) which is a two-sided ideal with respect to the semigroup product). Let \(\check{V}\subset V\) be the open complement of \(Z\).
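Before developing the general theory, it may help to keep a minimal example of this setup in mind (our illustration, not taken from the text): the multiplicative monoid of the affine line, \[V=\mathbb{A}^{1},\qquad Z=\{0\},\qquad\check{V}=\mathbb{G}_{m}.\] Here \(Z\) is a closed two-sided ideal since \(0\cdot x=x\cdot 0=0\), the preimage \(C=m^{-1}(\mathbb{G}_{m})\) is \(\mathbb{G}_{m}\times\mathbb{G}_{m}\), and the operation \(\mathbin{\underline{\star}}\) defined below reduces to the usual \(!\)-convolution on the group \(\mathbb{G}_{m}\).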
The category \(D^{b}(V)\) is semigroupal (monoidal if \(V\) is unital), with the convolution given by the formula \[\mathcal{F}\star_{V}\mathcal{G}:=m_{!}(\mathcal{F}\boxtimes\mathcal{G}),\] where \(m\) stands for the multiplication map \(m:V\times V\to V\). Since the map \(m\) is affine, the convolution \(\star_{V}\) is left t-exact with respect to the perverse t-structure, by Artin's theorem. Write \(C\subset V\times V\) for the preimage \(m^{-1}(\check{V})\) of \(\check{V}\) under the multiplication map. Note that, since \(Z\) is a two-sided ideal, \(C\subset\check{V}\times\check{V}\). Consider the diagram \[\check{V}\xleftarrow{\ \pi_{1}^{\prime}\ }C\xrightarrow{\ \pi_{2}^{\prime}\ }\check{V},\qquad m^{\prime}\colon C\to\check{V}, \tag{15}\] where \(\pi^{\prime}_{1},\pi^{\prime}_{2},m^{\prime}\) are the restrictions to \(C\) of the projections \(\pi_{1},\pi_{2}:V^{2}\to V\) and of the multiplication map \(m\), respectively. The category \(D^{b}(\check{V})\) is equipped with the "convolution" operation \[\mathcal{F}\mathbin{\underline{\star}}\mathcal{G}:=m^{\prime}_{!}(\pi^{\prime*}_{1}\mathcal{F}\otimes\pi^{\prime*}_{2}\mathcal{G}). \tag{16}\] We will use the following:

**Lemma A.1.1**.: _The operation \(\mathbin{\underline{\star}}\) is left t-exact with respect to the perverse t-structure._

Proof.: Let \(j:\check{V}\to V\), \(i:Z\to V\) be the open and the complementary closed embeddings. Using base change and a straightforward diagram chase, one obtains an isomorphism \[\mathcal{F}\mathbin{\underline{\star}}\mathcal{G}\xrightarrow{\ \sim\ }j^{*}(j_{!}\mathcal{F}\star_{V}j_{!}\mathcal{G}). \tag{17}\] Consider the canonical distinguished triangles \[i_{*}i^{!}j_{!}\mathcal{F}\to j_{!}\mathcal{F}\to j_{*}\mathcal{F}\to i_{*}i^{!}j_{!}\mathcal{F}[1],\] \[i_{*}i^{!}j_{!}\mathcal{G}\to j_{!}\mathcal{G}\to j_{*}\mathcal{G}\to i_{*}i^{!}j_{!}\mathcal{G}[1].\] Since \(Z\) is a two-sided monoidal ideal, the sheaves \[i_{*}i^{!}j_{!}\mathcal{F}\star_{V}j_{*}\mathcal{G},\quad j_{*}\mathcal{F}\star_{V}i_{*}i^{!}j_{!}\mathcal{G},\quad i_{*}i^{!}j_{!}\mathcal{F}\star_{V}i_{*}i^{!}j_{!}\mathcal{G}\] are supported on \(Z\). It follows that we have an isomorphism \[j^{*}(j_{!}\mathcal{F}\star_{V}j_{!}\mathcal{G})\stackrel{{\sim}}{{\longrightarrow}}j^{*}(j_{*}\mathcal{F}\star_{V}j_{*}\mathcal{G}).\] Now, since the functors \(j_{*}\) and \(\star_{V}\) are left t-exact and \(j^{*}\) is t-exact, the result follows from the isomorphism (17).

### Vinberg semigroup and the horocycle space. We recall some facts about the Vinberg semigroup, following mostly the exposition of [10], and refer the reader there for further bibliography. Let \(G,B,U,T\) be as in the main text, and let \(U^{-}\) stand for a maximal unipotent subgroup opposite to \(U\). Let \(Z(G)\) stand for the center of \(G\). Attached to \(G\) there is an affine semigroup \(\overline{G_{\mathrm{enh}}}\), whose group of invertible elements is \(G_{\mathrm{enh}}=(G\times T)/Z(G)\). \(G\) embeds into \(G_{\mathrm{enh}}\), and so \(G\times G\) acts on \(\overline{G_{\mathrm{enh}}}\) by multiplication on the left and right: \((g_{1},g_{2})\cdot x=g_{1}xg_{2}^{-1}\). In loc. cit., an open subset \(\overset{\circ}{\overline{G_{\mathrm{enh}}}}\subset\overline{G_{\mathrm{enh}}}\), such that the complement \(\overline{G_{\mathrm{enh}}}\backslash\overset{\circ}{\overline{G_{\mathrm{enh}}}}\) is a two-sided ideal, is defined. Let \(r\) stand for the semisimple rank of \(G\).
Let \[T_{\mathrm{adj}}=T/Z(G),\quad\overline{T_{\mathrm{adj}}}\simeq\mathbb{A}^{r}\supset T_{\mathrm{adj}}.\] The semigroup \(\overline{G_{\mathrm{enh}}}\) comes equipped with a homomorphism of semigroups \[\bar{\pi}:\overline{G_{\mathrm{enh}}}\to\overline{T_{\mathrm{adj}}}\] and a section \[\bar{\mathfrak{s}}:\overline{T_{\mathrm{adj}}}\to\overset{\circ}{\overline{G_{\mathrm{enh}}}}\subset\overline{G_{\mathrm{enh}}}.\] Let \(V\) be the fiber of \(\bar{\pi}\) over \(0\in\overline{T_{\mathrm{adj}}}\simeq\mathbb{A}^{r}\). This is an affine semigroup. Let \(\mathring{V}=\overset{\circ}{\overline{G_{\mathrm{enh}}}}\cap V\), an open subset of \(V\). Since both \(V\) and \(\overline{G_{\mathrm{enh}}}\backslash\overset{\circ}{\overline{G_{\mathrm{enh}}}}\) are two-sided ideals in \(\overline{G_{\mathrm{enh}}}\), we get that \(Z:=V\backslash\mathring{V}\) is a two-sided ideal in \(V\), so that we are in the situation of the previous subsection and of Lemma A.1.1. The proposition below follows directly from the constructions in Appendix D of [10].

**Proposition A.2.1**.: 1. _The open subset_ \(\mathring{V}\subset V\) _is the_ \(G\times G\)_-orbit of_ \(\bar{\mathfrak{s}}(0)\)_._ 2. _The stabilizer of_ \(\bar{\mathfrak{s}}(0)\) _in_ \(G\times G\) _is_ \(U\times U^{-}\times T\)_, embedded as_ \((u,u^{\prime},t)\mapsto(ut,u^{\prime}t)\)_, so that_ \[\mathring{V}\simeq\frac{G/U\times G/U^{-}}{T},\] _with_ \(T\) _acting on_ \(G/U\times G/U^{-}\) _diagonally on the right._ 3. _The preimage_ \(C\) _of_ \(\mathring{V}\) _in_ \(\mathring{V}\times\mathring{V}\) _under the multiplication map is identified with_ \[C=\{((x_{1}U,x_{2}U^{-})T,(x_{3}U,x_{4}U^{-})T)\in\mathring{V}^{2}:x_{2}^{-1}x_{3}\in U^{-}B\}.\]

Note that, for \(V,\mathring{V},C\) as above, all arrows in the diagram (15) are \(G\)-equivariant. Combining Proposition A.2.1 with Lemma A.1.1, we obtain

**Corollary A.2.2**.: _The operation \(\mathbin{\underline{\star}}\), defined on the derived category_ \[D^{b}(G\backslash(G/U\times G/U^{-})/T)\] _via the formula (16), is left t-exact with respect to the perverse t-structure._

### Radon transform and convolution on monodromic categories We introduce some more notation. Let \(H,H_{1},H_{2}\in\{U,U^{-}\}\), and consider the diagram of projections \[(G/H_{1}\times G/H)/T\xleftarrow{\ p_{12}\ }(G/H_{1}\times G/H\times G/H_{2})/T\xrightarrow{\ p_{23}\ }(G/H\times G/H_{2})/T,\] \[p_{13}\colon(G/H_{1}\times G/H\times G/H_{2})/T\longrightarrow(G/H_{1}\times G/H_{2})/T.\] Here \(p_{ij}\) stands for the projection onto the factors \(i,j\). Note that all arrows in the above diagram are equivariant with respect to the left diagonal action of \(G\) and the right diagonal action of \(T\). We define the convolution operation \(-\star_{H}-\): for \[\mathcal{F}\in D^{b}(G\backslash(G/H_{1}\times G/H)/T),\quad\mathcal{G}\in D^{b}(G\backslash(G/H\times G/H_{2})/T)\] write \[\mathcal{F}\star_{H}\mathcal{G}:=p_{13!}(p_{12}^{*}\mathcal{F}\otimes p_{23}^{*}\mathcal{G}).\] Note that for \(H_{1}=H_{2}=H=U\) this coincides with the monoidal structure \(-\star_{1}-\). The operations \(\star_{H}\) define an associative product on the sum of categories \[\bigoplus_{H_{1},H_{2}\in\{U,U^{-}\}}D^{b}(G\backslash(G/H_{1}\times G/H_{2})/T),\] where the product of two objects is defined to be zero whenever no operation \(\star_{H}\) is applicable, i.e. whenever the middle subgroups do not match. Let \[O_{w_{0}}=\{(xU^{-},yU)T\in(G/U^{-}\times G/U)/T:x^{-1}y\in U^{-}U\}.\] The closed embedding \(\iota_{w_{0}}:O_{w_{0}}\to(G/U^{-}\times G/U)/T\) is \(G\)-equivariant.
Define \[\mathfrak{R}=\iota_{w_{0}!}\underline{\Bbbk}_{G\backslash O_{w_{0}}}\simeq\iota_{w_{0}*}\underline{\Bbbk}_{G\backslash O_{w_{0}}}.\] Let \[\tilde{O}_{w_{0}}=\{(xU^{-},yU)T\in(G/U^{-}\times G/U)/T:x^{-1}y\in U^{-}B\}.\] Let \(\phi:\tilde{O}_{w_{0}}\to T\) stand for the projection \[(xU^{-},yU)\mapsto x^{-1}y=u^{-}ut\mapsto t,\qquad u\in U,\ u^{-}\in U^{-},\ t\in T,\] which is easily seen to be well-defined and \(G\)-equivariant (with the trivial action of \(G\) on \(T\)), and which descends to the quotient by the right diagonal action of \(T\). This trivializes the \(T\)-torsor \[\tilde{O}_{w_{0}}\simeq O_{w_{0}}\times T.\] Write \(\hat{\mathcal{L}}\) for the pro-unipotent local system on \(T\) and, abusing notation, for its pullback to \(\tilde{O}_{w_{0}}\). The open embedding \(\iota_{w_{0}}:\tilde{O}_{w_{0}}\to(G/U^{-}\times G/U)/T\) is \(G\)-equivariant. Define \[\hat{\mathfrak{R}}_{!}=\iota_{w_{0}!}\hat{\mathcal{L}}\langle\dim T\rangle\in D^{b}(G\backslash(G/U^{-}\times G/U)/T),\] where \(\langle\dim T\rangle\) stands for the shift and Tate twist \([\dim T](\dim T/2)\) in the setting (V) and for the homological shift in the other settings. Similarly, swapping \(U\) and \(U^{-}\) in the above, define \[\hat{\mathfrak{R}}_{*}=\iota_{w_{0}*}\hat{\mathcal{L}}\langle\dim T\rangle\in D^{b}(G\backslash(G/U\times G/U^{-})/T).\] We call the functor \[\mathcal{F}\mapsto\mathcal{F}\star_{U^{-}}\mathfrak{R}:D^{b}(G\backslash(G/U\times G/U^{-})/T)\to D^{b}(G\backslash(G/U\times G/U)/T)\] the (\(!\)-version of the) Radon transform functor. We record some standard properties of the Radon transform that we will need. By monodromic categories we mean, as before, the categories of sheaves monodromic with respect to the projections \(G/U\to G/B\) and \(G/U^{-}\to G/B^{-}\), where \(B^{-}\) is the Borel subgroup containing \(U^{-}\). Recall that, when treating monodromic categories, we shift the convolution product by \([\dim T]\).

**Proposition A.3.1**.: (a) _There is an isomorphism of functors, natural in_ \(\mathcal{F},\mathcal{G}\)_:_ \[\mathcal{F}\mathbin{\underline{\star}}\mathcal{G}\xrightarrow{\ \sim\ }\mathcal{F}\star_{U^{-}}\mathfrak{R}\star_{U}\mathcal{G}.\] (b) _When restricted to monodromic categories, there is an isomorphism of functors natural in_ \(\mathcal{F}\)_:_ \[\mathcal{F}\star_{U^{-}}\mathfrak{R}\xrightarrow{\ \sim\ }\mathcal{F}\star_{U^{-}}\hat{\mathfrak{R}}_{!}.\] (c) _When restricted to monodromic categories, the Radon transform functor is an equivalence, with inverse given by_ \[\mathcal{F}\mapsto\mathcal{F}\star_{U}\hat{\mathfrak{R}}_{*}.\]

Proof.: Part (a) is a straightforward computation using the base change isomorphism and the isomorphism \[C\xrightarrow{\ \sim\ }\{(x_{1}U,x_{2}U^{-},x_{3}U,x_{4}U^{-})T\in(G/U\times G/U^{-})^{2}/T:x_{2}^{-1}x_{3}\in U^{-}U\},\] where a single copy of \(T\) acts diagonally on the right. The isomorphism is given by the formula \[((x_{1}U,x_{2}U^{-})T,(x_{3}U,x_{4}U^{-})T)\mapsto(x_{1}U,x_{2}U^{-},x_{3}\phi(x_{2}U^{-},x_{3}U)^{-1}U,x_{4}\phi(x_{2}U^{-},x_{3}U)^{-1}U^{-})T. \tag{18}\] Part (b) is a straightforward computation using base change and the fact that \(\hat{\mathcal{L}}\langle\dim T\rangle\) is the monoidal unit in the monodromic category of sheaves in \(D^{b}(T)\); see the references in Proposition 5.1.1. Part (c) is completely analogous to Proposition 5.1.1 (c).
**Remark A.3.2**.: Without the monodromicity assumption, the functor adjoint to the Radon transform is given by the appropriate \(*\)-convolution with an object analogous to \(\mathfrak{R}\), as opposed to the \(!\)-convolution with a \(*\)-extension of a sheaf on an open subset, as in the proposition above. Proposition 5.1.3 now follows directly from Corollary A.2.2 and the following

**Lemma A.3.3**.: _The functor \((-)\star_{U}\hat{\mathfrak{R}}_{*}\) intertwines the products \(\star_{1}=\star_{U}\) and \(\mathbin{\underline{\star}}\) on the monodromic categories. Namely, for any \(\mathcal{F},\mathcal{G}\in\mathcal{H}^{(1)}_{mon}\), there is a natural isomorphism_ \[(\mathcal{F}\star_{1}\mathcal{G})\star_{U}\hat{\mathfrak{R}}_{*}\simeq(\mathcal{F}\star_{U}\hat{\mathfrak{R}}_{*})\mathbin{\underline{\star}}(\mathcal{G}\star_{U}\hat{\mathfrak{R}}_{*}).\]

Proof.: By Proposition A.3.1 (a) and (b) we have an isomorphism \[(\mathcal{F}\star_{U}\hat{\mathfrak{R}}_{*})\mathbin{\underline{\star}}(\mathcal{G}\star_{U}\hat{\mathfrak{R}}_{*})\simeq(\mathcal{F}\star_{U}\hat{\mathfrak{R}}_{*})\star_{U^{-}}\hat{\mathfrak{R}}_{!}\star_{U}(\mathcal{G}\star_{U}\hat{\mathfrak{R}}_{*}).\] Using the associativity of the \(\star\)-operations and Proposition A.3.1 (c) we get \[(\mathcal{F}\star_{U}\hat{\mathfrak{R}}_{*})\star_{U^{-}}\hat{\mathfrak{R}}_{!}\star_{U}(\mathcal{G}\star_{U}\hat{\mathfrak{R}}_{*})\simeq(\mathcal{F}\star_{U}\hat{\mathfrak{R}}_{*}\star_{U^{-}}\hat{\mathfrak{R}}_{!})\star_{U}(\mathcal{G}\star_{U}\hat{\mathfrak{R}}_{*})\simeq(\mathcal{F}\star_{1}\mathcal{G})\star_{U}\hat{\mathfrak{R}}_{*}, \tag{19}\] and the result follows.
2304.04000
SimbaML: Connecting Mechanistic Models and Machine Learning with Augmented Data
Training sophisticated machine learning (ML) models requires large datasets that are difficult or expensive to collect for many applications. If prior knowledge about system dynamics is available, mechanistic representations can be used to supplement real-world data. We present SimbaML (Simulation-Based ML), an open-source tool that unifies realistic synthetic dataset generation from ordinary differential equation-based models and the direct analysis and inclusion in ML pipelines. SimbaML conveniently enables investigating transfer learning from synthetic to real-world data, data augmentation, identifying needs for data collection, and benchmarking physics-informed ML approaches. SimbaML is available from https://pypi.org/project/simba-ml/.
Maximilian Kleissl, Lukas Drews, Benedict B. Heyder, Julian Zabbarov, Pascal Iversen, Simon Witzke, Bernhard Y. Renard, Katharina Baum
2023-04-08T12:50:50Z
http://arxiv.org/abs/2304.04000v2
# SimbaML: Connecting Mechanistic Models and Machine Learning with Augmented Data ###### Abstract Training sophisticated machine learning (ML) models requires large datasets that are difficult or expensive to collect for many applications. If prior knowledge about system dynamics is available, mechanistic representations can be used to supplement real-world data. We present SimbaML (Simulation-Based ML), an open-source tool that unifies realistic synthetic dataset generation from ordinary differential equation-based models and the direct analysis and inclusion in ML pipelines. SimbaML conveniently enables investigating transfer learning from synthetic to real-world data, data augmentation, identifying needs for data collection, and benchmarking physics-informed ML approaches. SimbaML is available from [https://pypi.org/project/simba-ml/](https://pypi.org/project/simba-ml/). ## 1 Introduction and Related Work The success of machine learning (ML) models depends heavily on the quality and quantity of available data. However, collecting real-world data is costly and time-consuming. Recent advances in generative models that produce synthetic data, such as generative adversarial networks (Goodfellow et al., 2020), variational autoencoders (Wan et al., 2017), and diffusion models (Sohl-Dickstein et al., 2015), do not fully alleviate this issue, as these models need to be trained on large data corpora themselves and cannot extrapolate out-of-distribution. Scientific communities have commonly developed a wealth of domain knowledge that should be leveraged (Baker et al., 2018; Alber et al., 2019). Detailed prior knowledge of interactions between modeled entities can be represented by mechanistic models, such as ordinary differential equations (ODEs). They have been used to simulate the dynamical behavior of various systems (Herty et al., 2007; Harush & Barzel, 2017; Hass et al., 2019; Baum et al., 2019). Consequently, they allow physics-informed learning via observational bias (Karniadakis et al., 2021) and the generation of datasets for ML benchmarks. Multiple frameworks combining data generation from mechanistic models with ML have recently been proposed (Otness et al., 2021; Takamoto et al., 2022; Hoffmann et al., 2021); see Table 1 in the supplement for details. However, they do not allow for generating realistic data by simulating measurement errors or missing data. In addition, they are not designed for easy extension to other mechanistic models, mainly focus on benchmarking ML models, and commonly do not provide transfer learning functionalities. We introduce SimbaML, an all-in-one framework for integrating prior knowledge from ODE models into the ML process via synthetic data augmentation. SimbaML allows for the convenient generation of realistic synthetic data by sparsifying and adding noise. Furthermore, our framework provides customizable pipelines for various ML experiments, such as identifying needs for data collection and transfer learning. ## 2 SimbaML: Features and Use Cases SimbaML provides diverse functionalities to generate synthetic data to improve ML tasks (see Figure 1 A). As for the simulation part, we obtain time-series data by solving user-defined ODE systems. SimbaML can add different types of noise and sparsify data by randomly removing time points to make time-series data more realistic. Noise, kinetic parameters, as well as initial condition values can be easily configured by the user.
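To make the simulation step concrete, here is a minimal sketch of the kind of workflow SimbaML automates: solving a user-defined ODE system, perturbing the trajectories with noise, and sparsifying the time grid. It is written directly against NumPy/SciPy rather than SimbaML's actual API; the SIR dynamics, parameter ranges, noise scale, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(seed=0)

def sir(t, y, beta, gamma):
    """Susceptible-Infected-Recovered dynamics."""
    s, i, r = y
    n = s + i + r
    return [-beta * s * i / n, beta * s * i / n - gamma * i, gamma * i]

def synthetic_series(beta, gamma, y0=(990.0, 10.0, 0.0), t_max=60.0, steps=61):
    """Solve the ODE system on a regular grid of time points."""
    t_eval = np.linspace(0.0, t_max, steps)
    sol = solve_ivp(sir, (0.0, t_max), y0, args=(beta, gamma), t_eval=t_eval)
    return sol.t, sol.y.T  # shape (steps, 3)

# Generate several series with randomly sampled kinetic parameters, then make
# them "realistic": additive Gaussian noise plus a randomly sparsified grid.
datasets = []
for _ in range(10):
    beta, gamma = rng.uniform(0.2, 0.5), rng.uniform(0.05, 0.2)
    t, y = synthetic_series(beta, gamma)
    noisy = y + rng.normal(scale=0.05 * y.std(axis=0), size=y.shape)
    keep = np.sort(rng.choice(len(t), size=len(t) // 2, replace=False))
    datasets.append((t[keep], noisy[keep]))
```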
Furthermore, SimbaML offers multiple pipelines covering data pre-processing, training, and evaluation of ML models. In addition to supporting standard ML approaches, SimbaML allows for the effortless integration of any other model, for example, from Keras, PyTorch Lightning, and scikit-learn. As all pipelines can be customized based on configuration files, SimbaML enables highly automated ML experiments. Overall, SimbaML provides well-tested functionalities (100% test coverage) for various use cases, ranging from transfer learning to benchmarking and identifying needs for data collection. We illustrate the versatility of SimbaML in two use cases; toy sketches of both appear below. First, we use SimbaML to identify needs for data collection. We generate multiple time-series datasets with noise for a complex biochemical model of a signaling pathway (Huang & Ferrell, 1996). Based on the performance comparison for different ML models (see Figure 1 B), we can make an informed decision to use the random forest or nearest neighbor approach for smaller dataset sizes. The neural network becomes a viable candidate if sufficient data is available.

Figure 1: (A) Overview of the SimbaML framework. (B) Performance of various ML models for different synthetic dataset sizes simulated using a biochemical pathway model. SimbaML conveniently allows end-to-end evaluation of required dataset properties, given prior knowledge of the system dynamics. (C) Comparison of two Covid-19 7-day forecasts from March 13, 2020, using a probabilistic neural network augmented with synthetic data generated by SimbaML and real-world data (we show 50% and 85% prediction intervals). This suggests that including prior knowledge using SimbaML can improve the quality of forecasts.

Second, we use SimbaML to augment scarce real-world data (Robert Koch-Institut, 2022) for a COVID-19 time series forecasting task (see Figure 1 C). We generate noisy synthetic time series using the Susceptible-Infected-Recovered (SIR) epidemiological model (Kermack and McKendrick, 1927) with randomly sampled kinetic parameter values. Exemplary predictions on synthetically augmented training data improve compared to predictions based solely on observed data. ## 3 Conclusion We present SimbaML, an all-in-one solution for generating realistic data from ODE models and leveraging them for improved ML experiments. For two exemplary use cases, we show how SimbaML effectively utilizes prior knowledge via ODEs for ML tasks in settings with limited data availability. As an open-source Python package, we plan to extend SimbaML with additional functionalities, for example, out-of-the-box physics-informed and graph neural networks, to maximize the potential of mechanistic models for ML. ### Acknowledgements This work was supported by the Bundesministerium für Wirtschaft und Klimaschutz (Daten- und KI-gestütztes Frühwarnsystem zur Stabilisierung der deutschen Wirtschaft, 01MK21009E) given to BYR, and by the Klaus Tschira Stiftung gGmbH (GSO/KT 25) given to KB.
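A toy, self-contained version of the Figure 1 B-style experiment: fit several candidate models on growing amounts of synthetic data and compare their errors. This is a hypothetical illustration built on plain scikit-learn, not SimbaML's pipeline API; the stand-in regression task and all parameter choices are our own assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical stand-in for a synthetic ODE dataset: predict x(t+1)
# from (x(t), t) on noisy exponential-decay trajectories.
rng = np.random.default_rng(seed=1)
t = np.arange(50)
xs, ys = [], []
for _ in range(100):
    k = rng.uniform(0.02, 0.2)                    # a random kinetic rate
    x = np.exp(-k * t) + rng.normal(scale=0.01, size=t.size)
    xs.append(np.column_stack([x[:-1], t[:-1]]))  # features: (x(t), t)
    ys.append(x[1:])                              # target:   x(t+1)
X, y = np.vstack(xs), np.concatenate(ys)

models = {
    "random forest": RandomForestRegressor(random_state=0),
    "nearest neighbors": KNeighborsRegressor(),
    "neural network": MLPRegressor(max_iter=2000, random_state=0),
}
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for n in (100, 1000, len(X_tr)):                  # growing dataset sizes
    for name, model in models.items():
        model.fit(X_tr[:n], y_tr[:n])
        print(n, name, round(mean_absolute_error(y_te, model.predict(X_te)), 4))
```

As in the paper's first use case, comparing the printed errors across dataset sizes is what supports an informed choice of model before collecting expensive real-world data.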
2310.20029
Transcendence and normality of complex numbers via Hurwitz continued fractions
We study the topological, dynamical, and descriptive set theoretic properties of Hurwitz continued fractions. Hurwitz continued fractions associate an infinite sequence of Gaussian integers to every complex number which is not a Gaussian rational. The resulting space of sequences of Gaussian integers $\Omega$ is not closed. By means of an algorithm, we show that $\Omega$ contains a natural subset whose closure $\overline{\mathsf{R}}$ encodes continued fraction expansions of complex numbers which are not Gaussian rationals. We prove that $(\overline{\mathsf{R}}, \sigma)$ is a subshift with a feeble specification property. As an application, we determine the rank in the Borel hierarchy of the set of Hurwitz normal numbers with respect to the complex Gauss measure. We also construct a family of complex transcendental numbers with bounded partial quotients.
Felipe García-Ramos, Gerardo González Robert, Mumtaz Hussain
2023-10-30T21:25:51Z
http://arxiv.org/abs/2310.20029v2
# Transcendence and Normality of Complex Numbers ###### Abstract. Hurwitz continued fractions associate an infinite sequence of Gaussian integers to every complex number which is not a Gaussian rational. However, the resulting space of sequences of Gaussian integers \(\Omega\) is not closed. In this paper, we show that \(\Omega\) contains a natural subset whose closure \(\overline{\mathsf{R}}\) encodes continued fraction expansions of complex non-Gaussian rationals. Furthermore, by means of an algorithm that we develop, we exhibit for each complex non-Gaussian rational \(z\) a sequence belonging to \(\overline{\mathsf{R}}\) giving a continued fraction expansion of \(z\). We also prove that \((\overline{\mathsf{R}},\sigma)\) is a subshift with a feeble specification property. As an application, we determine the rank in the Borel hierarchy of the set of Hurwitz normal numbers with respect to the complex Gauss measure. More precisely, we prove that it is a \(\Pi^{3}_{0}\)-complete set. As a second application, we construct a family of complex transcendental numbers with bounded partial quotients. ## 1. Introduction A regular continued fraction is a finite or infinite expression of the form \[a_{0}+\cfrac{1}{a_{1}+\cfrac{1}{a_{2}+\cfrac{1}{\ddots}}} \tag{1}\] where all the terms \(a_{n}\) are integers and \(a_{n}\geq 1\) for any \(n\geq 1\). Regular continued fractions are a powerful tool in number theory. They provide a number system in which the expansion of a real number is infinite if and only if it is irrational; for example, \[\sqrt{2}=1+\cfrac{1}{2+\cfrac{1}{2+\ddots}}.\] From a Diophantine approximation perspective, regular continued fractions give the best rational approximations to a real number in a systematic way (see, for example, the classic reference [20]). Regular continued fractions can be studied from a dynamical perspective with the aid of the Gauss map \(T_{\mathbb{R}}:[0,1)\to[0,1)\) given by \(T_{\mathbb{R}}(0)=0\) and \(T_{\mathbb{R}}(x)=x^{-1}-[x^{-1}]\) for \(x\neq 0\), where \([\cdot]\) is the floor function. This dynamical approach not only provides a natural framework to explore diversity and complexity but also establishes connections with other fields such as harmonic analysis, probability theory, and descriptive set theory [4, 24, 25, 31]. There have been several attempts to obtain multidimensional continued fraction algorithms (see, for example, [18, 30]), but the higher dimensional theory, in comparison, is hardly as comprehensive as the one-dimensional theory. One such example is the continued fraction theory for the complex plane that we focus on in this paper. In his 1887 paper, Hurwitz introduced the continued fraction algorithm for complex numbers that now bears his name; it associates to every complex number which is not a Gaussian rational an infinite sequence of Gaussian integers, its partial quotients (precise definitions, including the space \(\Omega\) of such sequences and the set \(\overline{\mathsf{R}}\), are given in Sections 3 and 4). Call \(\sigma\) the left-shift map on \(\mathbb{Z}[i]^{\mathbb{N}}\). In [33], Yuri proved that \((\overline{\mathsf{R}},\sigma)\) is a sofic shift realized by a directed graph whose vertices are open prototype sets (see the definition in Section 3) and with edges labeled by Gaussian integers. Yuri's paper is concerned with the ergodic theory of a class of dynamical systems defined on compact subsets of \(\mathbb{R}^{n}\), \(n\in\mathbb{N}\).
She considered measures equivalent to the \(n\)-dimensional Lebesgue measure; thus, certain Lebesgue-null sets, as well as some subsets of the associated shift space, are discarded. These omissions force us to make a more detailed study of the relationship between \((\overline{\mathsf{R}},\sigma)\) and the dynamics of Hurwitz continued fractions when dealing with other problems such as those in Section 4 and Section 6. Recall that a real number \(x\in[0,1)\) is normal to base \(2\) if, for every \(k\in\mathbb{N}\), the asymptotic frequency of every finite word in \(\{0,1\}\) of length \(k\) in the binary expansion of \(x\) is exactly \(2^{-k}\). In general, given a numeration system, normal numbers are those for which the asymptotic frequency of every finite word of digits is typical. For the unit interval and regular continued fractions, the notion of _typical_ is determined by the Gauss measure of cylinders (see [3, 4, 5, 21]). In the context of Hurwitz continued fractions, we consider the unique Borel probability \(T\)-ergodic measure \(\mu_{\mathrm{H}}\) which is equivalent to the Lebesgue measure on \(\mathfrak{F}\) (see [13, 26]), and we refer to normal numbers as _Hurwitz normal numbers_ (see Subsection 5.1 for a precise definition). As a first application of Theorem 1.1 and its proof, we determine how complicated the set of Hurwitz normal numbers is as a Borel set.

**Theorem 1.2**.: _The set of Hurwitz normal numbers cannot be expressed as a countable intersection of open sets or as a countable union of closed (or even \(G_{\delta}\)) sets._

In fact, we will establish a more robust result by categorizing the set of normal numbers within the framework of the Borel hierarchy. The Borel hierarchy serves as a natural ordinal rank, quantifying the number of steps required to construct a Borel set, commencing from open sets. A similar result was obtained by Dani and Nogueira when they were studying the image of \(\mathbb{Z}[i]^{2}\) under binary quadratic forms [12, Propositions 8.5, 8.6]. Theorem 1.2 is a consequence of the dynamical structure of \(\overline{\mathsf{R}}\) with respect to the left-shift map. We prove a weak form of specification introduced by Airey, Jackson, Kwietniak, and Mance in [4].

**Theorem 1.3**.: _The subshift \((\overline{\mathsf{R}},\sigma)\) has the right feeble-specification property._

The subshift \((\overline{\mathsf{R}},\sigma)\) is not a full shift; in other words, \(\overline{\mathsf{R}}\) is not of the form \(\mathscr{A}^{\mathbb{N}}\) for some \(\mathscr{A}\subseteq\mathbb{Z}[i]\). Nonetheless, Theorem 1.3 shows that the system \((\overline{\mathsf{R}},\sigma)\) still has a considerable level of freedom in terms of specification. The specification property was introduced by Bowen in his seminal paper [6]. Bowen's Specification Theorem asserts that all transitive hyperbolic systems possess the "specification property", wherein complete independence is established among finite orbit segments with bounded gaps. The specification property and its variations have proven to be potent tools unlocking access to various other properties, such as the uniqueness of measures of maximal entropy [6, 10], universality [27], and the intricate structure of generic points [4]. The second application of Theorem 1.1 is related to transcendental numbers. In [2], Adamczewski and Bugeaud solved Cobham's Conjecture when they proved that, for any \(b\in\mathbb{N}_{\geq 2}\), a real number whose \(b\)-adic expansion is non-periodic and generated by a finite automaton must be transcendental.
They extended their work to continued fractions [1] and, afterward in [7], Bugeaud gave a combinatorial condition on a non-periodic continued fraction ensuring its transcendence. This condition can be expressed in an extremely neat manner using the repetition exponent introduced by Bugeaud and Kim [9]. Given a finite alphabet \(\mathscr{A}\) and an infinite word \(\mathbf{a}=(a_{n})_{n\geq 1}\) in \(\mathscr{A}\), for each \(n\in\mathbb{N}\) define \[r(n,\mathbf{a})\coloneqq\min\left\{m\in\mathbb{N}:\exists i\in\{1,\ldots,m-n\}\quad(a_{i},\ldots,a_{i+n-1})=(a_{m-n+1},\ldots,a_{m})\right\};\] that is, \(r(n,\mathbf{a})\) is the length of the shortest prefix of \(\mathbf{a}\) containing two (possibly overlapping) occurrences of some word of length \(n\) (a short computational sketch of \(r(n,\mathbf{a})\) is given at the end of this introduction). The **repetition exponent** of \(\mathbf{a}\) is \[\operatorname{rep}(\mathbf{a})\coloneqq\liminf_{n\to\infty}\frac{r(n,\mathbf{a})}{n}.\] Bugeaud proved that for any bounded non-periodic sequence \(\mathbf{a}=(a_{n})_{n\geq 1}\in\mathbb{N}^{\mathbb{N}}\), if \(\operatorname{rep}(\mathbf{a})<\infty\), the continued fraction (1) is transcendental for any \(a_{0}\in\mathbb{Z}\) [7, Theorem 1.3] (Theorem 6.2). In [14], a complex version of this result, under the additional assumption \(|a_{n}|\geq\sqrt{8}\) for all \(n\in\mathbb{N}\), was proved by the second author of this manuscript. In this paper we apply the algorithm from the proof of Theorem 1.1 to construct a family of transcendental numbers whose partial quotients \((a_{n})_{n\geq 1}\) are bounded and such that \(|a_{n}|=2\) for infinitely many \(n\in\mathbb{N}\). Schmidt's Subspace Theorem, a deep result in Diophantine approximation, lies at the heart of this construction.

**Theorem 1.4**.: _Let \(\mathbf{B}\coloneqq(B_{n})_{n\geq 1}\) be a bounded sequence in \(\mathbb{Z}\) which is not periodic and such that_ \[\min_{n\in\mathbb{N}}|B_{n}|\geq 3\quad\text{ and }\quad\operatorname{rep}(\mathbf{B})<\infty.\] _Then, \(\zeta=[0;-2,1+iB_{1},-2,1+iB_{2},-2,1+iB_{3},-2,1+iB_{4},\ldots]\) is transcendental._

The method of the proof of Theorem 1.4 allows us to construct other families of transcendental numbers (Theorem 6.6).
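Since the definition of \(r(n,\mathbf{a})\) is purely combinatorial, it is easy to compute on finite prefixes. The following short sketch (ours, not from the paper) does exactly that; for a genuinely infinite word one can only estimate \(\operatorname{rep}(\mathbf{a})\) from a long prefix, and the prefix must be long enough for a repetition to occur.

```python
def r(n: int, a: list) -> int:
    """Smallest m such that the prefix a_1 ... a_m contains two
    (possibly overlapping) occurrences of some word of length n."""
    for m in range(n + 1, len(a) + 1):
        last = tuple(a[m - n:m])  # the word (a_{m-n+1}, ..., a_m)
        if last in (tuple(a[i:i + n]) for i in range(m - n)):
            return m
    raise ValueError("prefix too short to find a repetition")

# rep(a) = liminf r(n, a) / n; estimate it on a long prefix. The theorems
# above concern non-periodic words; the periodic word 1212... is used here
# only to exercise the code (it gives r(n, a) = n + 2).
word = [1, 2] * 500
print([r(n, word) / n for n in (1, 2, 4, 8, 16)])
```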
To sum up, these are our applications: 1. The set \(\overline{\mathsf{R}}\coloneqq\{(a_{n})_{n\geq 1}\in\mathbb{Z}[i]^{\mathbb{N}}:\forall n\in\mathbb{N}\quad(a_{1},\ldots,a_{n})\text{ is regular }\}\) is closed and dynamically rich in the sense that it has a weak type of specification. 2. The set of Hurwitz normal numbers is classified in the third level of the Borel hierarchy. 3. We construct a family of transcendental complex numbers with bounded partial quotients. The structure of the paper is as follows. First, in Section 2, we establish notation and conventions. In Section 3, we introduce the basic definitions on Hurwitz continued fractions, we discuss the classification of cylinders and prototype sets, and we give some basic results. In Section 4 we prove Theorem 1.1 and state the algorithm without proof. Section 5 starts with some basic notions on the Borel hierarchy and the feeble-specification property. Afterward, we show Theorem 1.3 and then Theorem 5.1 (which implies Theorem 1.2). In Section 6 we show Theorem 1.4. Lastly, Section 7 contains the proof of the algorithm we use to show Theorem 1.1. **Acknowledgements.** Research by F. García-Ramos was funded by the Grant U1U/W16/NO/01.03 of the Strategic Excellence Initiative program of the Jagiellonian University. Research by G. González Robert was partially funded by CONAHCyT through the program _Estancias posdoctorales por México_. Both G. González Robert and M. Hussain were funded by the ARC Discovery Project 200100994. ## 2. Notation and conventions We gather some frequently used notations. i. By natural numbers \(\mathbb{N}\) we mean the set of positive integers, and we write \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\). ii. The real part of a complex number \(z\) is denoted by \(\operatorname{Re}(z)\) and its imaginary part is denoted by \(\operatorname{Im}(z)\). iii. If \(X,Y\) are sets, \(A\subseteq X\), \(B\subseteq Y\) and \(f:X\to Y\) is any function, we define \[f[A]:=\{f(a):a\in A\}\quad\text{and}\quad f^{-1}[B]:=\{x\in X:f(x)\in B\}.\] iv. If \((X,\tau)\) is a topological space and \(A\subseteq X\), we let \(\operatorname{Cl}_{X}(A)\) (resp. \(\operatorname{Int}_{X}(A)\)) be the closure of \(A\) (resp. the interior of \(A\)). We omit the sub-index \(X\) if the space is clear. v. We use the terms _word_ and _sequence_ interchangeably. vi. If \(\mathscr{A}\) is a non-empty set and \(\mathbf{a}=(a_{n})_{n\geq 1}\in\mathscr{A}^{\mathbb{N}}\), for each \(m\in\mathbb{N}\) the **prefix** of length \(m\) of \(\mathbf{a}\) is \(\operatorname{pref}(\mathbf{a};m):=(a_{1},\dots,a_{m})\). vii. If \(\mathscr{A}\) is a non-empty set, \(\mathbf{a}\) is a word in \(\mathscr{A}\) of length \(n\in\mathbb{N}\) and \(\mathbf{b}=(b_{n})_{n\geq 1}\) is a word in \(\mathscr{A}\), we write \[\mathbf{ab}:=(a_{1},\dots,a_{n},b_{1},b_{2},\dots).\] viii. For any \(a\in\mathbb{Z}[i]\), the function \(\tau_{a}:\mathbb{C}\to\mathbb{C}\) is given by \(\tau_{a}(z)=z+a\), \(z\in\mathbb{C}\). By \(\iota\) we mean the complex inversion: \(\iota(z)=z^{-1}\) for \(z\in\mathbb{C}\setminus\{0\}\). ix. Given finitely many functions \(h_{1},\dots,h_{n}\) that are either \(\iota\), \(\tau_{a}\) for some \(a\in\mathbb{Z}[i]\), or some element of \(\mathsf{Di}_{8}\) (see Section 3), we write \(h_{n}\cdots h_{1}:=h_{n}\circ\cdots\circ h_{1}\). x. For \(z\in\mathbb{C}\) and \(\rho>0\), we write \[\mathbb{D}(z;\rho):=\{w\in\mathbb{C}:|z-w|<\rho\},\qquad\overline{\mathbb{D}}(z;\rho):=\{w\in\mathbb{C}:|z-w|\leq\rho\},\] \[\mathbb{D}(z):=\mathbb{D}(z;1),\qquad\overline{\mathbb{D}}(z):=\operatorname{Cl}_{\mathbb{C}}\left(\mathbb{D}(z;1)\right),\] \[\mathbb{E}(z):=\mathbb{C}\setminus\overline{\mathbb{D}}(z;1),\qquad\overline{\mathbb{E}}(z):=\operatorname{Cl}_{\mathbb{C}}\left(\mathbb{E}(z)\right),\] \[C(z;\rho):=\{w\in\mathbb{C}:|z-w|=\rho\},\qquad C(z):=C(z;1).\] xi. For \(z\in\mathbb{C}\), \(\operatorname{Pm}(z):=\min\{|\operatorname{Re}(z)|,|\operatorname{Im}(z)|\}\). We use implicitly the formulas for the inversion of discs and lines in the complex plane throughout the paper. Namely, for \(k\in\mathbb{R}\setminus\{0\}\), if \[L(\operatorname{Re}(z)=k)=\{z\in\mathbb{C}:\operatorname{Re}(z)=k\}\quad\text{and}\quad L(\operatorname{Im}(z)=k)=\{z\in\mathbb{C}:\operatorname{Im}(z)=k\},\] then \[\iota[L(\operatorname{Re}(z)=k)]=C\left(\frac{1}{2k};\frac{1}{2|k|}\right),\quad\iota[L(\operatorname{Im}(z)=k)]=C\left(-\frac{i}{2k};\frac{1}{2|k|}\right).\] Also, if \(z_{0}\in\mathbb{C}\) and \(\rho>0\) are such that \(\rho\neq|z_{0}|\), then \[\iota\left[C(z_{0};\rho)\right]=C\left(\frac{\overline{z_{0}}}{|z_{0}|^{2}-\rho^{2}};\frac{\rho}{|\rho^{2}-|z_{0}|^{2}|}\right).\] ## 3. Hurwitz continued fractions We start this section by defining Hurwitz continued fractions (other complex continued fraction algorithms have been discussed in [12, 23, 28]). Let \(\lfloor\cdot\rfloor:\mathbb{R}\to\mathbb{Z}\) be the usual floor function.
For \(z\in\mathbb{C}\), the **nearest Gaussian integer** \([z]\) to \(z\) is \[[z]:=\left\lfloor\operatorname{Re}(z)+\frac{1}{2}\right\rfloor+i\left\lfloor\operatorname{Im}(z)+\frac{1}{2}\right\rfloor.\] We define \[\mathfrak{F}=\{z\in\mathbb{C}:[z]=0\}.\] The **complex Gauss map**, \(T:\mathfrak{F}\to\mathfrak{F}\), is given by \[T(z)=\begin{cases}z^{-1}-[z^{-1}],\text{ if }z\neq 0,\\ 0,\text{ if }z=0.\end{cases}\] For any \(z\in\mathfrak{F}\setminus\{0\}\), we define \(a_{1}(z):=[z^{-1}]\) and, if \(T^{n-1}(z)\neq 0\), we put \(a_{n}(z):=a_{1}(T^{n-1}(z))\) for \(n\in\mathbb{N}\). As is customary, the exponent in \(T\) denotes iteration. The **Hurwitz continued fraction** (**HCF**) of \(z\) is \[[0;a_{1}(z),a_{2}(z),\ldots]:=\cfrac{1}{a_{1}(z)+\cfrac{1}{\ddots}}.\] The terms of the sequence \((a_{n})_{n\geq 1}\) are the **partial quotients** of \(z\). Define the sequences of Gaussian integers \((p_{n})_{n\geq 0}\), \((q_{n})_{n\geq 0}\) by: \[\begin{pmatrix}p_{-1}&p_{-2}\\ q_{-1}&q_{-2}\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\qquad\text{ and }\qquad\forall n\in\mathbb{N}_{0}\quad\begin{pmatrix}p_{n}\\ q_{n}\end{pmatrix}=\begin{pmatrix}p_{n-1}&p_{n-2}\\ q_{n-1}&q_{n-2}\end{pmatrix}\begin{pmatrix}a_{n}\\ 1\end{pmatrix}. \tag{2}\] The general theory of continued fractions (see, for example, [20, Section 2]) tells us that \[\forall n\in\mathbb{N}\quad\frac{p_{n}}{q_{n}}=[0;a_{1},a_{2},\ldots,a_{n}]:=\cfrac{1}{a_{1}+\cfrac{1}{a_{2}+\cfrac{1}{\ddots+\cfrac{1}{a_{n}}}}}\cdot\] There are complex numbers \(\xi\) for which more than one Gaussian integer \(a_{0}\) solves \[|\xi-a_{0}|=\min\left\{|\xi-a|:a\in\mathbb{Z}[i]\right\}. \tag{3}\] For such \(\xi\), we have defined \([\xi]\) to be the solution of (3) with the largest real and imaginary parts. In many applications, the tie-breaking criterion is irrelevant, for it applies only on a set of Hausdorff dimension \(1\), hence a Lebesgue null set. However, it ensures the uniqueness of the HCF expansion. If we are willing to slightly loosen the uniqueness condition, Theorem 1.1 and its proof imply that it makes no difference which solution of (3) we choose. We will often need continued fractions which are not necessarily HCFs. In this case, we write \[\langle b_{0};b_{1},b_{2},\ldots\rangle\coloneqq b_{0}+\cfrac{1}{b_{1}+\cfrac{1}{b_{2}+\ddots}}\] and reserve the notation \([b_{0};b_{1},b_{2},\ldots]\) for HCFs. We recall some elementary facts about Hurwitz continued fractions. Define \(\mathscr{D}\coloneqq\mathbb{Z}[i]\setminus\{0,1,i,-1,-i\}\). **Proposition 3.1**.: _Let \(z=[0;a_{1},a_{2},\ldots]\in\mathfrak{F}\setminus\{0\}\) and let \((p_{n})_{n\geq 0}\), \((q_{n})_{n\geq 0}\) be given by Equation (2). Then,_ 1. _[_16_, Section I]_ _For every_ \(n\)_, we have_ \(a_{n}\in\mathscr{D}\)_._ 2. _[_16_, Section I]_ _The sequence_ \((|q_{n}|)_{n\geq 0}\) _is strictly increasing._ 3. _[_12_, Corollary 5.3]_ _If_ \(\psi:=\sqrt{\frac{1+\sqrt{5}}{2}}\)_, then_ \(|q_{n}|\geq\psi^{n-1}\) _for all_ \(n\in\mathbb{N}\)_._ 4. _[_12_, Corollary 3.3]_ _If_ \(z=[0;a_{1},a_{2},\ldots]\in\mathfrak{F}\setminus\mathbb{Q}(i)\)_, then_ \[\forall n\in\mathbb{N}\quad z=\frac{(a_{n+1}+[0;a_{n+2},a_{n+3},\ldots])p_{n}+p_{n-1}}{(a_{n+1}+[0;a_{n+2},a_{n+3},\ldots])q_{n}+q_{n-1}}.\] 5.
_[_22_, Theorem 1]_ _If_ \(z=[0;a_{1},a_{2},\ldots]\in\mathfrak{F}\setminus\mathbb{Q}(i)\)_, then_ \[\forall n\in\mathbb{N}\quad\left|z-\frac{p_{n}}{q_{n}}\right|<\frac{1}{|q_{n}|^{2}}.\] ### HCF cylinders Cylinders and prototype sets are the building blocks of the geometric theory of Hurwitz continued fractions. Given \(n\in\mathbb{N}\) and \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathscr{D}^{n}\), the **cylinder** \(\mathcal{C}_{n}(\mathbf{a})\) is the set of points of \(\mathfrak{F}\) whose HCF starts with \(\mathbf{a}\), \[\mathcal{C}_{n}(\mathbf{a}):=\{z\in\mathfrak{F}:a_{1}(z)=a_{1},\ldots,a_{n}(z)=a_{n}\},\] and the **prototype set** \(\mathfrak{F}_{n}(\mathbf{a})\) is defined by \[\mathfrak{F}_{n}(\mathbf{a}):=T^{n}[\mathcal{C}_{n}(\mathbf{a})].\] We denote the interior (resp. closure) of \(\mathcal{C}_{n}(\mathbf{a})\) by \(\mathcal{C}_{n}^{\circ}(\mathbf{a})\) (resp. \(\overline{\mathcal{C}}_{n}(\mathbf{a})\)), and similar conventions apply to \(\mathfrak{F}\). For brevity, by **open cylinders** we mean the interiors of regular cylinders, and similarly for prototype sets. For any set \(A\), the number of elements in \(A\) is denoted \(\#A\). Cylinders induce a natural classification of finite words in \(\mathscr{D}\). We say that \(\mathbf{a}\) is i. **valid** if \(\mathcal{C}_{n}(\mathbf{a})\neq\varnothing\); ii. **regular** if \(\mathcal{C}_{n}^{\circ}(\mathbf{a})\neq\varnothing\); iii. **irregular** if \(\mathcal{C}_{n}(\mathbf{a})\neq\varnothing\) and \(\mathcal{C}_{n}^{\circ}(\mathbf{a})=\varnothing\); iv. **extremely irregular** if \(\#\mathcal{C}_{n}(\mathbf{a})=1\); v. **full** if \(T^{n}(\mathcal{C}_{n}(\mathbf{a}))=\mathfrak{F}\). We denote by \(\Omega(n)\) (resp. \(\mathsf{R}(n)\), \(\mathsf{Ir}(n)\), \(\mathsf{El}(n)\), \(\mathsf{F}(n)\)) the set of valid (resp. regular, irregular, extremely irregular, full) words of length \(n\). We extend the notions _regular_, _irregular_, _extremely irregular_, and _full_ to cylinders and prototype sets in an obvious way. It is well known that there are only finitely many prototype sets (see, for example, [13]); in fact, we have \(\#\{\mathfrak{F}_{n}^{\circ}(\mathbf{a}):n\in\mathbb{N},\mathbf{a}\in\mathsf{R}(n)\}=13\), and, up to a right-angled rotation, every open prototype set is one of the sets \[\mathfrak{F}^{\circ},\quad\mathfrak{F}_{1}^{\circ}(-2)=\mathfrak{F}^{\circ}\smallsetminus\overline{\mathbb{D}}(1),\quad\mathfrak{F}^{\circ}\smallsetminus\overline{\mathbb{D}}(1+i),\quad\mathfrak{F}^{\circ}\smallsetminus\left(\overline{\mathbb{D}}(1)\cup\overline{\mathbb{D}}(i)\right)\] (the full list of \(13\) sets appears in Lemma 3.5 and its proof). Figure 1 depicts the possible forms of open prototype sets. An infinite word \(\mathbf{a}=(a_{n})_{n\geq 1}\in\mathscr{D}^{\mathbb{N}}\) is i. **valid** if it is the HCF of some \(z\in\mathfrak{F}\); ii. **regular** if it is valid and \((a_{1},\ldots,a_{n})\in\mathsf{R}(n)\) for all \(n\in\mathbb{N}\); iii. **irregular** if it is valid and \((a_{1},\ldots,a_{n})\in\mathsf{Ir}(n)\) for some \(n\in\mathbb{N}\); iv. **extremely irregular** if it is valid and \((a_{1},\ldots,a_{n})\in\mathsf{El}(n)\) for some \(n\in\mathbb{N}\); v. **full** if it is valid and \((a_{1},\ldots,a_{n})\in\mathsf{F}(n)\) for all \(n\in\mathbb{N}\). We denote by \(\Omega\) (resp. \(\mathsf{R}\), \(\mathsf{Ir}\), \(\mathsf{El}\), \(\mathsf{F}\)) the set of valid (resp. regular, irregular, extremely irregular, full) infinite words.
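All of the objects above are effectively computable, so they are easy to experiment with numerically. The following minimal sketch (ours, not from the paper; subject to the usual floating-point caveats) implements the nearest Gaussian integer, the complex Gauss map, the partial quotients, and the convergents of recurrence (2), and checks the approximation inequality of Proposition 3.1.v on a sample point of \(\mathfrak{F}\setminus\mathbb{Q}(i)\).

```python
import math

def nearest_gauss(z: complex) -> complex:
    # [z] with the paper's tie-breaking: floor(Re z + 1/2) + i*floor(Im z + 1/2)
    return complex(math.floor(z.real + 0.5), math.floor(z.imag + 0.5))

def hcf_digits(z: complex, n: int):
    """First n partial quotients of z via the complex Gauss map T."""
    w = z - nearest_gauss(z)      # move z into the fundamental domain F
    digits = []
    for _ in range(n):
        if w == 0:                # the expansion of a Gaussian rational terminates
            break
        a = nearest_gauss(1 / w)
        digits.append(a)
        w = 1 / w - a             # w := T(w)
    return digits

def convergents(digits):
    """(p_n, q_n) from recurrence (2), with a_0 = 0."""
    p_prev, q_prev = 1, 0         # p_{-1}, q_{-1}
    p, q = 0, 1                   # p_0,  q_0
    for a in digits:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield p, q

z = complex(math.sqrt(2) - 1, 0.5 - math.sqrt(3) / 2)   # a point of F \ Q(i)
for p, q in convergents(hcf_digits(z, 8)):
    print(abs(z - p / q) * abs(q) ** 2)   # < 1, by Proposition 3.1.v
```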
A significant part of our calculations and our arguments is based on an observation stemming from the definitions of cylinders: if \((a_{n})_{n\geq 1}\) is a sequence in \(\mathscr{D}\), then \[\mathcal{C}_{1}(a_{1})=\mathfrak{F}\cap\iota\tau_{a_{1}}[\mathfrak{F}],\quad\mathcal{C}_{2}(a_{1},a_{2})=\mathfrak{F}\cap\iota\tau_{a_{1}}[\mathfrak{F}]\cap\iota\tau_{a_{2}}\iota\tau_{a_{1}}[\mathfrak{F}];\] in general, \[\forall n\in\mathbb{N}\quad\mathcal{C}_{n}(a_{1},\ldots,a_{n})=\mathfrak{F}\cap\iota\tau_{a_{1}}[\mathfrak{F}]\cap\ldots\cap(\iota\tau_{a_{n}}\cdots\iota\tau_{a_{1}})[\mathfrak{F}]. \tag{4}\] Since the functions \(\iota\) and \(\tau_{a}\) are homeomorphisms, we may replace \(\mathfrak{F}\) and \(\mathcal{C}\) with \(\mathfrak{F}^{\circ}\) and \(\mathcal{C}^{\circ}\), respectively, in (4). Recall that a finite word \(\mathbf{b}\) on \(\mathscr{D}\) is a factor of a word \(\mathbf{a}\) on \(\mathscr{D}\) if there are words \(\mathbf{c},\mathbf{d}\) in \(\mathscr{D}\) such that \(\mathbf{a}=\mathbf{c}\mathbf{b}\mathbf{d}\). An immediate consequence of (4) is that every factor of a regular word (finite or infinite) is again regular. Furthermore, letting \(\varepsilon\) be the empty word and \(\mathfrak{F}_{0}(\varepsilon):=\mathfrak{F}\), we have the next proposition.

**Proposition 3.2**.: _Every sequence \((a_{n})_{n\geq 1}\) in \(\mathscr{D}\) satisfies_ \[\forall n\in\mathbb{N}\quad\mathfrak{F}_{n}(a_{1},\ldots,a_{n})=\tau_{-a_{n}}\iota\left[\mathfrak{F}_{n-1}(a_{1},\ldots,a_{n-1})\cap\mathcal{C}_{1}(a_{n})\right].\]

### Regular cylinders We recall two important properties of regular cylinders. The first one tells us that every infinite sequence in \(\mathscr{D}\) with large terms is full. The second one reveals the symmetries of the family of regular cylinders.

**Proposition 3.3** ([8, Proposition 4.3]).: _If \(\mathbf{a}=(a_{n})_{n\geq 1}\) is such that \(|a_{n}|\geq\sqrt{8}\) for all \(n\in\mathbb{N}\), then \(\mathbf{a}\in\mathsf{F}\)._

Let \(\mathsf{rot},\mathsf{mir}_{1}:\mathbb{C}\to\mathbb{C}\) be given by \[\forall z\in\mathbb{C}\quad\mathsf{rot}(z):=iz\quad\text{ and }\quad\mathsf{mir}_{1}(z):=\overline{z},\] and call \(\mathsf{Di}_{8}\) the group of isometries they generate. Obviously, \(\mathsf{Di}_{8}\) is the dihedral group of order \(8\). Proposition 4.5 and Corollary 4.6 in [8] say that any \(f\in\mathsf{Di}_{8}\) maps open cylinders onto open cylinders of the same level. These results are proven by explicitly determining the image of an open cylinder under \(\mathsf{rot}\) and \(\mathsf{mir}_{1}\). In particular, writing \(\mathsf{mir}_{2}:=\mathsf{rot}^{2}\mathsf{mir}_{1}\) and calling \(\langle\mathsf{mir}_{1},\mathsf{mir}_{2}\rangle\subseteq\mathsf{Di}_{8}\) the subgroup generated by \(\mathsf{mir}_{1}\) and \(\mathsf{mir}_{2}\), we have the next result.

**Proposition 3.4**.: _If \(n\in\mathbb{N}\), \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathsf{R}(n)\), and \(\mathsf{mir}\in\langle\mathsf{mir}_{1},\mathsf{mir}_{2}\rangle\), then_ \[\mathsf{mir}\,(\mathcal{C}_{n}^{\circ}(\mathbf{a}))=\mathcal{C}_{n}^{\circ}(\mathsf{mir}(a_{1}),\ldots,\mathsf{mir}(a_{n})).\]

### Irregular cylinders For any pair of complex numbers \(z\) and \(w\), write \[[z,w):=\{z+t(w-z):t\in[0,1)\}.\] Irregular cylinders, and thus irregular prototype sets, exist.
Certainly, if \[\alpha:=\frac{2-\sqrt{3}}{2}\approx 0.13397,\] then for every \(m\in\mathbb{Z}\) with \(|m|\geq 2\) we have \[\mathfrak{F}_{2}(-2,1+mi)=\begin{cases}\left[-\frac{1}{2}-i\alpha,-\frac{1}{2}+\frac{i}{2}\right),\text{ if }\ m=2,\\ \left[-\frac{1}{2}-\frac{i}{2},-\frac{1}{2}+i\alpha\right),\text{ if }\ m=-2,\\ \left[-\frac{1}{2}-\frac{i}{2},-\frac{1}{2}+\frac{i}{2}\right),\text{ if }\ |m|\geq 3\end{cases}\] (see [8, Section 4]). Additionally, if \[\zeta_{1}:=-\frac{1}{2}+i\alpha,\quad\zeta_{2}:=-\frac{1}{2}-i\alpha,\quad\zeta_{3}:=-\alpha-\frac{i}{2},\quad\zeta_{4}:=\alpha-\frac{i}{2},\] straightforward calculations yield \(a_{1}(\zeta_{1})=-2\), \(a_{1}(\zeta_{2})=-2+i\), \(a_{1}(\zeta_{3})=2i\), \(a_{1}(\zeta_{4})=1+2i\) and \[T(\zeta_{1})=\zeta_{4},\quad T(\zeta_{2})=\zeta_{4},\quad T(\zeta_{3})=\zeta_{2},\quad T(\zeta_{4})=\zeta_{2}. \tag{5}\] (For instance, \(|\zeta_{1}|^{2}=\tfrac{1}{4}+\alpha^{2}=2-\sqrt{3}\), so \(\zeta_{1}^{-1}=-\tfrac{2+\sqrt{3}}{2}-\tfrac{i}{2}\), whence \(a_{1}(\zeta_{1})=-2\) and \(T(\zeta_{1})=\zeta_{1}^{-1}+2=\alpha-\tfrac{i}{2}=\zeta_{4}\).) As a consequence, we have the HCF expansions \[\zeta_{1}=[0;-2,1+2i,-2+i,1+2i,-2+i,1+2i,-2+i,\ldots],\] \[\zeta_{2}=[0;-2+i,1+2i,-2+i,1+2i,-2+i,1+2i,-2+i,\ldots],\] \[\zeta_{3}=[0;2i,-2+i,1+2i,-2+i,1+2i,-2+i,1+2i,\ldots],\] \[\zeta_{4}=[0;1+2i,-2+i,1+2i,-2+i,1+2i,-2+i,1+2i,\ldots].\] By (4) and the formulas for inverting circles, we may show that \(\zeta_{1}\), \(\zeta_{2}\), \(\zeta_{3}\), \(\zeta_{4}\) are extremely irregular. In fact, by carefully tracking the image under \(T\) of the boundaries of prototype sets, we may even show that a complex number \(\xi\in\mathfrak{F}\) is extremely irregular if and only if \(T^{n}(\xi)=\zeta_{4}\) for some \(n\in\mathbb{N}\).

**Lemma 3.5**.: _Let \(n\in\mathbb{N}\) be arbitrary. If \(\mathbf{a}\in\mathsf{R}(n)\) and \(b\in\mathscr{D}\) are such that \(\mathbf{a}b\in\mathsf{Ir}(n+1)\), then \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})\) is one of the following sets:_ \[\mathfrak{F}^{\circ}\setminus\overline{\mathbb{D}}(1),\quad\mathfrak{F}^{\circ}\setminus\overline{\mathbb{D}}(-i),\quad\mathfrak{F}^{\circ}\setminus\left(\overline{\mathbb{D}}(1)\cup\overline{\mathbb{D}}(-i)\right),\] \[\mathfrak{F}^{\circ}\setminus\left(\overline{\mathbb{D}}(-1)\cup\overline{\mathbb{D}}(-i)\right),\quad\mathfrak{F}^{\circ}\setminus\left(\overline{\mathbb{D}}(1)\cup\overline{\mathbb{D}}(i)\right).\]

Proof.: The open prototype sets left unmentioned in the lemma are: 1. \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}^{\circ}\); 2. \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}^{\circ}\setminus i^{k}\overline{\mathbb{D}}(1+i)\) for \(k\in\{1,2,3,4\}\); 3. \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}^{\circ}\setminus\overline{\mathbb{D}}(-1)\) and \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}^{\circ}\setminus\overline{\mathbb{D}}(i)\); 4. \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}^{\circ}\setminus\left(\overline{\mathbb{D}}(i)\cup\overline{\mathbb{D}}(-1)\right)\). The proof now amounts to checking that, for each of these options, \[\mathbf{a}b\in\Omega(n+1)\quad\text{ implies }\quad\mathbf{a}b\in\mathsf{R}(n+1),\] which follows from the stronger statement \[\iota\left[\overline{\mathfrak{F}}_{n}(\mathbf{a})\right]\cap\tau_{b}\left[\overline{\mathfrak{F}}\right]\neq\emptyset\quad\text{ implies }\quad\iota\left[\mathfrak{F}_{n}^{\circ}(\mathbf{a})\right]\cap\tau_{b}\left[\overline{\mathfrak{F}}\right]\neq\emptyset.\] For the rest of the proof, we rely on pictures in order to avoid long and elementary computations. 1. This is obvious. 2.
For \(k=1\), we have \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}^{\circ}\setminus\overline{\mathbb{D}}(1+i)\), so \(\iota[\overline{\mathfrak{F}}_{n}(\mathbf{a})]=\iota[\overline{\mathfrak{F}}]\cap\overline{\mathbb{E}}(1-i)\) and \[\iota\left[\overline{\mathfrak{F}}_{n}(\mathbf{a})\right]\cap\tau_{-b}\iota[\mathfrak{F}^{\circ}]\begin{cases}\neq\emptyset,&\text{if }b\neq 1-i,\\ =\emptyset,&\text{if }b=1-i.\end{cases}\] Furthermore, as we may see in Figure 2, for \(b\neq 1-i\) we have \(\iota\left[\mathfrak{F}_{n}^{\circ}(\mathbf{a})\right]\cap\tau_{b}\left[\overline{\mathfrak{F}}\right]\neq\emptyset\), hence \(\mathbf{a}b\in\mathsf{R}(n+1)\). The cases \(k\in\{2,3,4\}\) are similar. 3. If \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}^{\circ}\setminus\overline{\mathbb{D}}(-1)\), then \[\iota\left[\overline{\mathfrak{F}}_{n}(\mathbf{a})\right]=\iota\left[\overline{\mathfrak{F}}\right]\cap\left\{z\in\mathbb{C}:\operatorname{Re}(z)\geq-\frac{1}{2}\right\}\] and, by Figure 3, \[\iota\left[\overline{\mathfrak{F}}_{n}(\mathbf{a})\right]\cap\tau_{b}[\mathfrak{F}]\begin{cases}\neq\emptyset,&\text{if }\operatorname{Re}(b)\geq 0,\\ =\emptyset,&\text{if }\operatorname{Re}(b)<0.\end{cases}\] Figure 3 also tells us that \(\operatorname{Re}(b)\geq 0\) yields \(\iota[\mathfrak{F}_{n}^{\circ}(\mathbf{a})]\cap\tau_{b}[\mathfrak{F}]\neq\emptyset\) and, thus, \(\mathbf{a}b\in\mathsf{R}(n+1)\). 4. When \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}^{\circ}\setminus\left(\overline{\mathbb{D}}(i)\cup\overline{\mathbb{D}}(-1)\right)\), by Figure 4, we have \(\tau_{b}\left[\mathfrak{F}_{n}^{\circ}(\mathbf{a})\right]\cap\iota[\overline{\mathfrak{F}}]\neq\emptyset\) if and only if \(\operatorname{Re}(b)\geq 0\) and \(\operatorname{Im}(b)\geq 0\). In this case, \(\iota[\mathfrak{F}_{n}^{\circ}(\mathbf{a})]\cap\tau_{b}[\mathfrak{F}^{\circ}]\neq\emptyset\) and \(\mathbf{a}b\in\mathsf{R}(n+1)\). ## 4. Proof of Theorem 1.1 ### The shift space Endow \(\mathscr{D}^{\mathbb{N}}\) with the product topology, assuming each factor has the discrete topology. Then, \(\mathscr{D}^{\mathbb{N}}\) can be metrized with the metric \(\operatorname{dist}\) given by \[\forall\mathbf{a},\mathbf{b}\in\mathscr{D}^{\mathbb{N}}\quad\operatorname{dist}(\mathbf{a},\mathbf{b}):=\begin{cases}0,&\text{if }\mathbf{a}=\mathbf{b},\\ 2^{-\min\{n\in\mathbb{N}:\,\operatorname{pref}(\mathbf{a};n)\neq\operatorname{pref}(\mathbf{b};n)\}},&\text{if }\mathbf{a}\neq\mathbf{b}.\end{cases}\] The shift map \(\sigma\colon\mathscr{D}^{\mathbb{N}}\to\mathscr{D}^{\mathbb{N}}\) is the map that satisfies \(\sigma(x)(n)=x(n+1)\) for all \(n\in\mathbb{N}\). We link the dynamical systems \((\Omega,\sigma)\) and \((\mathfrak{F}\setminus\mathbb{Q}(i),T)\) through the map \(\Lambda:\Omega\to\mathfrak{F}\setminus\mathbb{Q}(i)\) given by \[\forall\mathbf{a}=(a_{n})_{n\geq 1}\in\Omega\quad\Lambda(\mathbf{a}):=[0;a_{1},a_{2},a_{3},\ldots].\] Let \(\mathbf{a}\in\Omega\) and \(\varepsilon>0\) be arbitrary. Take \(m\in\mathbb{N}\) such that \(\frac{2}{\psi^{m-1}}<\varepsilon\) and denote by \(p_{m}/q_{m}\) the \(m\)-th convergent of \(\Lambda(\mathbf{a})\). If \(\mathbf{b}\in\Omega\) is such that \(\operatorname{dist}(\mathbf{a},\mathbf{b})<2^{-m}\), then \(\operatorname{pref}(\mathbf{a};m)=\operatorname{pref}(\mathbf{b};m)\) and, by Proposition 3.1.v, \[|\Lambda(\mathbf{a})-\Lambda(\mathbf{b})|\leq\left|\Lambda(\mathbf{a})-\frac{p_{m}}{q_{m}}\right|+\left|\Lambda(\mathbf{b})-\frac{p_{m}}{q_{m}}\right|\leq\frac{2}{|q_{m}|^{2}}\leq\frac{2}{\psi^{m-1}}<\varepsilon.\]
Let \(\mathbf{a}\in\Omega\) and \(\varepsilon>0\) be arbitrary. Take \(m\in\mathbb{N}\) such that \(\frac{2}{\psi^{m-1}}<\varepsilon\) and denote by \(p_{m}/q_{m}\) the \(m\)-th convergent of \(\Lambda(\mathbf{a})\). If \(\mathbf{b}\in\Omega\) is such that \(\mathrm{dist}(\mathbf{a},\mathbf{b})<2^{-m}\), then \(\mathrm{pref}(\mathbf{a};m)=\mathrm{pref}(\mathbf{b};m)\) and, by Proposition 3.1.v, \[|\Lambda(\mathbf{a})-\Lambda(\mathbf{b})|\leq\left|\Lambda(\mathbf{a})-\frac{p_{m}}{q_{m}}\right|+\left|\Lambda(\mathbf{b})-\frac{p_{m}}{q_{m}}\right|\leq\frac{2}{|q_{m}|^{2}}\leq\frac{2}{\psi^{m-1}}<\varepsilon.\] In other words, \(\Lambda\) is uniformly continuous. Call \(\Lambda|_{\mathsf{R}}\) the restriction of \(\Lambda\) to \(\mathsf{R}\) and let \(\overline{\Lambda}\) be the unique continuous extension of \(\Lambda|_{\mathsf{R}}\) to \(\overline{\mathsf{R}}\). The next lemma contains the core of Theorem 1.1. We will prove it by algorithmically constructing the sequence \(\mathbf{b}\) from \(\mathbf{a}\). **Lemma 4.1**.: _If \(\mathbf{a}\in\mathsf{Ir}\), then there is some \(\mathbf{b}\in\overline{\mathsf{R}}\) such that \(\overline{\Lambda}(\mathbf{b})=\Lambda(\mathbf{a})\)._ We are now in a position to prove Theorem 1.1. Proof of Theorem 1.1.: The continuity of \(\overline{\Lambda}\) implies \(\overline{\Lambda}\left(\overline{\mathsf{R}}\right)\subseteq\overline{\mathfrak{F}}\). Let \(\mathbf{a}\in\overline{\mathsf{R}}\) be arbitrary and put \(z=\overline{\Lambda}(\mathbf{a})\). By Proposition 3.1.v, we have \[\forall n\in\mathbb{N}\quad\left|z-\frac{p_{n}(a_{1},\ldots,a_{n})}{q_{n}(a_{1},\ldots,a_{n})}\right|\leq\frac{1}{|q_{n}(a_{1},\ldots,a_{n})|^{2}}\] and, since \((|q_{n}|)_{n\geq 1}\) is strictly increasing, \(z\in\overline{\mathfrak{F}}\setminus\mathbb{Q}(i)\). Now take any \(z\in\overline{\mathfrak{F}}\setminus\mathbb{Q}(i)\). Assume first that \(z=[0;a_{1},a_{2},\ldots]\in\mathfrak{F}\). If \(\mathbf{a}\in\mathsf{R}\), then \(z=\overline{\Lambda}(\mathbf{a})\). When \(\mathbf{a}\in\mathsf{Ir}\), the sequence \(\mathbf{b}\) provided by Lemma 4.1 solves the problem. Assume now that \(z\not\in\mathfrak{F}\), hence \[\mathrm{Re}(z)=\tfrac{1}{2}\quad\text{ or }\quad\mathrm{Im}(z)=\tfrac{1}{2}.\] If \(\mathrm{Im}(z)=\tfrac{1}{2}\), then \(w=\mathsf{mir}_{1}(z)\in\mathfrak{F}\) and we can obtain some \(\mathbf{b}=(b_{n})_{n\geq 1}\in\overline{\mathsf{R}}\) such that \(\overline{\Lambda}(\mathbf{b})=w\). Thus, the sequence \(\mathbf{c}\coloneqq(\mathsf{mir}_{1}(b_{j}))_{j\geq 1}\) also belongs to \(\overline{\mathsf{R}}\), by Proposition 3.4, and satisfies \(\overline{\Lambda}(\mathbf{c})=z\). A similar argument replacing \(\mathsf{mir}_{1}\) with \(\mathsf{mir}_{2}\) gives the proof when \(\mathrm{Re}(z)=\tfrac{1}{2}\). ### The algorithm For each \(a\in\mathscr{D}\) such that \[a\in\{1+im,-1+im,m+i,m-i\}\text{ for some }m\in\mathbb{Z},\;|m|\geq 2,\] define \[S(a)\colon=\begin{cases}im,\text{ if }a\in\{1+im,-1+im\},\\ m,\text{ if }a\in\{m+i,m-i\}.\end{cases}\] Algorithm. * Input. A sequence \(\mathbf{a}=(a_{n})_{n\geq 1}\in\mathsf{Ir}\). * Output. A sequence \(\mathbf{b}=(b_{j})_{j\geq 1}\in\mathscr{D}^{\mathbb{N}}\) satisfying: * \((b_{1},\ldots,b_{j})\in\mathsf{R}(j)\) for \(j\in\mathbb{N}\), * \(\Lambda(\mathbf{a})\in\overline{\mathcal{C}}_{j}(b_{1},\ldots,b_{j})\) for all \(j\in\mathbb{N}\), * \(\Lambda(\mathbf{a})=\langle 0;b_{1},b_{2},\ldots\rangle=\overline{\Lambda}(\mathbf{b})\). 1. Put \(N=0\) and define \(\mathbf{b}_{N}\colon=(b_{N}(j))_{j\geq 1}\) by \[\forall j\in\mathbb{N}\quad b_{N}(j):=a_{j}.\] 2. 
If possible, pick \(j_{N}\in\mathbb{N}\) such that \[(b_{N}(1),\ldots,b_{N}(j_{N}))\in\mathsf{R}(j_{N})\quad\text{ and }\quad(b_{N}(1),\ldots,b_{N}(j_{N}),b_{N}(j_{N}+1))\not\in\mathsf{R}(j_{N}+1). \tag{6}\] Define the sequence \(\mathbf{b}_{N+1}=(b_{N+1}(j))_{j\geq 1}\) as follows: 1. For \(l\in\{1,\ldots,j_{N}\}\), write \[b_{N+1}(l):=b_{N}(l).\] 2. For \(l=j_{N}+1\), write \[b_{N+1}(j_{N}+1):=S\left(b_{N}(j_{N}+1)\right).\] 3. For \(l\in\{j_{N}+1+n:n\in\mathbb{N}\}\), let \(m\in\mathbb{Z}\) be such that \(b_{N}(j_{N}+1)\in\{1+im,-1+im,m+i,m-i\}\). * If \(b_{N}(j_{N}+1)\in\{m+i,m-i\}\), put \[\forall n\in\mathbb{N}\quad b_{N+1}(j_{N}+1+n):=\mathsf{mir}_{1}\left(b_{N}(j_{N}+1+n)\right).\] * If \(b_{N}(j_{N}+1)\in\{1+im,-1+im\}\), then \[\forall n\in\mathbb{N}\quad b_{N+1}(j_{N}+1+n):=\mathsf{mir}_{2}\left(b_{N}(j_{N}+1+n)\right).\] If there is no \(j_{N}\in\mathbb{N}\) such that (6) holds, put \(\mathbf{b}_{N+1}\colon=\mathbf{b}_{N}\). Put \(N:=N+1\). 3. Repeat step 2. 4. Take \(\mathbf{b}:=\lim\limits_{N\to\infty}\mathbf{b}_{N}\). 
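Before passing to the examples, we record a minimal Python sketch of one pass of step 2. The geometric test for membership in \(\mathsf{R}(j)\) via prototype sets is not reproduced here; it is abstracted into a user-supplied predicate `is_regular`. We also assume, as the identities used in Section 7 suggest, that \(\mathsf{mir}_{1}\) acts on digits as complex conjugation and \(\mathsf{mir}_{2}\) as \(z\mapsto-\overline{z}\); both of these modelling choices are ours, not part of the formal algorithm.

```python
def S(a):
    """The digit substitution S from above (defined only for the listed shapes)."""
    x, y = int(a.real), int(a.imag)
    if x in (1, -1) and abs(y) >= 2:
        return complex(0, y)   # S(1 + im) = S(-1 + im) = im
    if y in (1, -1) and abs(x) >= 2:
        return complex(x, 0)   # S(m + i) = S(m - i) = m
    raise ValueError("S is not defined for this digit")

def mir1(z):
    return z.conjugate()       # assumed action of mir_1 on digits

def mir2(z):
    return -z.conjugate()      # assumed action of mir_2 on digits

def one_iteration(b, is_regular):
    """One pass of step 2 on a finite prefix b (a list of digits)."""
    for j in range(1, len(b)):
        if is_regular(b[:j]) and not is_regular(b[:j + 1]):
            pivot, tail = b[j], b[j + 1:]
            mir = mir1 if int(pivot.imag) in (1, -1) else mir2
            return b[:j] + [S(pivot)] + [mir(d) for d in tail]
    return b  # no index j_N as in (6): the prefix is left unchanged
```

On the prefix \((-2,\,1+3i,\,-2,\,1+4i)\) of Example 1 below, a single pass already produces \((-2,\,3i,\,2,\,-1+4i)\), in agreement with the first iteration worked out there.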
### Examples For the sake of clarity, we illustrate the algorithm with a couple of examples. Example 1. Let \((B_{n})_{n\geq 1}\) be a sequence in \(\mathbb{Z}\) such that \(|B_{n}|\geq 3\) for all \(n\in\mathbb{N}\) and define \(\mathbf{a}=(a_{n})_{n\geq 1}\) by \[\mathbf{a}=(-2,1+iB_{1},-2,1+iB_{2},-2,1+iB_{3},-2,1+iB_{4},\ldots).\] First, let us show that \(\mathbf{a}\in\mathsf{Ir}\). Since \(|B_{1}|\geq 3\), we have \[\mathfrak{F}_{2}(-2,1+iB_{1})=\left[-\frac{1}{2}-\frac{i}{2},-\frac{1}{2}+\frac{i}{2}\right),\] so \((-2,1+iB_{1})\in\mathsf{Ir}(2)\) and \(\overline{\mathcal{C}}_{1}(-2)\cap\overline{\mathfrak{F}}_{2}(-2,1+iB_{1})\subseteq\mathfrak{F}_{2}(-2,1+iB_{1})\), which implies \[\overline{\mathcal{C}}_{3}(-2,1+iB_{1},-2)\subseteq\mathcal{C}_{2}(-2,1+iB_{1}).\] Moreover, we have \(\mathfrak{F}_{3}(-2,1+iB_{1},-2)=\mathfrak{F}\cap C(1;1)\), so \[\iota\left[\mathfrak{F}_{3}(-2,1+iB_{1},-2)\cap\tau_{1+iB_{2}}[\overline{\mathfrak{F}}]\right]=\left[1+i\left(B_{2}-\frac{1}{2}\right),1+i\left(B_{2}+\frac{1}{2}\right)\right],\] and hence \[\overline{\mathcal{C}}_{4}(-2,1+iB_{1},-2,1+iB_{2})\subseteq\mathcal{C}_{3}(-2,1+iB_{1},-2).\] In fact, we may continue this way to show \[\forall n\in\mathbb{N}\quad\overline{\mathcal{C}}_{n+1}\big{(}a_{1},\ldots,a_{n+1}\big{)}\subseteq\mathcal{C}_{n}(a_{1},\ldots,a_{n}).\] Since \(|\mathcal{C}_{n}(a_{1},\ldots,a_{n})|\to 0\) as \(n\to\infty\), we can single out a complex number \(z\) satisfying \[\{z\}=\bigcap_{n\in\mathbb{N}}\mathcal{C}_{n}(a_{1},\ldots,a_{n}).\] As a consequence, \(z=[0;a_{1},a_{2},\ldots]\) and \(\mathbf{a}\in\mathsf{Ir}\), because \((-2,1+iB_{1})\in\mathsf{Ir}(2)\). The algorithm gives us the sequence \[\mathbf{b}=(b_{j})_{j\geq 1}=(-2,iB_{1},2,iB_{2},-2,iB_{3},2,iB_{4},-2,iB_{5},2,iB_{6},\ldots).\] To see this, we work out the first two iterations. 1. Take \(N=0\) and \(\mathbf{b}_{0}\colon=\mathbf{a}\). Since \(j_{0}=1\), we have \[b_{1}(1)\coloneqq a_{1}=-2,\] \[b_{1}(2)\coloneqq S(a_{2})=iB_{1},\] \[\forall l\in\mathbb{N}_{\geq 3}\quad b_{1}(l)\coloneqq\mathsf{mir}_{2}(a_{l}),\] which means \[\mathbf{b}_{1}=(-2,iB_{1},2,-1+iB_{2},2,-1+iB_{3},2,-1+iB_{4},\ldots).\] Add \(1\) to \(N\). 2. For \(N=1\), we have \(j_{1}=3\) because \[(-2,iB_{1},2)\in\mathsf{R}(3)\quad\text{ and }\quad(-2,iB_{1},2,-1+iB_{2})\not\in\mathsf{R}(4).\] Note that \((-2,iB_{1},2,-1+iB_{2})\not\in\Omega(4)\). We have \[\forall l\in\{1,2,3\}\quad b_{2}(l)\coloneqq b_{1}(l),\] \[b_{2}(4)\coloneqq S(b_{1}(4))=iB_{2},\] \[\forall l\in\mathbb{N}_{\geq 5}\quad b_{2}(l)\coloneqq\mathsf{mir}_{2}(b_{1}(l))=\mathsf{mir}_{2}^{2}(a_{l})=a_{l},\] which gives \[\mathbf{b}_{2}=(-2,iB_{1},2,iB_{2},-2,1+iB_{3},-2,1+iB_{4},\ldots).\] Example 2. Similarly, from the extremely irregular sequence \[\mathbf{a}=(-2,1+2i,-2+i,1+2i,-2+i,1+2i,-2+i,\ldots),\] where the block \(1+2i,-2+i\) is repeated, we get \[\mathbf{b}=(-2,2i,2,-2i,-2,2i,2,-2i,\ldots).\] Observe that the sequences \[(-2+i,\,-2i,\,2,\,2i,-2,\,-2i,\,2,\,2i,-2,\,2i,\ldots),\] \[(-2+i,\,1-2i,\,-2,\,2i,\,2,\,-2i,\,-2,\,2i,\,2,\,-2i,\ldots)\] also belong to \(\overline{\mathsf{R}}\) and are mapped to \(\zeta_{1}\) under \(\overline{\Lambda}\) (cf. [8, Proposition 5.5]). ## 5. First application: Borel hierarchy of the set of Hurwitz normal numbers The main aim of this section is to determine how complicated Hurwitz normal numbers are as Borel sets. First, we give a rigorous definition of Hurwitz normal numbers. Then, we recall the construction of the Borel hierarchy. Afterward, we give some definitions and results from symbolic dynamics. Then, we show Theorem 1.3. We finish this section by proving Theorem 5.1 below, which implies Theorem 1.2. **Theorem 5.1**.: _The set \(\operatorname{Norm}(\mu_{\mathrm{H}})\) is \(\mathbf{\Pi}^{0}_{3}(\mathbb{C})\)-complete._ ### Hurwitz normal numbers In [26], Nakada proved the existence and uniqueness of a \(T\)-ergodic Borel probability measure on \(\mathfrak{F}\), \(\mu_{\mathrm{H}}\), equivalent to the Lebesgue measure. For any \(A\subseteq\mathfrak{F}\) let \(\chi_{A}\) be its indicator function; that is, \(\chi_{A}(x)=1\) if \(x\in A\) and \(\chi_{A}(x)=0\) if \(x\in\mathfrak{F}\setminus A\). By virtue of Birkhoff's Ergodic Theorem [32, Theorem 1.14], for every \(n\in\mathbb{N}\) and every \(\mathbf{b}\in\mathsf{R}(n)\), almost every \(z\in\mathfrak{F}\) with respect to \(\mu_{\mathrm{H}}\) (or, equivalently, to Lebesgue measure) satisfies \[\lim_{N\to\infty}\frac{\#\{j\in\{0,\ldots,N-1\}:(a_{j+1}(z),\ldots,a_{j+n}(z))=\mathbf{b}\}}{N}=\mu_{\mathrm{H}}(\mathcal{C}_{n}(\mathbf{b})). \tag{7}\] In fact, for almost every \(z\in\mathfrak{F}\) the equality (7) holds for every finite regular word \(\mathbf{b}\). We refer to such numbers \(z\) as **Hurwitz normal numbers** and we denote the set they form by \(\operatorname{Norm}(\mu_{\mathrm{H}})\). As for regular continued fractions or integer base expansions, normality is closely related to equidistribution. Given a sequence \((z_{n})_{n\geq 1}\) in \(\mathfrak{F}\), let \(\delta_{z_{n}}\) be the point mass measure on \(z_{n}\) for each \(n\in\mathbb{N}\). We call \((z_{n})_{n\geq 1}\) **uniformly distributed (\(\operatorname{mod}\,\mu_{\mathrm{H}}\))** if the sequence of measures \(\frac{1}{n}\sum_{j=1}^{n}\delta_{z_{j}}\) converges weakly to \(\mu_{\mathrm{H}}\). From the Portmanteau Theorem [17, Theorem 5.25], we can derive the following equality: \[\operatorname{Norm}(\mu_{\mathrm{H}})=\{z\in\mathfrak{F}:(T^{n}(z))_{n\geq 1}\text{ is uniformly distributed }\pmod{\mu_{\mathrm{H}}}\}\,.\] In particular, for every \(z\in\operatorname{Norm}(\mu_{\mathrm{H}})\) the set \(\mathcal{O}_{+}^{T}(z):=\{T^{n}(z):n\in\mathbb{N}_{0}\}\) is dense in \(\mathfrak{F}\). ### Borel hierarchy We recall some definitions on the Borel hierarchy. The proofs for the statements in this subsection can be found in [19, Chapter 11]. 
Let \((X,\tau)\) be a Polish space, that is, a separable completely metrizable topological space. Denote the first uncountable ordinal by \(\omega_{1}\). Write \(\mathbf{\Sigma}^{0}_{1}(X):=\tau\) and for each ordinal \(\xi\) with \(1\leq\xi<\omega_{1}\) put \[\mathbf{\Pi}^{0}_{\xi}(X):=\left\{X\setminus A\colon A\in\mathbf{\Sigma}^{0}_{\xi}(X)\right\},\] \[\mathbf{\Sigma}^{0}_{\xi}(X):=\left\{\bigcup_{n\in\mathbb{N}}A_{n}:\forall n\in\mathbb{N}\quad\xi_{n}<\xi,\ A_{n}\in\mathbf{\Pi}^{0}_{\xi_{n}}(X)\right\}.\] Then, for any two ordinals \(\xi,\xi^{\prime}\) satisfying \(1\leq\xi<\xi^{\prime}<\omega_{1}\) we have \[\mathbf{\Pi}^{0}_{\xi}(X)\cup\mathbf{\Sigma}^{0}_{\xi}(X)\subseteq\mathbf{\Pi}^{0}_{\xi^{\prime}}(X)\quad\text{ and }\quad\mathbf{\Pi}^{0}_{\xi}(X)\cup\mathbf{\Sigma}^{0}_{\xi}(X)\subseteq\mathbf{\Sigma}^{0}_{\xi^{\prime}}(X).\] Let \(\mathscr{B}(X)\) be the Borel \(\sigma\)-algebra on \(X\). It is a well-established fact that \[\mathscr{B}(X)=\bigcup_{\xi<\omega_{1}}\mathbf{\Pi}^{0}_{\xi}(X)=\bigcup_{\xi<\omega_{1}}\mathbf{\Sigma}^{0}_{\xi}(X)\] and that, whenever \(X\) is also uncountable, all the families \(\mathbf{\Pi}^{0}_{\xi}(X)\) and \(\mathbf{\Sigma}^{0}_{\xi}(X)\) are different. This allows us to stratify \(\mathscr{B}(X)\) according to complexity in the so-called **Borel hierarchy**. For any ordinal \(\xi<\omega_{1}\), we say that a set \(A\subseteq X\) is \(\mathbf{\Pi}^{0}_{\xi}(X)\)**-hard** (resp. \(\mathbf{\Sigma}^{0}_{\xi}(X)\)**-hard**) if \(A\notin\mathbf{\Sigma}^{0}_{\xi}(X)\) (resp. \(A\notin\mathbf{\Pi}^{0}_{\xi}(X)\)). We say that \(A\) is \(\mathbf{\Sigma}^{0}_{\xi}(X)\)**-complete** (resp. \(\mathbf{\Pi}^{0}_{\xi}(X)\)**-complete**) if it belongs to \(\mathbf{\Sigma}^{0}_{\xi}(X)\) and is \(\mathbf{\Sigma}^{0}_{\xi}(X)\)-hard (resp. belongs to \(\mathbf{\Pi}^{0}_{\xi}(X)\) and is \(\mathbf{\Pi}^{0}_{\xi}(X)\)-hard). In simple terms, a Borel set is \(\mathbf{\Pi}^{0}_{\xi}(X)\)-complete when its first appearance in the Borel hierarchy is precisely at \(\mathbf{\Pi}^{0}_{\xi}(X)\). ### Symbolic dynamics Let \(\mathscr{A}\) be a non-empty, at most countable set. We denote non-empty words \(\mathbf{u}\) on \(\mathscr{A}\) as \(\mathbf{u}=(u_{1},u_{2},\ldots)\). In symbolic dynamics, it is customary to denote words without parentheses, i.e. \(\mathbf{u}=u_{1}u_{2}\cdots\). We have decided to keep the parentheses for consistency of notation. Let \(\mathscr{A}^{<\omega}\) denote the set of finite words in \(\mathscr{A}\). The length of a non-empty finite word \(\mathbf{u}\) is denoted by \(|\mathbf{u}|\); for example, for \(\mathbf{u}=u_{1}\cdots u_{n}\) we have \(|\mathbf{u}|=n\) and, if \(\varepsilon\) is the empty word, we define \(|\varepsilon|=0\). We remind the reader that given two finite words \(\mathbf{u}=(u_{1},\ldots,u_{n})\) and \(\mathbf{v}=(v_{1},\ldots,v_{m})\), the concatenation of \(\mathbf{u}\) and \(\mathbf{v}\) is \[\mathbf{uv}\colon=(u_{1},\ldots,u_{n},v_{1},\ldots,v_{m}).\] For any two non-empty finite words \(\mathbf{v}=v_{1}\cdots v_{n}\), \(\mathbf{w}=w_{1}\cdots w_{n}\) of the same length \(n\in\mathbb{N}\), the **normalized Hamming distance** between \(\mathbf{v}\) and \(\mathbf{w}\) is \[d_{H}(\mathbf{v},\mathbf{w})=\frac{\#\{j\in\{1,\ldots,n\}:v_{j}\neq w_{j}\}}{n}.\] Endow \(\mathscr{A}^{\mathbb{N}}\) with the product topology. A **subshift** \(X\) of \(\mathscr{A}^{\mathbb{N}}\) is a closed and strongly shift-invariant subset of \(\mathscr{A}^{\mathbb{N}}\), that is, \(\sigma(X)=X\). 
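The normalized Hamming distance enters below through the right feeble specification property; as a quick illustration (function name ours), it is essentially a one-liner:

```python
def d_H(v, w):
    """Normalized Hamming distance between two words of the same length."""
    assert len(v) == len(w) > 0
    return sum(x != y for x, y in zip(v, w)) / len(v)

print(d_H("abca", "abcb"))  # one mismatch out of four letters: 0.25
```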
For any \(\mathbf{b}\in\mathscr{A}^{<\omega}\), \(\mathbf{b}\neq\varepsilon\), the **symbolic cylinder based on \(\mathbf{b}\)** is the set \[[\mathbf{b}]_{X}\colon=\left\{\mathbf{a}\in X:\operatorname{pref}(\mathbf{a};|\mathbf{b}|)=\mathbf{b}\right\}.\] We denote by \(\mathscr{L}(X)\) the set of all finite words that appear as factors of some words in \(X\). The subshift \(X\) has the **right feeble specification property** if there is some \(\mathscr{G}\subseteq\mathscr{L}(X)\) with the following properties: 1. If \(\mathbf{u},\mathbf{v}\in\mathscr{G}\), then \(\mathbf{uv}\in\mathscr{G}\); 2. For every \(\varepsilon>0\) there exists \(N\in\mathbb{N}\) such that for any \(\mathbf{u}\in\mathscr{G}\) and any \(\mathbf{v}\in\mathscr{L}(X)\) satisfying \(|\mathbf{v}|\geq N\) there are some \(\mathbf{s},\mathbf{v}^{\prime}\in\mathscr{A}^{<\omega}\) satisfying \[|\mathbf{v}^{\prime}|=|\mathbf{v}|,\quad 0\leq|\mathbf{s}|\leq\varepsilon|\mathbf{v}|,\quad d_{H}(\mathbf{v},\mathbf{v}^{\prime})<\varepsilon,\quad\mathbf{usv}^{\prime}\in\mathscr{G}.\] For any \(\mathbf{x}=(x_{1},x_{2},x_{3},\ldots)\in\mathscr{A}^{\mathbb{N}}\) and \(\mathbf{w}\in\mathscr{A}^{<\omega}\) define \[e(\mathbf{w},\mathbf{x},N)\colon=\#\left\{j\in\{1,\ldots,N\}:(x_{j},x_{j+1},\ldots,x_{j+|\mathbf{w}|-1})=\mathbf{w}\right\}.\] The **quasi-regular set** \(Q(X)\) of \(X\) is the set of words for which the frequency of every finite word exists: \[Q(X)\,=\left\{\mathbf{x}\in X:\forall\mathbf{w}\in\mathscr{L}(X)\quad\lim_{N\rightarrow\infty}\frac{e(\mathbf{w},\mathbf{x},N)}{N}\text{ exists }\right\}.\] For any \(\sigma\)-invariant Borel measure \(\nu\) on \(X\), we say that a word \(\mathbf{x}\in X\) is \(\nu\)**-generic** if \[\forall\mathbf{w}\in\mathscr{A}^{<\omega}\quad\lim_{N\rightarrow\infty}\frac{e(\mathbf{w},\mathbf{x},N)}{N}=\nu\left([\mathbf{w}]_{X}\right).\] We denote by \(G_{\nu}\) the set of \(\nu\)-generic points. Clearly, \(G_{\nu}\subseteq Q(X)\). In [4], Airey, Jackson, Kwietniak, and Mance determined the place in the Borel hierarchy of certain Borel sets under the right feeble specification property and some additional mild conditions. **Theorem 5.2** ([4, Theorem 6]).: _Let \(X\) be a closed subshift with the right feeble specification property and at least two shift-invariant Borel probability measures. Let \(\nu\) be a shift-invariant Borel probability measure. If \(B\) is a Borel set such that \(G_{\nu}\subseteq B\subseteq Q(X)\), then \(B\) is \(\mathbf{\Pi}^{0}_{3}(X)\)-hard. In particular, \(G_{\nu}\) and \(Q(X)\) are \(\mathbf{\Pi}^{0}_{3}(X)\)-complete._ ### Proof of Theorem 1.3 The following lemma is essentially [12, Proposition 8.4]. **Lemma 5.3**.: _Let \(n\in\mathbb{N}\) and \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathsf{R}(n)\) be arbitrary._ 1. _If_ \(\operatorname{Pm}(a_{n})\geq 3\) _and_ \(\mathbf{b}\in\mathsf{R}(m)\) _for some_ \(m\in\mathbb{N}\)_, then_ \(\mathbf{a}\mathbf{b}\in\mathsf{R}(m+n)\)_._ 2. _There exists some_ \(b\in\mathbb{Z}[i]\) _such that_ \(\operatorname{Pm}(b)\geq 3\) _and_ \(\mathbf{a}b\in\mathsf{R}(n+1)\)_._ Proof of Theorem 1.3.: By definition, \(\overline{\mathsf{R}}\) is closed in \(\mathscr{D}^{\mathbb{N}}\). Moreover, \(\overline{\mathsf{R}}\) is strongly \(\sigma\)-invariant because factors of regular words are regular by (4). 
Define the set \[\mathscr{G}\colon=\left\{\mathbf{c}=(c_{1},\ldots,c_{m})\in\bigcup_{n\in\mathbb{N}}\mathsf{R}(n):\operatorname{Pm}(c_{m})\geq 3\right\}.\] Let us verify that \(\mathscr{G}\) satisfies the conditions in the definition of the right feeble specification property. 1. If \(\mathbf{u},\mathbf{v}\in\mathscr{G}\), then \(\mathbf{u}\mathbf{v}\in\mathscr{G}\) by Lemma 5.3.(i). 2. Let \(\varepsilon>0\) be arbitrary and pick \(N\in\mathbb{N}\) such that \(N^{-1}<\varepsilon\). Take \(\mathbf{v}=(v_{1},\ldots,v_{m})\in\mathscr{L}(\overline{\mathsf{R}})\) with \(m\geq N\). Since \(\mathbf{v}\in\mathsf{R}(m)\), we have \((v_{1},\ldots,v_{m-1})\in\mathsf{R}(m-1)\) and, by Lemma 5.3.(ii), we may choose \(a\in\mathscr{D}\) satisfying \[\operatorname{Pm}(a)\geq 3\quad\text{ and }\quad\mathbf{v}^{\prime}\colon=(v_{1},\ldots,v_{m-1},a)\in\mathscr{G}.\] Letting \(\mathbf{s}\) be the empty word, for every \(\mathbf{u}\in\mathscr{G}\) we have \[0\leq|\mathbf{s}|\leq\varepsilon|\mathbf{v}|,\quad|\mathbf{v}^{\prime}|=|\mathbf{v}|,\quad d_{H}(\mathbf{v},\mathbf{v}^{\prime})<\varepsilon,\quad\mathbf{usv}^{\prime}\in\mathscr{G}.\qed\] ### Proof of Theorem 5.1 We split the proof of Theorem 5.1 into Lemma 5.4 and Lemma 5.6. **Lemma 5.4**.: _The set \(\operatorname{Norm}(\mu_{\mathrm{H}})\) is \(\mathbf{\Pi}^{0}_{3}(\mathbb{C})\)-hard._ Proof.: The space \((\overline{\mathsf{R}},\sigma)\) has infinitely many \(\sigma\)-invariant measures. Certainly, by Proposition 3.3, any constant sequence \(\mathbf{a}=(a,a,\ldots)\in\mathscr{D}^{\mathbb{N}}\) with \(|a|\geq\sqrt{8}\) belongs to \(\overline{\mathsf{R}}\) and the point mass measure based on \(\mathbf{a}\) is \(\sigma\)-invariant. Let us construct the \(\sigma\)-invariant Borel probability measure \(\nu\) on \(\overline{\mathsf{R}}\) to which we apply Theorem 5.2. Let \(\mathscr{I}\subseteq\mathfrak{F}\) be the set \[\mathscr{I}\colon=\bigcap_{n\in\mathbb{N}}\bigcup_{\mathbf{a}\in\mathsf{R}(n)}\mathcal{C}^{\circ}_{n}(\mathbf{a})\] and define the map \(\Phi:\overline{\Lambda}^{-1}[\mathscr{I}]\to\mathscr{I}\) by \[\forall\mathbf{a}\in\overline{\Lambda}^{-1}[\mathscr{I}]\quad\Phi(\mathbf{a})\coloneqq\overline{\Lambda}(\mathbf{a}).\] It is easy to show that \(\overline{\Lambda}^{-1}[\mathscr{I}]\) is a Borel subset of \(\overline{\mathsf{R}}\) and that \(\Phi\) is a homeomorphism. Then, the following conditions define a unique Borel probability measure \(\nu\) on \(\overline{\mathsf{R}}\): 1. For every Borel subset \(B\) of \(\overline{\Lambda}^{-1}[\mathscr{I}]\) we have \(\nu(B)\mathrel{\mathop{:}}=\mu_{\mathrm{H}}(\Phi[B])\). 2. \(\nu\left(\overline{\mathsf{R}}\smallsetminus\overline{\Lambda}^{-1}[\mathscr{I}]\right)=0\). The measure \(\nu\) is \(\sigma\)-invariant, by virtue of the \(T\)-invariance of \(\mu_{\mathrm{H}}\) and the equality \(\Lambda\circ\sigma=T\circ\Lambda\) on \(\mathsf{R}\). Therefore, by Theorem 5.2, the set \(G_{\nu}\subseteq\overline{\mathsf{R}}\) is \(\mathbf{\Pi}^{0}_{3}(\overline{\mathsf{R}})\)-complete. To finish the proof, we must translate this conclusion into the complex plane. First, note that if \(z\) belongs to the boundary of some prototype set, then \(T(z)\) also belongs to the boundary of some prototype set. The proof of this assertion amounts to applying the complex inversion to a finite collection of line segments and arcs. As a consequence, since for each \(z\in\operatorname{Norm}(\mu_{\mathrm{H}})\) the set \(\mathcal{O}_{+}^{T}(z)\) is dense in \(\mathfrak{F}\), we have \(\operatorname{Norm}(\mu_{\mathrm{H}})\subseteq\mathscr{I}\). 
Moreover, the definitions of \(G_{\nu}\) and \(\operatorname{Norm}(\mu_{\mathrm{H}})\) yield \[G_{\nu}=\Phi^{-1}[\operatorname{Norm}(\mu_{\mathrm{H}})].\] Therefore, the set \(\operatorname{Norm}(\mu_{\mathrm{H}})\) is \(\boldsymbol{\Pi}^{0}_{3}(\mathscr{I})\)-complete and, thus, it is \(\boldsymbol{\Pi}^{0}_{3}(\mathbb{C})\)-hard (see Remark 5.5 below). _Remark 5.5_.: If \(X,Y\) are Polish spaces, \(A\subseteq X\), \(B\subseteq Y\), a **Wadge reduction** of \(A\) to \(B\) is a continuous function \(f:X\to Y\) such that \(f^{-1}[B]=A\). It is easy to verify that if, for some ordinal \(\xi<\omega_{1}\), the set \(A\) is \(\boldsymbol{\Pi}^{0}_{\xi}\)-hard and there is a Wadge reduction of \(A\) to \(B\), then \(B\) is \(\boldsymbol{\Pi}^{0}_{\xi}\)-hard. For more on Wadge reductions, see [19, Section 21.E]. **Lemma 5.6**.: _The set \(\operatorname{Norm}(\mu_{\mathrm{H}})\) belongs to \(\boldsymbol{\Pi}^{0}_{3}(\mathbb{C})\)._ Proof.: Let \((\varepsilon_{k})_{k\geq 1}\) be a strictly decreasing sequence of positive numbers converging to \(0\). For \(M,n,k\in\mathbb{N}\) and \(\mathbf{b}\in\mathsf{R}(n)\), define the sets \[E_{n}(\mathbf{b},M,k)\mathrel{\mathop{:}}=\left\{z=[0;a_{1},a_{2},\ldots]\in\mathfrak{F}\mathrel{\mathop{:}}\left|\frac{1}{M}\sum_{j=0}^{M-1}\chi_{\mathcal{C}_{n}(\mathbf{b})}\left(T^{j}(z)\right)-\mu_{\mathrm{H}}(\mathcal{C}_{n}(\mathbf{b}))\right|\leq\varepsilon_{k}\right\},\] \[F_{n}(\mathbf{b},M,k)\mathrel{\mathop{:}}=\operatorname{Cl}_{\mathbb{C}}(E_{n}(\mathbf{b},M,k)),\] \[G_{n}(\mathbf{b},M,k)\mathrel{\mathop{:}}=\operatorname{Int}_{\mathbb{C}}(E_{n}(\mathbf{b},M,k)).\] Then, the definition of \(\operatorname{Norm}(\mu_{\mathrm{H}})\) gives \[\operatorname{Norm}(\mu_{\mathrm{H}})=\bigcap_{n\in\mathbb{N}}\bigcap_{\mathbf{b}\in\mathsf{R}(n)}\bigcap_{k\in\mathbb{N}}\bigcup_{N\in\mathbb{N}}\bigcap_{M\in\mathbb{N}_{\geq N}}E_{n}(\mathbf{b},M,k).\] Note that each \(E_{n}(\mathbf{b},M,k)\) is a finite union of cylinders of level \(n+M\). Put \[\mathfrak{L}\mathrel{\mathop{:}}=\bigcap_{n\in\mathbb{N}}\bigcap_{\mathbf{b}\in\mathsf{R}(n)}\bigcap_{k\in\mathbb{N}}\bigcup_{N\in\mathbb{N}}\bigcap_{M\in\mathbb{N}_{\geq N}}G_{n}(\mathbf{b},M,k),\] \[\mathfrak{M}\mathrel{\mathop{:}}=\bigcap_{n\in\mathbb{N}}\bigcap_{\mathbf{b}\in\mathsf{R}(n)}\bigcap_{k\in\mathbb{N}}\bigcup_{N\in\mathbb{N}}\bigcap_{M\in\mathbb{N}_{\geq N}}F_{n}(\mathbf{b},M,k),\] so \(\mathfrak{L}\subseteq\operatorname{Norm}(\mu_{\mathrm{H}})\subseteq\mathfrak{M}\). Now we show \(\mathfrak{M}\subseteq\mathfrak{L}\) in order to conclude the lemma. Let \(z\in\mathfrak{M}\) be arbitrary and assume that \[z\in\bigcap_{n\in\mathbb{N}}\mathcal{C}_{n}^{\circ}\left(a_{1}(z),\ldots,a_{n}(z)\right). \tag{8}\] Let \(n,k\in\mathbb{N}\) and \(\mathbf{b}\in\mathsf{R}(n)\) be arbitrary and consider \(N\in\mathbb{N}\) such that \[\forall\,M\in\mathbb{N}_{\geq N}\quad z\in F_{n}(\mathbf{b},M,k).\] Take any \(M\in\mathbb{N}_{\geq N}\) and let \(\mathbf{d}\in\mathsf{R}(M+n)\) be such that \[\mathcal{C}_{M+n}(\mathbf{d})\subseteq E_{n}(\mathbf{b},M,k)\quad\text{ and }\quad z\in\overline{\mathcal{C}}_{M+n}(\mathbf{d}).\] Since different regular cylinders of the same level have disjoint interiors, from (8) we get \(\mathbf{d}=(a_{1}(z),\dots,a_{M+n}(z))\), so \(z\in G_{n}(\mathbf{b},M,k)\). This shows \(\mathfrak{M}\subseteq\mathfrak{L}\) and, hence, \(\operatorname{Norm}(\mu_{\mathrm{H}})=\mathfrak{M}\). Lastly, we show that (8) is indeed true. 
To this end, it suffices to prove that \(\mathcal{O}_{+}^{T}(z)\) is dense in \(\mathfrak{F}\), which in turn follows from \[\forall n\in\mathbb{N}\quad\forall\mathbf{b}\in\mathsf{R}(n)\quad\left\{T^{j}(z):j\in\mathbb{N}_{0}\right\}\cap\mathcal{C}_{n}^{\circ}(\mathbf{b})\neq\emptyset. \tag{9}\] Take any \(n\in\mathbb{N}\) and \(\mathbf{b}\in\mathsf{R}(n)\). Let \(c\in\mathscr{D}\) be such that \[\mathbf{b}c\in\mathsf{R}(n+1)\quad\text{ and }\quad\overline{\mathcal{C}}_{n+1}(\mathbf{b}c)\subseteq\mathcal{C}_{n}^{\circ}(\mathbf{b}).\] Choose \(k\in\mathbb{N}\) so large that \[\varepsilon_{k}<\frac{\mu_{\mathrm{H}}(\mathcal{C}_{n+1}(\mathbf{b}c))}{4}\] and let \(N\in\mathbb{N}\) be such that \[\forall M\in\mathbb{N}_{\geq N}\quad z\in F_{n+1}(\mathbf{b}c,M,k).\] Note that \(E_{n+1}(\mathbf{b}c,M,k)\) is a finite union of cylinders of level \(M+n\). Take \(\mathbf{d}\in\mathsf{R}(M+n)\) satisfying \[\mathcal{C}_{M+n}(\mathbf{d})\subseteq E_{n+1}(\mathbf{b}c,M,k)\quad\text{ and }\quad z\in\overline{\mathcal{C}}_{M+n}(\mathbf{d}).\] The choice of \(k\) and the definition of \(E_{n+1}(\mathbf{b}c,M,k)\) guarantee the existence of some \(r\in\{n+1,\dots,M+n\}\) satisfying \[(d_{r-n},\dots,d_{r-1},d_{r})=(b_{1},\dots,b_{n},c).\] Then, by the choice of \(c\), we have \[\overline{\mathcal{C}}_{M+n}(\mathbf{d})\subseteq\overline{\mathcal{C}}_{r}(d_{1},\dots,d_{r})\subseteq\mathcal{C}_{r-1}^{\circ}(d_{1},\dots,d_{r-1});\] therefore, applying \(T^{r-n-1}\) we conclude \(T^{r-n-1}(z)\in\mathcal{C}_{n}^{\circ}(\mathbf{b})\) and, hence, (9). ## 6. Second application: construction of transcendental numbers Recall from the introduction that, given a finite alphabet \(\mathscr{A}\) and an infinite word \(\mathbf{a}=(a_{n})_{n\geq 1}\) on \(\mathscr{A}\), for each \(n\in\mathbb{N}\) we define \[r(n,\mathbf{a}):=\min\left\{m\in\mathbb{N}:\exists i\in\{1,\dots,m-n\}\quad(a_{i},\dots,a_{i+n+1})=(a_{m},\dots,a_{m+n+1})\right\}\] and that the repetition exponent of \(\mathbf{a}\) is \[\operatorname{rep}(\mathbf{a}):=\liminf_{n\to\infty}\frac{r(n,\mathbf{a})}{n}.\] Bugeaud and Kim [9] showed that the finiteness of \(\operatorname{rep}(\mathbf{x})\) for a given word \(\mathbf{x}\) is equivalent to a combinatorial condition used in [1, 7] to construct real transcendental numbers. **Lemma 6.1** ([9, Section 10]).: _Let \(\mathscr{A}\) be a finite alphabet and \(\mathbf{a}=(a_{n})_{n\geq 1}\in\mathscr{A}^{\mathbb{N}}\). If \(\operatorname{rep}(\mathbf{a})<\infty\), then there are three sequences \((W_{n})_{n\geq 1}\), \((U_{n})_{n\geq 1}\), \((V_{n})_{n\geq 1}\) of finite words in \(\mathscr{A}\) such that_ _1. For all_ \(n\in\mathbb{N}\)_, the word_ \(W_{n}U_{n}V_{n}U_{n}\) _is a prefix of_ \(\mathbf{a}\)_;_ _2. The sequence_ \(\left(\left(|W_{n}|+|V_{n}|\right)/|U_{n}|\right)_{n\geq 1}\) _is bounded above;_ _3. \(|W_{n}|\to\infty\) as \(n\to\infty\)._ Lemma 6.1 allows us to state a well-known result by Bugeaud [7] as follows. **Theorem 6.2** ([7, Theorem 1.3]).: _Let \(\mathbf{a}=(a_{n})_{n\geq 1}\in\mathbb{N}^{\mathbb{N}}\) be a bounded and non-periodic sequence such that \(\operatorname{rep}(\mathbf{a})<\infty\). Then, \(\langle 0;a_{1},a_{2},a_{3},\ldots\rangle\) is transcendental._ Theorem 6.3 is the HCF version of Theorem 6.2. **Theorem 6.3**.: _[_14_, Theorem 5.1]_ _Let \(\mathbf{a}\in\Omega\) be a bounded and non-periodic sequence such that \(\operatorname{rep}(\mathbf{a})<\infty\). Let \((W_{n})_{n\geq 1}\), \((U_{n})_{n\geq 1}\), and \((V_{n})_{n\geq 1}\) be sequences as in Lemma 6.1._ _1. 
If_ \(\liminf\limits_{n\to\infty}|W_{n}|<\infty\)_, then_ \(\Lambda(\mathbf{a})\) _is transcendental._ _2. If \(\liminf\limits_{n\to\infty}|W_{n}|=\infty\) and \(|a_{n}|\geq\sqrt{8}\) for all \(n\in\mathbb{N}\), then \(\Lambda(\mathbf{a})\) is transcendental._ We conjecture that the condition \(|a_{n}|\geq\sqrt{8}\) for all \(n\in\mathbb{N}\) can be removed from the second point of Theorem 6.3. The second application of our work, Theorem 1.4, is a construction of a family of transcendental numbers overlooked by Theorem 6.3. This provides some evidence supporting our conjecture. In [29], Schmidt extended his celebrated Subspace Theorem to number fields. Theorem 6.4 below is the corresponding statement for \(\mathbb{Q}(i)\). For \(m\in\mathbb{N}\) and \(\mathbf{z}=(z_{1},\ldots,z_{m})\in\mathbb{C}^{m}\), we write \(\|\mathbf{z}\|=\max\{|z_{1}|,\ldots,|z_{m}|\}\) and \(\overline{\mathbf{z}}=(\overline{z_{1}},\ldots,\overline{z_{m}})\), where the bar means complex conjugation. **Theorem 6.4** ([29, Theorem 3]).: _Let \(m\in\mathbb{N}\). Assume that we have two collections of linearly independent linear forms \(\mathscr{L}_{1},\ldots,\mathscr{L}_{m}\) and \(\mathscr{M}_{1},\ldots,\mathscr{M}_{m}\) in \(m\) variables with real or complex algebraic coefficients. For each \(\varepsilon>0\) there is a finite collection \(T_{1},\ldots,T_{k}\) of proper subspaces of \(\mathbb{C}^{m}\) such that every solution \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{m})\in\mathbb{Z}[i]^{m}\) of_ \[|\mathscr{L}_{1}(\boldsymbol{\beta})\cdots\mathscr{L}_{m}(\boldsymbol{\beta})|\,|\mathscr{M}_{1}(\boldsymbol{\beta})\cdots\mathscr{M}_{m}(\boldsymbol{\beta})|<\|\boldsymbol{\beta}\|^{-\varepsilon}\] _lies in \(T_{1}\cup\cdots\cup T_{k}\)._ Recall that, in general, if \(A_{0},A_{1},\ldots,A_{n}\) are variables and \(q_{0},q_{1},\ldots,q_{n}\) are the denominators of the continued fraction \(\langle A_{0};A_{1},A_{2},A_{3},\ldots,A_{n}\rangle\), then \[\frac{q_{n-1}}{q_{n}}=\langle 0;A_{n},A_{n-1},\ldots,A_{2},A_{1}\rangle. \tag{10}\] Equation (10) is known as the mirror formula [20, Theorem 6]. **Lemma 6.5**.: _Under the assumptions of Theorem 1.4, there exists an \(\varepsilon>0\) such that for all \(n\in\mathbb{N}\) we have \(\psi^{2u_{n}}\geq|q_{2w_{n}}q_{2w_{n}+2u_{n}+2v_{n}}|^{\varepsilon}\)._ Proof.: This is essentially [14, Lemma 5.2]. The **shuffle** of two words of the same length \(A=(a_{1},\ldots,a_{n})\) and \(B=(b_{1},\ldots,b_{n})\) is the word \(s(A,B)=(c_{1},\ldots,c_{2n})\) given by \(c_{2j-1}=a_{j}\) and \(c_{2j}=b_{j}\) for all \(j\in\{1,\ldots,n\}\); that is, \[s(A,B)=(a_{1},\,b_{1},\,a_{2},\,b_{2},\ldots,\,a_{n},\,b_{n}).\] 
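In code, the shuffle is a plain interleaving; the following sketch (function name ours) matches the definition letter for letter:

```python
def shuffle(A, B):
    """s(A, B): interleave two equal-length words as (a1, b1, ..., an, bn)."""
    assert len(A) == len(B)
    return [x for pair in zip(A, B) for x in pair]

print(shuffle([-2, 2, -2], [5, -7, 4]))  # [-2, 5, 2, -7, -2, 4]
```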
Proof of Theorem 1.4.: We omit several lengthy but direct computations. Similar arguments have appeared in [1, 7] for real numbers and in [14, Section 5.2] for complex numbers. Assume that \(\zeta\) is an algebraic number; then \([\mathbb{Q}(i,\zeta):\mathbb{Q}(i)]\geq 3\) because its HCF is not periodic (see, for example, [12, Theorem 4.2]). Let \((W_{n})_{n\geq 1}\), \((U_{n})_{n\geq 1}\), and \((V_{n})_{n\geq 1}\) be the sequences associated to \(\mathbf{B}\) as in Lemma 6.1 and define the sequences of non-negative integers \((w_{n})_{n\geq 1}\), \((u_{n})_{n\geq 1}\), \((v_{n})_{n\geq 1}\), and \((t_{n})_{n\geq 1}\) by \[\forall n\in\mathbb{N}\quad w_{n}:=|W_{n}|,\;u_{n}:=|U_{n}|,\;v_{n}:=|V_{n}|,\;t_{n}:=w_{n}+u_{n}+v_{n}.\] We may assume that every \(u_{n}\) is even. Indeed, suppose that \(u_{n}\) is odd for every large \(n\). For such \(n\), let \(x\) be the last letter of \(U_{n}\) and \(\widehat{U}_{n}:=\operatorname{pref}(U_{n};u_{n}-1)\), so \[W_{n}U_{n}V_{n}U_{n}=W_{n}\,\widehat{U}_{n}x\,V_{n}\widehat{U}_{n}x\] and we can replace \(U_{n}\) with \(\widehat{U}_{n}\) and \(V_{n}\) with \(x\,V_{n}\). When \(\liminf\limits_{n\to\infty}w_{n}<\infty\), the sequence \((-2,1+iB_{1},-2,1+iB_{2},\ldots)\) also has finite repetition exponent and its corresponding sequence \((W_{n})_{n\geq 1}\) has a constant subsequence. The transcendence of \(\zeta\) then follows from Theorem 6.3. Assume that \(w_{n}\to\infty\) when \(n\to\infty\). It was observed in [7, p. 1013] that we do not lose any generality by assuming \(B_{w_{n}}\neq B_{t_{n}}\) for all \(n\in\mathbb{N}\). Define \(\mathbf{a}=(a_{n})_{n\geq 1}\) by \[\forall n\in\mathbb{N}\quad a_{n}:=\begin{cases}-2,\;\text{if $n$ is odd},\\ 1+iB_{\frac{n}{2}},\;\text{if $n$ is even}.\end{cases}\] By Example 1, we have \(\mathbf{a}\in\mathsf{Ir}\) and the algorithm produces the sequence \[\mathbf{b}=(b_{1},b_{2},\ldots)=(-2,iB_{1},2,iB_{2},-2,iB_{3},2,iB_{4},\ldots).\] Note that \[\forall n\in\mathbb{N}\quad(b_{1},\ldots,b_{n}),\;(b_{n},\ldots,b_{1})\in\mathsf{R}(n). \tag{11}\] For each \(n\in\mathbb{N}\), let \(D_{n}\), \(E_{n}\), \(F_{n}\) be the alternating words \[D_{n}:=\left(-2,\,2,\,\ldots,\,(-1)^{w_{n}}2\right),\] \[E_{n}:=\left((-1)^{w_{n}+1}2,\,(-1)^{w_{n}+2}2,\ldots,\,(-1)^{w_{n}+u_{n}}2\right),\] \[F_{n}:=\left((-1)^{w_{n}+1}2,\,(-1)^{w_{n}+2}2,\ldots,\,(-1)^{w_{n}+v_{n}}2\right),\] and define \[W_{n}^{\prime}:=s(D_{n},W_{n}),\quad U_{n}^{\prime}:=s(E_{n},U_{n}),\quad V_{n}^{\prime}:=s(F_{n},V_{n}).\] Therefore, since for all \(n\in\mathbb{N}\) we have \(\operatorname{pref}(\mathbf{b};2n)\in\mathsf{R}(2n)\) and \(u_{n}\) is even, the eventually periodic sequence \(W_{n}^{\prime}\,U_{n}^{\prime}\,V_{n}^{\prime}\,U_{n}^{\prime}\,V_{n}^{\prime}\cdots\) belongs to \(\overline{\mathsf{R}}\) and the number \[\zeta_{n}:=\overline{\Lambda}(W_{n}^{\prime}\,U_{n}^{\prime}\,V_{n}^{\prime}\,U_{n}^{\prime}\,V_{n}^{\prime}\cdots)\] is a quadratic irrational over \(\mathbb{Q}(i)\) solving the polynomial \[P_{n}(X):=\begin{vmatrix}q_{2w_{n}-1}&q_{2t_{n}-1}\\ q_{2w_{n}}&q_{2t_{n}}\end{vmatrix}X^{2}-\left(\begin{vmatrix}q_{2w_{n}-1}&p_{2t_{n}-1}\\ q_{2w_{n}}&p_{2t_{n}}\end{vmatrix}+\begin{vmatrix}p_{2w_{n}-1}&q_{2t_{n}-1}\\ p_{2w_{n}}&q_{2t_{n}}\end{vmatrix}\right)X+\begin{vmatrix}p_{2w_{n}-1}&p_{2t_{n}-1}\\ p_{2w_{n}}&p_{2t_{n}}\end{vmatrix}\] (see the proof of [12, Theorem 4.2]). We can rely on the basic properties of determinants and Proposition 3.1.iii (cf. [14, p. 21]) to show that, for some constant \(\kappa_{1}=\kappa_{1}(\zeta)>0\), we have \[\forall n\in\mathbb{N}\quad|P_{n}(\zeta)|\leq\kappa_{1}\frac{|q_{2t_{n}}|}{|q_{2w_{n}}q_{2t_{n}+2u_{n}}^{2}|}.\] Let \((\mathbf{x}_{n})_{n\geq 1}\) be the sequence in \(\mathbb{Z}[i]^{4}\) given by \[\mathbf{x}_{n}:=\left(\begin{vmatrix}q_{2w_{n}-1}&q_{2t_{n}-1}\\ q_{2w_{n}}&q_{2t_{n}}\end{vmatrix},\begin{vmatrix}q_{2w_{n}-1}&p_{2t_{n}-1}\\ q_{2w_{n}}&p_{2t_{n}}\end{vmatrix},\begin{vmatrix}p_{2w_{n}-1}&q_{2t_{n}-1}\\ p_{2w_{n}}&q_{2t_{n}}\end{vmatrix},\begin{vmatrix}p_{2w_{n}-1}&p_{2t_{n}-1}\\ p_{2w_{n}}&p_{2t_{n}}\end{vmatrix}\right),\] so \(\|\mathbf{x}_{n}\|\leq 2|q_{2w_{n}}q_{2t_{n}}|\). 
Note that, if \(M\in\mathbb{N}\) is such that \(|B_{n}|\leq M\) for all \(n\in\mathbb{N}\), we have \[\forall n\in\mathbb{N}\quad|\zeta x_{n,1}-x_{n,2}|\leq 2(M+1)\left|\frac{q_{2w_{n}}}{q_{2t_{n}-1}}\right|.\] Certainly, for any \(n\in\mathbb{N}\) we have \[|\zeta x_{n,1}-x_{n,2}|=|q_{2w_{n}-1}(\zeta q_{2t_{n}}-p_{2t_{n}})-q_{2w_{n}}(\zeta q_{2t_{n}-1}-p_{2t_{n}-1})|\leq\left|\frac{q_{2w_{n}-1}}{q_{2t_{n}}}\right|+\left|\frac{q_{2w_{n}}}{q_{2t_{n}-1}}\right|\leq 2(M+1)\left|\frac{q_{2w_{n}}}{q_{2t_{n}-1}}\right|.\] We can obtain a similar bound for \(|\zeta x_{n,1}-x_{n,3}|\). Consider the linear forms \(\mathscr{L}_{1,1}\), \(\mathscr{L}_{1,2}\), \(\mathscr{L}_{1,3}\), \(\mathscr{L}_{1,4}\) in the variable \(\mathbf{X}=(X_{1},X_{2},X_{3},X_{4})\) and the linear forms \(\mathscr{M}_{1,1}\), \(\mathscr{M}_{1,2}\), \(\mathscr{M}_{1,3}\), \(\mathscr{M}_{1,4}\) in the variable \(\mathbf{Y}=(Y_{1},Y_{2},Y_{3},Y_{4})\) given by \[\mathscr{L}_{1,1}(\mathbf{X})=\zeta^{2}X_{1}-\zeta(X_{2}+X_{3})+X_{4},\qquad\mathscr{M}_{1,1}(\mathbf{Y})=\overline{\mathscr{L}_{1,1}(\overline{\mathbf{Y}})},\] \[\mathscr{L}_{1,2}(\mathbf{X})=\zeta X_{1}-X_{2},\qquad\mathscr{M}_{1,2}(\mathbf{Y})=\overline{\mathscr{L}_{1,2}(\overline{\mathbf{Y}})},\] \[\mathscr{L}_{1,3}(\mathbf{X})=\zeta X_{1}-X_{3},\qquad\mathscr{M}_{1,3}(\mathbf{Y})=\overline{\mathscr{L}_{1,3}(\overline{\mathbf{Y}})},\] \[\mathscr{L}_{1,4}(\mathbf{X})=X_{1},\qquad\mathscr{M}_{1,4}(\mathbf{Y})=\overline{\mathscr{L}_{1,4}(\overline{\mathbf{Y}})}.\] Take \(\varepsilon>0\) as in Lemma 6.5. Then, for some positive constants \(\kappa_{2},\kappa_{3}\) and all \(n\in\mathbb{N}\) we have \[|\mathscr{L}_{1,1}(\mathbf{x}_{n})\mathscr{L}_{1,2}(\mathbf{x}_{n})\mathscr{L}_{1,3}(\mathbf{x}_{n})\mathscr{L}_{1,4}(\mathbf{x}_{n})|=|P_{n}(\zeta)|\,|\zeta x_{n,1}-x_{n,2}||\zeta x_{n,1}-x_{n,3}|\,|x_{n,1}|\leq\kappa_{2}\left|\frac{q_{2t_{n}}}{q_{2w_{n}}q_{2t_{n}+2u_{n}}^{2}}\cdot\frac{q_{2w_{n}}}{q_{2t_{n}}}\cdot\frac{q_{2t_{n}}}{q_{2w_{n}}}\cdot q_{2w_{n}}q_{2t_{n}}\right|=\kappa_{2}\left|\frac{q_{2t_{n}}^{2}}{q_{2t_{n}+2u_{n}}^{2}}\right|\leq\frac{\kappa_{3}}{\psi^{4u_{n}}}\leq\frac{\kappa_{3}}{|q_{2w_{n}}q_{2t_{n}}|^{\varepsilon}}\leq\frac{2^{\varepsilon}\kappa_{3}}{\|\mathbf{x}_{n}\|^{\varepsilon}}.\] By the definition of \(\mathscr{M}_{1,1}\), \(\mathscr{M}_{1,2}\), \(\mathscr{M}_{1,3}\), \(\mathscr{M}_{1,4}\), for each \(n\in\mathbb{N}\) we have \[|\mathscr{M}_{1,1}(\mathbf{x}_{n})\mathscr{M}_{1,2}(\mathbf{x}_{n})\mathscr{M}_{1,3}(\mathbf{x}_{n})\mathscr{M}_{1,4}(\mathbf{x}_{n})|=|\mathscr{L}_{1,1}(\mathbf{x}_{n})\mathscr{L}_{1,2}(\mathbf{x}_{n})\mathscr{L}_{1,3}(\mathbf{x}_{n})\mathscr{L}_{1,4}(\mathbf{x}_{n})|\,.\] Therefore, by Theorem 6.4 there is some non-zero \(\mathbf{x}=(x_{1},x_{2},x_{3},x_{4})\in\mathbb{Z}[i]^{4}\) and an infinite set \(\mathcal{N}_{1}\subseteq\mathbb{N}\) such that \[\forall n\in\mathcal{N}_{1}\quad x_{1}x_{n,1}+x_{2}x_{n,2}+x_{3}x_{n,3}+x_{4}x_{n,4}=0. \tag{12}\] Let \((Q_{n})_{n\geq 1}\) and \((R_{n})_{n\geq 1}\) be the sequences given by \[\forall n\in\mathbb{N}\qquad Q_{n}:=\frac{q_{2w_{n}-1}q_{2t_{n}}}{q_{2w_{n}}q_{2t_{n}-1}},\quad R_{n}:=\zeta-\frac{p_{n}}{q_{n}}.\] Then, \((Q_{n})_{n\geq 1}\) is bounded away from \(0\) since both sequences \((q_{2w_{n}-1}/q_{2w_{n}})_{n\geq 1}\) and \((q_{2t_{n}-1}/q_{2t_{n}})_{n\geq 1}\) are bounded and bounded away from \(0\). 
For each \(n\in\mathcal{N}_{1}\), we divide (12) by \(q_{2w_{n}}q_{2t_{n}-1}\) to obtain \[0=x_{1}(Q_{n}-1)+x_{2}\left(Q_{n}(\zeta-R_{2t_{n}})-(\zeta-R_{2t_{n}-1})\right)+x_{3}\left(Q_{n}(\zeta-R_{2w_{n}-1})-(\zeta-R_{2w_{n}})\right)+x_{4}\left(Q_{n}(\zeta-R_{2w_{n}-1})(\zeta-R_{2t_{n}})-(\zeta-R_{2w_{n}})(\zeta-R_{2t_{n}-1})\right)\] and, since \(R_{n}\to 0\) as \(n\to\infty\), we conclude \[\lim_{\begin{subarray}{c}n\to\infty\\ n\in\mathcal{N}_{1}\end{subarray}}(Q_{n}-1)\left(x_{1}+(x_{2}+x_{3})\zeta+\zeta^{2}x_{4}\right)=0.\] In order to show \[x_{1}+(x_{2}+x_{3})\zeta+\zeta^{2}x_{4}=0, \tag{13}\] take an infinite subset \(\mathcal{N}_{1}^{\prime}\) of \(\mathcal{N}_{1}\) and two different constants \(a,b\in\mathbb{Z}\) such that \(a=B_{w_{n}}\) and \(b=B_{t_{n}}\) for all \(n\in\mathcal{N}_{1}^{\prime}\) and such that the following limits exist: \[\gamma:=\lim_{\begin{subarray}{c}n\to\infty\\ n\in\mathcal{N}_{1}^{\prime}\end{subarray}}\frac{q_{2w_{n}}}{q_{2w_{n}-1}},\quad\eta:=\lim_{\begin{subarray}{c}n\to\infty\\ n\in\mathcal{N}_{1}^{\prime}\end{subarray}}\frac{q_{2t_{n}}}{q_{2t_{n}-1}}.\] Since the compact sets \[\tau_{ia}\left(\overline{\mathcal{C}}_{1}(-2)\cup\overline{\mathcal{C}}_{1}(2)\right)\quad\text{ and }\quad\tau_{ib}\left(\overline{\mathcal{C}}_{1}(-2)\cup\overline{\mathcal{C}}_{1}(2)\right)\] are disjoint, the sequence \((Q_{n})_{n\in\mathcal{N}_{1}^{\prime}}\) is bounded away from \(1\) and, therefore, (13) is true. In this step we have used the mirror formula and (11). Therefore, by \([\mathbb{Q}(i,\zeta):\mathbb{Q}(i)]\geq 3\), we must have \[x_{1}=x_{2}+x_{3}=x_{4}=0.\] As a consequence, for \(n\in\mathcal{N}_{1}\), the quadratic irrational \(\zeta_{n}\) is a root of \[P_{n}(X)=\left|\begin{matrix}q_{2w_{n}-1}&q_{2t_{n}-1}\\ q_{2w_{n}}&q_{2t_{n}}\end{matrix}\right|X^{2}-2\left|\begin{matrix}q_{2w_{n}-1}&p_{2t_{n}-1}\\ q_{2w_{n}}&p_{2t_{n}}\end{matrix}\right|X+\left|\begin{matrix}p_{2w_{n}-1}&p_{2t_{n}-1}\\ p_{2w_{n}}&p_{2t_{n}}\end{matrix}\right|. \tag{14}\] Let \((\mathbf{v}_{n})_{n\geq 1}\), \(\mathbf{v}_{n}=(v_{n,1},v_{n,2},v_{n,3})\in\mathbb{Z}[i]^{3}\), be given by \[\forall n\in\mathbb{N}\quad\mathbf{v}_{n}:=\left(\left|\begin{matrix}q_{2w_{n}-1}&q_{2t_{n}-1}\\ q_{2w_{n}}&q_{2t_{n}}\end{matrix}\right|,\left|\begin{matrix}q_{2w_{n}-1}&p_{2t_{n}-1}\\ q_{2w_{n}}&p_{2t_{n}}\end{matrix}\right|,\left|\begin{matrix}p_{2w_{n}-1}&p_{2t_{n}-1}\\ p_{2w_{n}}&p_{2t_{n}}\end{matrix}\right|\right).\] Hence, \(\|\mathbf{v}_{n}\|\leq 2|q_{2w_{n}}q_{2t_{n}}|\). 
Consider the linear forms \(\mathscr{L}_{2,1}\), \(\mathscr{L}_{2,2}\), \(\mathscr{L}_{2,3}\) in the variable \(\mathbf{X}=(X_{1},X_{2},X_{3})\) and the linear forms \(\mathscr{M}_{2,1}\), \(\mathscr{M}_{2,2}\), \(\mathscr{M}_{2,3}\) in the variable \(\mathbf{Y}=(Y_{1},Y_{2},Y_{3})\) given by \[\mathscr{L}_{2,1}(\mathbf{X})=\zeta^{2}X_{1}-2\zeta X_{2}+X_{3},\qquad\mathscr{M}_{2,1}(\mathbf{Y})=\overline{\mathscr{L}_{2,1}(\overline{\mathbf{Y}})},\] \[\mathscr{L}_{2,2}(\mathbf{X})=\zeta X_{1}-X_{2},\qquad\mathscr{M}_{2,2}(\mathbf{Y})=\overline{\mathscr{L}_{2,2}(\overline{\mathbf{Y}})},\] \[\mathscr{L}_{2,3}(\mathbf{X})=X_{1},\qquad\mathscr{M}_{2,3}(\mathbf{Y})=\overline{\mathscr{L}_{2,3}(\overline{\mathbf{Y}})}.\] Using the new formula for \(P_{n}(X)\), we may see that for some positive constants \(\kappa_{4}=\kappa_{4}(\zeta)\), \(\kappa_{5}=\kappa_{5}(\zeta)\), and \(\kappa_{6}=\kappa_{6}(\zeta)\), for each \(n\in\mathcal{N}_{1}\) we have \[|\mathscr{L}_{2,1}(\mathbf{v}_{n})\mathscr{L}_{2,2}(\mathbf{v}_{n})\mathscr{L}_{2,3}(\mathbf{v}_{n})|=|P_{n}(\zeta)(\zeta v_{n,1}-v_{n,2})v_{n,1}|\leq\kappa_{4}\frac{|q_{2w_{n}}q_{2t_{n}}|}{|q_{2t_{n}+2u_{n}}^{2}|}\leq\kappa_{4}\frac{|q_{2t_{n}}|}{|q_{2t_{n}+2u_{n}}|}\leq\frac{\kappa_{5}}{|q_{2w_{n}}q_{2t_{n}}|^{\varepsilon}}\leq\frac{\kappa_{6}}{\|\mathbf{v}_{n}\|^{\varepsilon}},\] as well as \[|\mathscr{M}_{2,1}(\mathbf{v}_{n})\mathscr{M}_{2,2}(\mathbf{v}_{n})\mathscr{M}_{2,3}(\mathbf{v}_{n})|=|\mathscr{L}_{2,1}(\mathbf{v}_{n})\mathscr{L}_{2,2}(\mathbf{v}_{n})\mathscr{L}_{2,3}(\mathbf{v}_{n})|\,.\] Hence, there is an infinite set \(\mathcal{N}_{2}\subseteq\mathcal{N}_{1}\) and a non-zero \(\mathbf{r}=(r_{1},r_{2},r_{3})\in\mathbb{Z}[i]^{3}\) such that \[\forall n\in\mathcal{N}_{2}\quad r_{1}v_{n,1}+r_{2}v_{n,2}+r_{3}v_{n,3}=0.\] Dividing by \(q_{2w_{n}}q_{2t_{n}-1}\), we obtain \[\lim_{\begin{subarray}{c}n\to\infty\\ n\in\mathcal{N}_{2}\end{subarray}}(Q_{n}-1)(r_{3}\zeta^{2}+r_{2}\zeta+r_{1})=0.\] We may show as before that \((Q_{n})_{n\in\mathcal{N}_{2}}\) has a limit point different from \(1\), so \[r_{3}\zeta^{2}+r_{2}\zeta+r_{1}=0,\] which implies \([\mathbb{Q}(i,\zeta):\mathbb{Q}(i)]\leq 2\), contradicting \([\mathbb{Q}(i,\zeta):\mathbb{Q}(i)]\geq 3\). The proof of Theorem 1.4 is robust enough for us to obtain similar results not covered by Theorem 6.3. **Theorem 6.6**.: _Let \(\mathbf{B}=(B_{n})_{n\geq 1}\) be a sequence in \(\mathbb{Z}\)._ 1. _If_ \(\mathbf{B}\) _is non-periodic,_ \(\min\{|B_{n}|:n\in\mathbb{N}\}\geq 2\)_,_ \(\mathbf{A}=(-2,-2,\ldots)\) _is a constant sequence, and_ \(s(\mathbf{A},\mathbf{B})\in\Omega\)_, then_ \[[0;-2,1+iB_{1},-2,1+iB_{2},-2,1+iB_{3},-2,1+iB_{4},\ldots]\] _is transcendental._ 2. _Let_ \(\mathbf{A}=(a_{n})_{n\geq 1}\) _be a sequence in_ \(\{2,2i,-2,-2i\}\) _and let_ \(\mathbf{C}=s(\mathbf{A},\mathbf{B})\)_. If_ \(\mathbf{C}\) _is non-periodic, valid (that is, \(\mathbf{C}\in\Omega\)), and_ \(\operatorname{rep}(\mathbf{C})<\infty\)_, then_ \([0;a_{1},B_{1},a_{2},B_{2},a_{3},B_{3},\ldots]\) _is transcendental._ ## 7. Proof of the algorithm ### Preliminary lemmas We omit the proofs of some of the following lemmas. They amount to checking directly all the possible forms of open prototype sets and arguing recursively. **Lemma 7.1**.: _If \(n\in\mathbb{N}\) and \((a_{1},\ldots,a_{n-1},-2+i)\in\mathsf{R}(n)\), then \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}_{1}^{\circ}(-2)\) or \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}_{1}^{\circ}(-2+i)\). 
As a consequence, \((a_{1},\ldots,a_{n-1},-2+i,b)\not\in\mathsf{R}(n+1)\) implies \(\operatorname{Re}(b)\geq 1\)._ **Lemma 7.2**.: _If \(a,b\in\mathscr{D}\) satisfy \(\mathfrak{F}_{1}^{\circ}(a)\cap\mathcal{C}_{1}^{\circ}(b)\neq\varnothing\) and \(\mathcal{C}_{1}^{\circ}(b)\setminus\mathfrak{F}_{1}^{\circ}(a)\neq\varnothing\), then \(|a|=\sqrt{5}\)._ Proof.: This can be proved by considering the following cases: \(|a|=\sqrt{2}\), \(|a|=2\), \(|a|=\sqrt{5}\), and \(|a|\geq\sqrt{8}\). We can show inductively that for \(n\in\mathbb{N}\) and \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathsf{R}(n)\) with \(|a_{n}|=\sqrt{5}\) there is some \(j\in\{0,1,2,3\}\) such that \[\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}_{1}^{\circ}(2i^{j})\quad\text{ or }\quad\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}_{1}^{\circ}\left(i^{j}(1+2i)\right).\] Also, we may show that for \(n\in\mathbb{N}\) and \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathsf{R}(n)\) with \(|a_{n}|=\sqrt{8}\) there is some \(j\in\{0,1,2,3\}\) for which \[\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}^{\circ}\quad\text{ or }\quad\mathfrak{F}_{n}^{\circ}(\mathbf{a})=\mathfrak{F}_{1}^{\circ}\left(i^{j}(1+2i)\right).\] This gives the next lemma. **Lemma 7.3**.: _Let \(n\in\mathbb{N}\), \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathsf{R}(n)\), and \(b\in\mathscr{D}\). If \(\mathfrak{F}_{n}^{\circ}(\mathbf{a})\cap\mathcal{C}_{1}^{\circ}(b)\neq\varnothing\) and \(\mathcal{C}_{1}^{\circ}(b)\setminus\mathfrak{F}_{n}^{\circ}(\mathbf{a})\neq\varnothing\), then \(|a_{n}|\in\{\sqrt{5},\sqrt{8}\}\)._ **Lemma 7.4**.: _Take \(n,k\in\mathbb{N}\), \((x_{1},\ldots,x_{n})\in\mathscr{D}^{n}\), \((d_{1},\ldots,d_{k})\in\mathscr{D}^{k}\), \(m\in\mathbb{Z}\) such that \(|m|\geq 2\) and \(c\in\{m,im\}\). If \((x_{1},\ldots,x_{n},c,d_{1},\ldots,d_{k})\in\mathsf{R}(n+k+1)\), then_ \[\mathfrak{F}_{n+k+1}^{\circ}(x_{1},\ldots,x_{n},c,d_{1},\ldots,d_{k})=\mathfrak{F}_{k}^{\circ}(d_{1},\ldots,d_{k}).\] Proof.: If \(|c|\geq 3\), then \(\mathfrak{F}_{n+1}^{\circ}(x_{1},\ldots,x_{n},c)=\mathfrak{F}^{\circ}\) and, by (4) and the definition of \(T\), \[\mathfrak{F}_{n+k+1}^{\circ}(x_{1},\ldots,x_{n},c,d_{1},\ldots,d_{k})=T^{n+k+1}\left(\mathcal{C}_{n+k+1}^{\circ}(x_{1},\ldots,x_{n},c,d_{1},\ldots,d_{k})\right)=T^{k}\left(\mathfrak{F}_{n+1}^{\circ}(x_{1},\ldots,x_{n},c)\cap\mathcal{C}_{k}^{\circ}(d_{1},d_{2},\ldots,d_{k})\right)=T^{k}\left(\mathfrak{F}^{\circ}\cap\mathcal{C}_{k}^{\circ}(d_{1},\ldots,d_{k})\right)=\mathfrak{F}_{k}^{\circ}(d_{1},\ldots,d_{k}).\] When \(|c|=2\), we check the four different forms that the non-empty set \(\mathfrak{F}_{n+1}^{\circ}(x_{1},\ldots,x_{n},c)\) may take to conclude \[\mathfrak{F}_{n+2}^{\circ}(x_{1},\ldots,x_{n},c,d_{1})=\mathfrak{F}_{1}^{\circ}(d_{1}).\] For instance, if \(c=-2\), then \[\iota\left[\mathfrak{F}_{1}^{\circ}(-2)\right]=\iota[\mathfrak{F}^{\circ}]\cap\{z\in\mathbb{C}:\operatorname{Re}(z)<1\}\] and for all \(d\in\mathscr{D}\) we have either \(\tau_{d}[\mathfrak{F}^{\circ}]\subseteq\iota[\mathfrak{F}^{\circ}]\) or \(\tau_{d}[\mathfrak{F}^{\circ}]\cap\iota[\mathfrak{F}^{\circ}]=\varnothing\). From here we argue as in the case \(|c|\geq 3\). **Lemma 7.5**.: _If \(m\in\mathbb{Z}\) is such that \(|m|\geq 2\) and \(a\in\{1+im,m+i\}\), then_ \[(\mathsf{mir}_{1}\circ S)(a)=(S\circ\mathsf{mir}_{1})(a)\quad\text{ and }\quad(\mathsf{mir}_{2}\circ S)(a)=(S\circ\mathsf{mir}_{2})(a).\] **Lemma 7.6**.: _Take \(\mathbf{b}=(b_{n})_{n\geq 1}\in\Omega\). 
If \((b_{1},b_{2})\in\mathsf{Ir}(2)\), then_ \[\operatorname{Re}([b_{1};b_{2},b_{3},\ldots])=\frac{1}{2}\quad\text{ or }\quad\operatorname{Im}([b_{1};b_{2},b_{3},\ldots])=\frac{1}{2}.\] **Lemma 7.7**.: _If \(\mathsf{mir}\in\langle\mathsf{mir}_{1},\mathsf{mir}_{2}\rangle\), then \(\overline{\Lambda}\circ\mathsf{mir}=\mathsf{mir}\circ\overline{\Lambda}\)._ Proof.: The lemma follows from the continuity of \(\mathsf{mir}_{1}\) and \(\mathsf{mir}_{2}\) and from the fact that for every \(\mathsf{mir}\in\langle\mathsf{mir}_{1},\mathsf{mir}_{2}\rangle\) we have \[\mathsf{mir}\left(\langle 0;z_{1},z_{2}\rangle\right)=\langle 0;\mathsf{mir}(z_{1}),\mathsf{mir}(z_{2})\rangle\] for any \(z_{1},z_{2}\in\mathbb{C}\) such that both sides of the equality are defined. ### Proof of the algorithm Let \(\mathbf{a}=(a_{n})_{n\geq 1}\) be an irregular sequence. Since every cylinder of level \(1\) is regular, there is some \(j_{0}\in\mathbb{N}\) such that \[(a_{1},\ldots,a_{j_{0}})\in\mathsf{R}(j_{0})\text{ and }(a_{1},\ldots,a_{j_{0}},a_{j_{0}+1})\in\mathsf{Ir}(j_{0}+1).\] Note that \(\mathfrak{F}_{j_{0}}^{\circ}(a_{1},\ldots,a_{j_{0}})\neq\mathfrak{F}^{\circ}\), for otherwise \((a_{1},\ldots,a_{j_{0}},a_{j_{0}+1})\) would be regular. The open prototype set \(\mathfrak{F}_{j_{0}}^{\circ}(a_{1},\ldots,a_{j_{0}})\) can assume twelve forms and, by checking all of them, we conclude the existence of some \(m\in\mathbb{Z}\) such that \[|m|\geq 2\quad\text{ and }\quad a_{j_{0}+1}\in\left\{1+im,m+i\right\}.\] Define \(\mathbf{b}_{1}\) as in the algorithm; that is, letting \(t=1\) if \(a_{j_{0}+1}=m+i\) and \(t=2\) if \(a_{j_{0}+1}=1+im\), \[\mathbf{b}_{1}\coloneqq(a_{1},\ldots,a_{j_{0}},S(a_{j_{0}+1}),\mathsf{mir}_{t}(a_{j_{0}+2}),\mathsf{mir}_{t}(a_{j_{0}+3}),\mathsf{mir}_{t}(a_{j_{0}+4}),\ldots).\] Assume that \(|m|\geq 3\); then \[\mathfrak{F}_{j_{0}+1}^{\circ}\left(a_{1},\ldots,a_{j_{0}},S(a_{j_{0}+1})\right)=\mathfrak{F}^{\circ}\quad\text{ and }\quad(b_{1}(1),\ldots,b_{1}(j_{0}+1))\in\mathsf{R}(j_{0}+1).\] If \(a_{j_{0}+1}=1+im\), then \[\operatorname{Re}\left([a_{j_{0}+1};a_{j_{0}+2},a_{j_{0}+3},\ldots]\right)=\frac{1}{2} \tag{15}\] and, since \(z=1-\overline{z}\) holds whenever \(\operatorname{Re}(z)=\frac{1}{2}\), \[[a_{j_{0}+1};a_{j_{0}+2},a_{j_{0}+3},\ldots]=1-\overline{[a_{j_{0}+1};a_{j_{0}+2},a_{j_{0}+3},\ldots]}=\left\langle 1-\overline{a_{j_{0}+1}};\overline{a_{j_{0}+2}},\overline{a_{j_{0}+3}},\ldots\right\rangle=\left\langle S(a_{j_{0}+1});\mathsf{mir}_{2}(a_{j_{0}+2}),\mathsf{mir}_{2}(a_{j_{0}+3}),\ldots\right\rangle.\] Therefore, \[\Lambda(\mathbf{a})=[0;a_{1},\ldots,a_{j_{0}},a_{j_{0}+1},a_{j_{0}+2},\ldots]=\left\langle 0;a_{1},a_{2},\ldots,a_{j_{0}},S(a_{j_{0}+1}),\mathsf{mir}_{2}(a_{j_{0}+2}),\mathsf{mir}_{2}(a_{j_{0}+3}),\ldots\right\rangle=\left\langle 0;b_{1}(1),b_{1}(2),\ldots,b_{1}(j_{0}),b_{1}(j_{0}+1),b_{1}(j_{0}+2),b_{1}(j_{0}+3),\ldots\right\rangle=\overline{\Lambda}(\mathbf{b}_{1}).\] When \(a_{j_{0}+1}=m+i\), we have \[\operatorname{Im}\left([a_{j_{0}+1};a_{j_{0}+2},a_{j_{0}+3},\ldots]\right)=\frac{1}{2}. \tag{16}\] Then, since \(z=\overline{z}+i\) whenever \(\operatorname{Im}(z)=\frac{1}{2}\), we have \[[a_{j_{0}+1};a_{j_{0}+2},a_{j_{0}+3},\ldots]=i+\overline{[a_{j_{0}+1};a_{j_{0}+2},a_{j_{0}+3},\ldots]}=\left\langle i+\overline{a_{j_{0}+1}};\overline{a_{j_{0}+2}},\overline{a_{j_{0}+3}},\ldots\right\rangle=\left\langle S\left(a_{j_{0}+1}\right);\mathsf{mir}_{1}(a_{j_{0}+2}),\mathsf{mir}_{1}(a_{j_{0}+3}),\ldots\right\rangle\] and we also conclude \(\Lambda(\mathbf{a})=\overline{\Lambda}(\mathbf{b}_{1})\). Assume that \(|m|=2\). Since \(\mathfrak{F}\) contains the left and the bottom sides, we must have \(a_{j_{0}+1}\in\left\{1+2i,1-2i,2+i,-2+i\right\}\). If \(a_{j_{0}+1}\in\left\{1+2i,1-2i\right\}\), then (15) holds, we have \(S(a_{j_{0}+1})\in\left\{2i,-2i\right\}\) and \[(b_{1}(1),\ldots,b_{1}(j_{0}+1),b_{1}(j_{0}+2))=(a_{1},\ldots,a_{j_{0}},S(a_{j_{0}+1}),\mathsf{mir}_{2}(a_{j_{0}+2}))\in\mathsf{R}(j_{0}+2).\] The last statement is verified case by case. For instance, when \(a_{j_{0}+1}=1+2i\), then \([0;a_{j_{0}+2},a_{j_{0}+3},\ldots]\) belongs to the line segment determined by \(\zeta_{1}\) (included) and \(\frac{-1+i}{2}\) (excluded), so \(a_{j_{0}+2}\in\{-2,-2-i,-1-i\}\) and \(\mathsf{mir}_{2}(a_{j_{0}+2})\in\{2,2-i,1-i\}\) and the claim holds because \[\mathfrak{F}_{2}^{\circ}(2i,2)\neq\emptyset,\quad\mathfrak{F}_{2}^{\circ}(2i,2-i)\neq\emptyset,\quad\mathfrak{F}_{2}^{\circ}(2i,1-i)\neq\emptyset.\] Similarly, when \(a_{j_{0}+1}\in\{2+i,-2+i\}\), equation (16) holds, \(S(a_{j_{0}+1})\in\{2,-2\}\) and \[(b_{1}(1),\ldots,b_{1}(j_{0}+1),b_{1}(j_{0}+2))=(a_{1},\ldots,a_{j_{0}},S(a_{j_{0}+1}),\mathsf{mir}_{1}(a_{j_{0}+2}))\in\mathsf{R}(j_{0}+2).\] Irrespective of which case occurs when \(|m|=2\), we may conclude \(\Lambda(\mathbf{a})=\overline{\Lambda}(\mathbf{b}_{1})\) as when \(|m|\geq 3\). To sum up, the sequence \(\mathbf{b}_{1}\) has the following properties: 1. \((b_{1}(1),\ldots,b_{1}(j_{0}),b_{1}(j_{0}+1))\in\mathsf{R}(j_{0}+1)\), 2. \(\Lambda(\mathbf{a})=\langle 0;b_{1}(1),b_{1}(2),b_{1}(3),\ldots\rangle=\overline{\Lambda}(\mathbf{b}_{1})\). Assume that for some \(N\in\mathbb{N}\) we have the following: * We are able to perform \(N\) iterations of the procedure and the resulting sequence \(\mathbf{b}_{N}\) satisfies: 1. \((b_{N}(1),\ldots,b_{N}(j_{N-1}+1))\in\mathsf{R}(j_{N-1}+1)\), 2. \(\Lambda(\mathbf{a})=\langle 0;b_{N}(1),b_{N}(2),b_{N}(3),\ldots\rangle=\overline{\Lambda}(\mathbf{b}_{N})\). * Either \[\operatorname{Re}\left([a_{j_{N-1}+1};a_{j_{N-1}+2},\ldots]\right)=\frac{1}{2}\ \ \text{or}\ \ \operatorname{Im}\left([a_{j_{N-1}+1};a_{j_{N-1}+2},\ldots]\right)=\frac{1}{2}. \tag{17}\] _Remark 7.8_.: By the definition of \(\mathbf{b}_{N}\), for each \(M\in\{1,\ldots,N\}\) there exists some \(\mathsf{mir}\in\langle\mathsf{mir}_{1},\mathsf{mir}_{2}\rangle\) such that \[\forall n\in\mathbb{N}\quad b_{M}\big{(}j_{M-1}+1+n\big{)}=\mathsf{mir}\left(b_{M-1}(j_{M-1}+1+n)\right).\] As a consequence, there is some \(\mathsf{mir}^{\prime}\in\langle\mathsf{mir}_{1},\mathsf{mir}_{2}\rangle\) for which \[\forall n\in\mathbb{N}\quad b_{M}(j_{M-1}+1+n)=\mathsf{mir}^{\prime}\left(a_{j_{M-1}+1+n}\right).\] Suppose there is some \(j_{N}\in\mathbb{N}\) satisfying \[(b_{N}(1),\ldots,b_{N}(j_{N}))\in\mathsf{R}(j_{N})\quad\text{ and }\quad(b_{N}(1),\ldots,b_{N}(j_{N}+1))\not\in\mathsf{R}(j_{N}+1). \tag{18}\] Note that \(j_{N}-j_{N-1}\in\mathbb{N}\) by the induction hypothesis. 
We consider the cases \(j_{N}-j_{N-1}=1\) and \(j_{N}-j_{N-1}\geq 2\) in order to prove \[(b_{N}(1),\ldots,b_{N}(j_{N}),S(b_{N}(j_{N}+1)))\in\mathsf{R}(j_{N}+1), \tag{19}\] \[\operatorname{Re}\left([a_{j_{N}+1};a_{j_{N}+2},\ldots]\right)=\frac{1}{2}\quad\text{ or }\quad\operatorname{Im}\left([a_{j_{N}+1};a_{j_{N}+2},\ldots]\right)=\frac{1}{2}, \tag{20}\] \[\Lambda(\mathbf{a})=\overline{\Lambda}(\mathbf{b}_{N+1}). \tag{21}\] **Case \(j_{N}=j_{N-1}+1\).** Since \(b_{N}(j_{N})=b_{N}(j_{N-1}+1)=S(b_{N-1}(j_{N-1}+1))\), there is some \(m\in\mathbb{Z}\), \(|m|\geq 2\), such that \(b_{N}(j_{N})\in\{m,im,-m,-im\}\). If we had \(|m|\geq 3\), we would have \[\mathfrak{F}_{j_{N}}^{\circ}(b_{N}(1),\ldots,b_{N}(j_{N}))=\mathfrak{F}_{j_{N-1}+1}^{\circ}(b_{N-1}(1),\ldots,S(b_{N-1}(j_{N-1}+1)))=\mathfrak{F}^{\circ},\] which implies \((b_{N}(1),\ldots,b_{N}(j_{N}),b_{N}(j_{N}+1))\in\mathsf{R}(j_{N}+1)\), contradicting (18). Therefore, \(|m|=2\) and \(b_{N}(j_{N})\in\{-2,-2i,2,2i\}\). Let us further assume that \(b_{N}(j_{N})=-2\); the other cases are treated similarly. Then, we have \[b_{N-1}(j_{N})=b_{N-1}(j_{N-1}+1)\in\{-2+i,-2-i\}.\] We work under the assumption \(b_{N-1}(j_{N})=-2+i\); an analogous argument works for \(-2-i\). Under these new assumptions, (18) gives \[\mathfrak{F}_{j_{N}}^{\circ}\left(b_{N}(1),\ldots,b_{N}(j_{N})\right)=\mathfrak{F}_{1}^{\circ}(-2)\quad\text{ and }\quad\operatorname{Re}\left(b_{N}(j_{N}+1)\right)\geq 1. \tag{22}\] Note that for some \(\mathsf{mir}\in\langle\mathsf{mir}_{1},\mathsf{mir}_{2}\rangle\) we have \[b_{N-1}(j_{N-1}+1)=\mathsf{mir}(a_{j_{N}})\quad\text{ and }\quad b_{N-1}(j_{N}+1)=\mathsf{mir}(a_{j_{N}+1}). \tag{23}\] Certainly, when \(N=1\), we may consider \(\mathsf{mir}\) to be the identity map on \(\mathbb{C}\). For \(N>1\), the induction hypothesis tells us that \(j_{N-1}-j_{N-2}\in\mathbb{N}\), so \[j_{N-1}+1=j_{N-2}+(j_{N-1}-j_{N-2})+1\geq j_{N-2}+2\] and our claim follows from Remark 7.8. We now consider four possibilities for \(\mathsf{mir}\); in fact, we will show that \(\mathsf{mir}\) is the identity map. * When \(\mathsf{mir}\) is the identity map, we have \(a_{j_{N}}=-2+i\) and \(\operatorname{Re}(a_{j_{N}+1})\geq 1\). Hence, the set \(\mathfrak{F}_{j_{N}}(a_{1},\ldots,a_{j_{N}})\) is of one of the forms depicted in Figure 5. We consider each case separately. (a) In this case, we have \([0;a_{j_{N}+1},a_{j_{N}+2},a_{j_{N}+3},\ldots]=\zeta_{4}\) and (20) holds. This equality also implies \(a_{j_{N}+1}=1+2i\) and, by (23), \[(b_{N-1}(j_{N}),b_{N-1}(j_{N}+1))=(-2+i,1-2i).\] Then, by \(b_{N}(j_{N})=S(b_{N-1}(j_{N}))\) and \(b_{N}(j_{N}+1)=\mathsf{mir}_{1}(b_{N-1}(j_{N}+1))\), we have \[(b_{N}(j_{N}),b_{N}(j_{N}+1))=(-2,1+2i)\] and \(S(b_{N}(j_{N}+1))=2i\). The first condition in (22) yields \[\mathfrak{F}_{j_{N}+1}^{\circ}\left(b_{N}(1),\ldots,b_{N}(j_{N}),S(b_{N}(j_{N}+1))\right)=\mathfrak{F}_{1}(2i)\neq\emptyset\] and (19) follows. (b) In this case, from \(a_{j_{N}}=-2+i\) and (17), we get \(\operatorname{Im}\left([a_{j_{N-1}+1};a_{j_{N-1}+2},\ldots]\right)=\frac{1}{2}\). In view of \[\left\{w\in\mathbb{C}:\operatorname{Im}(w)=\frac{1}{2}\right\}\cap\tau_{a_{j_{N}}}\left(\mathfrak{F}_{j_{N}}(a_{1},\ldots,a_{j_{N}})\right)=\left\{\tau_{a_{j_{N}}}(\zeta_{4})\right\},\] we obtain \([0;a_{j_{N}+1},a_{j_{N}+2},\ldots]=\zeta_{4}\). The argument in (a) ensures (19) and (20). (c) As in case (b), we must have \([0;a_{j_{N}+1},a_{j_{N}+2},\ldots]=\zeta_{4}\) and we now argue as in (a). (d) 
We apply the complex inversion on \(\mathfrak{F}_{j_{N}}(a_{1},\ldots,a_{j_{N}})\) to conclude \(a_{j_{N}+1}\in\{-1+i,-1+2i,2i,1+2i\}\) and, since \(\operatorname{Re}(a_{j_{N}+1})\geq 1\), we have \(a_{j_{N}+1}=1+2i\). Because of \[\iota\left[\mathfrak{F}_{j_{N}}(a_{1},\ldots,a_{j_{N}})\cap\tau_{a_{j_{N}+1}}[ \mathfrak{F}]\right]\cap\tau_{a_{j_{N}}}\left(\mathfrak{F}_{j_{N}}(a_{1},\ldots,a_{j_ {N}})\right)=\left\{\tau_{a_{j_{N}}}(\zeta_{2})\right\},\] we may proceed as in case (b) to conclude (19) and the equality for the real part in (20). The discussion leading to (22) actually tells us that \(b_{N-1}(j_{N-1}+n)=a_{j_{N-1}+n}\) for all \(n\in\mathbb{N}\); hence, by the basis of induction, \[\overline{\Lambda}\left((b_{N}(j_{N-1}+n))_{n\geq 1}\right)=\Lambda\left((a_{j_{N-1 }+n})_{n\geq 1}\right).\] Lastly, we conclude \[\overline{\Lambda}\left((b_{N}(j_{N-1}+n))_{n\geq 1}\right)=\overline{\Lambda} \left((b_{N+1}(j_{N-1}+n))_{n\geq 1}\right)\] arguing as in the case \(N=0\) and applying Lemma 7.7. * If \(\mathsf{mir}=\mathsf{mir}_{1}\), then \(a_{j_{N}}=-2-i\) but (17) cannot hold. * When \(\mathsf{mir}=\mathsf{mir}_{2}\), then we have \(a_{j_{N}}=2+i\) and \(\mathrm{Re}(a_{j_{N}+1})\leq-1\). Also, in (20), we have the equality for the imaginary part. We consider the cases shown in Figure 6. (a) This cannot happen, by virtue of (20). (b) In this configuration, (20) would imply \([0;a_{j_{N}+1},a_{j_{N}+2},\ldots]=\zeta_{3}\), so we would have \(a_{j_{N}+1}=2i\), contradicting \(\mathrm{Re}(a_{j_{N}+1})\leq-1\). (c) Same as (b). (d) In this case, we would have \(a_{j_{N}+1}\in\{2i,1+2i,1+i\}\), which is incompatible with \(\mathrm{Re}(a_{j_{N}+1})\leq-1\). * When \(\mathsf{mir}=\mathsf{mir}_{1}\mathsf{mir}_{2}\), then \(a_{j_{N}}=2-i\) and \(\mathrm{Re}(a_{j_{N}+1})\leq-1\). We only have two possibilities for \(\mathfrak{F}_{j_{N}}(a_{1},\ldots,a_{j_{N}})\) (see Figure 7). (a) We have \[[0;a_{j_{N}+1},a_{j_{N}+2},a_{j_{N}+3},\ldots]=\zeta_{1}\quad\text{ and }\quad\mathrm{Im}([a_{j_{N}+1};a_{j_{N}+2},a_{j_{N}+3},\ldots])=\frac{1}{2}.\] (b) This configuration is incompatible with \(\mathrm{Re}(a_{j_{N}+1})\leq-1\). **Case \(j_{N}\geq j_{N-1}+2\).** By construction, for some \(m\in\mathbb{Z}\) with \(|m|\geq 2\) we have \[b_{N}(j_{N-1}+1)=S\left(b_{N-1}(j_{N-1}+1)\right)\in\left\{m,im\right\}.\] Therefore, by Lemma 7.4 and \(j_{N-1}+2\leq j_{N}\), \[\mathfrak{F}_{j_{N}}^{\circ}\left(b_{N}(1),\ldots,b_{N}(j_{N}) \right) =\mathfrak{F}_{j_{N}-j_{N-1}-1}^{\circ}(b_{N}(j_{N-1}+2),\ldots,b_{N}(j_{N}))\neq\emptyset,\] \[\mathfrak{F}_{j_{N}+1}^{\circ}\left(b_{N}(1),\ldots,b_{N}(j_{N}+1 )\right) =\mathfrak{F}_{j_{N}-j_{N-1}}^{\circ}\left(b_{N}(j_{N-1}+2),\ldots,b_{N}(j_{N }),b_{N}(j_{N}+1)\right)=\emptyset.\] Moreover, by Remark 7.8, we may pick \(\mathsf{mir}\in\left\langle\mathsf{mir}_{1},\mathsf{mir}_{2}\right\rangle\) such that \[\forall n\in\mathbb{N}\quad b_{N}(j_{N-1}+1+n)=\mathsf{mir}\left(a_{j_{N-1}+1 +n}\right).\] As a consequence, recalling the definition of \(j_{N}\) and Proposition 3.4, we have \[(a_{j_{N-1}+2},\ldots,a_{j_{N}})\in\mathsf{R}(j_{N}-j_{N-1}-1)\ \text{ and }\ (a_{j_{N-1}+2},\ldots,a_{j_{N}+1})\in\mathsf{Ir}(j_{N}-j_{N-1}).\] Applying the case \(N=0\) we get \[(a_{j_{N-1}+2},\ldots,a_{j_{N}},S(a_{j_{N}+1}))\in\mathsf{R}(j_{N}-j_{N-1})\] and, by Lemma 7.6, we conclude (19). The case \(N=0\) also implies (20). Lastly, let \(\mathbf{d}\) be the sequence in \(\mathscr{D}\) obtained from the first iteration of the algorithm applied on \(\mathbf{a}^{\prime}:=(a_{n})_{n\geq j_{N-1}+2}\).
Then, \(\Lambda(\mathbf{a}^{\prime})=\overline{\Lambda}(\mathbf{d})\). If \(\mathbf{b}^{\prime}:=(b_{N}(n))_{n\geq j_{N-1}+2}\) and \(\mathbf{e}:=(b_{N+1}(n))_{n\geq j_{N-1}+2}\), then \(\mathsf{mir}(\mathbf{a}^{\prime})=\mathbf{b}^{\prime}\), \(\mathsf{mir}(\mathbf{d})=\mathbf{e}\), and an application of Lemma 7.7 yields \[\Lambda(\mathsf{mir}(\mathbf{a}^{\prime}))=\mathsf{mir}\left(\Lambda(\mathbf{a} ^{\prime})\right)=\mathsf{mir}\left(\overline{\Lambda}(\mathbf{d})\right)= \overline{\Lambda}(\mathsf{mir}(\mathbf{d}))=\overline{\Lambda}(\mathbf{e}).\] Therefore, (21) is true.
2306.05201
Deep learning the hierarchy of steering measurement settings of qubit-pair states
Quantum steering has attracted increasing research attention because of its fundamental importance, as well as its applications in quantum information science. Here we leverage the power of the deep learning model to infer the steerability of quantum states with specific numbers of measurement settings, which form a hierarchical structure. A computational protocol consisting of iterative tests is constructed to overcome the optimization, meanwhile, generating the necessary training data. According to the responses of the well-trained models to the different physics-driven features encoding the states to be recognized, we can numerically conclude that the most compact characterization of the Alice-to-Bob steerability is Alice's regularly aligned steering ellipsoid; whereas Bob's ellipsoid is irrelevant. We have also provided an explanation to this result with the one-way stochastic local operations and classical communication. Additionally, our approach is versatile in revealing further insights into the hierarchical structure of quantum steering and detecting the hidden steerability.
Hong-Ming Wang, Huan-Yu Ku, Jie-Yien Lin, Hong-Bin Chen
2023-06-08T13:55:04Z
http://arxiv.org/abs/2306.05201v2
# Deep learning the hierarchy of steering measurement settings of qubit-pair states ###### Abstract Quantum steering has attracted increasing research attention because of its fundamental importance, as well as its applications in quantum information science. Despite the well-established characterization of the steerability of assemblages, it remains unclear how to detect the degree of steerability even for an arbitrary qubit-pair state due to the cumbersome optimization over all possible incompatible measurements. Here we leverage the power of the deep learning models to infer the hierarchy of steering measurement settings. A computational protocol consisting of iterative tests is constructed to overcome the optimization, meanwhile, generating the necessary training data. According to the responses of the well-trained models to the different physics-driven features encoding the states to be recognized, we can conclude that the most compact characterization of the Alice-to-Bob steerability is Alice's regularly aligned steering ellipsoid; whereas Bob's ellipsoid is irrelevant. Additionally, our approach is versatile in revealing further insights into the hierarchical structure of quantum steering and detecting the hidden steerability. ## I Introduction The quantum correlations between spatially separated parties are widely acknowledged as a nontrivial resource showing advantages over the classical counterparts for quantum foundations and quantum information [1; 2; 3; 4]. Among the family of quantum correlations, Bell nonlocality [5] is arguably the most famous one due to the historical debate of the EPR paradox [6]. A nonlocal state can be certified by the violation of Bell's inequality [7], and can be explicitly applied to the fully device-independent (DI) quantum information tasks [8; 9; 10; 11; 12]. Apart from the fully DI scenario, the constraints can be relaxed to the one-sided DI quantum information tasks [13; 14; 15], in which one party of the two-party system (say Bob) is steered by the other party's (say Alice) untrusted measurements. In this Alice-to-Bob scenario, one relies on a steerable correlation, also referred to as a steerable assemblage, as a resource to achieve quantum advantages over an unsteerable assemblage admitting the local-hidden-state (LHS) models [16; 17; 18]. Such a quantum resource requires a steerable state with an incompatible measurement [19; 20; 21] such that the generated assemblage is incompatible with the LHS models. There have been many approaches to test the incompatibility of a given assemblage with respect to the LHS models [22; 23; 24; 25; 26; 27]; in contrast, verifying the steerability of a quantum state remains a cumbersome task. To that aim, one has to optimize over all possible incompatible measurements, e.g., (1) the number \(n\) of observables and (2) which observables are to be measured on a given state. With this construction, the classification of quantum states according to the number \(n\) of observables required to exhibit the steerability forms a hierarchy, which we refer to as the hierarchy of steering measurement settings. There are some attempts to tackle this difficulty by deriving feasible criteria. For example, the criterion derived by Bowles, Hirsch, Quintino, and Brunner (BHQB) [28] is a sufficient condition for verifying the qubit-pair unsteerability.
Furthermore, the maximal violation of the steering inequality, derived by Cavalcanti, Jones, Wiseman, and Reid (CJWR) [29], by a qubit-pair state under 2 and 3 observables has been utilized as a quantifier of qubit-pair steerability [30]. However, both conditions are not strong enough to fully discriminate the (un)steerability of qubit-pair states. An explicit experimental examination of this insufficiency has been reported recently [31]. On the other hand, inspired by the widespread applications of learning algorithms in quantum physics, e.g., quantum dynamics [32; 33; 34], quantum computing [35; 36], quantum chemistry [37; 38; 39], and quantum communication [40; 41] (see also the recent reviews [42; 43; 44]), the machine-learning approach has also been applied to verify the quantum correlations [45; 46; 47; 48; 49]. Although the idea of detecting the steerability of qubit-pair states with machine learning has been implemented [48], that approach hardly reveals the physical insights, e.g., the hierarchy of steering measurement settings and the characterization of the steerability of quantum states, from the prediction. Here we leverage the supervised deep learning (DL) algorithm to circumvent the cumbersome optimization and infer the fine-grained hierarchy of steering measurement settings. Noteworthily, due to the aforementioned insufficiency of the theoretical criteria, exactly verifying the steerability of a given quantum state is difficult, rendering the actual boundaries of the hierarchy undetermined. Such a problem, for which the ground truth (GT) is unavailable or hard to determine, is said to be GT-deficient. This hinders the generation of reliable training data, as well as the training of a supervised DL algorithm. To overcome this insufficiency, we construct a computational protocol by iteratively testing the steerability of a given qubit-pair state with the semidefinite program (SDP) under \(n\) random observables, designating the state to be \(n\)-measurement steerable (\(n\)-MS). This constitutes the necessary data set for training a DL algorithm. Physical intuition on the steerability allows us to reduce the parameters encoding a qubit-pair state in the data set. From the counterintuitive responses of the well-trained DL models to the physics-driven parameters, we can acquire a compact and precise way to characterize the steerability. Our results suggest that, in contrast to the entanglement, the characterization of the steerability from Alice to Bob is dominantly determined by Alice's regularly aligned ellipsoid; whereas Bob's ellipsoid is irrelevant. Recall that Alice's steering ellipsoid [50] is defined to be the set of all states that can be steered to from Bob, and vice versa. These operational definitions of quantum steering and the steering ellipsoid render our results rather counterintuitive. Our results, on the one hand, unravel a new direction in the characterization of steerability; on the other hand, they also underpin the capability of the DL approaches to shed light on a new route toward physics that remains obscure, rather than serving merely as a substitute for cumbersome computational tasks. For a comprehensive visualization, we have also applied the well-trained DL models to depict the hierarchy of two types of states generalized from the Werner state. Our DL models are capable of identifying the 4-MS states, i.e., the states requiring at least four observables to exhibit steerability on Bob, with high precision.
Noteworthily, no theoretical criteria in the literature are capable of efficiently determining the 4-MS states. Furthermore, with a different way of parameterizing the qubit-pair states, our DL models can also be applied to characterize the hidden quantum steerability of generalized Werner states, whereby an unsteerable state becomes steerable after a filtering operation. ## II Results ### Hierarchy of steering measurement settings We first recall the notion of quantum steering. Consider two parties, denoted as Alice and Bob, sharing an unknown quantum state \(\rho^{\text{AB}}\). Alice performs a measurement, described by the projective measurement \(M_{a|x}\) satisfying \(M_{a|x}M_{a^{\prime}|x}=\delta_{a,a^{\prime}}M_{a|x}\ \forall\ x\) and \(\sum_{a}M_{a|x}=I_{d}\), where \(x=0,\ldots,n-1\) and \(a=0,\ldots,o-1\) represent the indices of observables and outcomes of measurements, respectively. With one-way classical communication, Bob will obtain a set of (subnormalized) states, referred to as the assemblage \(\{\sigma_{a|x}\}_{a,x}\), with \(\sigma_{a|x}=\text{Tr}_{\text{A}}[(M_{a|x}\otimes I_{d})\rho^{\text{AB}}]\), containing both the information of the probability \(p(a|x)=\text{Tr}(\sigma_{a|x})\) and Bob's quantum states \(\rho^{\text{B}}_{a|x}=\sigma_{a|x}/\text{Tr}(\sigma_{a|x})\). An assemblage is defined to be unsteerable if it admits a classical description of the local-hidden-state (LHS) model, namely, \[\sigma_{a|x}=\sum_{\lambda}p(\lambda)p(a|x,\lambda)\rho^{\prime}_{\lambda}, \ \forall\ a,x \tag{1}\] where \(\{\rho^{\prime}_{\lambda}\}\) is a set of preexisting quantum states, \(p(\lambda)\) is a probability distribution, and \(p(a|x,\lambda)\) denotes the postprocessing of Alice under the hidden variable \(\lambda\). For a given assemblage, deciding whether it admits an LHS model is a semidefinite program (SDP) [17]. Whenever feasible solutions of Eq. (1) can be found, the given assemblage admits an LHS model and, by definition, is unsteerable; otherwise, it is steerable. The detailed constructions of the SDP can be found in Ref. [17]. Although an assemblage is generated by a quantum state, we stress that it is the assemblage that is conceived to be the intrinsic resource in quantum steering, rather than a state [51]. Throughout this work, we only consider the steerability from Alice to Bob, unless stated otherwise. In contrast to the efficient verification of the steerability of an assemblage with an SDP, it is more difficult to verify the steerability of a given bipartite quantum state, since one has to test all possible incompatible measurements. More specifically, to determine the least number of observables generating steerable assemblages, Alice must iteratively test 2 random observables until she successfully steers Bob; she will measure one more observable once she concludes that she has no way to steer Bob with 2 observables, and so on. A quantum state is defined to be \(n\)-measurement steerable (\(n\)-MS) if the state requires at least \(n\) observables from Alice to exhibit steerability on Bob. The classification of states in terms of \(n\)-MS naturally forms a hierarchical structure that we define as the hierarchy of steering measurement settings. We remark that a similar issue has been raised in Bell nonlocality as well [5]. Even for the simplest case (the qubit-pair Werner state), the exact boundary between the local and nonlocal regimes is still vague [52].
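Since deciding whether a given assemblage admits an LHS model is an SDP, the test can be sketched compactly. The following minimal Python sketch, for two-outcome measurements, enumerates the deterministic hidden-variable strategies in the standard way; the use of `cvxpy`, and the helper name `is_unsteerable`, are assumptions of this illustration rather than the construction of Ref. [17] verbatim.

```python
# A minimal sketch of the LHS feasibility SDP of Eq. (1) for two-outcome
# measurements; cvxpy is an assumed tool choice, not the paper's own code.
import itertools
import numpy as np
import cvxpy as cp

def is_unsteerable(assemblage, n_settings):
    """assemblage[(a, x)] holds the 2x2 subnormalized matrix sigma_{a|x}."""
    # Each hidden variable lambda is a deterministic assignment of an outcome
    # a to every setting x; the response p(a|x, lambda) is absorbed into it.
    strategies = list(itertools.product((0, 1), repeat=n_settings))
    sigma_lam = [cp.Variable((2, 2), hermitian=True) for _ in strategies]
    constraints = [s >> 0 for s in sigma_lam]
    for x in range(n_settings):
        for a in (0, 1):
            model = sum(s for s, lam in zip(sigma_lam, strategies)
                        if lam[x] == a)
            constraints.append(model == assemblage[(a, x)])
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    # Feasible -> an LHS model exists -> the assemblage is unsteerable.
    return problem.status == cp.OPTIMAL
```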
There are several feasible criteria in the literature discriminating the (un)steerability of a qubit-pair state \(\rho^{\text{AB}}\). The BHQB criterion [28] is a sufficient condition for certifying the qubit-pair unsteerability; however, it is not strong enough to detect all unsteerable states. Additionally, two criteria of 2- and 3-MS are derived based on the CJWR steering inequality [29; 30]. These 2- and 3-MS criteria are hard to generalize to the cases of \(n\geq 4\); moreover, they have a limited performance in detecting all 2- and 3-MS states. This insufficiency has been experimentally verified recently [31]. A brief review of these criteria is presented in Supplementary Note 1. In light of the insufficiencies of the aforementioned theoretical works [28; 29; 30], a computable approach to determine the hierarchy is desirable. We propose to leverage the compelling approach of supervised deep learning (DL) algorithms, which are clever at extracting hidden patterns and relationships within a huge amount of data. ### Training data and SDP iteration In training a supervised DL model, it is necessary to generate a sufficient amount of labelled training data. Each training datum \((f,l)\) consists of the feature \(f\in\mathbb{R}^{k}\) of length \(k\) encoding the object to be recognized and a label \(l\in\mathbb{R}\) indicating the ground truth (GT) associated with the feature \(f\). The goal of a DL model is to extract the relationship between \(f\) and \(l\) among the training data set \(\{(f,l)\}\) and predict an \(l^{\prime}\) for an \(f^{\prime}\) never seen by the model. However, the theoretical insufficiencies discussed above prevent us from generating labelled training data efficiently, rendering the problem GT-deficient. To overcome the GT-deficiency, we construct a computational protocol (Fig. 1**a**) to generate the necessary labelled training data. The first stage of the protocol is a pre-filter consisting of the positive partial transpose (PPT) [53, 54] and the BHQB criteria, which are used to efficiently capture the separable (SEP) and some unsteerable (UNS) states from the randomly generated qubit-pair density matrices fed into the protocol. After passing through the pre-filter, the states are entangled with indeterminate (IND) steerability. The next stage is a fine-grained iterative SDP test determining the hierarchy. The detailed procedure is exemplified in Fig. 1**b** for the cases of 2- and 3-MS. At each level of the hierarchy, the IND states will be iteratively tested by the SDP under \(n\) randomly chosen observables \(x=\vec{u}\cdot\hat{\sigma}\) by Alice, where \(\vec{u}\) is a three-dimensional unit vector. Then the states will be designated a label \(l=n\)-MS once the protocol successfully detects the steerability of the assemblage; otherwise, it will terminate after sufficiently many failures and pass the IND states to the next level. Note that the iterative test will be repeated sufficiently many times to implement the optimization on Alice. Further details and statistical analyses are presented in Methods and Supplementary Note 2. Due to the limitation on our computational resource, here we have worked out a maximal \(n=4\). To further promote the capability of the DL model, the final stage is the coarse-grained iteration, where a test with \(n=12\) observables over 100 iterations has been carried out (Fig. 1**a**). For those that fail in this stage, we cannot assign a reliable label. They are dropped without being appended to the training data.
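A rough sketch of the fine-grained SDP-iteration stage is given below. It assumes the `is_unsteerable()` helper from the previous sketch, draws the random Bloch directions from a normalized Gaussian (an implementation assumption), and uses the iteration budgets quoted above.

```python
# A rough sketch of the SDP-iteration stage of Fig. 1b, reusing the
# is_unsteerable() helper sketched earlier (an assumption of this example).
import numpy as np

PAULI = (np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]]))

def random_assemblage(rho, n_settings, rng):
    """Assemblage from projective measurements along random Bloch axes."""
    assemblage = {}
    for x in range(n_settings):
        u = rng.normal(size=3)
        u /= np.linalg.norm(u)                      # random unit vector u
        obs = sum(ui * s for ui, s in zip(u, PAULI))
        for a, sign in enumerate((+1, -1)):
            proj = (np.eye(2) + sign * obs) / 2     # projector M_{a|x}
            prod = (np.kron(proj, np.eye(2)) @ rho).reshape(2, 2, 2, 2)
            assemblage[(a, x)] = np.einsum('ikil->kl', prod)  # Tr_A[...]
    return assemblage

def label_n_ms(rho, budgets=((2, 900), (3, 27_000), (4, 810_000)), seed=0):
    rng = np.random.default_rng(seed)
    for n, iters in budgets:
        for _ in range(iters):
            if not is_unsteerable(random_assemblage(rho, n, rng), n):
                return f"{n}-MS"
    return "IND"   # handed over to the coarse-grained stage
```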
### Feature engineering Before feeding into the DL model, it is vital to properly encode the object to be recognized into a feature \(f\) of length \(k\). Since feature engineering reflects how the DL model recognizes the object, it has a significant influence on the behavior of the DL model. Generically, the reduction of the feature length will promote efficiency during training; nevertheless, it allows the DL model to acquire only limited information on the object, leading to a reduced prediction accuracy. Here we consider five types of feature of different lengths, which are reduced based on the physical insights into quantum steering. In contrast to the above generic intuition, we will see that the physics-driven reduction of the feature length is helpful in improving prediction accuracy. This helps us to extract informative parameters relevant to quantum steering and to discover the physics behind the data. Further details are presented in Methods. Figure 1: **Computational protocol generating the labelled training data.****a** To determine a reliable label \(l\) as the GT of the hierarchy, each state is fed into a computational protocol, which consists of three stages of the pre-filter, the SDP iteration, and the coarse-grained iteration. **b** In the stage of the SDP iteration, the hierarchy can be determined. Each indeterminate input state will be iteratively tested by the SDP, under Alice's \(n\) random observables \(x=\vec{u}\cdot\hat{\sigma}\). Once the steerability is detected by the SDP for a certain set of \(x\)'s, then the state is \(n\)-MS; otherwise, it will be repeatedly tested for a sufficiently long iteration and passed to the next level. The iteration times for 2-, 3-, and 4-MS are 900; 27,000; and 810,000, respectively. Recall that a qubit-pair state \(\rho^{\rm AB}=\frac{1}{4}\sum_{ij}\Theta_{ij}\sigma_{i}\otimes\sigma_{j}\) can be fully described by \(\Theta_{ij}=\text{Tr}(\rho^{\rm AB}\sigma_{i}\otimes\sigma_{j})\), where \(\sigma_{0}=I_{d}\) and \(\sigma_{i}\) denote the three Pauli operators. Consequently, neglecting the triviality \(\Theta_{00}=1\), \(\rho^{\rm AB}\) is encoded into a feature of length \(k=15\) consisting of \(\Theta_{ij}\), denoted as General-15. Moreover, the state can be transformed to \(\tilde{\rho}^{\rm AB}=[I_{d}\otimes(2\rho^{\rm B})^{-1/2}]\rho^{\rm AB}[I_{d} \otimes(2\rho^{\rm B})^{-1/2}]\) through the one-way stochastic local operations and classical communication (1W-SLOCC) on Bob, which preserves the steerability from Alice to Bob [55, 56, 57, 28]. It is critical to note that, since Bob's local state of \(\tilde{\rho}^{\rm AB}\) is maximally mixed, we only need a feature of length \(k=12\) to encode a qubit-pair state, denoted as SLOCC-12. Additionally, a particularly inspiring technique for studying quantum correlations is the quantum steering ellipsoid [50], which is defined as the set of all attainable local states generated by measurements from the other side. For instance, if Bob performs all possible measurements, Alice's (normalized) local states in a Bloch sphere form an ellipsoid. The mathematical description of Alice's ellipsoid requires a \(3\times 3\) symmetric matrix \(Q_{\rm A}\) and its center \(\vec{c}_{\rm A}\in\mathbb{R}^{3}\), giving rise to a feature of length \(k=9\), which we designate as ELLA-9; similarly, ELLB-9 for Bob.
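As an illustrative sketch (not the paper's own code), the General-15 encoding can be computed directly from its definition:

```python
# A small helper computing Theta_ij = Tr(rho sigma_i x sigma_j); dropping the
# trivial Theta_00 = 1 yields the General-15 feature described above.
import numpy as np

SIGMA4 = (np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]]))

def theta_matrix(rho):
    """4x4 real correlation matrix of a two-qubit density matrix."""
    return np.array([[np.trace(rho @ np.kron(si, sj)).real
                      for sj in SIGMA4] for si in SIGMA4])

def general_15(rho):
    return np.delete(theta_matrix(rho).flatten(), 0)  # drop Theta_00 = 1
```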
Furthermore, it has been pointed out [50] that both \(\rho^{\rm AB}\) and \(\tilde{\rho}^{\rm AB}\) have the same Alice's ellipsoid, underpinning the adequacy of ELLA-9 in the characterization of quantum steering. Generically, the ellipsoids are obliquely aligned. We can further rotate the Bloch sphere such that the three semiaxes of Alice's ellipsoid are regularly aligned with the computational bases by applying an appropriate local unitary transformation on both sides. Note that the local unitary transformations preserve the steerability of a given quantum state. This corresponds to the elimination of the information on the orientation of the ellipsoid by diagonalizing \(Q_{\rm A}\), leading to the most compact feature of length \(k=6\), denoted as LUTA-6. According to our findings below, here we merely consider Alice's regularly aligned ellipsoid. ### Verification of the accuracy of the models We have trained five models, one for each feature accordingly, with the data output from the protocol. We then proceed to verify the accuracy of the well-trained models. The testing data, of size 10,300, are also generated by the protocol and are distinct from those in the training data set. The details on the grouping of the generated data into training and test sets are presented in Supplementary Note 3; while the structures of the well-trained models are described in Supplementary Note 4. It will become clear that, from the responses of the models to the physics-driven features, we can figure out the most informative parameters relevant to the characterization of quantum steering. More specifically, a counterintuitive conclusion can be drawn that the steerability from Alice to Bob is dominantly characterized by Alice's regularly aligned ellipsoid. Figure 2 shows the accuracy of the models trained with each feature. It can be seen that, except for the ELLB-9 model, the overall accuracy gets improved significantly along with the reduced features. Particularly, the LUTA-6 model almost precisely reproduces all the labels of the testing data. This, on the one hand, implies that the six parameters of the LUTA-6 feature dominantly characterize the steerability from Alice to Bob. On the other hand, even if the General-15 feature does contain complete information on the input states \(\rho^{\rm AB}\), the redundancy in the feature irrelevant to the steerability expands the model space. This hinders the trainability of the General-15 model, giving rise to a limited accuracy. We then take a further closer look into the procedure of feature engineering to reveal deeper physical insights into the behavior of the models. The feature reduction begins with the 1W-SLOCC transformation \(\rho^{\rm AB}\mapsto\tilde{\rho}^{\rm AB}=[I_{d}\otimes(2\rho^{\rm B})^{-1/2}] \rho^{\rm AB}[I_{d}\otimes(2\rho^{\rm B})^{-1/2}]\), which is shown to preserve the steerability from Alice to Bob [56, 57, 28, 25], leading to a more compact SLOCC-12 model with improved accuracy. The aforementioned technique of the quantum steering ellipsoid [50] characterizes the entanglement in a geometrical manner; whereas it remains vague how this approach can characterize quantum steering. In Fig. 2**c** (**d**), we show the results of the ELLA(B)-9 models, encoded based on the ellipsoids of Alice (Bob), respectively. On the one hand, the ELLA-9 model exhibits further improved accuracy over the SLOCC-12, indicating an even more compact encoding scheme with less redundancy. On the other hand, the failure of ELLB-9 is seemingly counterintuitive.
Since Bob's ellipsoid is defined to be the set of states \(\rho^{\rm B}_{a|x}\) that can be steered to from Alice, an intuitive conjecture arises naturally that Bob's ellipsoid should be adequate to characterize the steerability from Alice to Bob. There are also attempts in the literature to connect both concepts, i.e., the semiaxes of Bob's ellipsoid can be used to determine Alice's optimal measurements [59, 60, 61]. Although the accuracy of the ELLB-9 model is unsatisfactory, it provides information about hidden steerability, which will be discussed in the next section. In principle, the physics behind the behaviors of the ELLA-9 and the ELLB-9 models can both be understood from the critical properties of the 1W-SLOCC. Since any 1W-SLOCC on Bob can neither alter Alice's steering ellipsoid [50] nor change the steerability from Alice to Bob [57, 58], the ELLA-9 feature should contain all the necessary information to certify the steerability. Consequently, we place particular emphasis on the conclusion that it is Alice's steering ellipsoid, instead of Bob's ellipsoid, that is adequate to characterize the steerability from Alice to Bob, underpinned by the further improved accuracy of the ELLA-9 model as well. In contrast, the 1W-SLOCCs on Alice do change the steerability from Alice to Bob. However, the ELLB-9 feature of a state is invariant under the 1W-SLOCCs on Alice. Namely, two states related to each other via a 1W-SLOCC on Alice should share the same ELLB-9 encoding despite having different steerability, rendering the label \(l\) given by the protocol unreliable. This severely suppresses the accuracy of the ELLB-9 model. We will also demonstrate the effects of the 1W-SLOCCs on Alice in detecting the hidden steerability. Finally, the LUTA-6 feature is inherited from Alice's ellipsoid by rotating the Bloch sphere such that it is regularly aligned, leading to the most compact feature and a highly precise prediction shown in Fig. 2**e**. This highlights that the orientation of Alice's ellipsoid is irrelevant in the characterization of quantum steering. Remarkably, this also explains the insufficiency of the CJWR-based criteria, which are used to detect the steerability of a qubit-pair state with 2 and 3 measurement settings [30]. Specifically, from the above feature engineering, the steerability of a state \(\rho^{\rm AB}=\frac{1}{4}\sum_{ij}\Theta_{ij}\sigma_{i}\otimes\sigma_{j}\) is invariant under the cascaded transformations \[\rho^{\rm AB}\mapsto\tilde{\rho}^{\rm AB}\mapsto\rho^{\prime\rm AB}=(U^{\rm A }\otimes U^{\rm B})\tilde{\rho}^{\rm AB}(U^{\rm A}\otimes U^{\rm B})^{\dagger}, \tag{2}\] where \(U^{\rm A}\otimes U^{\rm B}\) is an appropriate local unitary operator diagonalizing \(Q_{\rm A}\). The CJWR-based criteria only take the eigenvalues of \(Q_{\rm A}\) into account and neglect \(\vec{c}_{\rm A}\), which we have shown to be critical as well in characterizing the steerability. In this sense, the CJWR-based criteria are considered to be symmetric. Further detailed behaviors of the models, expressed in terms of the confusion matrices, are shown in Supplementary Note 5.
Figure 2: **The accuracy of the well-trained models on the testing data**. The accuracy for each label (histogram) on the 10,300 testing data along with the different features is shown in panels **a** to **e**. Crucially, except for the ELLB-9 model, the overall performance of the models is significantly improved, indicating an increasingly compact, and informative, encoding scheme. Particularly, the LUTA-6 (Alice's regularly aligned ellipsoid) model is capable of predicting the hierarchy of steering measurement settings at a very high overall accuracy of 99.2 %. On the other hand, the ELLB-9 (Bob's ellipsoid) model shows an unsatisfactory accuracy even though this feature encodes all of Bob's states that can be steered to from Alice. From these responses of the models to different features, we can pinpoint that the adequate characterization of the steerability from Alice to Bob should be given by Alice's regularly aligned ellipsoid.

### Hierarchy predicted by the models and hidden steerability To further showcase the merit of the well-trained models, and meanwhile to facilitate the visualization of the predictions, we apply the models to predict the hierarchies of two different types of quantum states generalized from the standard Werner state; namely, \[\begin{split}\rho_{1}(p,\xi)=& p|\Psi_{\xi}\rangle \langle\Psi_{\xi}|+(1-p)\frac{I_{d}}{2}\otimes\frac{I_{d}}{2}\\ \rho_{2}(p,\xi)=& p|\Psi_{\xi}\rangle\langle\Psi_{ \xi}|+(1-p)\rho^{\rm A}\otimes\frac{I_{d}}{2}.\end{split} \tag{3}\] where \(|\Psi_{\xi}\rangle=\cos\xi|00\rangle+\sin\xi|11\rangle\), \(0\leq\xi\leq\pi/2\), \(p\in[0,1]\), and \(\rho^{\rm A}={\rm Tr}_{\rm B}|\Psi_{\xi}\rangle\langle\Psi_{\xi}|\). Here we use the subscripts in \(\rho_{1}\) and \(\rho_{2}\) to denote the two different types of noise added to the pure entangled state \(|\Psi_{\xi}\rangle\). In view of the accuracy presented in Fig. 2, we show the hierarchies predicted by the LUTA-6 and the ELLB-9 models (starry curves) in Fig. 3 for comparison. Additionally, to evaluate the performance of the models on these two types of states, we also determine their hierarchies (colored regions) with the protocol, which serve as the GT to be compared with the model predictions. Figures 3**a** and **b** show the predictions by the LUTA-6 model for the type-1 and type-2 states, respectively. As expected, the predicted boundaries between 2- and 3-MS, as well as between 3- and 4-MS, are in good agreement with the fine-grained hierarchies given by the protocol; particularly, the prediction for type-1 states not only coincides with the GT, but also deviates from those given by the CJWR-based criteria (dotted and dot-dashed curves). This discrepancy is in line with the recent experimental report [31], wherein merely 2 or 3 Pauli measurements have been tested for the type-1 states. Furthermore, the green regions of 4-MS states, which are hardly detected by the existing theoretical approaches in the literature, have also been predicted by the LUTA-6 model with high precision. According to our results, the range of the 4-MS standard Werner state \(\rho_{\rm W}(p)=\rho_{1}(p,\pi/4)\) is \(1/\sqrt{3}>p>0.555\). Inferring from the hierarchical structure shown in Fig. 3, it is reasonable to conjecture that each \(n\)-MS band would be narrower with increasing \(n\). In addition, we also show the potential STE-UNS boundaries (purple starry curves) suggested by the models, lying within the blank region of IND states sandwiched between the 4-MS and the UNS detected by the BHQB criterion. This also reflects the insufficiency of the BHQB criterion.
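For concreteness, a minimal sketch constructing the two families of Eq. (3) is given below; the helper name `generalized_werner` is a hypothetical choice of this illustration.

```python
# A minimal sketch constructing the two noisy families of Eq. (3).
import numpy as np

def generalized_werner(p, xi, noise_type=1):
    psi = np.zeros(4)
    psi[0], psi[3] = np.cos(xi), np.sin(xi)   # |Psi_xi> = cos(xi)|00> + sin(xi)|11>
    pure = np.outer(psi, psi)
    rho_a = np.diag([np.cos(xi) ** 2, np.sin(xi) ** 2])  # Tr_B of the pure state
    local = np.eye(2) / 2 if noise_type == 1 else rho_a  # rho_1 vs rho_2 noise
    return p * pure + (1 - p) * np.kron(local, np.eye(2) / 2)
```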
In contrast to the LUTA-6 model, the performance of the ELLB-9 model (Figs. 3**c** and **d**) is unsatisfactory, as expected. Although the overall prediction for type-1 states is in line with the GT, the accuracy is worse compared to the LUTA-6 counterpart. We note that, actually, type-1 states result in the same ellipsoids for Alice and Bob due to the high symmetry of the states. Thus, the accuracy for type-1 states by the ELLB-9 model is satisfactory (see also the average accuracy for random states by the ELLB-9 model in Fig. 2**d**). Interestingly, the ELLB-9 model appears to completely fail to predict the hierarchy of type-2 states. Every state is identified as the Werner state \(\rho_{\rm W}(p)=\rho_{2}(p,\pi/4)\), giving rise to the horizontal borders. To understand the seemingly erroneous horizontal borders given by the ELLB-9 model for the type-2 states (Fig. 3**d**), we notice that each type-2 state \(\rho_{2}(p,\xi)\) will be transformed to the Werner state by an SLOCC on Alice, i.e., \(\rho_{\rm W}(p)=[(2\rho^{\sf A})^{-1/2}\otimes I_{d}]\rho_{2}(p,\xi)[(2\rho^{ \sf A})^{-1/2}\otimes I_{d}]\), \(\forall\ \xi\in[0,\pi/2]\). In other words, for a given \(p\), the states \(\rho_{2}(p,\xi)\) lying on the same horizontal line share the same ELLB-9 feature encoding with \(\rho_{\rm W}(p)\) and, consequently, are identified as the Werner state \(\rho_{\rm W}(p)\) by the ELLB-9 model. However, \(\rho_{2}(p,\xi)\) and \(\rho_{\rm W}(p)\) do exhibit different hierarchies of steering measurement settings. This, on the one hand, results in the seemingly erroneous prediction shown in Fig. 3**d**. On the other hand, the inconsistency of the predictions by the LUTA-6 and ELLB-9 models for type-2 states (Figs. 3**b** and **d**) is reminiscent of the famous concept of hidden steerability, where an unsteerable state can become steerable with respect to an appropriate 1W-SLOCC on Alice [62; 63; 64; 56]. In particular, in the previous studies, the SLOCCs on both Alice and Bob were considered to activate hidden steering.

Figure 3: **Prediction of the hierarchy of both types of states**. The predictions of the hierarchy (starry curves) for **a** type-1 and **b** type-2 states by the LUTA-6 model are in good agreement with the GT given by the protocol (solid curves). Due to its high accuracy, the LUTA-6 model is capable of discovering several physical insights. More specifically, in the case of type-1 states, the predictions deviate from the CJWR-based criteria (dotted and dot-dashed curves), indicating the insufficiency of the existing theoretical approaches in the literature. Furthermore, for both types of states, the LUTA-6 model can predict the border of the green regions of 4-MS with high precision, which is hard to detect with existing theoretical approaches; meanwhile, we also show the predicted STE-UNS boundaries (purple starry curves), sandwiched between the 4-MS and the UNS detected by the BHQB criterion. In contrast, the predictions of the hierarchy (starry curves) for **c** type-1 and **d** type-2 states by the ELLB-9 model are unsatisfactory, compared to the LUTA-6 counterpart. Although the prediction for type-1 states remains in line with the GT with a worse accuracy, the ELLB-9 model appears to completely fail to predict the hierarchy of type-2 states. We stress, however, that the predicted horizontal lines in turn reveal the underlying physics of hidden steerability.
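This mapping is easy to verify numerically; the following sketch reuses the `generalized_werner` helper from above (an assumption of this illustration, not the paper's code).

```python
# A quick numerical check that the 1W-SLOCC on Alice maps rho_2(p, xi) to the
# Werner state rho_W(p), so that all type-2 states on one horizontal line of
# Fig. 3d share the same ELLB-9 encoding.
import numpy as np

p, xi = 0.7, 0.4
rho_2 = generalized_werner(p, xi, noise_type=2)
rho_a = np.diag([np.cos(xi) ** 2, np.sin(xi) ** 2])
K = np.kron(np.diag(1 / np.sqrt(2 * np.diag(rho_a))), np.eye(2))  # (2 rho^A)^(-1/2) x I
rho_w = generalized_werner(p, np.pi / 4, noise_type=1)
assert np.allclose(K @ rho_2 @ K, rho_w)
```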
The prediction from the DL model suggests that it is sufficient to consider a 1W-SLOCC on Alice to activate hidden steering [65]. ## Discussion Although the assemblage is known to be the intrinsic resource in quantum steering, its optimal preparation from a bipartite state remains unclear. We attempt to shed light on the avenue toward this issue. In this work, we tackle the cumbersome optimization over all possible incompatible measurements with two approaches; remarkably, each of them provides deeper insights into the characterization of quantum steering, leading to the hierarchy of steering measurement settings. We first develop a computational protocol implementing the optimization by sufficiently long iterations under \(n\) random observables. We concretely exemplify the protocol with two types of generalized Werner state, and clearly showcase its merit in overcoming the insufficiencies of the theoretical criteria in the literature, including the fine-grained 2- and 3-MS borders and a 4-MS region inaccessible to the existing criteria. We also circumvent the time-consuming burden of optimization by leveraging the supervised DL algorithm. According to the responses of the well-trained models to the physics-driven features, we can acquire deeper physical insights seemingly contradictory to existing operational definitions. Our results suggest that Alice's regularly aligned steering ellipsoid is adequate to characterize the steerability from Alice to Bob very efficiently, with merely a few parameters. Additionally, we also showcase the versatility of our models in predicting the hierarchy and the exploration of the hidden quantum steering. Our results naturally open several new interesting directions. For example, an explicit characterization of the steerability in terms of Alice's ellipsoid would be impactful, particularly in constructing stronger steering criteria. For a given qubit-pair state, our protocol can be helpful in finding the optimal preparation of steerable assemblages. To explore an efficient characterization of two-way steerability in a similar way would be a heuristic attempt. Crucially, our results also underpin the possibility of the DL models mining new physics behind a huge amount of data, rather than serving as a mere substitute for cumbersome computational tasks. ## Methods ### Data generation and SDP iteration The training data set \(\{(f,l)\}\) is a collection of tuples, where \(f\in\mathbb{R}^{k}\) is the feature encoding the object to be recognized and \(l\in\mathbb{R}\) is the label indicating the GT associated to the feature \(f\). Due to the GT-deficient nature of the problem to be resolved, we construct a computational protocol to determine the correct label \(l\) of each datum, i.e., the hierarchy of steering measurement settings of qubit-pair states. By running the RandomDensityMatrix code [66], we can efficiently generate a large amount of random density matrices (Fig. 1**a**). Then we feed them into the protocol described below to determine the label \(l\) of each state. The first stage of the protocol is a pre-filter, which consists of two discriminators capable of efficiently capturing those states that are manifestly unsteerable. The PPT criterion can efficiently and exactly determine the separability (\(l=\) SEP) of a qubit-pair state and pass the entangled state to the next BHQB criterion.
Although the BHQB criterion is only sufficient for unsteerability (\(l=\) UNS), it can be efficiently implemented and enhances the ratio of steerable states among the output states from the pre-filter. After passing through the pre-filter, the states are entangled with indeterminate (IND) steerability. The second stage is the SDP iteration used to determine the label of \(n\)-MS. The detailed procedure is schematically shown in Fig. 1**b**. The IND states from the pre-filter are sent to the first level of 2-MS, wherein Alice will randomly perform two observables. Then Bob obtains the corresponding assemblages and determines their steerability with the SDP. Once the assemblages are found steerable, the corresponding states are designated the label \(l=\) 2-MS. For those that fail to demonstrate steerability, Alice will perform the measurements again with two new random observables. This SDP test will be repeated at most 900 times at the level of 2-MS. The states that fail to demonstrate 2-MS after 900 iterations will be sent to the next level of 3-MS. At each level of the hierarchy, the procedure is similar besides the number \(n\) of Alice's observables and the number of iterative tests. For the following levels of 3-, 4-, and 5-MS, the iterative test will be repeated 27,000, 810,000, and 24 million times, respectively. Such high repetition is to ensure the optimization over all possible incompatible measurements on Alice's side. Further statistical analyses on the iteration times are presented in Supplementary Note 2. It can be seen that, to ensure the optimization, the number of iterations increases exponentially with the number \(n\) of observables. The demand on the computing power rapidly exceeds the capacity of our machines. Here we have worked out a maximal \(n=4\). However, it is manifest that there should be steerable states with \(n>4\), which cannot be detected with the fine-grained SDP iteration terminating at 4-MS in the second stage. In the final stage of coarse-grained iteration (Fig. 1**a**), we manage to dig out more steerable states to the largest extent from the indeterminate states that fail at level \(n=4\). The procedure of this stage is the same as that in the SDP iteration. But here, limited by our computational resource, we set \(n=12\) and repeat at most 100 times. For states that successfully demonstrate steerability, we designate the label \(l=\text{STE}\). On the other hand, for those that fail in this stage, we cannot decide whether they are steerable with \(n\geq 13\) or unsteerable. Consequently, we cannot designate a reliable label and drop them from the data set. ### Extracting relevant parameters in the feature Here we present the construction of five types of feature encoding a qubit-pair state \(\rho^{\text{AB}}\), and explain how to extract informative parameters relevant to the characterization of quantum steering, leading to a compact feature. As discussed in the main text, the density matrix of a qubit-pair state \(\rho^{\text{AB}}=\frac{1}{4}\sum_{ij}\Theta_{ij}\sigma_{i}\otimes\sigma_{j}\) can be expressed in terms of a \(4\times 4\) real matrix \(\Theta\) with \(\Theta_{ij}=\text{Tr}(\rho^{\text{AB}}\sigma_{i}\otimes\sigma_{j})\), where \(\sigma_{0}=I_{d}\) and \(\sigma_{i}\) denote the three Pauli operators. It can be seen that \(\Theta\) has a block structure: \[\Theta=\left[\begin{array}{cc}1&\vec{b}^{\,T}\\ \vec{a}&T\end{array}\right]. \tag{4}\] Since any density matrix is always of unit trace, \(\Theta_{00}=1\) is trivial and irrelevant to quantum steering.
Therefore, we need 15 real parameters to fully describe \(\rho^{\text{AB}}\), denoted as General-15. Consider the one-way stochastic local operations and classical communication (1W-SLOCC) on Bob via the mapping: \[\rho^{\text{AB}}\mapsto\tilde{\rho}^{\text{AB}}=[I_{d}\otimes(2\rho^{\text{B}} )^{-1/2}]\rho^{\text{AB}}[I_{d}\otimes(2\rho^{\text{B}})^{-1/2}]. \tag{5}\] It is critical to note that Bob's local state of \(\tilde{\rho}^{\text{AB}}\) is maximally mixed, corresponding to the transformed matrix \[\widetilde{\Theta}=\left[\begin{array}{cc}1&0^{T}\\ \tilde{a}&\widetilde{T}\end{array}\right]. \tag{6}\] More specifically, \(\widetilde{T}\) is a \(3\times 3\) real matrix determined by \(\widetilde{T}_{ij}=\text{Tr}(\tilde{\rho}^{\text{AB}}\sigma_{i}\otimes\sigma_{ j})\) for \(i,j\in\{1,2,3\}\), and \(\tilde{a}\in\mathbb{R}^{3}\). Therefore, we have only 12 nontrivial parameters left, denoted as SLOCC-12. Moreover, it has been pointed out [55; 56; 28; 57] that \(\tilde{\rho}^{\text{AB}}\) and \(\rho^{\text{AB}}\) have the same steerability from Alice to Bob, provided that \(\rho^{\text{B}}=\text{Tr}_{\text{A}}\rho^{\text{AB}}\) is a mixed state. Therefore, SLOCC-12 is capable of characterizing quantum steerability with fewer parameters. Inspired by the technique of the quantum steering ellipsoid [50], we would like to ask whether quantum steering can also be characterized in a geometrical manner. Suppose that Bob performs a measurement described by an operator \(E=\frac{1}{2}\sum_{j=0}^{3}X_{j}\sigma_{j}\); then Alice will obtain her local state \(\frac{1}{2}\Theta X\) with probability \(p_{E}=\frac{1}{2}(1+\vec{b}\cdot\vec{X})\). According to the operational definition, Alice's ellipsoid is obtained by collecting the normalized local state \(\frac{1}{2}\Theta X/p_{E}\) for all possible measurement operators \(E\). Moreover, it has been pointed out [50] that Alice's ellipsoid is invariant under the 1W-SLOCC on Bob in Eq. (5). This provides a convenient way of expressing Alice's ellipsoid for \(\rho^{\text{AB}}\) with the parameters in the transformed matrix \(\widetilde{\Theta}\) in Eq. (6); namely, \[Q_{\text{A}}=\widetilde{T}\widetilde{T}^{T}. \tag{7}\] Note that \(Q_{\text{A}}\) is a \(3\times 3\) symmetric matrix; therefore, along with the center of the ellipsoid \(\vec{c}_{\text{A}}=\tilde{a}\), we need at most 9 parameters in total to encode \(\rho^{\text{AB}}\), denoted as ELLA-9. A similar procedure can be carried out with respect to the 1W-SLOCC operator \((2\rho^{\text{A}})^{-1/2}\otimes I_{d}\) on Alice, leading to the ELLB-9. Noteworthily, this reduction is achieved by symmetrizing \(\widetilde{T}\) into \(Q_{\text{A}}\) in Eq. (7). For a randomly generated \(\rho^{\text{AB}}\), Alice's ellipsoid may be obliquely aligned; namely, the off-diagonal elements of \(\widetilde{T}\) may be non-zero. To construct the most compact feature, we can diagonalize \(\widetilde{T}\) by applying an appropriate local unitary transformation on both sides as \[\tilde{\rho}^{\text{AB}}\mapsto\rho^{\prime\text{AB}}=(U^{\text{A}}\otimes U^ {\text{B}})\tilde{\rho}^{\text{AB}}(U^{\text{A}}\otimes U^{\text{B}})^{ \dagger}. \tag{8}\] Then we have \[\Theta^{\prime}=\left[\begin{array}{cc}1&0^{T}\\ \tilde{a}^{\prime}&T^{\prime}\end{array}\right] \tag{9}\] with \(T^{\prime}\) being a diagonal matrix, leading to the most compact feature with \(k=6\), denoted as LUTA-6. In this construction, the information on the orientation of the ellipsoid is removed.
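A sketch of the chain of reductions in Eqs. (5)-(9) is given below, assuming the `theta_matrix()` helper from the earlier sketch; the ordering and sign conventions after the singular value decomposition are illustrative choices, not specifications from the paper.

```python
# A sketch of the SLOCC-12, ELLA-9, and LUTA-6 reductions of Eqs. (5)-(9);
# scipy supplies the matrix square root.
import numpy as np
from scipy.linalg import sqrtm

def slocc_12(rho):
    rho_b = np.einsum('ikil->kl', rho.reshape(2, 2, 2, 2))   # Tr_A(rho)
    K = np.kron(np.eye(2), np.linalg.inv(sqrtm(2 * rho_b)))
    rho_t = K @ rho @ K                                      # Eq. (5)
    theta_t = theta_matrix(rho_t)                            # Eq. (6)
    return theta_t[1:, 0], theta_t[1:, 1:]                   # a~, T~

def ella_9(rho):
    a_t, T_t = slocc_12(rho)
    return T_t @ T_t.T, a_t                                  # Q_A of Eq. (7), c_A

def luta_6(rho):
    a_t, T_t = slocc_12(rho)
    U, s, _ = np.linalg.svd(T_t)   # local rotations regularly aligning the ellipsoid
    return np.concatenate([s, U.T @ a_t])  # diagonal of T' plus rotated center
```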
## Data availability The data that support the findings of this study are available upon reasonable request from the corresponding authors. ## Code availability The code that supports the findings of this study is available upon reasonable request from the corresponding authors.
2304.04766
Non-Linear Estimation using the Weighted Average Consensus-Based Unscented Filtering for Various Vehicles Dynamics towards Autonomous Sensorless Design
The concerns regarding autonomous vehicles have been becoming more intriguing in coping with the more environmentally dynamic non-linear systems under some constraints and disturbances. These vehicles connect not only to their own instruments but also to the neighboring components, making the diverse interconnected communications which should be handled locally to ease the computation and to hasten the decision. To deal with those interconnected networks, the distributed estimation to reach the untouched states, pursuing sensorless design, is approached, initiated by the construction of the modified pseudo measurement which, due to approximation, led to the weighted average consensus calculation within unscented filtering along with the bounded estimation errors. Moreover, the tested vehicles are also associated with certain robust control scenarios subject to noise and disturbance with some stability analysis to ensure the usage of the proposed estimation algorithm. The numerical instances are presented along with the performances of the control and estimation method. The results affirm the effectiveness of the method with limited error deviation compared to the other centralized and distributed filtering. Beyond these, the further research would be the directed sensorless design and fault-tolerant learning control subject to faults to negate the failures.
Bambang L. Widjiantoro, Moh Kamalul Wafi, Katherin Indriawati
2023-04-09T03:34:51Z
http://arxiv.org/abs/2304.04766v1
# Non-Linear Estimation using the Weighted Average Consensus-Based Unscented Filtering for Various Vehicles Dynamics towards Autonomous Sensorless Design ###### Abstract The concerns regarding autonomous vehicles have been becoming more intriguing in coping with the more environmentally dynamic non-linear systems under some constraints and disturbances. These vehicles connect not only to their own instruments but also to the neighboring components, making the diverse interconnected communications which should be handled locally to ease the computation and to hasten the decision. To deal with those interconnected networks, the distributed estimation to reach the untouched states, pursuing sensorless design, is approached, initiated by the construction of the modified pseudo measurement which, due to approximation, led to the weighted average consensus calculation within unscented filtering along with the bounded estimation errors. Moreover, the tested vehicles are also associated with certain robust control scenarios subject to noise and disturbance with some stability analysis to ensure the usage of the proposed estimation algorithm. The numerical instances are presented along with the performances of the control and estimation method. The results affirm the effectiveness of the method with limited error deviation compared to the other centralized and distributed filtering. Beyond these, the further research would be the directed sensorless design and fault-tolerant learning control subject to faults to negate the failures. autonomous vehicles, estimation method, unscented Kalman filtering, the weighted average consensus filtering ## I Introduction The ideas of autonomous vehicles operated in certain environments have been discussed before the 20th century [1], from underwater, land, and air; however, the future challenges with advanced manufacturing are constantly open to tackle [2], in addition to the current surveys [3]. The breakthroughs in communication technologies, along with the more dynamic entities, make this subject more of a reality, with guarantees from the stability studies, such as those using neural networks [4] and some regular switching methods [5]. Moreover, the stability of the discretized autonomous system was also studied [6] in a connected networked system [7] subject to delays while considering the stability controls [8, 9]. To give more comprehensive results, the examined autonomous vehicles in this paper comprise various dynamical systems: the adaptive cruise control [10, 11], the active suspension system [12, 13], the electric aircraft [14, 15], and the DC motor drive system [16, 17]. To deal with the more interconnected systems, the control design is required, from the basic feedback control [18] to intelligent-based control [19, 20] in real-time [21] with constraints [22]. The stability analysis and control designs, as discussed in the paper, then lead to a modest-cost implementation using sensorless design in lieu of hardware sensors, built from the basic estimation methods. These methods vary in applications, comprising standard filtering [23], the adaptive estimation method [24], and the most current distributed estimation [25], while this paper focuses on the unscented Kalman filtering (UKF) as the foundation. This UKF estimation, said to address the shortcomings of the EKF estimation in handling the Gaussian random variable (GRV), has been widely applied in non-linear systems [26] and adaptive dynamical systems [27] while considering stochastic uncertainties [28].
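Before turning to the distributed variant, it may help to sketch the core of the standard UKF, the unscented transform; the following minimal Python sketch uses the common scaled sigma-point construction, and the default parameters (alpha, beta, kappa) are textbook choices assumed for this illustration, not values from the paper.

```python
# A minimal sketch of the unscented transform at the heart of the UKF.
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled sigma points and their mean/covariance weights."""
    n = len(x)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)      # columns are the spread vectors
    pts = np.vstack([x, x + S.T, x - S.T])     # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1 - alpha ** 2 + beta
    return pts, wm, wc

def unscented_transform(f, x, P):
    """Propagate mean x and covariance P through a non-linear map f."""
    pts, wm, wc = sigma_points(x, P)
    ys = np.array([f(p) for p in pts])
    y = wm @ ys
    Py = sum(w * np.outer(d, d) for w, d in zip(wc, ys - y))
    return y, Py
```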
However, this sensorless design is supposed to be distributed; therefore the modified UKF estimation, which is the one applied in this paper, is required to be constructed as written in [29] with sensor networks, and it would be propagated into the mentioned vehicles. The pseudo measurement matrix presented in [30], addressing the Markov nonlinear system, was the basis of this development, while [31] provided the linearized approximation, and the ultimate consensus-based algorithms implementing cooperative control in multi-vehicle settings were studied in [32] and also for the sake of distributed filtering [33]. From those, this research focuses on studying the effectiveness of the proposed distributed estimation in various vehicles as the basis of sensorless networks, which could be developed into the simpler relaxed computation [34] to ease the decision. Beyond that, the simplified robust sensorless algorithms in electric vehicles and drives are our upcoming concerns of research, in addition to fault-tolerant distributed learning control, taking into account the research from [35, 36, 37, 38, 39] and [40, 41, 42, 43, 44, 45, 46, 47]. ## II Mathematical Dynamics This section focuses on building the dynamics of the vehicle-related systems being used to examine the effectiveness of the proposed scenarios. There are five various vehicle plants, from the car cruise control, the quarter bus-suspension with disturbance, the longitudinal-pitch of the aeroplane, to the speed and position of the DC motor. Beyond that, the control mechanism along with the stability discussion under some limited assumptions are written in the following sections before the algorithms are then proposed, leading to the sensorless designs. ### _The Car Cruise Control Model_ In many recent contemporary vehicles, the development of advanced self-acting controls is in demand to guarantee the safety alongside the comfort, no exception to cruise control. This cruise control is designed to maintain a stable desired speed regardless of arbitrary disturbances, including the alterations of winds and roads. Moreover, it is achieved by comparing the measured and the desired speed, and regulating the throttle based on the designed control scenario. The dynamics of the vehicle (\(m\)) is depicted in Fig.(1) with the force (\(u\)) being produced at the road surface, assuming perfect control of this force and neglecting the other forces involved in producing it. Opposing the vehicle's motion, the resistive external force (\(bv\)) is assumed to vary linearly with the velocity (\(v\)). Eq.(1) gives the dynamics from Newton's second law along with the measured output (\(y\)), \[m\dot{v}+bv=u,\qquad\text{and}\qquad y=v \tag{1}\] and the basic state-space representation constitutes as follows, \[\dot{\textbf{x}}=\left[\dot{v}\right]=\left[\frac{-b}{m}\right]v+\left[ \frac{1}{m}\right]u, \tag{2}\] \[y=1\cdot v\] from which, based on Eq.(2), the transfer function \(\Phi_{n}(s),\forall n=1,\dots\), indexing the examined systems, results in Eq.(3), \[\Phi_{1}(s)=\frac{V(s)}{U(s)}=\frac{1}{ms+b}\ \left[\frac{\text{m/s}}{\text{N}}\right] \tag{3}\] ### _A Quarter Bus-Suspension Design_ Attractive advanced suspension designs are becoming more intriguing, being linked to autonomous design. This active suspension scenario, with an actuator enabled to produce the control force (\(u\)) acting on the body motion control, is derived from a simplified quarter-bus model.
The variables are explained as follows: \(M_{1}\) and \(M_{2}\) denote the quarter-body and the suspension masses, followed by the spring constants (\(k_{n}\)) and the damper constants (\(b_{n}\)), \(\forall n=1,2\), of the suspension and the wheel in turn, along with the non-linear disturbance (\(\gamma\)). From Newton's law, the equations of motion could be written as Eq.(4), \[M_{1}\ddot{x}_{1}=-\psi_{b_{1}}-\psi_{k_{1}}+u \tag{4}\] \[M_{2}\ddot{x}_{2}=\psi_{b_{1}}+\psi_{k_{1}}+\psi_{b_{2}}+\psi_{ k_{2}}-u\] with \(\psi_{\bullet}\) defining the associated dynamics of the (\(\bullet\))-term, such that \[\psi_{b_{1}}=b_{1}\left(\dot{x}_{1}-\dot{x}_{2}\right) \psi_{b_{2}}=b_{2}\left(\dot{\gamma}-\dot{x}_{2}\right)\] \[\psi_{k_{1}}=k_{1}(x_{1}-x_{2}) \psi_{k_{2}}=k_{2}(\gamma-x_{2})\] and the Laplace-transformed equations, with zero initial conditions along with the inputs (\(u,\gamma\)) and the output (\(x_{1}-x_{2}\)), are drawn as, \[M_{1}s^{2}X_{1}(s)+\Psi_{b_{1}}(s)+\Psi_{k_{1}}(s)=U(s)\] \[M_{2}s^{2}X_{2}(s)-\Psi_{b_{1}}(s)-\Psi_{k_{1}}(s)-\Psi_{b_{2}}( s)-\Psi_{k_{2}}(s)=-U(s)\] which could be condensed into the standard algebraic equation, \[\textbf{F}x=\textbf{g} \tag{5}\] where the terms of \(\textbf{F},x\), and **g** in Eq.(5) are made of, \[\textbf{F}=\begin{bmatrix}M_{1}s^{2}+b_{1}s+k_{1}&-(b_{1}s+k_{1}) \\ -(b_{1}s+k_{1})&M_{2}s^{2}+(b_{1}+b_{2})s+(k_{1}+k_{2})\end{bmatrix}\] \[x=\begin{bmatrix}X_{1}(s)\\ X_{2}(s)\end{bmatrix},\qquad\text{and}\qquad\textbf{g}=\begin{bmatrix}U(s)\\ (b_{2}s+k_{2})\Gamma(s)-U(s)\end{bmatrix}\]

Fig. 1: Free-body dynamic of the car

Fig. 2: A quarter bus suspension

The value of \(x\) is obtained from the inverse, with a slight modification of the matrix (\(g\)) such that only \(U(s)\) and \(\Gamma(s)\) appear, as in Eq.(6) with the detailed parameters \(\Delta_{n}\), \[x=\frac{1}{\det(\textbf{F})}\begin{bmatrix}\Delta_{1}(s)&\Delta_{2}(s)\\ \Delta_{3}(s)&\Delta_{4}(s)\end{bmatrix}\begin{bmatrix}U(s)\\ \Gamma(s)\end{bmatrix} \tag{6}\] where, after being altered, the \(\mathrm{adj}(\textbf{F})\times\textbf{g}\) is then turned into \(\Delta_{n},\forall n=1\to 4\) as described below, \[\Delta_{1}(s) =M_{2}s^{2}+b_{2}s+k_{2} \Delta_{2}(s) =z(s)\] \[\Delta_{4}(s) =M_{1}b_{2}s^{3}+M_{1}k_{2}s^{2}+z(s) \Delta_{3}(s) =-M_{1}s^{2}\] with the modification of the initial **g** into the remaining \(U(s),\Gamma(s)\) \[\begin{bmatrix}U(s)\\ (b_{2}s+k_{2})\Gamma(s)-U(s)\end{bmatrix} \longrightarrow \begin{bmatrix}U(s)\\ \Gamma(s)\end{bmatrix}\] where \(z(s)\) in the \(\Delta_{2}(s)\) term is \(b_{1}b_{2}s^{2}+(b_{1}k_{2}+b_{2}k_{1})s+k_{1}k_{2}\). To construct the transfer functions \(\Phi_{2}(s)\), it is required to set the sequence of the inputs. For \(\Phi_{2a}\), the control input (\(u\)) is taken and the disturbance (\(\gamma\)) is assumed zero, while for \(\Phi_{2b}\) the converse holds, as in Eq.(7) in turn, \[\Phi_{2a}(s) =\frac{X_{1}(s)-X_{2}(s)}{U(s)}\] \[=\frac{(M_{1}+M_{2})s^{2}+b_{2}s+k_{2}}{\det(\textbf{F})} \rightarrow\gamma=0 \tag{7}\] \[\Phi_{2b}(s) =\frac{X_{1}(s)-X_{2}(s)}{\Gamma(s)}\] \[=\frac{-M_{1}b_{2}s^{3}-M_{1}k_{2}s^{2}}{\det(\textbf{F})} \to u=0\]
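As a quick numeric check of Eqs. (5)-(7), the two transfer functions can be evaluated directly from the matrix form **F**x = **g**; the parameter values in the sketch below are illustrative assumptions, not taken from the paper.

```python
# Evaluating Phi_2a(s) and Phi_2b(s) at a given s by solving F x = g.
import numpy as np

M1, M2 = 2500.0, 320.0         # quarter-body and suspension masses [kg] (assumed)
k1, k2 = 80_000.0, 500_000.0   # spring constants [N/m] (assumed)
b1, b2 = 350.0, 15_020.0       # damper constants [N s/m] (assumed)

def suspension_tf(s):
    F = np.array([[M1 * s**2 + b1 * s + k1, -(b1 * s + k1)],
                  [-(b1 * s + k1), M2 * s**2 + (b1 + b2) * s + (k1 + k2)]],
                 dtype=complex)
    x_u = np.linalg.solve(F, [1.0, -1.0])          # g for U(s) = 1, Gamma = 0
    x_g = np.linalg.solve(F, [0.0, b2 * s + k2])   # g for Gamma(s) = 1, U = 0
    return x_u[0] - x_u[1], x_g[0] - x_g[1]        # Phi_2a(s), Phi_2b(s)

print(suspension_tf(1j * 2.0))  # frequency response at 2 rad/s
```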
Beyond that, the state-space representation is shown in Eq.(8), with the corresponding extended matrices in Eq.(9). Furthermore, the state variables include (\(x_{1},y_{1}\)) and their derivatives, with \(y_{1}=x_{1}-x_{2}\), while the output is \(y=y_{1}\); therefore \[\dot{\textbf{x}} =A\textbf{x}+B\textbf{u} \tag{8}\] \[y =C\textbf{x}+D\textbf{u}\] using the following identities, or otherwise the transfer-function manipulation presented above, \[\int_{k}\frac{d^{k}x_{n}}{dt^{k}}\;dt=\int_{k-1}\frac{d^{k-1}x_{n }}{dt^{k-1}}\,dt=x_{n},\] \[\frac{1}{M_{n}}\sum_{i=1}^{k}F_{i}=\frac{d^{k}x_{n}}{dt^{k}},\quad \forall n=1\to 2;k=2\] ### _The Aircraft Longitudinal-Pitch Dynamics_ The mathematical models portraying aircraft motion comprise six coupled nonlinear equations and are intricate to deal with; yet, under appropriate assumptions, decoupling and linearizing the motion into two axes, the lateral and longitudinal perspectives, is acceptable. This system focuses on autonomous aircraft pitch control, governed solely by the longitudinal axis, as shown in Fig.(3). Suppose the airplane is in steady cruise at constant speed and altitude; the drag, lift, weight, and thrust forces then balance between the two planes, \(x\) and \(y\). Keep in mind that a further simplifying assumption is also applied: an alteration of the pitch angle does not influence the velocity under arbitrary conditions. This leads to the following longitudinal dynamics in the steady-state variables of attack angle (\(\alpha\)), pitch angle (\(\theta\)), and pitch rate (\(q\)), as written in Eq.(10), Eq.(11), and Eq.(12), \[\dot{\alpha}=\mu\Omega\sigma\left[-\psi_{\alpha_{1}}\alpha+\psi_{\alpha_{2}}q- \psi_{\alpha_{3}}\delta\right] \tag{10}\] where \(\psi_{\alpha_{n}},\forall n=1\to 3\) are made of \[\psi_{\alpha_{1}}=\Gamma_{\ell}+\Gamma_{d};\quad\psi_{\alpha_{2}}=\frac{1}{ \mu-\Gamma_{\ell}};\quad\psi_{\alpha_{3}}=\Gamma_{w}\sin\gamma+\Gamma_{\ell}\] and the pitch rate (\(q\)) obeys the following equation of motion \[\dot{q}=\frac{\mu\Omega}{2I_{n}}\left[\psi_{q_{1}}\alpha+\psi_{q_{2}}q+\psi_{ q_{3}}\delta\right] \longrightarrow\quad\mu=\frac{\rho\textbf{S}\bar{c}}{4m} \tag{11}\] where \(\psi_{q_{n}},\forall n=1\to 3\) constitute \[\psi_{q_{1}} =\Gamma_{m}-\eta(\Gamma_{\ell}+\Gamma_{d}), \longrightarrow\eta=\mu\sigma\Gamma_{m}\] \[\psi_{q_{2}} =\Gamma_{m}+\sigma\Gamma_{m}(1-\mu\Gamma_{\ell}), \longrightarrow\sigma=(1+\mu\Gamma_{\ell})^{-1}\] \[\psi_{q_{3}} =\eta\Gamma_{w}\sin\gamma\] Fig. 3: Coordinate dynamics of the aircraft.
The last would be the pitch angle (\(\theta\)), written as \[\dot{\theta}=\Omega q\quad\longrightarrow\quad\Omega=\frac{2\mathbf{E}_{u}}{\bar{c}} \tag{12}\] where \(\mu,\Omega,\sigma,\eta\) are constants determined by the following variables: \(\delta,\rho,\mathbf{S},\bar{c},m,\mathbf{E}_{u},\gamma,I_{n}\) comprise the deflection angle of the elevator, air density, wing area, mean chord length, mass, equilibrium speed, flight-trajectory angle, and normalized moment of inertia, respectively. Beyond that, the coefficients of thrust (\(\Gamma_{t}\)), drag (\(\Gamma_{d}\)), lift (\(\Gamma_{\ell}\)), weight (\(\Gamma_{w}\)), and pitch moment (\(\Gamma_{m}\)) are also considered. To obtain the dynamics, the state space is formed from the Laplace-domain transfer functions as in Eq.(13), with the respective \(c_{n},\forall n=1\to 7\), \[sA(s) =c_{1}A(s)+c_{2}Q(s)+c_{3}\Delta(s)\] \[sQ(s) =c_{4}A(s)+c_{5}Q(s)+c_{6}\Delta(s) \tag{13}\] \[s\Theta=c_{7}Q(s)\] and, after some algebra, the following function is obtained, \[\Phi_{3}(s)=\frac{\Theta(s)}{\Delta(s)}=\frac{1.151s+0.177}{s^{3}+0.739s^{2}+0. 921s} \tag{14}\] with the standard matrices as in Eq.(15), which could also be built from Eq.(14); therefore \[\begin{bmatrix}\dot{\alpha}\\ \dot{q}\\ \dot{\theta}\end{bmatrix}=\begin{bmatrix}c_{1}&c_{2}&0\\ c_{4}&c_{5}&0\\ 0&c_{7}&0\end{bmatrix}\begin{bmatrix}\alpha\\ q\\ \theta\end{bmatrix}+\begin{bmatrix}c_{3}\\ c_{6}\\ 0\end{bmatrix}\delta;\qquad y=\theta \tag{15}\] ### _DC Motor Systems - Speed & Position Perspectives_ The last dynamical system concerns one of the most common actuators used in electrical drives, examined here through the two measured variables of speed and position. The system represents the rotational motion of the rotor coupled to the wheels, as shown in Fig.(4). The rotor plant takes the voltage (\(V\)) as input, applied to the armature of the motor, whereas the outputs capture the two states, the position \(\theta\) and the speed \(\dot{\theta}\) of the shaft. Furthermore, the two bodies associated with the input and output are assumed rigid, while for the resistive contact force the friction torque is modelled as linearly proportional to the angular velocity. Under the assumption of a steady magnetic field, the torque, which grows linearly with the product of the current and the magnetic field, then depends solely on the current (\(i\)) through a torque constant (\(\kappa_{t}\)), such that \[T=\kappa_{t}i \tag{16}\] whereas the opposing back emf (\(e\)) is given by the product of the electromotive-force constant (\(\kappa_{e}\)), which equals (\(\kappa_{t}\)), and the shaft speed \(\dot{\theta}\), \[e=\kappa_{e}\dot{\theta} \tag{17}\] and from Eq.(16) and Eq.(17), the corresponding models from Newton's law for the torque and Kirchhoff's voltage law can be written as Eq.(18), \[J\ddot{\theta}+b\dot{\theta}=\kappa i \tag{18}\] \[L\frac{di}{dt}+Ri=V-\kappa\dot{\theta}\] which are then expressed through the Laplace transforms stated as \[s(Js+b)\Theta(s) =\kappa I(s) \tag{19}\] \[(Ls+R)I(s) =V(s)-\kappa s\Theta(s)\] From Eq.(19), the factor (\(s\)) on the position \(\Theta(s)\) is absorbed so that the output is the velocity, whereas \(I(s)\) is eliminated between the two equations, leaving the input voltage \(V(s)\) as written in Eq.(20); therefore \[\Phi_{4}(s)=\frac{\dot{\Theta}(s)}{V(s)}=\frac{\kappa}{(Js+b)(Ls+R)+\kappa^{2}} \tag{20}\] Either the Laplace-domain form in Eq.(20) or the model equations directly can easily be turned into a state-space representation with the two measured variables of position and speed, as denoted in Eq.(21), \[\frac{d}{dt}\begin{bmatrix}\theta\\ \dot{\theta}\\ i\end{bmatrix} =\begin{bmatrix}0&1&0\\ 0&\frac{-b}{J}&\frac{\kappa}{J}\\ 0&\frac{-\kappa}{L}&\frac{-R}{L}\end{bmatrix}\begin{bmatrix}\theta\\ \dot{\theta}\\ i\end{bmatrix}+\begin{bmatrix}0\\ 0\\ \frac{1}{L}\end{bmatrix}V \tag{21}\] \[y =\begin{bmatrix}1&0&0\\ 0&1&0\end{bmatrix}\mathbf{x}\longrightarrow\begin{bmatrix}\theta\\ \dot{\theta}\end{bmatrix}\]
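A brief numerical sketch of Eq.(21) follows (using the Section VI values \(J=0.01\) kg.m\({}^{2}\), \(b=0.1\) N.m.s, \(\kappa=0.01\), \(R=1\) Ohm, \(L=0.5\) H; this is an illustration, not the authors' implementation):

```python
import numpy as np
from scipy import signal

# DC-motor parameters quoted in Section VI
J, b, kappa, R, L = 0.01, 0.1, 0.01, 1.0, 0.5

# State-space of Eq. (21): x = [theta, dtheta, i], input V;
# only the speed row of the output matrix is kept so the SISO step helper applies
A = np.array([[0.0, 1.0, 0.0],
              [0.0, -b / J, kappa / J],
              [0.0, -kappa / L, -R / L]])
B = np.array([[0.0], [0.0], [1.0 / L]])
C = np.array([[0.0, 1.0, 0.0]])        # measure the shaft speed dtheta
D = np.array([[0.0]])

motor = signal.StateSpace(A, B, C, D)
t, y = signal.step(motor, T=np.linspace(0.0, 3.0, 600))

# DC gain of the speed transfer function, Eq. (20): kappa / (b*R + kappa^2)
print(f"steady-state speed for a 1 V step ~ {y[-1]:.3f} rad/s "
      f"(theory: {kappa / (b * R + kappa**2):.3f})")
```

The step settles near 0.1 rad/s, matching the DC gain of \(\Phi_{4}(s)\) in Eq.(20).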
## III Control Designs The control method used is state feedback aided with precompensation, together with some disturbance, under the desired conditions. Fig.(5) explains the control scenario of the feedback-aided compensation \(\bar{N}\), where standard pole placement is required to obtain \(K\). Regarding the compensation, two quantities are set up initially: \(s_{1}\) as the length of the matrix \(A\), and \(s_{2}\) as a row of \(s_{1}\) zeros appended with a one selecting the output, such that, \[s_{1}=\text{length}(A)\qquad s_{2}=[\text{zeros}([1,s_{1}]),\;1]\] where the compensation (\(\bar{N}\)) is constructed using the formulas \[N=\begin{bmatrix}A&B\\ C&D\end{bmatrix}^{-1}s_{2}^{\top}\] \[\bar{N}=N_{u}+KN_{x}\longrightarrow N_{x}=N(1:s_{1})\ \text{ and }\ N_{u}=N(1+s_{1})\] which eliminates the steady-state error. Fig. 4: Free-body rotor design. Fig. 5: A precompensator state-feedback system. Furthermore, as regards the first plant, the cruise speed is designed to be constant at two different set points over 50 s. The first 30 s track the 10 m/s reference, whereas the remaining 20 s maintain 7 m/s, with a pole at (\(-1.5\)) for finding the gain \(K\). The second plant requires the disturbance \(\gamma\) in addition to the input signal \(u\), and from this, the control should stabilize the output \(y_{1}\) in under 5 s. Keep in mind that, while there are two inputs (\(u,\gamma\)), the state feedback can only act on the input signal of the first column \(B(\mathbf{1})\), such that \[\dot{x}=(A-B(\mathbf{1})\times K)x+B[U,\Gamma]^{\top}\] where the characteristic polynomial of the system is then written as \(\det[sI-(A-B(\mathbf{1})K)]\) instead of the standard \(\det[sI-(A-BK)]\), together with the error-minimizing compensation \(\bar{N}\) algorithm. With respect to the third plant, a full-rank (\(n\)) controllability and observability analysis is carried out, ensuring that the poles can be placed in the complex \(s\)-plane. The input \(u\) is formed from the desired pitch angle \(\theta_{r}\) and \(K\) times the full measured state \(\mathbf{x}\); however, since these are higher-order dynamics, a more advanced technique is applied with the weighting matrix \(R\) and a modified \(Q\) built from the varied constant \(p\) and the matrix \(C\). After that, the gain \(K\) and the precompensation are ready to be implemented. Finally, the rotor dynamics have their poles placed, in sequence, at (\(-5\pm i\)) for the speed \(v\) and (\(-100\pm 100i,-200\)) for the position \(\theta\), following the same scenario of finding the gain \(K\) and the error-minimizing compensation \(\bar{N}\). Beyond that, the stability analysis is discussed further to guarantee the desired outputs, along with the proposed estimation mechanism.
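The \(\bar{N}\) computation above can be sketched in a few lines (a hypothetical Python transcription of the MATLAB-style formulas, not the authors' code), applied here to the cruise-control plant with its closed-loop pole placed at \(-1.5\):

```python
import numpy as np

def nbar(A, B, C, D, K):
    """Precompensator N_bar = Nu + K*Nx giving zero steady-state tracking error."""
    s1 = A.shape[0]                          # s1 = length(A)
    s2 = np.r_[np.zeros(s1), 1.0]            # s2 = [zeros(1, s1), 1]
    M = np.block([[A, B], [C, D]])           # [A B; C D]
    N = np.linalg.solve(M, s2.reshape(-1, 1))
    Nx, Nu = N[:s1], N[s1:]                  # Nx = N(1:s1), Nu = N(s1+1)
    return (Nu + K @ Nx).item()

# Cruise-control plant of Eq. (2), scalar pole placement at -1.5
m, b = 1000.0, 50.0
A = np.array([[-b / m]]); B = np.array([[1.0 / m]])
C = np.array([[1.0]]);    D = np.array([[0.0]])
K = np.array([[(A[0, 0] + 1.5) / B[0, 0]]])  # choose K so that A - B*K = -1.5

print("K =", K.item(), " N_bar =", nbar(A, B, C, D, K))  # K = 1450, N_bar = 1500
```

With \(\bar{N}=1500\), the closed loop \(\dot{x}=(A-BK)x+B\bar{N}r\) has unit DC gain, so the 10 m/s and 7 m/s references above are tracked without offset.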
## IV Stability Analysis The design of the control system \(C(s)\) for each plant is shown in Fig.(6) according to the respective transfer function \(\Phi_{n},\forall n=1\to 4\), with (\(r,e,d\)) representing the reference, error, and disturbance. For PID-type control, the control signal is evaluated through Eq.(22), \[u(t)=K_{p}e(t)+K_{i}\int e(t)dt+K_{d}\frac{de}{dt} \tag{22}\] while the controller transfer functions in Eq.(23) are adjusted to the plant \(\Phi_{n}\) based on Eq.(3), Eq.(7), Eq.(14), and Eq.(20); in particular, when finding the position in \(\Phi_{4}(s)\) an additional integration is required; therefore \[C(s)=\begin{cases}K_{p}E(s),&\text{$\rightarrow$ P}\\ (K_{p}+K_{d}s)E(s),&\text{$\rightarrow$ PD}\\ \dfrac{K_{p}s+K_{i}}{s}E(s),&\text{$\rightarrow$ PI}\\ \dfrac{K_{d}s^{2}+K_{p}s+K_{i}}{s}E(s),&\text{$\rightarrow$ PID}\end{cases} \tag{23}\] Moreover, the damping ratio (\(\zeta\)) and natural frequency (\(\omega_{n}\)) for each plant should be well defined using Eq.(24), such that \[\omega_{n}\geq\dfrac{1.8}{t_{r}},\qquad\zeta\geq\sqrt{\dfrac{\ln^{2}(M_{p})}{ \pi^{2}+\ln^{2}(M_{p})}} \tag{24}\] Keep in mind that, other than the PID combinations in Eq.(23), lag control and lead compensation are also used in analysing the stability. For instance, the lag control of the first dynamics \(\Phi_{1}\) reads as in Eq.(25), \[C(s)=\dfrac{s+z_{0}}{s+p_{0}}\longrightarrow\dfrac{V(s)}{R(s)}=\dfrac{s+z_{0 }}{ms^{2}+(b+mp_{0})s+bp_{0}} \tag{25}\] where the closed-loop design constitutes Eq.(26), \[\dfrac{V(s)}{U(s)}=\dfrac{K_{p}(s+z_{0})}{ms^{2}+(b+mp_{0}+K_{p})s+(bp_{0}+K_{ p}z_{0})} \tag{26}\] and the lead mechanism follows with an additional gain \(K_{\ell}\) in \(C(s)\) and by setting the value of the zero \(z_{0}\) smaller than that of the pole \(p_{0}\). While the lag design places its pole-zero pair toward the right of the (left-half) \(s\)-plane, near the origin, the lead counterpart places it further to the left, with results in Fig.(7). Fig. 6: The block diagrams of the tested plants according to the transfer functions \(\Phi_{n},\forall n=1\to 4\), as the basis of the stability analysis. Fig. 7: The root-locus analysis of: 1) the first dynamics \(\Phi_{1}\) with proportional (a) and lag control (b); 2) the second plant \(\Phi_{2a}\) (c) with a notch filter (d) and its magnification (e) near the limit, along with PID control (f); 3) the third system \(\Phi_{3}\) with the original proportional control (g) and the modified lead-compensation scenario (h); 4) the last transfer functions, comprising the speed \(\Phi_{4}\) with lag control (i) and the position \(\Phi_{4}\) with proportional (j), PI (k), and PID control (l).
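As an illustrative check of the lag design in Eq.(25)-(26) (the values of \(z_{0}\), \(p_{0}\), and \(K_{p}\) below are assumed for illustration, not taken from the paper), the closed loop can be formed and its DC gain inspected:

```python
import numpy as np
from scipy import signal

# Lag-compensated cruise control, closed loop of Eq. (26);
# z0, p0, Kp are assumed illustrative values
m, b = 1000.0, 50.0
z0, p0, Kp = 0.3, 0.03, 600.0

num = [Kp, Kp * z0]                            # Kp*(s + z0)
den = [m, b + m * p0 + Kp, b * p0 + Kp * z0]   # m*s^2 + (b+m*p0+Kp)*s + (b*p0+Kp*z0)
closed_loop = signal.TransferFunction(num, den)

t, y = signal.step(closed_loop, T=np.linspace(0.0, 20.0, 2000))
dc_gain = (Kp * z0) / (b * p0 + Kp * z0)
print(f"DC gain = {dc_gain:.3f}, step final value ~ {y[-1]:.3f}")
```

The DC gain of roughly 0.992 shows that the lag controller shrinks, but does not eliminate, the steady-state error, in contrast to the precompensator \(\bar{N}\) of Section III.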
## V The Modified Unscented Filtering The common Unscented Kalman Filter algorithm provides the vital answer to the standard-EKF question of how the Gaussian random variables (GRV) are best chosen. It propagates a broader set of deterministically sampled values, capturing the posterior mean along with the covariance of the chosen GRV when matched to more strongly nonlinear systems, \[\begin{split}\mathbf{x}_{k+1}&=f(\mathbf{x}_{k}, \mathbf{u}_{k})+q_{k}\\ \mathbf{y}_{k}&=h(\mathbf{x}_{k})+r_{k}\end{split} \tag{27}\] where the quantities \(\mathbf{x}_{k},\mathbf{u}_{k},f,h,q_{k},r_{k}\) and \(\mathbf{y}_{k}\) comprise the state, control signal, system and measurement models, process and measurement noise, and measurement vector, respectively. Consider feeding a state variable (\(\mathbf{x}\)) of dimension \(L\), with mean \(\mathbf{\bar{x}}\) and covariance \(\mathbf{P}_{x}\), into the nonlinear model \(f(\bullet)\) to obtain the measurement \(\mathbf{y}_{k}\); the modified matrix \(\chi_{i}\), comprising \((2L+1)\) _sigma_ values indexed by \(\forall i=0\to 2L\), is then \[\begin{cases}\chi_{0}=\bar{\mathbf{x}}&i=0\\ \chi_{i}=\bar{\mathbf{x}}+\left(\sqrt{(L+\lambda)\,\mathbf{P}_{k-1}}\right)_{i }&i=1,\ldots,L\\ \chi_{i}=\bar{\mathbf{x}}-\left(\sqrt{(L+\lambda)\,\mathbf{P}_{k-1}}\right)_{i }&i=L+1,\ldots,2L\end{cases} \tag{28}\] in which \(\lambda=\alpha^{2}(L+\kappa)-L\) is the scaling (weighting) term, and the parameter (\(\alpha\)), as the key factor, defines the spread of the sigma values \((\chi_{i})\) around the expected value (\(\mathbf{\bar{x}}\)), with values in the range \(10^{-4}\) to \(1\). The second parameter (\(\kappa\)) is set to either \(0\) or \((3-L)\), while the third (\(\beta\)), which incorporates prior knowledge of the state distribution, is optimally exactly 2 (for Gaussian states). Moreover, the method of the standard UKF is written below; to distinguish it from the preceding discussion, the \((n)\)-th iteration index is applied in place of (\(k\)), 1. **Initialization:** set up the inputs \(\hat{\mathbf{x}}_{0}\) and \(\mathbf{P}_{0}\) 2. For \(n=1\to\infty\), iterate the algorithm from Eq.(\(\alpha_{1}\)) to Eq.(\(\alpha_{11}\)). First, calculate the sigma values (\(\chi\)), \[\chi_{a,n-1}^{(i)}=\left[\hat{\mathbf{x}}_{a,n-1}^{(i)}\quad\hat{\mathbf{x}}_{a,n-1}^{(i) }\pm\sqrt{(L+\lambda)\mathbf{P}_{a,n-1}}\right]\] ( \[\alpha_{1}\] ) 3. **Time update:** propagate the sigma values \(\chi_{a,n-1}^{(i)}\) through the system function to obtain \(\hat{\chi}_{a,n}^{(i)}\) and \(\hat{\mathbf{y}}_{n}^{(i)}\). Furthermore, solve for the values of \(\hat{\mathbf{x}}_{a,n}\), \(\hat{\mathbf{P}}_{a,n}\), and \(\hat{\mathbf{y}}_{n}\), implementing the following formulas \[\hat{\chi}_{a,n}^{(i)} =\mathbf{f}\left(\chi_{a,n-1}^{(i)},\mathbf{u}_{n}\right)\] ( \[\alpha_{2}\] ) \[\hat{\mathbf{x}}_{a,n} =\sum_{i=0}^{2L}W^{(i)}\hat{\chi}_{a,n}^{(i)}\] ( \[\alpha_{3}\] ) \[\hat{\mathbf{P}}_{a,n} =\sum_{i=0}^{2L}W^{(i)}\left[\hat{\chi}_{a,n}^{(i)}-\hat{\mathbf{ x}}_{a,n}\right]\left[\hat{\chi}_{a,n}^{(i)}-\hat{\mathbf{x}}_{a,n}\right]^{\top}\] ( \[\alpha_{4}\] ) \[\hat{\mathbf{y}}_{n}^{(i)} =\mathbf{g}\left(\hat{\chi}_{a,n}^{(i)},\mathbf{u}_{n}\right)\] ( \[\alpha_{5}\] ) \[\hat{\mathbf{y}}_{n} =\sum_{i=0}^{2L}W^{(i)}\hat{\mathbf{y}}_{n}^{(i)}\] ( \[\alpha_{6}\] )
4. **Measurement update:** compute the variables \(\hat{\mathbf{S}}_{n}\), \(\hat{\mathbf{K}}_{n}^{xy}\), and \(\mathbf{W}_{n}\), along with the updated state \(\hat{\mathbf{x}}_{n}^{+}\) and error covariance \(\mathbf{P}_{n}\), \[\hat{\mathbf{S}}_{n} =\sum_{i=0}^{2L}W^{(i)}\left[\hat{\mathbf{y}}_{n}^{(i)}-\hat{ \mathbf{y}}_{n}\right]\left[\hat{\mathbf{y}}_{n}^{(i)}-\hat{\mathbf{y}}_{n} \right]^{\top}\] ( \[\alpha_{7}\] ) \[\hat{\mathbf{K}}_{n}^{xy} =\sum_{i=0}^{2L}W^{(i)}\left[\hat{\chi}_{n}^{(i)}-\hat{\mathbf{x} }_{n}\right]\left[\hat{\mathbf{y}}_{n}^{(i)}-\hat{\mathbf{y}}_{n}\right]^{\top}\] ( \[\alpha_{8}\] ) \[\mathbf{W}_{n} =\hat{\mathbf{K}}_{n}^{xy}\hat{\mathbf{S}}_{n}^{-1}\] ( \[\alpha_{9}\] ) \[\hat{\mathbf{x}}_{n}^{+} =\hat{\mathbf{x}}_{n}+\mathbf{W}_{n}z_{n}\longrightarrow z_{n}= \mathbf{y}_{n}-\hat{\mathbf{y}}_{n}\] ( \[\alpha_{10}\] ) \[\mathbf{P}_{n} =\hat{\mathbf{P}}_{n}-\mathbf{W}_{n}\hat{\mathbf{S}}_{n}\mathbf{W}_ {n}^{\top}\] ( \[\alpha_{11}\] ) where the weights \(W_{i}\) are stated as follows, \[\begin{split} W_{0}^{(m)}&=\frac{\lambda}{L+\lambda} \\ W_{0}^{(c)}&=\frac{\lambda}{L+\lambda}+\left(1- \alpha^{2}+\beta\right)\\ W_{i}^{(m)}&=W_{i}^{(c)}=\frac{1}{2(L+\lambda)}, \forall i=1\to 2L\end{split}\] (29)
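A compact sketch of the sigma-value construction of Eq.(28) and the weights of Eq.(29) (illustrative Python, with the usual UKF conventions assumed):

```python
import numpy as np

def sigma_points_and_weights(x_mean, P, alpha=1e-3, kappa=0.0, beta=2.0):
    """Sigma values chi_i of Eq. (28) and the weights W_i of Eq. (29)."""
    L = x_mean.size
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)       # one common matrix square root
    chi = np.empty((2 * L + 1, L))
    chi[0] = x_mean                             # chi_0 = x_bar
    for i in range(L):
        chi[1 + i] = x_mean + S[:, i]           # x_bar + sqrt((L+lam)*P)_i
        chi[1 + L + i] = x_mean - S[:, i]       # x_bar - sqrt((L+lam)*P)_i
    Wm = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1.0 - alpha**2 + beta)
    return chi, Wm, Wc

# Example: a 3-state estimate, as for the motor plant
chi, Wm, Wc = sigma_points_and_weights(np.zeros(3), np.eye(3))
print(chi.shape, Wm.sum())    # (7, 3); the mean weights sum to 1
```

The weighted sums of Eq.(\(\alpha_{3}\)), (\(\alpha_{4}\)), and (\(\alpha_{6}\)) then reduce to plain dot products of these weights with the propagated sigma values.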
The weighted average comes up inspired by consensus-based algorithms, with some matrix modification of the pseudo measurement as written in [30]. Nevertheless, that work used a linear approximation of the pseudo matrix [31], and, following [32] and [33], the consensus was applied by [29] without the pseudo matrix. More specifically, each node (\(i\)) locally computes the weighted average of the estimated state and error covariance (\(\hat{\mathbf{x}}_{n}^{i},\mathbf{P}_{n}^{i}\)) over its neighborhood region \(\mathcal{N}_{i}\), with weights \(\pi^{(ij)},j\in\mathcal{N}_{i}\). The pairs \((\hat{\mathbf{x}}_{n}^{i},\mathbf{P}_{n}^{i})_{i\in\mathcal{N}}\) are then expected to reach the weighted average as \(\ell\to\infty\), as in Eq.(30), \[\left(\hat{\mathbf{x}}_{n}^{i},\mathbf{P}_{n}^{i}\right)_{i\in\mathcal{N}} \xrightarrow{\text{weighted}}\left(\hat{\mathbf{x}}_{n}^{\star},\mathbf{P}_{n}^{ \star}\right)=\lim_{\ell\to\infty}\left(\hat{\mathbf{x}}_{n,\ell}^{i},\mathbf{P}_{n,\ell}^{i}\right) \tag{30}\] Fig. 8: The UT propagation of mean and covariance: (_left_) the actual; (_center_) the first-order linearization of the EKF; (_right_) the UT itself. where the coupled term \((\hat{\mathbf{x}}_{n,\ell}^{i},\mathbf{P}_{n,\ell}^{i})_{i\in\mathcal{N}}\) comprises the data available at node (\(i\)) at the \(\ell\)-th cycle, satisfying Eq.(31) \[\hat{\mathbf{x}}_{n,\ell+1}^{i} =\sum_{j\in\mathcal{N}_{i}}\pi^{(ij)}\hat{\mathbf{x}}_{n,\ell}^{j}\] \[\mathbf{P}_{n,\ell+1}^{i} =\sum_{j\in\mathcal{N}_{i}}\pi^{(ij)}\mathbf{P}_{n,\ell}^{j} \rightarrow\pi^{(ij)}\geq 0,\sum_{j\in\mathcal{N}_{i}}\pi^{(ij)}=1 \tag{31}\] and the agreement of the pairs \((\hat{\mathbf{x}}_{n}^{i},\mathbf{P}_{n}^{i})_{i\in\mathcal{N}}\) is then achieved provided the weight matrix \(\Pi=[\pi^{(ij)}]\) is primitive, \[\hat{\mathbf{x}}_{n,\ell+1} =\left(\Pi\otimes I\right)\hat{\mathbf{x}}_{n,\ell}\] \[=\left(\Pi\otimes I\right)\ldots\left(\Pi\otimes I\right)\hat{ \mathbf{x}}_{n,0}=\left(\Pi^{\ell+1}\otimes I\right)\hat{\mathbf{x}}_{n,0}\] and therefore, \[\lim_{\ell\rightarrow\infty}\left(\Pi^{\ell+1}\right)=\mathbf{1}\mathbf{v}^{\top}\] where, as \(\ell\rightarrow\infty\), with the column vector \(\mathbf{v}\), \[\hat{\mathbf{x}}_{n,\ell+1}=\left(\mathbf{1}\mathbf{v}^{\top}\otimes I\right)\hat{ \mathbf{x}}_{n,0}\] the estimated state and the error covariance matrix become \[\hat{\mathbf{x}}_{n,\ell+1} =v_{1}\hat{\mathbf{x}}_{n,0}^{1}+v_{2}\hat{\mathbf{x}}_{n,0}^{2}+ \cdots+v_{k}\hat{\mathbf{x}}_{n,0}^{k}=\hat{\mathbf{x}}_{n}^{*}\] \[\mathbf{P}_{n,\ell+1} =v_{1}\mathbf{P}_{n,0}^{1}+v_{2}\mathbf{P}_{n,0}^{2}+\cdots+v_{k} \mathbf{P}_{n,0}^{k}=\mathbf{P}_{n}^{*}\] Furthermore, the algorithm of the weighted-average consensus, from Eq.(\(\beta_{1}\)) to Eq.(\(\beta_{5}\)), is written below as the extension of the standard UKF, 1. For every \(i\in\mathcal{N}\), collect the information \(\hat{\mathbf{y}}_{n}^{(i)}\) and find \[\hat{\mathbf{x}}_{n}^{i} =\hat{\mathbf{x}}_{n}+\mathbf{W}_{n}z_{n}\longrightarrow z_{n}= \mathbf{y}_{n}-\hat{\mathbf{y}}_{n}^{(i)}\] ( \[\beta_{1}\] ) \[\mathbf{P}_{n}^{i} =\hat{\mathbf{P}}_{n}-\mathbf{W}_{n}\hat{\mathbf{S}}_{n}\mathbf{W} _{n}^{\top}\] ( \[\beta_{2}\] ) 2. Initialize \(\hat{\mathbf{x}}_{n,0}^{i}=\hat{\mathbf{x}}_{n}^{i}\) and \(\mathbf{P}_{n,0}^{i}=\mathbf{P}_{n}^{i}\) 3. For \(\ell=0,1,\ldots,l\), apply the method of the weighted-average consensus, such that: 1. Broadcast the coupled node data \(\hat{\mathbf{x}}_{n,\ell}^{i}\) and \(\mathbf{P}_{n,\ell}^{i}\) to the surrounding neighborhood \(j\in\mathcal{N}_{i}\setminus(i)\) 2. Acquire the information \(\hat{\mathbf{x}}_{n,\ell}^{j}\) and \(\mathbf{P}_{n,\ell}^{j}\) from the whole neighborhood \(j\in\mathcal{N}_{i}\setminus(i)\) 3. Fuse the coupled data \(\hat{\mathbf{x}}_{n,\ell}^{j}\) and \(\mathbf{P}_{n,\ell}^{j}\) based on \[\left(\hat{\mathbf{x}}_{n,\ell+1}^{i},\mathbf{P}_{n,\ell+1}^{i}\right)=\sum_{j \in\mathcal{N}_{i}}\pi^{(ij)}\left(\hat{\mathbf{x}}_{n,\ell}^{j},\mathbf{P}_{n,\ell}^{j}\right)\] ( \[\beta_{3}\] ) 4. Set the estimated state as \[\hat{\mathbf{x}}_{n}^{i}=\hat{\mathbf{x}}_{n,l}^{i}\quad\text{and}\quad \mathbf{P}_{n}^{i}=\mathbf{P}_{n,l}^{i}\] ( \[\beta_{4}\] ) 5. Perform the update of the predicted estimate, therefore \[\hat{\mathbf{x}}_{a,n}^{+} =\sum_{i=0}^{2L}W^{(i)}\hat{\chi}_{a,n}^{(i)}\] ( \[\beta_{5}\] )
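One consensus cycle of Eq.(31)/(\(\beta_{3}\)) can be sketched as follows (illustrative Python; the ring topology and the weights \(\pi^{(ij)}\) are assumptions made only for the example):

```python
import numpy as np

def consensus_step(x_hat, P, Pi):
    """One cycle of Eq. (31): node i averages neighbor pairs with weights Pi[i, j]."""
    # x_hat: (N, L) stacked node estimates; P: (N, L, L) covariances;
    # Pi: (N, N) row-stochastic weight matrix (rows sum to 1)
    x_next = Pi @ x_hat
    P_next = np.einsum('ij,jkl->ikl', Pi, P)
    return x_next, P_next

# Example: 4 nodes on a ring with self-loops (a primitive weight matrix)
N, L = 4, 3
Pi = 0.5 * np.eye(N) + 0.25 * (np.roll(np.eye(N), 1, axis=1)
                               + np.roll(np.eye(N), -1, axis=1))
x_hat = np.random.randn(N, L)
P = np.stack([np.eye(L)] * N)

for _ in range(50):                    # l consensus cycles
    x_hat, P = consensus_step(x_hat, P, Pi)
print(np.ptp(x_hat, axis=0))           # node-to-node spread -> ~0, i.e. x*
```

After enough cycles every node holds (numerically) the same pair \((\hat{\mathbf{x}}_{n}^{\star},\mathbf{P}_{n}^{\star})\), which is the agreement property stated above.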
The theoretical feasibility, namely the stochastic boundedness of the estimation error, is well explained with an in-depth mathematical proof in [29]. ## VI Numerical Designs and Findings Having discussed the various dynamical systems denoted as \(P(s)\), along with some control scenarios \(C(s)\) and their stability, the network is portrayed as in Fig.(9), considering the process noise \(w\), the measurement noise \(v\), and the disturbance \(d\). These control scenarios range from the classical perspective (PID, LQR, etc.) to modern actor-critic reinforcement learning and other dynamic control, to match the condition of the system. Furthermore, the modified UKF estimation \(U(s)\) is then applied to reach the hidden states of the systems and recover the information leading to the sensorless design. This estimation is then compared to other centralized and distributed filtering, as explained in the following. The whole dynamics of the tested systems are constructed according to the respective plants \(\Phi_{n}(s),\forall n=1\to 4\), as shown in Fig.(6). As for the first, \(\Phi_{1}\), the mass \(m\) and the damping parameter \(b\) are set to 1000 kg and 50 N.s/m in turn, while the nominal control \(u\) is 500 N with a dynamic reference \(v_{r}\) of 10 m/s and 7 m/s. The second plant, \(\Phi_{2}\), comes with a quarter-body mass \(M_{1}\) of 2500 kg and a suspension mass \(M_{2}\) of 320 kg, whereas the spring parameters with respect to the system, \(k_{1}\), and the wheel, \(k_{2}\), are 80,000 N/m and 500,000 N/m in turn, while the damping of the same respective terms, \(b_{1}\) and \(b_{2}\), amounts to 350 N.s/m and 15,020 N.s/m, respectively. Regarding the third plant, \(\Phi_{3}\), according to Eq.(15), the values of \(c_{k},\forall k=1\to 7\) constitute \(-0.313\), \(56.7\), \(0.232\), \(-0.0139\), \(-0.426\), \(0.0203\), and \(56.7\), while the fourth, \(\Phi_{4}\), with speed \(\dot{\theta}\) and position \(\theta\), comprises the following variables. The moment of inertia \(J\) is 0.01 kg.m\({}^{2}\), with a friction parameter \(b\) of 0.1 N.m.s, the same gain \(\kappa\) of 0.01, a system resistance \(R\) equal to 1 Ohm, and an inductance \(L\) of 0.5 H. Regarding the parameters of the proposed estimation methods, the time sampling (\(t_{s}\)) and the covariance matrices \(Q\) and \(R\) are

| Plant | \(t_{s}\) | \(R\) | \(Q\) |
| --- | --- | --- | --- |
| \(\Phi_{1}\) | 0.01 | 0.5 | 0.1 |
| \(\Phi_{2}\) | 0.0005 | 0.05 | 10 |
| \(\Phi_{3}\) | 0.01 | 1 | \(pH^{\top}H\) |
| \(\Phi_{4}\) | 0.01 | 1 | \(pH^{\top}H\) |

where the measurement noise \(v\) of every system \(\Phi_{n}(s)\) is formulated based on the output matrix \(H\), with the standard distribution \(\mathcal{R}_{u}\in(0,1)\) (zero mean, unit variance), by \[v=\sqrt{R}\times\text{randn}[\text{size}(H,1),1]\] where (randn) denotes the normally distributed GRV, while size\((\epsilon_{1},\epsilon_{2})\) denotes either the row or the column dimension, \(\epsilon_{2}\in(1,2)\) in turn, of the matrix \(\epsilon_{1}\). The arbitrary constant \(p_{0}\) solving the algebraic Riccati equation (ARE) equals 50, with eye(\(\bullet\)) and discrete(\(\bullet\)) denoting the identity matrix of length (\(\bullet\)), known as the maximum size or dimension, and the discretized system of (\(\bullet\)), such that \[\textbf{P}_{0}=p_{0}\times\text{eye}(F)\longrightarrow F=\text{discrete}(A)\] Fig. 9: Block diagram of the system.
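The noise and initialization formulas above translate directly into, e.g., numpy (a hypothetical rendering of the MATLAB-style expressions, shown for the motor plant):

```python
import numpy as np

# Measurement noise v = sqrt(R) * randn(size(H, 1), 1)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])      # output matrix of Eq. (21), two outputs
R = 1.0                              # Phi_4 row of the parameter table
v = np.sqrt(R) * np.random.randn(H.shape[0], 1)

# Initial covariance P0 = p0 * eye(F), with F the discretized system matrix
p0, n_states = 50.0, 3               # length of F for Phi_4
P0 = p0 * np.eye(n_states)
print(v.ravel(), P0[0, 0])
```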
The performance of the estimation is assessed via the error of the estimated states as opposed to the true, noise-free values of the systems, \[e=x_{u}-x_{t}\] Fig. 10: (a) The performance results of the speed (\(x=v\)) using plant \(\Phi_{1}\): the measured (\(y\)), the proposed estimated (\(x_{u}\)), and the true (\(x_{t}\)) states according to the reference (\(r\)); (b) the error comparisons of the measured and the estimated states over the true values. Fig. 11: (a), (b) The performance results of (\(x_{3}=\varphi\)) using plant \(\Phi_{2}\): the proposed estimated (\(x_{u}\)) and the true (\(x_{t}\)) states with some disturbance (\(\gamma\)); (c) the closed-loop response of the states (\(x_{1},x_{2},x_{4},x_{5}\)); and (d) the open-loop response if \(\gamma\) is applied to the system. Regarding the performance of plant \(\Phi_{1}\), the desired references over 50 s are situated at two different speed values, and the control along with the estimated values copes with the changes, as depicted in Fig.(10a), while the performance errors, shown in Fig.(10b), converge to zero. As for the second plant, \(\Phi_{2}\), the dynamics of the disturbance-affected system, in closed loop and open loop, are discussed, demonstrating the capability of handling the external forces in under 5 seconds. The values \(x_{3}\to y_{1}\) of the true and the estimated states run in parallel, with a slight difference at the peaks and troughs, as illustrated in Fig.(11a), while the closed loop in Fig.(11b) also performs almost on par with the preceding dynamics. Fig.(11c) and Fig.(11d) constitute the stabilized and the uncontrolled performance of the system, respectively. Furthermore, the third dynamics, \(\Phi_{3}\), describes the control of the pitch angle \(\theta\) of an aircraft, built with two divergent reference points, 0.2 and 0.5, within 7 seconds. In terms of the control scenario, the system can trace the reference, while for the estimated states, the controlled state \(x_{3}\) in Fig.(12a) and the free state \(x_{1}\) in Fig.(12b) are well covered by the proposed algorithm as opposed to the true states. Likewise, the rotor dynamics \(\Phi_{4}\) for the speed variable \(\dot{\theta}\), with two different reference scenarios in Fig.(13a) and Fig.(13b), are handled by the control design and well estimated by the weighted-average consensus method. Finally, the position-type variable \(\theta\) of \(\Phi_{4}\) exhibits the same trends with respect to the comparison of the measured, the estimated, and the true states under various dynamics, as portrayed in Fig.(14). To conclude, the control designs of the various plants \(\Phi_{n},\forall n=1\to 4\) successfully capture the dynamics of the systems, while the proposed algorithm effectively tracks the true values under some disturbances; this also leads to the sensorless designs of future work. The performance is affected solely by the covariance matrix pairs \((Q,R)\) and the measurements.
Beyond that, the performance of the estimation is also weighed against reviews of distributed estimation [48, 49, 50], along with various applications to nonlinear uncertain interconnected systems [51, 52, 53, 54, 55, 56] and Pareto optimization [57]. The presented results from arbitrary initial conditions (\(x_{0}\)) highlight, in terms of estimation errors, a convergence almost identical to the centralized Kalman filter, differing only in noise scalability, as compared to the distributed estimation studied in [23, 49, 50]. For more severely faulty, unobserved systems, a mechanism to reliably detect the states is highly required, while for large-scale systems, the use of purely local measurements is also possible, as opposed to local measurements together with those of the neighborhood. ## VII Conclusions The mathematical dynamics of the vehicle systems, along with their graphs, have been constructively designed under some disturbance as the object of the performance results. The control scenarios for the tested plants \(\Phi_{n}\), together with the stability analysis, have also been discussed comprehensively to check the observability of the systems. The standard UKF and the proposed weighted-average consensus estimation method are presented, taking neighborhood events into account as locally collected information to obtain more accurate state estimates. The results of the designs confirm the effectiveness of the control designs and of the proposed estimation algorithm in tracking the hidden states. For further research, the sensorless design for vehicle dynamics will be elaborated, leading to autonomous concepts together with fault-tolerant learning control to negate failed systems should faults occur. ## Acknowledgment This research was supported by funding granted by the Engineering Physics Department of Institut Teknologi Sepuluh Nopember (ITS), Indonesia, under contract number 1868/PKS/ITS/2022, May 24, 2022. We thank our colleagues for their ideas, dedication, and time on the final paper. Fig. 12: The performance results of the pitch angle (\(x_{3}=\theta\)) using plant \(\Phi_{3}\): the measured (\(y\)), the proposed estimated (\(x_{u}\)), and the true (\(x_{t}\)) states according to the reference (\(r\)), along with the free-state
2304.03009
Linear, Quasi-Linear and Nonlinear Radial Transport in the Earth's Radiation Belts
Observational studies of the Earth's radiation belts indicate that Alfv\'enic fluctuations in the frequency range of 2-25 mHz accelerate magnetically trapped electrons to relativistic energies. For decades, statistical models of the Earth's radiation belts have quantified the impact of Alfv\'enic waves in terms of quasi-linear diffusive models. However, quasi-linear models are inadequate to quantify Alfv\'enic radial transport occurring on timescales comparable to the azimuthal drift period of $0.1- 10$ MeV electrons. With recent advances in observational methodologies offering spatial and temporal coverage of the Earth's radiation belts on fast timescales, a theoretical framework that distinguishes between fast and diffusive radial transport can also be tested for the first time with in situ measurements. In this report, we present a drift kinetic description of radial transport for planetary radiation belts. We characterize linear processes that are too fast to be modelled by quasi-linear models and determine the conditions under which nonlinearities become dynamically significant. In the linear regime, wave-particle interactions are categorized in terms of resonant and non-resonant responses. We demonstrate that the phenomenon of zebra stripes is non-resonant and can originate from the injection of particles in the inner radiation belts. We derive a radial diffusion coefficient for a field model that satisfies Faraday's law and that contains two terms: one scaling as $L^{10}$ independent of the azimuthal number $m$, and a second one scaling as $m^2 L^6$. In the nonlinear regime, we show that azimuthally symmetric waves with properties consistent with in situ measurements can energize 10-100 keV electrons in less than a drift period. This coherent process provides new evidence that acceleration by Alfv\'enic waves in radiation belts cannot be fully contained within diffusive models.
Adnane Osmane, Emilia Kilpua, Harriet George, Oliver Allanson, Milla Kalliokoski
2023-04-06T11:53:32Z
http://arxiv.org/abs/2304.03009v2
# Linear, Quasi-Linear and Nonlinear Radial Transport in the Earth's Radiation Belts ###### Abstract Observational studies of the Earth's radiation belts indicate that Alfvenic fluctuations in the frequency range of 2-25 mHz accelerate magnetically trapped electrons to relativistic energies. For decades, statistical models of the Earth's radiation belts have quantified the impact of Alfvenic waves in terms of quasi-linear diffusive models. However, quasi-linear models are inadequate to quantify Alfvenic radial transport occurring on timescales comparable to the azimuthal drift period of \(0.1-10\) MeV electrons. With recent advances in observational methodologies offering spatial and temporal coverage of the Earth's radiation belts on fast timescales, a theoretical framework that distinguishes between fast and diffusive radial transport can also be tested for the first time with in situ measurements. In this report, we present a drift kinetic description of radial transport for planetary radiation belts. We characterize linear processes that are too fast to be modelled by quasi-linear models and determine the conditions under which nonlinearities become dynamically significant. In the linear regime, wave-particle interactions are categorized in terms of resonant and non-resonant responses. We demonstrate that the phenomenon of zebra stripes is non-resonant and can originate from the injection of particles in the inner radiation belts. We derive a radial diffusion coefficient for a field model that satisfies Faraday's law and that contains two terms: one scaling as \(L^{10}\) independent of the azimuthal number \(m\), and a second one scaling as \(m^{2}L^{6}\). In the nonlinear regime, we show that azimuthally symmetric waves with properties consistent with in situ measurements can energize 10-100 keV electrons in less than a drift period. This coherent process provides new evidence that acceleration by Alfvenic waves in radiation belts cannot be fully contained within diffusive models. _Keywords:_ Van Allen radiation belts (1758); Plasma physics (2089); Plasma astrophysics (1261); Alfven waves (23); Solar-terrestrial interactions(1473) ###### Contents * 1 Introduction * 1.1 Motivation and background * 1.2 Benefits of quasi-linear models in the Earth's radiation belts * 1.3 On the need for a new theoretical framework of radial transport * 1.4 Next generation of radial transport models for radiation belts * 1.5 Summary of main results * 2 Methodology * 2.1 Drift kinetic * 2.2 Review of electromagnetic fields used for radial diffusion models * 2.2.1 Mead field * 2.2.2 Asymmetric background field * 3 Linear, quasi-linear and nonlinear limits of radial transport * 3.1 Multiscale dynamics & separation between slow and fast variables * 3.2 Linear theory and radial transport on fast timescales * 3.2.1 Ballistic solution and the formation of zebra stripes * 3.2.2 Solution to the linear wave-particle interaction * 3.3 Quasi-linear theory of radial diffusion * 3.4 Beyond a quasi-linear theory of radial transport: nonlinear regime * 3.4.1 Criteria to determine when nonlinear radial transport becomes significant * 3.4.2 Nonlinear impact of symmetric perturbations on fast timescales * 4 Discussion * 4.1 When can we use quasi-linear radial diffusion? 
* 4.2 Fast radial transport * 4.2.1 Distinguishing between drift resonant and non resonant interactions * 4.2.2 Mechanisms for zebra stripes formation * 4.3 Nonlinear Parker mechanism * 5 Conclusion * A Derivation of the Quasi-Linear Equation * B Derivation of the nonlinear perturbed equation (32) * C Justification for neglecting the temporal variation of the background distribution in the linear response (32) ## 1 Introduction ### 1.1 Motivation and background Radiation belts are torus-shaped plasma environments confined by planetary magnetic fields. Due to porous boundaries and energy-momentum deposition from the solar wind, the Earth's radiation belts are continuously driven away from a state of local thermodynamical equilibrium (LTE). With very low particle densities1 and mean free times between collisions of the order of several months to a few years, the Earth's radiation belts are weakly collisional but respond rapidly to departure from LTE by sustaining a wide range of plasma instabilities that mimic collisions and thermalise the plasma. The plasma instabilities result in a broad spectrum of fluctuations that accelerate particles to relativistic energies on timescales of a few hours to a few days. With electron energies spanning almost seven orders of magnitude, and reaching as high as several MeV, the Earth's radiation belts are the closest natural laboratory in which charged particles are accelerated close to the speed of light (Roederer & Zhang, 2014). Footnote 1: The thermal component of the electrons has particle densities of the order of \(n\leq 1\) cm\({}^{-3}\). The warmer electron populations of tens and hundreds of keV are much more dilute with densities several orders of magnitude lower. From a fundamental physics perspective, it is an observational fact that planetary radiation belts and a plethora of astrophysical plasma environments are efficient particle accelerators. The Earth's radiation belts constitute the most accessible environment to perform detailed _in situ_ studies relevant to a wide range of fundamental physics problems, such as cosmic rays' acceleration (Cronin, 1999), upper and middle atmosphere climatology (Turunen et al., 2009), and even the microphysics of accretion disks (Quataert & Gruzinov, 1999; Sironi & Narayan, 2015). With electron to magnetic pressure ratio (\(\beta_{e}=2\mu_{0}n_{e}k_{B}T_{e}/B^{2}\simeq 0.1-0.01\)) and relativistic electron energies (\(\gamma m_{e}c^{2}\simeq 10\) MeV) in accretion disks comparable to the Earth's radiation belts (\(\beta_{e}\simeq 10^{-3}-10^{-1}\) & \(\gamma m_{e}c^{2}\simeq 1-10\) MeV), kinetic plasma physics near black holes (but far from the event horizon) lies at our doorstep! From an applied physics perspective, and due to their high energies and confinement location around geostationary orbits, radiation belts' particles constitute a threat to satellites orbiting the Earth, and are therefore a research focus for communication and military industries. Driven by fundamental scientific questions and risk mitigation to communication infrastructures, radiation belts' research aims to quantify the acceleration, confinement, and loss processes of energetic electrons (Cannon, 2013; Horne et al., 2018; Hands et al., 2018).
After more than 60 years of research following the discovery of the Earth's radiation belts (Van Allen et al., 1958), plasma physicists have identified two dominant mechanisms responsible for the transport and acceleration of charged particles: 1) _spatially localised_ wave-particle interactions driven by small-scale kinetic fluctuations (Thorne, 2010), and 2) _large-scale_ electromagnetic fluctuations induced by global magnetospheric currents and encompassed under the formalism of radial diffusion (Lejosne & Kollmann, 2020). Both mechanisms can be understood in terms of adiabatic invariants' theory in nearly periodic Hamiltonian systems (Cary & Brizard, 2009). In the absence of collisions, the motion of magnetically trapped electrons can be decomposed fully in terms of three separate motions with very distinct timescales: 1. Larmor motion around the local magnetic field (\(\Omega\simeq 1-10\) kHz), 2. The bounce motion between magnetic mirror points (\(\omega_{b}\simeq 0.1-1\) Hz), 3. The azimuthal drift around the Earth's midplane (\(\Omega_{d}\simeq 0.1-1\) mHz). In order to break one of the three periodic motions, a wave with a frequency comparable to one of the periodic motions has to interact with the particles. Since the Earth's radiation belts sustain broadband fluctuations with frequencies ranging between \(10^{-4}\) Hz and \(10^{4}\) Hz (Murphy et al., 2020), all three invariants can repeatedly be violated. Small-scale kinetic fluctuations accelerate electrons if one of the first two adiabatic invariants \(\mu=E_{k\perp}/B\) and \(\mathcal{J}=\int p_{\parallel}\;ds_{\parallel}\), defined in terms of the perpendicular kinetic energy \(E_{k\perp}=|\mathbf{p}_{\perp}|^{2}/m\), the local magnetic field amplitude \(B\), and the relativistic momentum along the local mean field \(p_{\parallel}=\mathbf{p}\cdot\mathbf{B}/B=m\gamma v_{\parallel}\), are violated. On the other hand, the second dominant mechanism, radial diffusion, originates in large-scale Alfvenic waves in the Pc4 (\(\omega\sim 8-25\) mHz) and Pc5 (\(\omega\sim 2-7\) mHz) range that violate the third adiabatic invariant, i.e. the magnetic flux \(\Phi=\int\mathbf{B}\cdot d\mathbf{A}\) (Kulsrud, 2005; Roederer & Zhang, 2014). In a dipole magnetic field the inverse of the magnetic flux can be expressed more simply as the normalised radial distance in the midplane \(L=r/R_{E}\), in which \(R_{E}\) is the Earth's radius2. Consequently, a collection of particles drift-resonant with Alfvenic fluctuations in the Pc5 range experiences scattering along the radial distance. This scattering can be modelled statistically in terms of a Fokker-Planck equation and its observational signature is a diffusive flattening of the distribution function along the radial distance \(L^{*}\). With the first and second adiabatic invariant conserved, particles carried closer to Earth gain energy through a betatron process (Kulsrud, 2005) as they sample a larger magnetic field, whereas particles diffusing to higher radial distances sample a weaker magnetic field, lose energy, and experience greater likelihood for losses at the outer magnetopause boundary (Turner et al., 2012; George et al., 2022). Footnote 2: It should be kept in mind that when the background dipole magnetic field is deformed on long timescales compared to the drift period, the third adiabatic invariant does not map into the normalised radial distance. The background magnetic field model used in this communication is dipolar and the third adiabatic invariant can be interpreted as the radial distance.
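As an order-of-magnitude illustration (a sketch, not taken from the paper: the standard equatorial dipole drift-frequency estimate \(\Omega_{d}\simeq 3\gamma m v^{2}L/2qB_{E}R_{E}^{2}\) is assumed for electrons mirroring in the midplane):

```python
import numpy as np

# Azimuthal drift frequency of equatorially mirroring electrons in a dipole field,
# Omega_d ~ 3*gamma*m*v^2*L / (2*q*B_E*R_E^2); textbook estimate, illustrative values.
q, m_e, c = 1.602e-19, 9.109e-31, 2.998e8
B_E, R_E = 3.1e-5, 6.37e6            # equatorial surface field [T], Earth radius [m]

def drift_freq_mHz(E_MeV, L):
    E0 = m_e * c**2 / (q * 1e6)      # electron rest energy ~ 0.511 MeV
    gamma = 1.0 + E_MeV / E0
    beta2 = 1.0 - 1.0 / gamma**2     # (v/c)^2
    omega_d = 3.0 * gamma * m_e * beta2 * c**2 * L / (2.0 * q * B_E * R_E**2)
    return 1e3 * omega_d / (2.0 * np.pi)

for E, L in [(0.4, 8.0), (1.0, 5.0), (4.0, 8.0)]:
    f = drift_freq_mHz(E, L)
    print(f"E = {E} MeV, L = {L}: f_d ~ {f:.2f} mHz, "
          f"drift period ~ {1e3 / f / 60:.1f} min")
```

The resulting drift periods (roughly 18 minutes for 400 keV and 2.5 minutes for 4 MeV electrons at \(L=8\)) reproduce the values quoted in the caption of Figure 2 below, and show that drift resonance with the Pc5 band (2-7 mHz) singles out multi-MeV electrons.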
Similarly, violation of the first and second adiabatic invariants for a collection of particles is also modelled in terms of Fokker-Planck equations (Lichtenberg & Lieberman, 1983). Contrary to radial diffusion, scattering associated with the first two invariants results in a localised enhancement along the radial distance. From an observational perspective it has therefore been possible to infer which acceleration mechanism is dominant by computing from satellite data the distribution function in terms of the three adiabatic invariants, i.e. \(f(\mu,\mathcal{J},L^{*})\) (Green & Kivelson, 2004). As shown in Figure (1), if radial diffusion dominates, the distribution function results in a flattening along the radial distance, but if small-scale waves are primary drivers, localised enhancements along the radial distance should be observed. Contemporary observational and modelling studies of the radiation belts rely on this conceptual framework to determine which of the two mechanisms dominate _on timescales of hours to several days_ (Chen et al., 2007; Reeves et al., 2013; Jaynes et al., 2018). ### 1.2 Benefits of quasi-linear models in the Earth's radiation belts The theoretical framework to quantify and interpret the dynamical evolution of radiation belts on timescales of a few hours to several days relies exclusively on quasi-linear theories (Kennel & Engelmann, 1966; Falthammar, 1965; Diamond et al., 2010; Brizard and Chan, 2022). The overwhelming reliance on quasi-linear models in radiation belts' research is not fortuitous as it offers two benefits alternative computational and theoretical approaches lack: 1. **Computationally inexpensive reduced models** The full particle motion requires a 7-dimensional description (three adiabatic invariants with three associated phases plus time). Since energetic electrons span four orders of magnitude in energy, and more than six orders of magnitude in time and space, reduced statistical models are necessary to account for geomagnetic storms occurring on timescales of at least a few hours. Quasi-linear models for small-scale wave-particle interactions (Summers, 2005; Shprits et al., 2006) and radial diffusion (Lejosne and Kollmann, 2020) take the form of Fokker-Planck equations that are computationally inexpensive and can be easily implemented in global magnetospheric models. 2. **Generalizability** With sparse measurements of electric and magnetic fields responsible for violation of the three adiabatic invariants, quasi-linear models encode the wave-particle interactions in diffusion coefficients that have simple algebraic forms. For instance, radial diffusion coefficients are amenable to parametrisation in terms of ground magnetometers' measurements (Brautigam and Albert, 2000) that are correlated with fluctuations that dynamically drive the radiation belts. Current quasi-linear models can therefore be generalized to periods when _in situ_ measurements are unavailable. Figure 1: Illustration of the conceptual frameworks for the acceleration of charged particles in planetary radiation belts. Following injection of particles at \(t=0\) (darker shaded region) and the generation of plasma instabilities, the phase-space density will be deformed. On the left panel, Alfvénic fluctuations drive radial diffusion and a flattening of the phase-space density along the equatorial radial distance (Lejosne and Kollmann, 2020).
Particles scattered to lower radial distance sample a larger magnetic field, and gain energy through a betatron process. In comparison, the signature of small-scale fluctuations consists in a localised enhancement along the radial distance (Green and Kivelson, 2004), as shown on the right panel. The radial shift of the peak in the right panel illustrates that violation of the first and/or second adiabatic invariant results in a change in the third adiabatic invariant as well (Ozturk and Wolf, 2007; O’Brien, 2014; Desai et al., 2021). Both frameworks are expressed in terms of Fokker-Planck equations. Transport by Pc4 and Pc5 Alfvénic waves is encoded in a radial diffusion coefficient \(D_{LL}\). Transport by small-scale interactions is encoded in an energy diffusion coefficient \(D_{EE}\) (Summers, 2005; Shprits et al., 2006). Quasi-linear model comparisons with data yield, in several events, accurate estimates of electron fluxes (Reeves et al., 2013; Thorne et al., 2013; Jaynes et al., 2015). However, dominance of quasi-linear models also stems from the fact that building statistical models that depart from quasi-linear assumptions is an outstanding theoretical challenge, since it falls into the class of multi-scale non-linear problems (Dupree, 1966; Orszag & Kraichnan, 1967; Dupree, 1972; Schekochihin et al., 2008; Diamond et al., 2010; Davidson, 2012)3. Moreover, a multi-point satellite methodology that can quantify the evolution of energetic particle fluxes on timescales comparable to a drift period has only recently been developed with the availability of 140 keV-4 MeV electron GPS fluxes calibrated with the Van Allen Probes (Morley et al., 2016; Kalliokoski et al., 2023). GPS instruments combined with the Van Allen Probes offer, for the first time, an unprecedentedly large number of measurement points, providing a broader spatial coverage of the radiation belts and a better temporal resolution in terms of drift-shells. Energetic electron fluxes inferred from GPS electron counts and calibrated against MagEIS and REPT instruments onboard the Van Allen Probes (Morley et al., 2017) can be used to quantify processes that are too fast to be quantified by radial diffusion. Thus, probing radiation belts' processes on timescales of the drift period is now observationally possible, yet statistical models that quantify the impact of Pc4 and Pc5 waves on fast timescales comparable to the drift period are still missing. Footnote 3: Studies of nonlinear multi-scale problems in kinetic plasma physics have a long history but only recently have we gained sufficient computational power to address them in plasma fusion and astrophysical environments (Schekochihin et al., 2016; Adkins & Schekochihin, 2018; Kawazura et al., 2019; Meyrand et al., 2019). New tools for the radiation belts that can complement and supersede quasi-linear models would have to provide the benefits listed above in order to be incorporated in global models. In this communication we provide the theoretical framework to address the limitation of radial diffusion models and extend radial transport beyond a quasi-linear description. But before doing so, we describe the limits of quasi-linear theory and how it constrains interpretation of radiation belts' observational studies.
### 1.3 On the need for a new theoretical framework of radial transport Quasi-linear models in the radiation belts are mean field theories that assume that the _average_ interaction of electrons with small-amplitude waves will describe accurately the long timescale evolution of the particles and that nonlinearities arising due to mode-mode coupling or particle orbits can be neglected. Quasi-linear models in the radiation belts therefore contain the following inherent constraints4: Footnote 4: Current radial diffusion models also assume that the fluctuations are statistically homogeneous in space. This assumption is known from observations in the radiation belts to be incorrect (Murphy et al., 2020; Sandhu et al., 2021), but can nonetheless be modified under a quasi-linear framework so we have not included it as a limitation inherent to radial diffusion models. 1. **Scale separation between fast and diffusive timescales** In quasi-linear models the cumulative effect of many waves on the distribution functions is slow and diffusive (Vanden Eijnden, 1997). This slow timescale for diffusion is contrasted with the fast timescales associated with a single encounter/transit time of a wave with the particles. When the timescale for diffusion becomes comparable to the transit time for the wave-particle interactions the quasi-linear hierarchy breaks down (Kennel & Engelmann, 1966). 2. **Absence of nonlinear processes** The fast response of the distribution function is assumed to be unperturbed and nonlinear processes such as particle trapping (Bernstein et al., 1957; Artemyev et al., 2012; Osmane et al., 2016) or mode-mode coupling (Schekochihin et al., 2016; Adkins & Schekochihin, 2018) are ignored. On the basis of the first constraint, the slow diffusion expressed in terms of a Fokker-Planck equation cannot be used to describe particle acceleration on fast timescales comparable to a single interaction or transit time. Nonetheless, current diffusion coefficients used for radial transport become sufficiently large during high geomagnetic activity (Brautigam & Albert, 2000; Ozeke et al., 2014; Sandhu et al., 2021) to result in violation of the scale separation quasi-linear constraint. For instance, Figure 4 of Ozeke et al. (2014) shows that the diffusion coefficient \(D_{LL}\) can be of the order of \(10^{2}-10^{3}\) days\({}^{-1}\) for Kp \(>5\). Consequently, the diffusion time for a particle to be carried across one drift shell \(\Delta L^{*}\) scales between \(\tau_{D}\simeq 15\) minutes and a few minutes. Similarly, the impact of radial transport on losses cannot be quantified in terms of quasi-linear models if particles are depleted on timescales comparable to or less than an azimuthal drift period. Olifer et al. (2018) shows through observations that fast losses on timescales as short as half an hour can take place during intense magnetic storms. Such transport timescales are inconsistent with a quasi-linear theory relying on a scale separation between fast and slow timescales, with the fast timescales comparable to azimuthal drift orbits of the order of tens of minutes to a few hours. The second constraint can be justified on the basis that large-amplitude fluctuations are statistically rare occurrences: an electron will be scattered hundreds of times by small-amplitude fluctuations before encountering a large-amplitude wave.
However, from a theoretical perspective, waves' amplitudes do not need to be very large for nonlinearities to become comparable to linear terms and for a quasi-linear theory to break down. This property of nonlinear systems is well known among astrophysical and fluid turbulence experts and underlies the assumption of critical balance in which the transit time becomes comparable to the nonlinear interaction time (Goldreich & Sridhar, 1995)5. Footnote 5: In critical balance the linear transit timescale (time it takes for an Alfvén wave packet to transit across another Alfvén wave packet) becomes comparable to the nonlinear interaction time. In the nonlinear radial transport problem, the transit timescale (time it takes for a magnetically trapped particle to transit/sample an Alfvén wave) becomes comparable to the time it takes for nonlinear effects to be felt. This is quantified in Section 3.4. Observational evidence and theoretical studies of fast and nonlinear processes at the heart of the Earth's radiation belts have become substantial in the last 15 years but are typically associated with electron-scale whistlers and chorus (Cattell et al., 2008; Cully et al., 2008; Bortnik et al., 2008; Albert et al., 2012; Mozer et al., 2013; Malaspina et al., 2014; Santolik et al., 2014; Artemyev et al., 2012, 2015; Agapitov et al., 2015; Osmane et al., 2016, 2017; Tao et al., 2020; Omura, 2021) and ion-scale EMIC waves (Hendry et al., 2019; Grach et al., 2022; Bortnik et al., 2022). With the exception of the numerical studies of Degeling et al. (2008) and Li et al. (2018), and extreme driving events such as the one reported by Kanekal et al. (2016), fast and nonlinear radial transport are rarely considered and have yet to be accounted for in global models. However, observational studies demonstrate the existence of large-amplitude fluctuations that can sustain radial transport. For instance, Hartinger et al. (2013) demonstrated that transient foreshock perturbations during moderate geomagnetic periods lead to the generation of ultra low frequency (ULF) electric and magnetic fields as high as 10 mV/m and 10 nT, respectively. A statistical study by Simms et al. (2018) and an information-theoretic analysis by Osmane et al. (2022) characterised the statistical dependence of energetic electron fluxes in the Earth's radiation belts on ULF wave power measured on the ground and at geostationary orbit. Both studies demonstrated that ULF wave power is nonlinearly coupled to energetic electron fluxes6. And as nonlinear effects become significant, the scale separation constraint of quasi-linear models also breaks down. In this communication, we present a theoretical framework to distinguish quasi-linear diffusion from fast linear and nonlinear processes. Footnote 6: Counterintuitively, energetic electrons with 100 keV were shown to possess the largest statistical dependency with ULF waves that should only resonate with relativistic electrons \(>1\) MeV. In Section 3.4.2 we provide a non-resonant mechanism, unaccounted for by quasi-linear radial diffusion, that can explain the results of Simms et al. (2018) and Osmane et al. (2022) as a result of ULF-driven impulsive acceleration of 100-400 keV electrons.
### 1.4 Next generation of radial transport models for radiation belts The physics of the Earth's radiation belts is nonlinear, high-dimensional and multi-scale, and it is not computationally possible to resolve energetic particle motion ranging from milliseconds to hours during geomagnetic storms that can last from several hours to a few days. Consequently, reduced statistical models relying on quasi-linear theories have been developed to predict the dynamical evolution of energetic electrons in terms of physical drivers (i.e. in the solar wind and the magnetosphere). With growing satellite measurements and coverage, we now know that large-amplitude Alfvenic fluctuations and fast processes occurring on timescales beyond the reach of quasi-linear radial diffusion are commonly observed in the radiation belts (Li et al., 1993; Turner et al., 2012; Hartinger et al., 2013; Kanekal et al., 2016; Olifer et al., 2018). The current modelling tools are therefore unable to quantify the impact of fast and/or nonlinear radial transport on the energetic electrons, and thus unable to distinguish it from small-scale wave-particle interactions. Figure (2) illustrates the spatial and temporal scales covered by radial diffusion in comparison to characteristic waves and particle motions. In order to characterise processes occurring on fast timescales we need to use a reduced statistical framework that accounts for variations during the drift motion. Drift kinetic models have been developed for decades, mostly for laboratory fusion plasma (Goldston & Rutherford, 1995; Parra & Catto, 2008), but they are an ideal starting point to quantify the impact of Pc4 and Pc5 ULF waves on energetic electrons, which belong to the long-wavelength (\(k\rho_{e}\ll 1\)) and low-frequency (\(\omega/\Omega_{e}\ll 1\)) limit. ### 1.5 Summary of main results * The choice of the magnetic field model to quantify radial transport is essential for radial transport models and needs to respect Maxwell's equations. If Faraday's equation is violated, we show that Liouville's theorem is also not respected, and thus phase-space density is not conserved. This result also has implications for test-particle experiments in global magnetospheric simulations (Tu et al., 2012). If Faraday's equation is not respected in the simulation box, the construction of the distribution function from the particle trajectories can violate Liouville's theorem. * The linear wave-particle response of the distribution function to a single Alfvenic ULF mode consists of three separate terms, two non-resonant processes and one resonant one: 1) a non-resonant modulation of the distribution function in terms of the ULF wave frequency \(\omega\), 2) a non-resonant modulation of the distribution function in terms of the particle's drift frequency \(\Omega_{d}\), known as drift echoes, and 3) a drift-resonant response in the instance where the frequency of the ULF wave corresponds to the drift frequency of the particle, i.e. \(\omega\simeq\Omega_{d}\). All three responses are a function of the radial gradient in the background distribution function, and the modulation in terms of the ULF wave frequency, sometimes interpreted as evidence of drift-resonance (Claudepierre et al., 2013), can also be the product of a non-resonant interaction. * The formation of zebra stripes does not require drift-resonant interactions, and can be the signature of injected particles in the inner belts in the absence of ULF waves and radial gradients of the distribution function.
We argue that the injection events reported by Zhao & Li (2013) provide all the necessary ingredients for the formation of zebra stripes.
* We derive from the drift kinetic equation a quasi-linear radial diffusion coefficient that consists of two terms. The first term is independent of the wave azimuthal number \(m\) and scales as \(L^{10}\), and the second term is a function of the azimuthal wave number and scales as \(L^{6}\). The diffusion coefficient accounts for electric and magnetic field fluctuations that respect Faraday's equation, and thus the separation of the diffusion coefficient into an electric and a magnetic \(D_{LL}\), as commonly used in the literature (Fei et al., 2006; Ozeke et al., 2014; Sandhu et al., 2021), is made redundant. Our derived diffusion coefficient can be computed on the basis of the magnetic field wave power alone.
* We provide criteria to determine the limit where nonlinear radial transport processes become significant on timescales comparable to the drift period. We demonstrate that when nonlinear effects are accounted for, symmetric and compressive ULF waves can accelerate electrons with energies of the order of 10 to a few hundred keV by convecting them inward. This process is a nonlinear generalisation of the mechanism presented by Parker (1960) and does not require drift-resonance.

## 2 Methodology

### Drift kinetic

In a strongly magnetized plasma, charged particle motion can be split into a fast gyration around the local magnetic field and the motion of its guiding centre. The Larmor motion is analytically solvable when the electric and magnetic fields, \(\mathbf{E}\) and \(\mathbf{B}\), respectively, are assumed constant in time and uniform in space. However, this solution can also be extended to more general electromagnetic fields that are approximately constant on time scales comparable to the Larmor period \(\Omega_{s}^{-1}=m_{s}/q_{s}B\) and spatial scales of the order of the Larmor radius \(\rho=v/\Omega_{s}\), where \(v\) is the characteristic speed of particles sampling the field, \(q_{s}\) is the charge, and \(m_{s}\) is the rest mass of a particle species (\(s=p\) for protons and \(e\) for electrons). We consider a system with characteristic scale size \(l\) and frequency \(\omega\sim v/l\). The time and spatial scales of the system are estimated from derivatives of the electromagnetic fields:

\[\nabla\mathbf{E}\sim\frac{\mathbf{E}}{l},\ \ \nabla\mathbf{B}\sim\frac{\mathbf{B}}{l},\ \ \frac{\partial\mathbf{E}}{\partial t}\sim\omega\mathbf{E},\ \ \frac{\partial\mathbf{B}}{\partial t}\sim\omega\mathbf{B}. \tag{1}\]

For a sufficiently strong background magnetic field, the small parameter \(\varepsilon\) can be defined as:

\[\varepsilon=\frac{\rho}{l}=\frac{mv}{qBl}\ll 1,\ \ \frac{\omega}{\Omega}=\frac{m\omega}{qB}\sim\varepsilon\ll 1. \tag{2}\]

In this limit the particle does not sense significant variations in the electromagnetic field during characteristic Larmor time and spatial scales. By choosing appropriate coordinates, the fast gyration around the guiding centre can be ignored and a kinetic theory for a collection of particles in a magnetised plasma can be constructed (Parra, 2019).

Figure 2: Spatial and temporal scales of electromagnetic fields and particle motion in the Earth's radiation belts, and their relation to theoretical limits. The Larmor motion, bounce mirroring motion and azimuthal drift motions are represented as turquoise ellipses. ULF waves ranging from 2-100 mHz are shown in shaded rectangles. The regime of validity of quasi-linear radial diffusion is shown in yellow and the regime covered by drift kinetics, which encompasses quasi-linear radial diffusion, is in gray. The left boundary of the quasi-linear regime is computed from the inverse of the radial diffusion coefficient obtained from Brautigam & Albert (2000) for \(L=8\) and Kp\(=6\), which corresponds to strong geomagnetic conditions. A \(D_{LL}\) at \(L=8\) and Kp\(=6\) indicates radial transport over one L-shell on a timescale of 30 minutes. For a \(>4\) MeV electron, a drift period is of the order of 3 minutes and radial diffusion over one drift shell after 10 azimuthal drift periods is very fast, but perhaps possible through quasi-linear diffusion. For lower energy electrons, e.g., 400 keV, a complete azimuthal drift is of the order of 20 minutes, and a diffusion over one drift shell in less than two azimuthal drifts is inconsistent with the quasi-linear assumption of small changes over fast timescales. It should therefore be kept in mind that the range of validity of quasi-linear radial diffusion becomes smaller for less energetic particles.

Put differently, starting from the Lorentz equation or Hamilton's equations to compute the particle motion for slowly varying electromagnetic fields, one can build a statistical description of particles confined by large-scale inhomogeneous magnetic fields (Goldston & Rutherford, 1995; Parra & Catto, 2008; Cary & Brizard, 2009; Hazeltine & Meiss, 2013). In the Earth's radiation belts, such a description is therefore appropriate for energetic electrons with Larmor periods \(\Omega_{e}^{-1}\sim 0.1-1\) ms, interacting with electromagnetic fluctuations in the Pc4 (\(\omega\sim 8-25\) mHz) and Pc5 (\(\omega\sim 2-7\) mHz) ultra-low frequency (ULF) range7.

Footnote 7: Terrestrial and planetary radiation belts also sustain high-frequency electromagnetic fluctuations with characteristic frequencies \(\omega\) comparable to the Larmor frequency \(\Omega_{s}\), e.g. the whistler-mode wave branch at Earth (ELF/VLF) (see Ukhorskiy & Sitnov (2012) for more detail). The drift-kinetic description relying on the small parameter ordering (2) can therefore not be generalised to wave-particle interactions with such modes and one needs to resort to a full Maxwell-Vlasov system (Kulsrud, 2005).

In this study, we use a kinetic theory of guiding centres known as drift kinetics to quantify the radial transport of energetic particles interacting with ULF fluctuations. Our starting point is the conservative drift kinetic equation derived recursively by Hazeltine (1973)8:

Footnote 8: A pedagogical step by step derivation of the Hazeltine (1973) results can be found in the lecture notes of Parra (2019). The notes are accessible on [http://www-thphys.physics.ox.ac.uk/people/FelixParra/CollisionlessPlasmaPhysics/CollisionlessPlasmaPhysics.html](http://www-thphys.physics.ox.ac.uk/people/FelixParra/CollisionlessPlasmaPhysics/CollisionlessPlasmaPhysics.html).
\[\frac{\partial}{\partial t}(B\langle f\rangle)+\nabla\cdot(B\dot{\mathbf{r}}\langle f\rangle)+\frac{\partial}{\partial v_{\parallel}}(B\dot{v}_{\parallel}\langle f\rangle)+\frac{\partial}{\partial\mu}(B\dot{\mu}\langle f\rangle)=0, \tag{3}\]

in terms of the gyro-averaged distribution function \(\langle f\rangle\) defined as

\[\langle f\rangle=\frac{1}{2\pi}\int_{0}^{2\pi}f(\mathbf{r},v_{\parallel},\mu,\theta_{g},t)d\theta_{g}, \tag{4}\]

the guiding-centre position vector \(\mathbf{r}\), parallel velocity \(v_{\parallel}\), gyrophase \(\theta_{g}\), and first adiabatic invariant \(\mu\),

\[\mu=\frac{1}{2}\frac{m_{e}c^{2}(\gamma^{2}-1)}{B}\sin^{2}(\alpha). \tag{5}\]

Equation (5) for \(\mu\) is written in terms of the pitch-angle \(\alpha=\tan^{-1}(v_{\perp}/v_{\parallel})\) and the relativistic Lorentz factor \(\gamma=(1-v^{2}/c^{2})^{-1/2}\) to account for the relativistic corrections that appear for particles with kinetic energies \(E_{c}=m_{e}c^{2}(\gamma-1)\) comparable to the electron rest mass \(m_{e}c^{2}=511\) keV9.

Footnote 9: In the Earth's radiation belts particles are injected at energies of the order of 1-100 keV, but are accelerated to energies comparable to the rest mass and as high as a few MeV (Turner et al., 2017). It is therefore crucial to keep track of the relativistic effects. In our particular problem, limited to equatorially trapped particles, the relativistic effects appear in the first adiabatic invariant, but an extension to non-equatorially trapped particles will require a relativistic representation of the drift kinetic equation in terms of the parallel momentum \(p_{\parallel}=m_{e}\gamma v_{\parallel}\).

The appearance of the magnetic field amplitude \(B\) in Equation (3) originates from the Jacobian when one transforms variables from \((\mathbf{r},\mathbf{v})\) to \((\mathbf{r},\mu,v_{\parallel},\theta_{g})\). In the absence of collisions, conservation of phase-space density for a collection of guiding centre particles requires that the following equation be respected:

\[\frac{\partial}{\partial t}(B)+\nabla\cdot(B\dot{\mathbf{r}})+\frac{\partial}{\partial v_{\parallel}}(B\dot{v}_{\parallel})+\frac{\partial}{\partial\mu}(B\dot{\mu})=0. \tag{6}\]

Equation (6) is a statement of Liouville's theorem, and is a function of the electromagnetic field model and of the guiding centre's particle trajectory. In open systems the impact of electromagnetic fluctuations will naturally lead to transport to the boundaries, and thus to irreversible losses. Terrestrial and planetary radiation belts are not closed systems, and their inner and outer boundaries allow for particle injection and losses (Millan and Thorne, 2007; Aryan et al., 2020; Walton et al., 2022). However, the wave-particle interactions with ULF waves, in the absence of boundary effects, have to conserve phase-space density. Equation (6) is therefore a different statement, independent of the presence of porous boundaries, and determines whether phase-space density, and thus the number of particles, is conserved in a closed phase-space volume. The choice of a field model that violates conservation of phase-space density is unphysical and necessarily results in erroneous quasi-linear diffusion coefficients. For instance, if a field model that does not conserve phase-space density is chosen, and boundary effects are added, the resulting losses would be either amplified or underestimated.
Liouville's theorem can therefore be used as a constraint for the electromagnetic fields, as shown in Section (2.2). The particle guiding-centre description in the \((\mathbf{r},v_{\parallel},\mu)\) phase-space, for a given problem, is a function of the strength of the electric field when compared with the magnetic force. If the characteristic speed of the particle is comparable to the \(E\times B\) drift, additional sources of perpendicular drifts can be ignored. For instance, in the collisionless MHD approximation, the perpendicular velocities of the ion and electron fluids are, to first order, comparable to the \(E\times B\) drift, and MHD fluid equations can be derived from the kinetic equation with the perpendicular velocity approximated by the \(E\times B\) drift (Hazeltine, 2018). However, if additional drifts are comparable in size to the \(E\times B\) drift, or if the characteristic speed of a particle population is much greater than the \(E\times B\) drift, the perpendicular velocities of ions and electrons will decouple, and additional drifts have to be taken into account. Hazeltine (1973) suggests two regimes to account for the ordering of the \(E\times B\) drift in a given problem: the high flow regime, with strong perpendicular electric fields \(|\mathbf{E}_{\perp}|\simeq vB\), and the low flow regime, with small electric fields, making the \(\mathbf{E}\times\mathbf{B}\) drift small compared to the characteristic speed of the particle. Thus, in the high flow regime, the perpendicular electric field can be comparable to the magnetic force, and the \(\mathbf{E}\times\mathbf{B}\) drift is the dominant drift. In the low flow ordering, the perpendicular electric field cannot balance the magnetic force, and since the \(\mathbf{E}\times\mathbf{B}\) drift is not dominant, additional magnetic drifts, such as the curvature drift and the magnetic gradient drift \(-\mu\nabla B\), have to be included. For an application to energetic electrons in the Earth's radiation belts, possessing kinetic energies ranging between hundreds of keV and a few MeV and interacting with ULF waves, the low flow regime is the correct limit since it accounts for the dominance of the magnetic gradient drift over the \(\mathbf{E}\times\mathbf{B}\) drift. Dominated by the magnetic gradient drift, energetic electrons in the Earth's radiation belts perform one complete azimuthal loop on timescales ranging from a few minutes, for MeV electrons, to a few hours, for 50 keV to a few hundred keV electrons. In comparison, the additional drifts present in Equation (7) are weaker on such timescales. However, we keep track of additional drifts since they are cumulatively responsible for irreversible transport of particles across drift shells on long timescales of several hours to a few days (Lejosne and Kollmann, 2020). In the low flow regime, the position evolves, to first order in the small parameter \(\varepsilon\), according to10:

Footnote 10: Terms of order \(\varepsilon^{2}\simeq(\rho/l)^{2}\) are neglected.

\[\mathbf{\dot{r}}=\left(v_{\parallel}+\frac{\mu}{q_{s}}\mathbf{b}\cdot\nabla\times\mathbf{b}\right)\mathbf{b}-\frac{\mathbf{E}\times\mathbf{b}}{B}+\frac{v_{\parallel}^{2}}{\Omega_{s}}\mathbf{b}\times(\mathbf{b}\cdot\nabla)\mathbf{b}+\frac{\mu}{q_{s}B}\mathbf{b}\times\nabla B, \tag{7}\]

in terms of the local magnetic field direction \(\mathbf{b}=\mathbf{B}/B\).
The five terms are, respectively, the velocity parallel to the magnetic field, the Banos parallel drift, the \(\mathbf{E}\times\mathbf{B}\) drift, the curvature drift and the magnetic gradient drift. Coupled with the particle's position, the evolution of the parallel velocity is given by

\[\dot{v}_{\parallel}=\left[\frac{q_{s}}{m_{s}}\mathbf{E}-\left(\frac{\mu+\tilde{\mu}}{m_{s}}\right)\nabla B\right]\cdot\mathbf{b}+\frac{v_{\parallel}}{\Omega_{s}}\left[\mathbf{b}\times(\mathbf{b}\cdot\nabla)\mathbf{b}\right]\cdot\left(\frac{q_{s}}{m_{s}}\mathbf{E}-\frac{\mu}{m_{s}}\nabla B\right)-v_{\parallel}\frac{\mu}{q_{s}}\mathbf{b}\cdot\nabla\left[\mathbf{b}\cdot\nabla\times\mathbf{b}\right], \tag{8}\]

in terms of the correction to the first adiabatic invariant

\[\tilde{\mu}=-(v_{\parallel}\mu/q_{s}B)\mathbf{b}\cdot\nabla\times\mathbf{b}. \tag{9}\]

The evolution equation for the first adiabatic invariant is given by

\[\dot{\mu}=-m_{s}v_{\parallel}\mathbf{b}\cdot\nabla\tilde{\mu}-(q_{s}\mathbf{b}\cdot\mathbf{E}-\mu\mathbf{b}\cdot\nabla B)\frac{\partial\tilde{\mu}}{\partial v_{\parallel}}. \tag{10}\]

Combining Equations (3), (7), (8) and (10) with a model of electromagnetic fields consistent with Liouville's theorem (Equation 6), one can quantify the evolution of the distribution function for a collection of energetic particles in planetary magnetospheres on timescales much shorter than quasi-linear times and therefore comparable to the azimuthal drift periods of magnetically confined particles. The drift kinetic approach therefore provides the foundation for a variety of models (linear, quasi-linear, nonlinear, with or without porous boundaries) to account for ULF radial transport of particles.

_A priori_ the set of drift-kinetic equations is nonlinear and therefore not easily tractable analytically. However, the equations can be simplified when energetic particles confined to the equator of the Earth's magnetosphere are studied. Equatorially trapped particles have pitch-angles \(\alpha=\tan^{-1}(v_{\perp}/v_{\parallel})\simeq\pi/2\) and thus \(v_{\parallel}=0\). Moreover, the absence of a ULF parallel electric field results in \(\dot{\mu}=0\), \(\dot{v}_{\parallel}=0\), and the conservative kinetic equation for the distribution function \(f(\mathbf{r},v_{\parallel}=0,\mu=\mu_{c})\), for a fixed magnetic moment \(\mu_{c}\), takes the simple form:

\[\frac{\partial}{\partial t}(B\langle f\rangle)+\nabla\cdot(B\mathbf{\dot{r}}\langle f\rangle)=0. \tag{11}\]

In the remaining part of this communication, we will use kinetic Equation (11) to describe equatorially trapped particles and leave the generalisation to non-equatorial particles (\(\alpha\neq\pi/2\)) for future work11. But before solving the kinetic equation we need to complement it with an electromagnetic field model.

Footnote 11: Since ULF waves propagate off the equatorial plane (Sarris et al., 2022), additional drifts have to be taken into account for non-equatorial particles.
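To make the low flow ordering concrete, the following minimal Python sketch (our illustration, not part of the original analysis) compares the azimuthal magnetic gradient drift speed of Equation (7), evaluated for an equatorial electron in a dipole field, with the \(\mathbf{E}\times\mathbf{B}\) drift speed; the chosen energy, \(L\)-shell and the 10 mV/m ULF electric field amplitude of Hartinger et al. (2013) are illustrative values.

```python
import numpy as np

# Physical constants (SI)
ME_C2 = 8.187e-14   # electron rest energy m_e c^2 [J] (511 keV)
Q     = 1.602e-19   # elementary charge [C]
B_E   = 3.11e-5     # equatorial dipole field at the Earth's surface [T]
R_E   = 6.371e6     # Earth radius [m]

def drift_speeds(energy_mev, L, e_ulf=10e-3):
    """Magnetic gradient drift vs E x B drift for an equatorial electron.

    Uses |grad B| = 3B/r for a dipole, mu from Eq. (5) at alpha = pi/2,
    and an assumed ULF electric field amplitude e_ulf [V/m].
    """
    gamma = 1.0 + energy_mev * 1e6 * Q / ME_C2   # Lorentz factor
    r = L * R_E
    B = B_E / L**3                               # equatorial dipole field
    mu = ME_C2 * (gamma**2 - 1.0) / (2.0 * B)    # first adiabatic invariant
    v_grad = 3.0 * mu / (Q * gamma * r)          # |mu/(q gamma B) b x grad B|
    v_exb = e_ulf / B                            # |E|/B
    return v_grad, v_exb

v_grad, v_exb = drift_speeds(energy_mev=1.0, L=5.0)
print(f"gradient drift: {v_grad/1e3:7.0f} km/s")   # ~250 km/s
print(f"E x B drift:    {v_exb/1e3:7.0f} km/s")    # ~40 km/s
```

Even for the large 10 mV/m fields reported during transient events, the magnetic gradient drift of a 1 MeV equatorial electron exceeds the \(\mathbf{E}\times\mathbf{B}\) drift, and the gap widens by an order of magnitude for more typical mV/m amplitudes, consistent with the low flow ordering adopted above.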
### Review of electromagnetic fields used for radial diffusion models

In this section, we review the electromagnetic fields that have been chosen to model ULF radial transport. We focus solely on electromagnetic models that can be written analytically and that have been used to model coefficients for Fokker-Planck equations. Our aim in this section is also to demonstrate that an arbitrary choice of electromagnetic fields can violate the conservation of phase-space density given by Equation (6).

#### 2.2.1 Mead field

The Mead field (Mead, 1964) consists of the superposition of two perturbations: an azimuthally symmetric fluctuation with amplitude \(S(t)\) and an azimuthally asymmetric fluctuation \(A(t)r\cos(\varphi)\), superposed on a background magnetic dipole field of amplitude \(B_{E}R_{E}^{3}/r^{3}\). The Mead model has the benefit of being mathematically simple yet containing all the necessary ingredients, through the presence of an asymmetric perturbation, for the violation of the third adiabatic invariant experienced by a collection of magnetically trapped particles. The Mead field was therefore a natural choice for early models of radial diffusion (Falthammar, 1965; Schulz and Eviatar, 1969; Schulz and Lanzerotti, 1974) and has been used as the field model for empirical (Brautigam and Albert, 2000; Cunningham, 2016; Sarma et al., 2020) and theoretical studies (Lejosne, 2019; Osmane and Lejosne, 2021) of quasi-linear radial diffusion in the past decades. In our analysis, we will argue that the choice of the Mead field is preferable for analytical studies. As stated in Section 2.1, we will focus here exclusively on equatorially trapped particles, but note that a generalisation to non-equatorial particles can also be done. We also generalise the Mead field to anti-symmetric perturbations with azimuthal wave numbers \(m\neq 1\). This generalisation of the Mead field has little incidence on the linear and quasilinear radial transport equations since the perturbed distribution functions due to the various \(m\) modes are independent of one another. In the nonlinear regime of radial transport, in turn, as shown in Section (3.4), the various \(m\) modes can couple and interact with one another. Thus, the magnetic field for equatorial particles can be written in cylindrical coordinates \((r,\varphi,z)\), with \(r\) the radial distance, \(\varphi\) the azimuthal angle, and \(z\) the cylindrical axis direction:

\[\mathbf{B}=-\left(\frac{B_{E}R_{E}^{3}}{r^{3}}-S(t)-\sum_{m}A_{m}(t)re^{im\varphi}\right)\hat{z} \tag{12}\]

in terms of the magnetic field dipole moment \(B_{E}\) and the Earth's radius \(R_{E}\). The original simplified Mead field can be recovered by setting \(m=1\) and taking the real part in the Fourier sum decomposition. This generalisation of the Mead field to an arbitrary number of \(m\) modes is based on observational measurements demonstrating that the Earth's radiation belts can sustain a broad spectrum in \(m\) of ULF waves (Sarris, 2014; Barani et al., 2019) and that the \(m=1\) model is inaccurate during large driving conditions quantified by a geomagnetic Kp index greater than 4 (Lejosne et al., 2013). Using Faraday's law, the inductive electric field can be written as:

\[\delta\mathbf{E}=\left(\frac{1}{7}r^{2}\sum_{m}\frac{i\dot{A}_{m}}{m}e^{im\varphi}\right)\hat{r}-\left(\frac{r\dot{S}}{2}+\frac{8r^{2}}{21}\sum_{m}\dot{A}_{m}e^{im\varphi}\right)\hat{\varphi}. \tag{13}\]

The above Mead field results in two drifts, the \(E\times B\) drift,

\[-\frac{\delta\mathbf{E}\times\mathbf{b}}{B}=\left(\frac{r\dot{S}}{2B}+\frac{8r^{2}}{21B}\sum_{m}\dot{A}_{m}e^{im\varphi}\right)\hat{r}+\left(\frac{1}{7}\frac{r^{2}}{B}\sum_{m}\frac{i\dot{A}_{m}}{m}e^{im\varphi}\right)\hat{\varphi} \tag{14}\]

and the magnetic gradient drift12, written for the electron charge \(e=-q\):

Footnote 12: On the other hand, non-equatorially trapped particles (\(\alpha\neq\pi/2\)) will also experience the Banos and curvature drifts.
\[\frac{\mu}{q\gamma}\frac{\nabla B\times\mathbf{b}}{B}=\left(\frac{3\mu B_{0}}{qB\gamma r}+\frac{\mu}{q\gamma B}\sum_{m}A_{m}e^{im\varphi}\right)\hat{\varphi}-\left(\frac{\mu}{q\gamma B}\sum_{m}imA_{m}e^{im\varphi}\right)\hat{r}, \tag{15}\]

written in terms of the background magnetic dipole magnitude \(B_{0}=B_{E}R_{E}^{3}/r^{3}\) and the magnitude \(B=B_{0}-S(t)-\sum_{m}A_{m}(t)re^{im\varphi}\). Conservation of phase-space density for a collection of particles trapped in a magnetic dipolar field and interacting with ULF fluctuations can be written as:

\[\frac{\partial B}{\partial t}+\nabla\cdot(B\dot{\mathbf{r}})+\frac{\partial(B\dot{v}_{\parallel})}{\partial v_{\parallel}}+\frac{\partial(B\dot{\mu})}{\partial\mu} = \frac{\partial B}{\partial t}+\nabla\cdot\left(\delta\mathbf{E}\times\hat{z}-\frac{\mu}{q\gamma}\nabla B\times\hat{z}\right) = \hat{z}\cdot\underbrace{\left(\frac{\partial\mathbf{B}}{\partial t}+\nabla\times\delta\mathbf{E}\right)}_{=0,\text{ by Faraday's law}}-\frac{\mu}{q\gamma}\underbrace{\nabla\cdot(\nabla B\times\hat{z})}_{=0,\text{ identically}} = 0 \tag{16}\]

in which the first term on the right-hand side of Equation (16) is the projection of Faraday's law along the background magnetic field direction \(\mathbf{b}\). Since we are focussing solely on equatorially trapped particles for the Mead field, we can switch to cylindrical coordinates \((r,\varphi,z)\). Thus, phase-space density is always conserved for particles confined in a magnetic dipole if Faraday's law projected onto the mean field is respected. A corollary is that a choice of time-varying electric fields that does not satisfy Faraday's law does not satisfy Maxwell's equations, and has the additional undesirable consequence that it does not conserve phase-space density. Since the electric field in the Mead model satisfies Faraday's equation, the Mead field conserves phase-space density. The choice of the Mead field is therefore appropriate to develop a kinetic theory of radial diffusion.

#### 2.2.2 Asymmetric background field

Elkington et al. (2003) argued that enhanced radial diffusion could take place by accounting for an asymmetric background magnetic field attributed to periods of high solar wind pressure and solar wind speeds. In their model, Elkington et al. (2003) chose a background dipole magnetic field with a superposed perturbation \(\Delta B\):

\[B^{EK}(r,\varphi)=\frac{B_{E}R_{E}^{3}}{r^{3}}+\Delta B(r)\cos(\varphi) \tag{17}\]

Here the azimuthal angle is chosen to be zero at noon and we denote the model as \(B^{EK}\) to distinguish it from the Mead field. In addition to the background field, ULF wave perturbations in the electric and magnetic field are chosen to be the sum of azimuthal Fourier components:

\[\delta\mathbf{E}=\sum_{m}\delta\mathbf{E}_{m}(r,t)e^{im\varphi} \tag{18}\]

\[\delta\mathbf{B}=\sum_{m}\delta\mathbf{B}_{m}(r,t)e^{im\varphi} \tag{19}\]

The above perturbations have no particular polarisation, with unspecified toroidal (\(\delta E_{r,m}\)) and poloidal (\(\delta E_{\varphi,m}\)) electric field components, and the relation between the magnetic and electric components is ignored. In order for these fields to conserve phase-space density, two constraints have to independently hold: the first one applies to the stationary background magnetic field given by (17),

\[\nabla\cdot\left(\nabla B^{EK}\times\hat{z}\right)=0, \tag{20}\]

and is respected for a perturbation \(\Delta B(r)\) with an existing first derivative along the radial direction.
The second one is Faraday's law for the time-varying electric and magnetic perturbations (18)-(19), which results in the following three constraints for the electric and magnetic field amplitudes:

\[\frac{\partial}{\partial z}\delta E_{m,\varphi}=\frac{\partial}{\partial t}\delta B_{m,r} \tag{21}\]

\[\frac{\partial}{\partial z}\delta E_{m,r}=-\frac{\partial}{\partial t}\delta B_{m,\varphi} \tag{22}\]

\[\frac{1}{r}\frac{\partial}{\partial r}(r\delta E_{m,\varphi})-im\delta E_{m,r}=-\frac{\partial}{\partial t}\delta B_{m,z} \tag{23}\]

For the sake of simplicity we assume that the magnetic field perturbations have no poloidal (\(B_{\varphi}=0\)) or toroidal (\(B_{r}=0\)) component, and thus only require the constraint (23) to be enforced. In terms of a Fourier decomposition in time (\(\delta B_{m,z}\sim e^{-i\omega t}\)), Equation (23) can thus be written as:

\[\frac{1}{r}\frac{\partial}{\partial r}(r\delta E_{m,\varphi})-im\delta E_{m,r}=i\omega\delta B_{m,z}. \tag{24}\]

This last equation constrains the choice of poloidal or toroidal electric fields. For a purely toroidal electric field (\(\delta E_{m,r}\neq 0\), \(\delta E_{m,\varphi}=0\)) the complex coefficients have the following constraint: \(\delta E_{m,r}=-\omega\delta B_{m,z}/m\). For a purely poloidal electric field (\(\delta E_{m,\varphi}\neq 0\), \(\delta E_{m,r}=0\)) that has no radial dependence, the following equality must hold: \(\delta E_{m,\varphi}=i\omega r\delta B_{m,z}\). We therefore conclude that the asymmetric model used to compute the radial diffusion coefficients in Fei et al. (2006) does not conserve phase-space density and that the diffusion coefficients derived on the basis of this field model yield unphysical results. The violation of Faraday's law in the model of Fei et al. (2006) has already been noted by Lejosne (2019) and shown to enhance the diffusion coefficient by a factor of 2. By treating this problem kinetically, we have also shown that it violates Liouville's theorem. Equation (24) also provides a constraint on the electrostatic model (\(\nabla\times\delta\mathbf{E}=0\)) of Falthammar (1965). For the case of a purely poloidal component, Faraday's equation requires \(\frac{1}{r}\partial(r\delta E_{m,\varphi})/\partial r=0\) and thus \(\delta E_{m,\varphi}\sim 1/r\). The assumption of a poloidal field independent of the radial distance used in Falthammar (1965) therefore also violates Liouville's theorem and yields unphysical radial transport coefficients. We note that both the Fei et al. (2006) electromagnetic model and the Falthammar (1965) electrostatic model can nonetheless be corrected by accounting for Faraday's law. This correction can be done by enforcing Equation (24) when computing the diffusion coefficient, with or without the asymmetry introduced by Elkington et al. (2003). On the basis of this section and the previous one, we choose to use the Mead model since it conserves phase-space density for equatorially trapped particles and already contains all the key ingredients to model radial transport in the Earth's radiation belts13.

Footnote 13: A reader might then wonder why not simply use the field in Fei et al. (2006) after enforcing the constraint given by Equation (24). The short answer is that the main benefit of using the asymmetric field is a modification of the diffusion coefficient of the order of \(\Delta B^{2}/B^{2}\ll 1\). This modification is therefore negligible.
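As a quick consistency check, the Mead field of Equations (12)-(13) can be verified symbolically to satisfy Faraday's law, and hence the Liouville constraint of Equation (16). The following minimal sympy sketch (our illustration, not part of the original derivation) performs the check for a single azimuthal mode:

```python
import sympy as sp

r, phi, t = sp.symbols("r phi t", positive=True)
m = sp.Symbol("m", integer=True, nonzero=True)
B_E, R_E = sp.symbols("B_E R_E", positive=True)
S = sp.Function("S")(t)   # symmetric perturbation amplitude
A = sp.Function("A")(t)   # asymmetric perturbation amplitude, mode m

# Mead field for a single azimuthal mode, Eqs. (12)-(13)
B_z = -(B_E * R_E**3 / r**3 - S - A * r * sp.exp(sp.I * m * phi))
E_r = sp.Rational(1, 7) * r**2 * (sp.I * sp.diff(A, t) / m) * sp.exp(sp.I * m * phi)
E_phi = -(r * sp.diff(S, t) / 2
          + sp.Rational(8, 21) * r**2 * sp.diff(A, t) * sp.exp(sp.I * m * phi))

# z-component of curl(E) in cylindrical coordinates
curlE_z = (sp.diff(r * E_phi, r) - sp.diff(E_r, phi)) / r

# Faraday's law: (curl E)_z + dB_z/dt should vanish identically
print(sp.simplify(curlE_z + sp.diff(B_z, t)))  # prints 0
```

Replacing \(\delta E_{\varphi}\) with an \(r\)-independent amplitude and setting \(\partial B_{z}/\partial t=0\) leaves a non-zero residual, reproducing the inconsistency of the uncorrected electrostatic model discussed above.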
## 3 Linear, Quasi-Linear and Nonlinear Limits of Radial Transport

### Multiscale dynamics & separation between slow and fast variables

In this section we develop a mean-field theory from the drift-kinetic equation (3) for charges confined in a magnetic dipole and interacting with ULF fluctuations given by the Mead field (Section 2.2.1). We will solely focus on particles confined in the equatorial plane (\(\alpha=\pi/2\)) and leave the more involved case of particles bouncing off at mirror points at higher and lower latitudes to future studies. In order to build a mean-field theory we separate the distribution function into a part capturing slow changes in the third adiabatic invariant \(L^{*}\) and in background quantities, and a part capturing fast changes in the associated invariant phase and on fluctuation timescales14:

Footnote 14: A scale separation between fast and slow motion is the basis of quasilinear theories in astrophysical plasmas (Kulsrud, 2005; Schekochihin, 2017; Diamond et al., 2010). This approach is identical to the one performed in Kennel & Engelmann (1966) for a quasilinear theory of magnetised charged particles interacting with plasma waves of frequencies comparable to the Larmor frequency. The resulting diffusion models, written in the form of Fokker-Planck equations, would not be possible without such a scale separation, which constrains the timescales upon which the quasilinear theory can be used.

\[f(r,\varphi,t)=f_{0}(r,\varepsilon^{a}\varphi,\varepsilon t)+\delta f(r,\varphi,t) \tag{25}\]

in which \(r\) is the radial distance at the equator, \(\varphi\) is the azimuthal angle \(\varphi\in[0,2\pi]\), and the small parameter \(\varepsilon\) characterises the scale separation between large-scale and small-scale parts of the distribution. We note that it is possible to build a background distribution function with an azimuthal dependence, for instance in the presence of an azimuthally dependent source or loss term that evolves slowly in time compared to the azimuthal drift period of the particles. Such an azimuthal dependence can then be accounted for in terms of \(\varepsilon^{a}\varphi\), for \(a>0\), resulting in \(\partial f_{0}/\partial\varphi=\varepsilon^{a}f_{0}\). But for simplicity, and for comparison with previous radial transport models, we will assume that the background distribution function has no dependence on the azimuthal angle, i.e.,

\[f_{0}=f_{0}(r,\varepsilon t) \tag{26}\]

Formally, this equilibrium distribution can be defined as the average of the exact distribution function over the range of azimuthal angles and over timescales that are intermediate between the fast and the slow ones:

\[f_{0}=f_{0}(r,t)=\langle f(r,\varphi,t)\rangle=\frac{1}{2\pi\Delta t}\int_{t-\Delta t/2}^{t+\Delta t/2}dt^{\prime}\int_{0}^{2\pi}d\varphi f(r,\varphi,t^{\prime}) \tag{27}\]

for \(\omega^{-1}\ll\Delta t\ll t_{eq}\), where \(\omega\sim\frac{1}{A_{m}}\frac{dA_{m}}{dt}\sim\frac{1}{S}\frac{dS}{dt}\) denotes the frequency of ULF fluctuations, and \(t_{eq}^{-1}\sim\frac{1}{f_{0}}\frac{\partial f_{0}}{\partial t}\) the inverse timescale for an equilibrium in the distribution to form. This definition of \(f_{0}\) constrains the time and spatial scales upon which the background distribution function can be computed. It is shown in Section 3.2 that particles with azimuthal drift frequencies \(\Omega_{d}\), as defined by Equation (29), comparable to the ULF wave frequency with azimuthal mode number \(m\) experience resonance.
Thus, since resonance requires \(\omega\simeq m\Omega_{d}\), Equation (27) also constrains the background distribution function \(f_{0}\) to evolve on timescales much larger than \(1/m\Omega_{d}\). For the mode \(m=1\), the implication for the quasi-linear theory is that the diffusion cannot take place on timescales comparable to the azimuthal drift periods. For equatorial particles with a conserved first adiabatic invariant \(\mu\) interacting with a Mead field, the kinetic Equation (11) takes the form:

\[gB_{0}\frac{\partial f}{\partial t}+\frac{3\mu B_{0}}{q\gamma r^{2}}\frac{\partial f}{\partial\varphi}+\sum_{m}e^{im\varphi}\left[\frac{\mu A_{m}}{q\gamma r}+i\frac{r\dot{A}_{m}}{7m}\right]\frac{\partial f}{\partial\varphi}=-\left[\frac{r\dot{S}}{2}+\sum_{m}e^{im\varphi}\left(\frac{8r^{2}\dot{A}_{m}}{21}-im\frac{\mu A_{m}}{q\gamma}\right)\right]\frac{\partial f}{\partial r} \tag{28}\]

with the function \(g(r,\varphi,t)=1-S(t)/B_{0}-\sum_{m}e^{im\varphi}rA_{m}(t)/B_{0}\). We now define the drift frequency for equatorially trapped particles,

\[\Omega_{d}=3\mu/q\gamma r^{2}, \tag{29}\]

in terms of the first adiabatic invariant \(\mu\), and decompose the perturbed fluctuations along the azimuthal angle in Fourier space15:

Footnote 15: The generalisation of the Mead field in Section (2.2.1) was already expressed in terms of Fourier modes for the antisymmetric perturbations.

\[f(r,\varphi,t)=f_{0}(r,t)+\sum_{m}e^{im\varphi}\delta f_{m}(r,t). \tag{30}\]

Replacing the decomposition (30) in Equation (28) for \(m=0\) (the azimuthal average), and averaging over time according to (27), results, as shown in Appendix A, in the quasi-linear equation:

\[\frac{\partial f_{0}}{\partial t} = -\sum_{m}\left[\frac{im\mu}{qB_{0}\gamma r}\frac{\partial}{\partial r}\left(r\langle A_{m}^{*}\delta f_{m}\rangle\right)-\frac{r}{B_{0}}\frac{\partial}{\partial t}\langle A_{m}^{*}\delta f_{m}\rangle+\frac{8}{21}\frac{1}{rB_{0}}\frac{\partial}{\partial r}\langle r^{3}\dot{A}_{m}^{*}\delta f_{m}\rangle\right] \tag{31}\]

The right-hand side of (31) describes the slow evolution of the background distribution due to the effect of fluctuations. As is often the case in space and astrophysical plasmas, we need a closed equation for the evolution of the background. The correlation \(\langle\delta f_{m}A_{m}^{*}\rangle\)16 can be computed if we can write an equation for the perturbation \(\delta f_{m}\), replace it in Equation (31), and take the average defined by (27). The details of this calculation can be found in Appendix B, and result in the following nonlinear equation for the perturbation:

Footnote 16: Since the magnetic field amplitude is real, we can write the Fourier coefficient \(A_{-m}=A_{m}^{*}\).

\[\frac{\partial\delta f_{m}}{\partial t}+\underbrace{im\Omega_{d}\delta f_{m}}_{\text{particle streaming}}=\underbrace{\frac{A_{m}r}{B_{0}}\frac{\partial f_{0}}{\partial t}-\left(\frac{8r^{2}\dot{A}_{m}}{21B_{0}}-im\frac{\mu A_{m}}{qB_{0}\gamma}\right)\frac{\partial f_{0}}{\partial r}}_{\text{Linear wave-particle interaction}}\underbrace{-\sum_{m^{\prime}}\mathcal{Q}[S,A_{m-m^{\prime}};\delta f_{m^{\prime}}]}_{\text{Nonlinear wave-particle interaction}}. \tag{32}\]

The three terms that control the evolution of the perturbed distribution in (32) represent free ballistic motion (streaming), the linear wave-particle interaction, and the nonlinear wave-particle interaction.
The term \(\mathcal{Q}\), given by Equation (B8), is negligible in the limit \(m\Omega_{d}\delta f_{m}\gg\mathcal{Q}\); otherwise it has to be accounted for and will result in mode-mode coupling even if the ULF wave amplitudes are considered small, i.e., \(\delta B\simeq rA_{m}\ll B_{0}\) and \(S(t)\ll B_{0}\). In the next sections we solve these equations in the linear and quasi-linear regimes and describe the conditions in which nonlinear processes become significant.

### Linear theory and radial transport on fast timescales

In the linear theory we consider small perturbations of the equilibrium that evolve on fast time scales comparable to the drift period. All nonlinear terms can then be ignored and the background distribution is assumed constant in time, i.e. \(f_{0}(t)=\) const. The linear equation is therefore given by:

\[\frac{\partial\delta f_{m}}{\partial t}+im\Omega_{d}\delta f_{m}=-\left(\frac{8r^{2}\dot{A}_{m}}{21B_{0}}-im\frac{\mu A_{m}}{qB_{0}\gamma}\right)\frac{\partial f_{0}}{\partial r}. \tag{33}\]

Equation (33) is linear and can be solved by Duhamel's principle for the initial condition \(\delta f_{m}(r,t=0)\) as:

\[\delta f_{m}(r,t)=\underbrace{\delta f_{m}(r,0)e^{-im\Omega_{d}t}}_{\text{Ballistic response}}\underbrace{-\frac{\partial f_{0}(r)}{\partial r}\int_{0}^{t}dt^{\prime}\ e^{im\Omega_{d}(t^{\prime}-t)}\left(\frac{8r^{2}\dot{A}_{m}(t^{\prime})}{21B_{0}}-im\frac{\mu A_{m}(t^{\prime})}{qB_{0}\gamma}\right)}_{\text{Linear wave-particle response}}. \tag{34}\]

The first term on the right-hand side of Equation (34) is a ballistic mode that we will show is responsible for the formation of transient structures in the phase-space \((r,\varphi)\). The second term on the right-hand side is the linear wave-particle response of the distribution function to the ULF wave. This problem is almost identical to the self-consistent electrostatic problem solved by Landau (1946), in which perturbations of the background distribution result in growing or decaying fluctuations. However, the radial transport problem in the linear regime, contained in Equation (34), is simpler than the one solved by Landau (1946), since the resonant energetic electrons, with densities of the order of 0.1% or less, are passive tracers and self-consistent effects can be ignored to a very good degree of accuracy17. One therefore has the freedom to model the ULF fluctuations in a manner consistent with _in situ_ observations, as long as Faraday's law is respected. For a ULF fluctuation given as a single Fourier mode (\(A_{m}(t)\sim e^{-i\omega t}\)) or some stochastic noise, one can solve the linear system analytically. To the best of our knowledge, the analytical solution of Equation (33), i.e., Equation (34), has not appeared in peer-reviewed studies of terrestrial radial transport before, so we proceed hereafter with a detailed analysis.

#### 3.2.1 Ballistic solution and the formation of zebra stripes

Inserting the ballistic term of (34), i.e., \(\delta f_{m}(r,0)e^{-im\Omega_{d}t}\), into Equation (30), the total distribution in the linear limit is given by:

\[f(r,\varphi,t)=f_{0}(r)+\sum_{m}\delta f_{m}(r,0)e^{-im\Omega_{d}t}e^{im\varphi}. \tag{35}\]

We can consider the ballistic solution separately from the linear wave-particle response since the former is independent of the radial gradient of \(f_{0}\) and the latter is not. The ballistic response is therefore the only possible observable response when radial gradients in the distribution function are very small.
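The differential shearing encoded in Equation (35), analysed in detail below, can be previewed numerically. The following minimal Python sketch (our illustration; the grid, energies and the single \(m=1\) mode are arbitrary choices) evolves the ballistic phase \(e^{-im\Omega_{d}t}\) on an \((L,E_{c})\) grid, using the equatorial drift frequency of Equation (29), and counts how the number of stripes along \(L\) multiplies with time:

```python
import numpy as np

ME_C2_KEV = 511.0    # electron rest energy [keV]
Q   = 1.602e-19      # elementary charge [C]
B_E = 3.11e-5        # surface equatorial dipole field [T]
R_E = 6.371e6        # Earth radius [m]
KEV = 1.602e-16      # keV in joules

def omega_d(L, E_kev):
    """Equatorial drift frequency, Eq. (29), with mu from Eq. (5) at alpha = pi/2."""
    gamma = 1.0 + E_kev / ME_C2_KEV
    return (1.5 * ME_C2_KEV * KEV * (gamma**2 - 1.0) * L
            / (gamma * Q * B_E * R_E**2))

# (L, E_c) grid spanning the inner belt, as in Figure 3
L = np.linspace(1.0, 3.0, 200)[:, None]
E = np.linspace(50.0, 400.0, 200)[None, :]   # keV
m = 1

for hours in (0.5, 2.0, 8.0):
    t = hours * 3600.0
    df = np.cos(m * omega_d(L, E) * t)       # ballistic solution, Eq. (35), at phi = 0
    # zero crossings along L at a fixed energy (~225 keV): one count per stripe edge
    crossings = int(np.abs(np.diff(np.sign(df[:, 100]))).sum() / 2)
    print(f"t = {hours:3.1f} h : ~{crossings} stripes along L at 225 keV")
```

This drift frequency reproduces the drift periods quoted below (about 8 hours for 100 keV electrons at \(L=1\) and about 45 minutes for 400 keV electrons at \(L=3\)), and the stripe count grows with time as the phase \(\Omega_{d}t\) shears across \(L\), in line with the \(E_{c}Lt\) scaling derived in Equation (36) below.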
If we consider a single Fourier mode \(m=1\), we note that an initial perturbation \(\delta f(r,t=0,\varphi)\) will develop fine structures in the \((r,\varphi)\) space as \(t\longrightarrow\infty\). The formation of fine structures in space occurs because an initial perturbation \(\delta f_{m}(r,0)\) will experience a differential shearing along the radial position \(r\). We can use the solution (35) to quantify the parametric dependence of the structures arising from ballistic motion in a magnetic dipolar field \(B=B_{E}/L^{3}\). For an initial phase \(\varphi_{0}\), the perturbed distribution \(\delta f\) is constant along the curve

\[\Delta\varphi=\varphi(t)-\varphi_{0}=\Omega_{d}t=\frac{3}{2}\frac{m_{e}c^{2}(\gamma^{2}-1)}{q\gamma B_{E}R_{E}^{2}}Lt=\frac{3}{2}\frac{m_{e}c^{2}}{qB_{E}R_{E}^{2}}\frac{E_{c}/m_{e}c^{2}+2}{E_{c}+m_{e}c^{2}}E_{c}Lt\sim\frac{E_{c}+2m_{e}c^{2}}{E_{c}+m_{e}c^{2}}E_{c}Lt\simeq E_{c}Lt \tag{36}\]

in which we replaced the Lorentz factor \(\gamma\) by the kinetic energy \(E_{c}=m_{e}c^{2}(\gamma-1)\). For a fixed time \(t\neq 0\), Equation (36) indicates that energetic particles will have phase-space structures with a kinetic energy that is inversely proportional to the radial distance, i.e., \(E_{c}\sim 1/L\). The time evolution of the perturbed distribution function is shown in Figure (3) for time snapshots of 30 minutes, 1 hour, 2 hours and 8 hours.

Figure 3: Ballistic motion for particles trapped in a dipolar field results in zebra stripe formation. The top left, top right, bottom left and bottom right panels are solutions at \(\varphi_{0}=0\) for \(t=30\) minutes, 1 hour, 2 hours and 8 hours, respectively. The initial distribution is uniform in \(L\) and kinetic energy \(E_{c}\).

Energetic particles ranging between \(50-400\) keV experience a full azimuthal drift on the order of a few hours. Figure (3) shows that phase-space structures can form on timescales of the order of a single drift-period, that is, on timescales that are far too rapid to be accounted for by radial diffusive effects. After several drift periods, phase-space structures in \((E_{c},L)\) become thinner even though their numbers grow.

Figure 4: Zebra stripe formation after the injection of particles centred at \(L=2\) with a spread in radial distance of \(\Delta L=0.75\).

This behaviour of the ballistic solution is consistent with the phenomenon of _zebra stripes_ commonly observed in the inner part of the Earth's radiation belts (Imhof & Smith, 1965; Datlowe et al., 1985). Zebra stripes are transient structured peaks and valleys observed on spectrograms of inner radiation belt electrons with energies ranging between tens and hundreds of keV. The zebra stripes that are measured _in situ_ are also characterised by energy peaks and dips that vary as the inverse of the radial distance, i.e., \(E_{c}\sim 1/L\). They are also associated with substorm onsets and correlated with various geomagnetic indices, such as Kp and Dst, but are also able to form during quiet geomagnetic conditions (Sauvaud et al., 2013; Lejosne & Roederer, 2016; Lejosne & Mozer, 2020). Mechanisms explaining the formation of zebra stripes must therefore reproduce the \(E_{c}\sim 1/L\) dependence and explain the processes responsible for their transient nature and appearance under a wide range of geomagnetic conditions. Mechanisms suggested for the formation of zebra stripes can be categorised into two types.
In the first type, particles sample an electric field that varies on timescales consistent with their drift motion (see, e.g., Lejosne et al. (2022) and references therein for the most recent advances on the subject). Consequently, a collection of trapped particles can experience drift resonance with the field, resulting in zebra stripe structures as resonant particles are scattered to different drift-shells. In the second type, illustrated by the study of Ukhorskiy et al. (2014), the particles also sample an electric field but are non-resonant. The formation of zebra stripes for this mechanism is akin to a phase-mixing process. Magnetically trapped particles drift faster the more energetic they are. When fluxes are projected in energy and radial distance, the shearing of the distribution leads to an \(E_{c}\sim 1/L\) dependence. However, our analysis of the ballistic motion also demonstrates that phase-space structures consistent with _in situ_ observations of zebra stripes can form in the absence of both drift-resonance and electric field perturbations. The formation occurs on time-scales comparable to the drift period of energetic particles and is equivalent to the phase-mixing scenario presented by Ukhorskiy et al. (2014) in that it does not require drift-resonance. However, the ballistic solution we derived assumes a perturbation of the distribution function \(\delta f_{m}(t=0,r)\) at some arbitrary time. This perturbation of the distribution function can either be due to particles being lost, \(\delta f_{m}(t=0,r)<0\), e.g. to the boundaries, or particles being injected, \(\delta f_{m}(t=0,r)>0\). While more quiescent than the outer belts, the inner belts experience injection events of energetic electrons even during moderate geomagnetic storms (Zhao & Li, 2013)18.

Footnote 18: Albedo neutron decay is also a constant source of energetic particle injection in the inner belts (Li et al., 2017), but the density might be too low for observational measurements of zebra stripe formation in energetic electrons or protons.

In order to inject electrons in the inner belts a radial transport mechanism, such as a convective electric field, is required. But once electrons are injected in the inner belts, the ballistic term shows that zebra stripes can form in the absence of any ULF perturbations. In Figure (4), we show the formation of the zebra stripes following a localised loss of energetic electrons centred at \(L=2\) and spread with a standard deviation along radial distances of \(\Delta L=0.75\). Localised injection and losses also result in stripes on timescales comparable to the drift period, but shearing of the distribution function results in structures spreading across radial distances beyond the injection or loss location. The transient nature of zebra stripes can also be evidenced when projecting the ballistic solution in the equatorial plane. Figure (5) shows the temporal evolution of 100 keV electrons' injection (at \(\varphi\in[0,\pi]\)) and losses (at \(\varphi\in[\pi,2\pi]\)). The drift period of 100 keV electrons between \(L=1\) and \(L=3\) ranges between 2.6 hours and 8 hours. After a single drift period the distribution function preserves its initial shape and has yet to phase-mix. In comparison, Figure (6) shows the temporal evolution of 400 keV electrons' injection (at \(\varphi\in[0,\pi]\)) and losses (at \(\varphi\in[\pi,2\pi]\)). The drift period of 400 keV electrons between \(L=1\) and \(L=3\) ranges between 45 minutes and 2.3 hours.
For more energetic particles, since the drift period is shorter, shearing of the initial distribution phase-mixes the distribution on faster timescales. After 4 hours, the zebra stripes of 400 keV electrons have very fine-scale structures in the equatorial plane. Injection or losses of particles can therefore result in the formation of zebra stripes without the need for drift-resonance or the presence of an electric field. The injection and losses are encoded in the ballistic solution, but since shearing of the distribution function occurs on timescales of a few drift periods, the most energetic electrons quickly develop fine-scale structures in the distribution function that might not be resolved by spacecraft instruments. Nonetheless, the ballistic solution does not preclude the possibility of zebra stripe formation as a response to a ULF electric field for resonant or nonresonant particles. In the next section, we compute the linear solution to include the impact of ULF waves on the distribution function and differentiate between resonant and nonresonant responses.

Figure 5: Zebra stripe formation for 100 keV equatorially trapped electrons in terms of \((L-\varphi)\). The initial distribution corresponds to a Gaussian distributed beam centred at \(L=2\) for \(\varphi\in[0,\pi]\) and a Gaussian distributed drop centred at \(L=2\) for \(\varphi\in[\pi,2\pi]\). After four hours the zebra stripes remain visible.

Figure 6: Zebra stripe formation for 400 keV equatorially trapped electrons in terms of \((L-\varphi)\). The initial distribution corresponds to a Gaussian distributed beam centred at \(L=2\) for \(\varphi\in[0,\pi]\) and a Gaussian distributed drop centred at \(L=2\) for \(\varphi\in[\pi,2\pi]\). After four hours the zebra stripes have phase-mixed.

#### 3.2.2 Solution to the linear wave-particle interaction

In the previous section we described the time evolution of the ballistic term in the distribution function and argued that it should dominate the particle's response when radial gradients in the distribution function are small. However, in the absence of phase-space injection and/or loss terms, and thus in instances where the ballistic term is zero, i.e., \(\delta f(t=0,r)=0\), and \(\partial f_{0}/\partial r\neq 0\), the linear wave-particle response should dominate. In this section we describe the linear wave-particle solution found in Equation (34). For the sake of simplicity, we assume a single Fourier mode for the ULF wave:

\[A_{m}(t)=a_{m}e^{-i\omega_{m}t+\gamma_{m}t} \tag{37}\]

with initial amplitude \(a_{m}\), frequency \(\omega_{m}\), and growth/damping rate \(\gamma_{m}\). We can generalise this solution to a spectrum of Fourier modes, but since the solution is linear, each mode is independent of the others. The linear solution is valid in the limit where the growth rate is sufficiently small for the fluctuations to remain sufficiently small in amplitude and nonlinear effects negligible (Davidson, 2012)19.

Footnote 19: See Section 3.4 in which we quantify the conditions for the linear regime to break down. It will not come as a surprise to readers familiar with solar wind turbulence that nonlinear effects can become dynamically significant for small-amplitude electromagnetic fluctuations. In the magnetohydrodynamic limit this condition is associated with a state of critical balance (Goldreich and Sridhar, 1995) at fluid scales, but has also been generalised to kinetic problems in space plasmas (Schekochihin et al., 2016; Meyrand et al., 2019). For the problem of radial transport the nonlinear regime is reached even in the limit where ULF wave amplitudes and the perturbed distribution function are small, i.e., \(\delta B/B_{0}\ll 1\) and \(\delta f/f_{0}\ll 1\), respectively.

We insert Equation (37) into Equation (34) to find the following linear wave-particle response \(\delta f_{m}^{L}(r,t)\):
\[\delta f_{m}^{L} = -a_{m}e^{-im\Omega_{d}t}\left[\frac{8r^{2}}{21B_{0}}(\gamma_{m}-i\omega_{m})-\frac{im\mu}{qB_{0}\gamma}\right]\frac{\partial f_{0}}{\partial r}\int_{0}^{t}dt^{\prime}\ e^{im\Omega_{d}t^{\prime}-i\omega_{m}t^{\prime}+\gamma_{m}t^{\prime}} = -a_{m}e^{-im\Omega_{d}t}\left[\frac{8r^{2}}{21B_{0}}(\omega_{m}+i\gamma_{m})+\frac{m\mu}{qB_{0}\gamma}\right]\frac{\partial f_{0}}{\partial r}\left(\frac{e^{im\Omega_{d}t-i\omega_{m}t+\gamma_{m}t}-1}{\omega_{m}-m\Omega_{d}+i\gamma_{m}}\right). \tag{38}\]

Equation (38) contains a resonant part indicating that particles with a drift frequency \(\Omega_{d}\) can be scattered efficiently across drift shells by ULF waves of frequencies \(\omega_{m}\). Using \(\mu/q\gamma=\Omega_{d}r^{2}/3\), we can decompose Equation (38) in terms of a linear wave-particle resonant part that can grow in time, and two oscillating parts, as follows:

\[\delta f_{m}^{L}=-\frac{a_{m}r^{2}}{21B_{0}}\frac{\partial f_{0}}{\partial r}\left[8(\omega_{m}+i\gamma_{m})+7m\Omega_{d}\right]\frac{\overbrace{e^{(-i\omega_{m}+\gamma_{m})t}}^{\text{wave oscillation}}-\overbrace{e^{-im\Omega_{d}t}}^{\text{ballistic oscillation}}}{\omega_{m}-m\Omega_{d}+i\gamma_{m}}. \tag{39}\]

The linear solution remains valid as long as the growth of the fluctuations remains weak over the timescales of interest, i.e.,

\[\gamma_{m}t\ll 1. \tag{40}\]

Setting \(\gamma_{m}=0\) in Equation (39), the wave-particle response can be written as

\[\delta f_{m}^{L}=\frac{2ia_{m}r^{2}}{21B_{0}}\frac{\partial f_{0}}{\partial r}\left(8\omega_{m}+7m\Omega_{d}\right)e^{-i(\omega_{m}+m\Omega_{d})t/2}\ \frac{\sin\left[(\omega_{m}-m\Omega_{d})t/2\right]}{\omega_{m}-m\Omega_{d}}. \tag{41}\]

In the resonant limit \(\omega_{m}\longrightarrow m\Omega_{d}\), the sine function in Equation (41) can be expanded in a Taylor series, and the dominant term for the perturbed distribution function gives:

\[\delta f_{m}^{L}\simeq m\Omega_{d}te^{-i\omega_{m}t}, \tag{42}\]

and thus demonstrates that fluctuations grow linearly in time due to resonant interactions. Equation (41) is an instance of a _Case-van Kampen mode_, initially derived for a Vlasov-Poisson plasma (Van Kampen, 1955; Case, 1959), but rederived here in the context of radial transport. In the limit where \(t\longrightarrow\infty\) but \(\gamma_{m}t\ll 1\), necessary to respect (40), the right-hand side of Equation (41) tends to a delta function21.

Footnote 21: These non-eigenmodes are not only of theoretical interest. Non-eigenmodes have to be tracked in order to quantify entropy production in kinetic systems. See for instance Section 5.6 of Schekochihin (2017) for an introduction in terms of a Vlasov-Poisson system and Zhdankin (2022) for an application that can be used for wave-particle interactions in the radiation belts.
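The energy selectivity of the resonant response in Equations (38)-(41) can be illustrated numerically. The following minimal Python sketch (our illustration; it drops the common prefactor \(a_{m}r^{2}\partial_{r}f_{0}/21B_{0}\), sets \(\gamma_{m}=0\), and assumes, following the convention used in the text, that the quoted mHz values are angular frequencies in rad/s) evaluates the response amplitude for the electron energies of Figure (7):

```python
import numpy as np

ME_C2 = 0.511        # electron rest energy [MeV]
Q   = 1.602e-19      # elementary charge [C]
B_E = 3.11e-5        # surface equatorial dipole field [T]
R_E = 6.371e6        # Earth radius [m]
MEV = 1.602e-13      # MeV in joules

def omega_d(e_mev, L, alpha_deg=45.0):
    """Drift frequency, Eq. (29), with mu from Eq. (5)."""
    gamma = 1.0 + e_mev / ME_C2
    s2 = np.sin(np.radians(alpha_deg))**2
    return (1.5 * ME_C2 * MEV * (gamma**2 - 1.0) * s2 * L
            / (gamma * Q * B_E * R_E**2))

def response(e_mev, t, omega=7e-3, m=1, L=8.0):
    """|delta f_m^L| from Eq. (41), up to common prefactors, with gamma_m = 0."""
    w_d = omega_d(e_mev, L)
    delta = omega - m * w_d                 # distance from drift resonance
    if abs(delta) < 1e-12:                  # exactly resonant: Eq. (42) limit
        envelope = t / 2.0
    else:
        envelope = abs(np.sin(delta * t / 2.0) / delta)
    return abs(8.0 * omega + 7.0 * m * w_d) * envelope

t = 4.0 * 2.0 * np.pi / 7e-3                # four wave periods
for e_mev in (0.7, 0.9, 1.2):               # energies of Figure (7)
    print(f"E = {e_mev:3.1f} MeV : relative response {response(e_mev, t):7.1f}")
```

With these assumptions the 1.2 MeV electrons, whose drift period at \(L=8\) and \(\alpha=45\) degrees is close to the wave period, respond roughly five times more strongly than the 0.7 MeV electrons, consistent with the contrast between resonant and non-resonant populations described below.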
The resonant linear response presented in this section occurs on timescales comparable to or larger than the drift period but smaller than \(1/\gamma_{m}\), while phase-mixing and zebra stripes take place on timescales comparable to drift periods. For a finite ULF damping rate \(\gamma_{m}<0\), the resonant part decays, \(e^{\gamma_{m}t}\longrightarrow 0\), on timescales \(|\gamma_{m}|t\gg 1\), and the ballistic response proportional to \(e^{-im\Omega_{d}t}\) in Equation (39) dominates. This criterion can be used to distinguish non-resonant from resonant drift particle interactions in spacecraft data, since both require a radial gradient in the MLT-averaged distribution function \(f_{0}\). The requirement for a non-zero radial gradient in \(f_{0}\) of a given energetic population is an experimental constraint on the observation of phase-space structures, as reported by Hartinger et al. (2020) and Sarris et al. (2021), and is discussed further in Section 4.2. An additional criterion to distinguish the resonant from the non-resonant particle response can also be applied observationally for instruments recording energy-dependent fluxes. Drift-resonance is energy dependent, and the signature of resonance at resonant energies should be markedly different from that of non-resonant particles, even though Equation (41) shows that both experience oscillations with frequencies comparable to the ULF wave frequency \(\omega_{m}\). Figure (7) shows the perturbed distribution function of 1.1 MeV electrons at \(L=8\) in comparison to the response of particles with energies of 700 and 900 keV. Thus, a shift in energy can take particles out of resonance and result in perturbed distribution functions that are more than 5 times smaller in amplitude. Drift-resonance is therefore an efficient mechanism for ULF waves to exchange energy with energetic electrons. In Figure (8) we plot the drift period as a function of kinetic energy, parametrised in terms of the radial distance \(L\). The top panel is made for 45 degree pitch-angles and the bottom panel for 90 degree pitch-angles. The shaded and dashed rectangles bound the resonant frequency \(\omega_{m}/m\) for Pc5 ULF frequencies with azimuthal mode numbers \(m=1,2\) and \(m=3\). From Figure (8) we note that energetic electrons with kinetic energies larger than 200 keV and up to a few MeV have access to drift orbit resonance across broad drift-shells. Figure (9) is the same as Figure (8), but the bounded rectangles are drawn for Pc4 waves with azimuthal wave numbers \(m=4,7\) and 10. Pc4 waves can sustain drift resonance for energetic electrons with kinetic energies less than 400 keV, but require larger azimuthal wave numbers (Barani et al., 2019). Even though drift-resonance is strongly energy dependent, Figures (8) and (9) show that it is accessible to a broad range of energies and pitch-angles across the radiation belts. We therefore conclude this section by pointing out that linear perturbations of the distribution function due to ULF electromagnetic fluctuations, particle injections (\(\delta f(r,t=0)>0\)) or losses (\(\delta f(r,t=0)<0\)) all result in phase-space drift structures on non-diffusive timescales comparable to the drift periods. Some of the phase-space structures for the less energetic electrons (\(E_{c}<m_{e}c^{2}\)), assuming particle injection or a gradient in the background distribution, can appear as zebra stripes in the inner radiation belts.
Even though Equation (39) shows that the resonant part of \(\delta f\) also experiences phase-mixing, drift echoes and zebra stripes nonetheless form for _non-resonant_ drift frequencies \(m\Omega_{d}\neq\omega_{m}\), and thus stringent resonant conditions \(m\Omega_{d}\simeq\omega_{m}\) do not constitute _sine qua non_ constraints for the formation of drift echoes and zebra stripes.

Figure 7: Example of resonant and nonresonant responses in the electron distribution function. The ULF wave has a frequency \(\omega=7\) mHz and a mode number \(m=1\). The particles are located at \(L=8\) with pitch-angle \(\alpha=45\) degrees. Particles with kinetic energies of the order of \(\simeq 1.2\) MeV (\(2\pi/\Omega_{d}\simeq 15\) minutes) are resonant, but particles with energies less than 1 MeV (\(2\pi/\Omega_{d}\geq 17\) minutes) are not. The resonant particles experience fluctuations almost one order of magnitude greater than nonresonant particles with comparable kinetic energy.

Figure 8: Azimuthal drift period (\(2\pi/\Omega_{d}\)) dependence in terms of the kinetic energy \(E_{c}=[50-2000]\) keV and normalised radial distance \(L=r/R_{E}=[4,6,8]\). The top panel is for \(\alpha=45\) degrees and the bottom one for \(\alpha=90\) degrees. The grey shaded area is where the drift frequency matches the Pc5 ULF fluctuations with \(\omega=[2,7]\) mHz and resonant interaction is possible. The areas bounded in dashed and dotted lines show the resonant boundaries for \(m=2\) and \(m=3\) modes, respectively.

Figure 9: Azimuthal drift period (\(2\pi/\Omega_{d}\)) dependence in terms of the kinetic energy \(E_{c}=[50-400]\) keV and normalised radial distance \(L=r/R_{E}=[4,6,8]\). The top panel is for \(\alpha=45\) degrees and the bottom one for \(\alpha=90\) degrees. The grey shaded area is where the drift frequency matches the \(m=4\) Pc4 ULF fluctuations with \(\omega=[7,25]\) mHz and resonant interaction is possible. The areas bounded in dashed and dotted lines show the resonant boundaries for \(m=7\) and \(m=10\) Pc4 modes, respectively.

### Quasi-linear theory of radial diffusion

In the previous section we described the fast linear response of the perturbed distribution function to an electromagnetic ULF wave. We assumed that the background distribution \(f_{0}\) was time independent, which is equivalent to saying that it did not experience significant variations on fast time scales. In this section, we compute the evolution of the background distribution function according to quasi-linear assumptions (Kennel & Engelmann, 1966; Diamond et al., 2010; Schekochihin, 2017; Allanson et al., 2022). In quasi-linear theories one assumes that perturbations start modifying the equilibrium before they reach nonlinear amplitudes. In other words, the nonlinear term \(\mathcal{Q}\) in Equation (32) can be ignored when the characteristic time for nonlinear effects is longer than the time for the equilibrium to be reached. We also neglect the linear term \(\frac{A_{m}r}{B_{0}}\frac{\partial f_{0}}{\partial t}\) on the right-hand side of (32) since it provides a correction of order \(|\delta B|^{2}/B_{0}^{2}\ll 1\) in the quasilinear limit, as shown in Appendix C. Thus, for our purpose, we assume that the evolution of the perturbation is determined by Equation (33). Similarly to the previous section, this linear equation can be solved by Duhamel's principle, for the initial condition \(\delta f_{m}(r,t=0)=0\):

\[\delta f_{m}(r,t)=-e^{-im\Omega_{d}t}\int_{-\infty}^{t}dt^{\prime}\ e^{+im\Omega_{d}t^{\prime}}\left(\frac{8r^{2}\dot{A}_{m}(t^{\prime})}{21B_{0}}-im\frac{\mu A_{m}(t^{\prime})}{qB_{0}\gamma}\right)\frac{\partial f_{0}}{\partial r}. \tag{43}\]
The linear solution given by Equation (43) can then be combined with the following quasilinear equation to describe the time evolution of \(f_{0}\): \[\frac{\partial f_{0}}{\partial t}\!=\!-\sum_{m}\left[\frac{im\mu}{qB_{0}\gamma r} \frac{\partial}{\partial r}\left(r\langle A^{*}_{m}\delta f_{m}\rangle\right)+ \frac{8}{21}\frac{1}{rB_{0}}\frac{\partial}{\partial r}\langle r^{3}\dot{A}^{* }_{m}\delta f_{m}\rangle-\frac{r}{B_{0}}\langle A^{*}_{m}\frac{\partial}{ \partial t}\delta f_{m}\rangle-\frac{r}{B_{0}}\langle\dot{A}^{*}_{m}\delta f_{ m}\rangle\right] \tag{44}\] We note that the first two terms on the right-hand side of Equation (44) will result in a diffusion term, and the last two expressions in advection terms. Replacing the linear solution for \(\delta f_{m}\) into (44) to compute the correlation terms \(\langle A^{*}_{m}(t)\delta f_{m}(t)\rangle\) and \(\langle\dot{A}^{*}_{m}(t)\delta f_{m}(t)\rangle\) results in the following two integrals: \[\langle A^{*}_{m}(t)\delta f_{m}(t)\rangle=-\int_{-\infty}^{t}dt^{\prime}\ e^{+im \Omega_{d}(t^{\prime}-t)}\left(\frac{8r^{2}\langle A^{*}_{m}(t)\dot{A}_{m}(t ^{\prime})\rangle}{21B_{0}}-im\frac{\mu\langle A^{*}_{m}(t)A_{m}(t^{\prime}) \rangle}{qB_{0}\gamma}\right)\frac{\partial f_{0}}{\partial r}. \tag{45}\] \[\langle\dot{A}^{*}_{m}(t)\delta f_{m}(t)\rangle=-\int_{-\infty}^{t}dt^{\prime} \ e^{+im\Omega_{d}(t^{\prime}-t)}\left(\frac{8r^{2}\langle\dot{A}^{*}_{m}(t) \dot{A}_{m}(t^{\prime})\rangle}{21B_{0}}-im\frac{\mu\langle\dot{A}^{*}_{m}(t) A_{m}(t^{\prime})\rangle}{qB_{0}\gamma}\right)\frac{\partial f_{0}}{\partial r}. \tag{46}\] To compute the autocorrelations analytically we need to make some assumptions about the nature of the ULF amplitude \(A_{m}(t)\). To account for finite and zero correlation times we choose to model the fluctuations as different realisations of an Ornstein-Uhlenbeck process (Papoulis, 1991) given by the following time evolution equation:

Footnote 22: Energetic electrons in the Earth’s radiation belts are passive tracers and the self-consistent response onto the field can therefore be ignored. This freedom allows one to model the ULF wave amplitudes in a manner empirically consistent with _in situ_ measurements.

\[\frac{\partial A_{m}}{\partial t}=-A_{m}/\tau_{c}+\sqrt{2D}\chi(t), \tag{47}\] where \(\tau_{c}\) is a correlation time, \(\sqrt{2D}\) is a measure of the root mean square value of \(A_{m}\) and \(\chi(t)\) is a unit Gaussian white noise, \(\langle\chi(t)\chi(t^{\prime})\rangle=\delta(t-t^{\prime})\). The solution for \(A_{m}\), assuming \(A_{m}(t\rightarrow-\infty)=0\), is given by \[A_{m}(t)=\sqrt{2D}e^{-t/\tau_{c}}\int_{-\infty}^{t}dt^{\prime}\ e^{t^{\prime}/ \tau_{c}}\chi(t^{\prime}). \tag{48}\] Using Equation (48) we can compute the following quantities for a finite correlation time \(\tau_{c}\neq 0\): \[C_{1}(t,t^{\prime})=\langle A_{m}(t)A_{m}(t^{\prime})\rangle=\tau_{c}De^{-|t-t ^{\prime}|/\tau_{c}} \tag{49}\] \[C_{2}(t,t^{\prime})=\langle\dot{A}_{m}(t)\dot{A}_{m}(t^{\prime})\rangle=\frac {D}{\tau_{c}}e^{-|t-t^{\prime}|/\tau_{c}}+2D\delta(t-t^{\prime}) \tag{50}\] \[C_{3}(t,t^{\prime})=\langle A_{m}(t)\dot{A}_{m}(t^{\prime})\rangle=-De^{-|t-t^ {\prime}|/\tau_{c}} \tag{51}\] The above correlators are only a function of the time difference \(t-t^{\prime}\), and not of the particular times \(t\) and \(t^{\prime}\), indicating that the Ornstein-Uhlenbeck process is stationary, or time-homogeneous.
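As a sanity check on Equations (47)-(49), the short Monte Carlo sketch below (with illustrative parameter values) integrates the Ornstein-Uhlenbeck process with an Euler-Maruyama scheme over many realisations and compares the measured stationary autocorrelation at lag \(\tau_{c}\) with the prediction \(C_{1}(\tau_{c})=\tau_{c}De^{-1}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama integration of dA/dt = -A/tau_c + sqrt(2D)*chi(t),
# Equation (47), over many realisations, to check the stationary
# autocorrelation C1(tau) = tau_c * D * exp(-|tau|/tau_c), Equation (49).
tau_c, D, dt, nsteps, nreal = 1.0, 0.5, 1e-2, 4000, 2000
A = np.zeros(nreal)
history = np.empty((nsteps, nreal))
for n in range(nsteps):
    A += (-A / tau_c) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(nreal)
    history[n] = A

lag = int(tau_c / dt)                     # autocorrelation at tau = tau_c
# discard the first 2000 steps (20 tau_c) as burn-in before averaging
c1_est = np.mean(history[2000:-lag] * history[2000 + lag:])
print(f"measured C1(tau_c) = {c1_est:.3f}, theory = {tau_c * D * np.exp(-1):.3f}")
```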
Returning to the integrals (45) and (46), it should be stressed that the gradient in the background distribution function appearing in the integrals is a function of time, i.e., \(f_{0}=f_{0}(r,t)\). The last step before solving the integrals is to assume that a short decorrelation time \(\tau_{c}\) exists, such that the correlators \(C_{i}(t-t^{\prime})\ll C_{i}(0)\) if \(t-t^{\prime}>\tau_{c}\). We can thus replace \(f(r,t^{\prime})=f(r,t-\tau)\) by \(f(r,t)\) on the basis that \(C_{i}(\tau=t-t^{\prime})\) changes appreciably before any significant variation in the background distribution occurs (Vanden Eijnden, 1997). This quasi-linear assumption indicates that the ULF wave amplitude cannot alter the background distribution function on timescales comparable to the ULF wave and drift periods. The diffusion coefficient that follows in the next lines can therefore not lead to changes on timescales comparable to the azimuthal drift period, which justifies the ensemble-average defined by Equation (27). For the sake of simplicity, we now assume zero correlation time, which means \(e^{-|t-t^{\prime}|/\tau_{c}}\longrightarrow\tau_{c}\delta(t-t^{\prime})\) with \(D=|A_{m}|^{2}/\tau_{c}\). Using the above expressions, we compute the following correlators:

Footnote 23: By keeping \(\tau_{c}\) finite but small (\(\Omega_{d}\tau_{c}\ll 1\)), the diffusion coefficient in the quasilinear limit is rescaled by a factor \(\frac{1}{1+\Omega_{d}^{2}\tau_{c}^{2}}\), thereby introducing an energy dependence to the radial transport, as shown in Osmane and Lejosne (2021).

\[\langle A_{m}(t)\delta f_{m}(t)\rangle=\left(\frac{8r^{2}D\tau_{c}}{21B_{0}}+ im\frac{\mu D\tau_{c}}{qB_{0}\gamma}\right)\frac{\partial f_{0}}{ \partial r} \tag{52}\] \[\langle\dot{A}_{m}(t)\delta f_{m}(t)\rangle=-\left(\frac{8r^{2}D}{21B_{0}}+ im\frac{\mu D\tau_{c}}{qB_{0}\gamma}\right)\frac{\partial f_{0}}{\partial r} \tag{53}\] The quasilinear diffusion equation therefore takes the general form: \[\frac{\partial f_{0}}{\partial t} = \sum_{m}\left[\left(\frac{m\mu r}{\gamma qB_{E}R_{E}}\right)^{2} \frac{\partial}{\partial r}\left(\frac{D\tau_{c}^{2}r^{4}}{R_{E}^{4}}\frac{ \partial f_{0}}{\partial r}\right)+\left(\frac{8}{21}\right)^{2}\frac{r^{2}} {R_{E}^{2}}\frac{\partial}{\partial r}\left(\frac{Dr^{8}}{B_{E}^{2}R_{E}^{4} }\frac{\partial f_{0}}{\partial r}\right)-\frac{1}{3}m^{2}\Omega_{d}^{2}\tau_ {c}^{2}\frac{Dr^{9}}{B_{E}^{2}R_{E}^{6}}\frac{\partial f_{0}}{\partial r}\right] \tag{54}\] We normalise time and the radial distance in the quasi-linear Equation (54) as \(\tau=t/\tau_{c}\) and \(L=r/R_{E}\), and write \(|\delta B_{m}|^{2}=r^{2}|A_{m}|^{2}\) to find: \[\frac{\partial f_{0}}{\partial\tau} = \sum_{m}\left[L^{2}\frac{\partial}{\partial L}\left(\frac{1}{9}m^ {2}\Omega_{d}^{2}\tau_{c}^{2}L^{6}\frac{|\delta B_{m}|^{2}}{B_{E}^{2}}\frac{ \partial f_{0}}{\partial L}\right)+L^{2}\frac{\partial}{\partial L}\left( \frac{8^{2}}{21^{2}}L^{8}\frac{|\delta B_{m}|^{2}}{B_{E}^{2}}\frac{\partial f _{0}}{\partial L}\right)\right] \tag{55}\] \[= L^{2}\frac{\partial}{\partial L}\left(\frac{D_{LL}}{L^{2}}\frac{ \partial f_{0}}{\partial L}\right),\] in which the diffusion coefficient \(D_{LL}\) normalised by \(\tau_{c}\) is given by \[D_{LL}=\sum_{m}\left(\frac{1}{9}m^{2}\Omega_{d}^{2}\tau_{c}^{2}+\frac{64}{441} \right)\frac{|\delta B_{m}|^{2}}{B_{E}^{2}}L^{8}. \tag{56}\]
Equation (55) conserves particles confined within a bounded volume since the total rate of change of particles is given by \(dN/dt\simeq\int_{L_{min}}^{L_{max}}\ B\ L\ dL\ \partial f/\partial t\simeq\frac{D_{ LL}}{L^{2}}\frac{\partial f_{0}}{\partial L}\big{|}_{L_{min}}^{L_{max}}\). Moreover, since this diffusion coefficient has been derived for an electromagnetic field model that respects Faraday's law, it can be expressed in terms of the wave power in the magnetic field alone and does not require the separation into electric \(D_{LL}^{E}\) and magnetic \(D_{LL}^{B}\) diffusion coefficients commonly used in radial transport studies (Ozeke et al., 2014; Sandhu et al., 2021). The diffusion coefficient depends on the first adiabatic invariant \(\mu\) contained in the azimuthal drift frequency \(\Omega_{d}\). We note that for large azimuthal wave numbers \(m\gg 1\), the diffusion coefficient is energy dependent and has a radial distance dependence that goes as \(L^{6}\), even though the short-correlation time assumption constrains \(\Omega_{d}\tau_{c}\ll 1\). For \(m\simeq 1\) and \(\Omega_{d}\tau_{c}\ll 1\), the diffusion coefficient is independent of energy and has an \(L^{10}\) scaling: \[D_{LL}=\begin{cases}\frac{m^{2}\mu^{2}}{q^{2}\gamma^{2}}\tau_{c}^{2}\frac{| \delta B_{m}|^{2}}{B_{E}^{2}}L^{4}\sim L^{6},&\text{if }0.77m^{2}\Omega_{d}^{2}\tau_{c}^{2} \gg 1,\\ \frac{8^{2}}{21^{2}}\frac{|\delta B_{m}|^{2}}{B_{E}^{2}}L^{8}\sim L^{10},& \text{if }0.77m^{2}\Omega_{d}^{2}\tau_{c}^{2}\ll 1.\end{cases} \tag{57}\] This distinction between \(D_{LL}\) for high and low azimuthal wave numbers is important for modelling of the Earth's radiation belts because solar wind perturbations can result in both broad and narrow ULF azimuthal wave number spectra (Murphy et al., 2020). For instance, interplanetary shocks can cause a broad spectrum in azimuthal wave numbers with \(\{m\in\mathbb{Z}^{+}:m<20\}\) (Sarris, 2014; Barani et al., 2019). In such an instance, the model predicts an energy dependent \(D_{LL}\) that can scale as \(L^{6}\) if the wave power of the high \(m\) modes is comparable to the wave power in the low \(m\) modes. On the other hand, a narrow ULF wave spectrum centred on the azimuthal wave number \(m=1\) should result in a diffusion coefficient that is independent of energy and more sensitive to the radial distance. In other words, the parametric dependence of \(D_{LL}\) is a function of how broad the ULF wave spectrum is in \(m\). If the magnetospheric plasma is dominated by an \(m=1\) mode, with several orders of magnitude less power in \(m>1\) modes, a quasilinear modelling of the diffusion coefficient with an \(L^{10}\) dependence should be chosen. If the choice of a quasilinear model with an \(L^{6}\) dependence and an energy dependence in \(D_{LL}\) provides better accuracy, it would nonetheless be inconsistent with the above radial diffusion coefficients derived for a Mead field.
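The regime selection in Equation (57) can be read off directly from Equation (56): whichever of the two terms in the bracket dominates sets the scaling. The sketch below evaluates this per-mode with assumed, order-of-magnitude parameters (an \(\Omega_{d}\) typical of a \(\sim 1\) MeV electron and \(\tau_{c}=100\) s, neither taken from data).

```python
import numpy as np

def D_LL_per_mode(L, m, omega_d, tau_c, dB2_over_BE2):
    """One term of the normalised diffusion coefficient, Equation (56),
    returning its value (in units of 1/tau_c) and the ratio of the
    energy-dependent drift term to the energy-independent constant."""
    drift = (m * omega_d * tau_c) ** 2 / 9.0
    const = 64.0 / 441.0
    return (drift + const) * dB2_over_BE2 * L**8, drift / const

# Assumed, illustrative numbers: Omega_d ~ 8e-3 rad/s for a ~1 MeV
# electron near L=5 (cf. Section 3.2), tau_c = 100 s, |dB/B_E|^2 = 1e-6.
omega_d, tau_c = 8e-3, 100.0
for m in (1, 5, 10):
    d, ratio = D_LL_per_mode(5.0, m, omega_d, tau_c, 1e-6)
    regime = "energy-dependent (L^6-like)" if ratio > 1 else "energy-independent (L^10-like)"
    print(f"m={m:2d}: D_LL = {d:.2e}/tau_c, drift/const = {ratio:6.2f} -> {regime}")
```

The ratio printed is \((441/576)\,m^{2}\Omega_{d}^{2}\tau_{c}^{2}\approx 0.77\,m^{2}\Omega_{d}^{2}\tau_{c}^{2}\), i.e. exactly the switch appearing in Equation (57): low-\(m\) Pc5 power falls in the energy-independent branch, while high-\(m\) modes drive the energy-dependent one.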
### Beyond a quasi-linear theory of radial transport: nonlinear regime

In the preceding sections we have described the linear response of the perturbed distribution function \(\delta f\) and written a Fokker-Planck equation for the quasi-linear evolution of the ensemble-averaged distribution function \(f_{0}(L,t)\). Even in the quasi-linear limit the perturbed distribution function is assumed to be linear, while the evolution of the background distribution is nonlinear in the sense that it depends on the correlator \(\langle\delta B_{m}\delta f_{m}\rangle\). However, the perturbed response given by Equation (32) contains a nonlinear term, and this section aims to determine when linear assumptions of radial transport break down and nonlinear processes become dynamically important.

We distinguish two types of nonlinear regimes. In the first type, nonlinear structures associated with ULF waves are produced but remain isolated, in the sense that they cannot interact with one another. Such structures have been covered in the case of ULF radial transport by Li et al. (2018) and Wang et al. (2018), and their observational signatures consist of fluxes trapped in the potential wells of electric fields. This regime of isolated trapped structures is equivalent to the formation of Bernstein-Greene-Kruskal (BGK) modes for a Vlasov-Poisson system (Bernstein et al., 1957) and requires a sufficiently large-amplitude fluctuation to confine particles in phase-space. In the second type, nonlinearities arise because multiple ULF modes are present and the resulting fluctuations in the distribution function interact with one another. This second type of nonlinearity, unlike the first one, can be facilitated by the presence of large-amplitude fluctuations but does not require them. This regime is equivalent to the one presented by Dupree (1972) for a Vlasov-Poisson system and is associated with the formation of phase-space granulations. These phase-space granulations can consist of linear fluctuations arising from ballistic trajectories, such as drift echoes, or of nonlinear trapped fluctuations equivalent to BGK modes. Theoretical and observational studies have indicated that such a nonlinear regime of non-isolated structures might be common in weakly collisional plasmas (Schekochihin et al., 2008; Schekochihin et al., 2016; Meyrand et al., 2019; Servidio et al., 2017; Kunz et al., 2018), can prevent Landau damping from dissipating fluctuations (Wu et al., 2019), and can result in a phase-space turbulent cascade akin to what is observed in fluid and MHD turbulent systems (Goldreich and Sridhar, 1995). While we acknowledge that ULF wave amplitudes in the Earth's radiation belts can be sufficiently large to sustain the trapped structures derived by Li et al. (2018) and Wang et al. (2018), the trapping along magnetic local time does not result in irreversible energy gain by the trapped populations. We focus hereafter on the second nonlinear regime, which relies on the presence of more than one ULF Pc4 or Pc5 mode. We show that this second nonlinear regime can result in the transport of particles across magnetic drift shells, and thus in irreversible energising of populations that would otherwise be unable to experience drift-resonance. We also demonstrate that the inclusion of nonlinear effects associated with the symmetric ULF fluctuation, which in the linear and quasi-linear regimes had no impact, can suddenly become a driver of acceleration and losses.

#### 3.4.1 Criteria to determine when nonlinear radial transport becomes significant

The nonlinear terms contained in \(\mathcal{Q}\), given by Equation (B8), can be understood as coupling terms in which a mode with azimuthal wave number \(p=m-m^{\prime}\) couples with a mode \(q=m^{\prime}\) to pump or sink energy from a mode number \(m\). For instance, a collection of particles interacting with azimuthal wave number \(m=3\), encoded in \(\delta f_{m=3}\), and azimuthal wave number \(m^{\prime}=1\), encoded in \(\delta f_{m^{\prime}=1}\), can couple to one another through a ULF mode with \(p=2\) and amplitude \(A_{p=2}\).
This nonlinear wave-particle coupling can lead to acceleration of nonresonant energetic particles with slow azimuthal drift periods compared to Pc4 and Pc5 ULF frequencies, i.e. \(m\Omega_{d}\ll\omega\). However, satisfying the condition \(p+q=m\) is not enough to make nonlinear effects dynamically relevant for radial transport. The nonlinear coupling terms become significant when they become comparable to the linear transit term of a particle experiencing an azimuthal drift, which is given by the second expression on the left-hand side of Equation (39). For instance, if we account for the nonlinear term associated with the symmetric ULF amplitude \(S(t)\), with mode number \(p=0\), acting on the particle response to a mode \(q=m\), \(\frac{S}{B_{0}}\frac{\partial\delta f_{m}}{\partial t}\simeq\omega\frac {S}{B_{0}}\delta f_{m}\), we find the following two criteria \[I_{1}=\frac{\text{symmetric nonlinear term \# 1}}{\text{linear transit time}}\simeq\frac{\omega_{m}}{m\Omega_{d}}\frac{S}{B_{0}} \simeq 1, \tag{58}\] \[I_{2}=\frac{\text{symmetric nonlinear term }\#2}{\text{linear transit time}}\simeq\frac{L}{2}\frac{\omega}{m\Omega_{d}}\frac{S}{B_{0}}\frac{\partial}{\partial L}\log \delta f_{m}\simeq 1, \tag{59}\] in which the frequency \(\omega_{m}\) is associated with time variations of the perturbed distribution function \(\delta f_{m}\) and the frequency \(\omega\) with the symmetric ULF wave amplitude \(S(t)\). We note that the linear ballistic response of the perturbed distribution function given by Equation (39) results in time variations with frequencies \(\omega_{m}=m\Omega_{d}\); thus nonlinear effects can be felt whenever the symmetric ULF amplitude becomes comparable to the local magnetic field. However, criterion \(I_{1}\) can also be satisfied in the limit where the symmetric ULF fluctuations are small in amplitude, i.e. \(S(t)/B_{0}\ll 1\), if \(\omega_{m}\gg m\Omega_{d}\). For criterion \(I_{2}\), nonlinear effects become significant for large gradients in the perturbed distribution (\(\partial\log(\delta f_{m})/\partial L\gg 1\)) even in the limit \(S\ll B_{0}\). We can account for nonlinearities associated with the anti-symmetric perturbation \(\frac{rA_{m-m^{\prime}}}{B_{0}}\frac{\partial\delta f_{m^{\prime}}}{\partial t }\simeq\omega_{m^{\prime}}\frac{rA_{m-m^{\prime}}}{B_{0}}\delta f_{m^{\prime}}\) by defining two additional criteria \(I_{3}\) and \(I_{4}\) in terms of the nonlinear terms \[I_{3}=\frac{\text{anti-symmetric nonlinear term }\#1}{\text{linear transit time}}\simeq\frac{\omega_{m^{\prime}}}{m\Omega_{d}}\frac{rA_{m-m^{\prime}}}{B_{0}}\frac{\delta f_{m^{\prime}}}{ \delta f_{m}}\simeq 1, \tag{60}\] \[I_{4}=\frac{\text{anti-symmetric nonlinear term }\#2}{\text{linear transit time}}\simeq L\frac{8}{21}\frac{\omega_{m-m^{\prime}}}{m\Omega_{d}}\frac{rA_{m-m^{\prime}}}{B_{0}} \frac{\delta f_{m^{\prime}}}{\delta f_{m}}\frac{\partial}{\partial L}\log \delta f_{m^{\prime}}\simeq 1. \tag{61}\] We note once more that since the linear response of the perturbed part \(\delta f_{m^{\prime}}\) can result in time variations with frequencies \(\omega\simeq m^{\prime}\Omega_{d}\), nonlinear effects can be sensed whenever \(\frac{m^{\prime}}{m}\frac{\delta B}{B_{0}}\frac{\delta f_{m^{\prime}}}{\delta f _{m}}\simeq 1\). If \(m=1\), \(m^{\prime}>m\), or in the presence of large gradients, small amplitude ULF fluctuations \(rA_{m-m^{\prime}}=\delta B_{m-m^{\prime}}\ll B_{0}\) can nonetheless result in dynamically relevant nonlinear effects.
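The criteria (58)-(61) are simple order-of-magnitude ratios and can be evaluated directly. The sketch below uses assumed, illustrative numbers only, to show that small fractional wave amplitudes can still push \(I_{1}\) and \(I_{3}\) to order unity when the frequency and gradient factors are large.

```python
# Order-of-magnitude reading of criteria (58) and (60); every numerical
# value below is an assumed, illustrative choice, not a measurement.
def I1(omega_m, m_omega_d, S_over_B0):
    return (omega_m / m_omega_d) * S_over_B0                 # Equation (58)

def I3(omega_mp, m_omega_d, dB_over_B0, df_ratio):
    return (omega_mp / m_omega_d) * dB_over_B0 * df_ratio    # Equation (60)

# A small symmetric amplitude (S/B0 = 5%) acting on slowly drifting
# particles (omega_m = 20 * m*Omega_d) already gives I1 ~ 1:
print(I1(omega_m=20.0, m_omega_d=1.0, S_over_B0=0.05))       # -> 1.0
# Anti-symmetric coupling with dB/B0 = 2% but a steep perturbed-mode
# ratio df_m'/df_m = 10 and omega_m' = 5 * m*Omega_d also gives I3 ~ 1:
print(I3(omega_mp=5.0, m_omega_d=1.0, dB_over_B0=0.02, df_ratio=10.0))  # -> 1.0
```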
#### 3.4.2 Nonlinear impact of symmetric perturbations on fast timescales

In the previous section we defined four criteria to argue that nonlinear effects can become significant even for small amplitude ULF fluctuations. In this section we focus on the nonlinearity arising from the symmetric ULF perturbation \(S(t)\). The nonlinear Equation (32) for the perturbed distribution function \(\delta f_{m}\) can be solved analytically on fast timescales comparable to the drift period of particles. The linear solution to Equation (32) is independent of the symmetric perturbation \(S(t)\), but the nonlinear term \(\mathcal{Q}\) contains a coupling between the symmetric perturbation and \(\delta f_{m}\). This nonlinear response of the particles with a mode \(m\) is due to the coupling between the \(m=0\) ULF mode contained in the symmetric perturbation and \(\delta f_{m}\) itself. If we assume that the nonlinear coupling due to \(S(t)\) is greater than the one due to the anti-symmetric ULF waves, \(S\gg rA_{m}\), Equation (32) becomes: \[\frac{\partial\delta f_{m}}{\partial t}+im\Omega_{d}\delta f_{m}=-\left(\frac {8r^{2}\dot{A}_{m}}{21B_{0}}-im\frac{\mu A_{m}}{qB_{0}\gamma}\right)\frac{ \partial f_{0}}{\partial r}+\frac{S}{B_{0}}\frac{\partial\delta f_{m}}{ \partial t}-\frac{r\dot{S}}{2B_{0}}\frac{\partial\delta f_{m}}{\partial r}. \tag{62}\] In order to isolate the impact of the symmetric perturbation arising from nonlinear coupling, we split the perturbed distribution into a linear part \(\delta f_{m}^{L}\) given by Equation (39) and a nonlinear part \(\delta f_{m}^{NL}\) that can be extracted from the following equation: \[\frac{\partial\delta f_{m}^{NL}}{\partial t}+im\Omega_{d}\delta f_{m}^{NL}= \frac{S}{B_{0}}\frac{\partial\delta f_{m}^{L}}{\partial t}-\frac{r\dot{S}}{2 B_{0}}\frac{\partial\delta f_{m}^{L}}{\partial r}. \tag{63}\] In Equation (63) we assume that the nonlinear perturbation remains smaller than the linear response, \(|\delta f_{m}^{NL}|<|\delta f_{m}^{L}|\), and thus we can solve the nonlinear equation perturbatively and drop the coupling terms proportional to \(\delta f_{m}^{NL}S(t)\). Equation (63) is linear in \(\delta f_{m}^{NL}\) and can now be solved if we prescribe a solution for the linear response \(\delta f_{m}^{L}\). For the sake of simplicity, and in order to highlight that ULF radial transport can have an impact on non-resonant particles on fast timescales comparable to or less than the drift period, we assume that the linear perturbation \(\delta f_{m}^{L}\) is given by an injection or a loss of 100 keV electrons consistent with the linear solution, and set \(A_{m}=0\).

Footnote 24: The linear response is taken as the ballistic one \(\delta f_{m}=\delta f_{m}(t=0,r)e^{-im\Omega_{d}t}\). Inclusion of the linear wave-particle response for \(A_{m}\neq 0\) leads to the same physical process and is left for future, more detailed studies of higher order radial transport.

Particles with 100 keV confined in the equatorial plane at normalised radial distances \(L\leq 8\) have azimuthal drift periods of the order of 90 to 120 minutes. Thus, frequencies of the order of \(\omega\simeq 1\) mHz would require azimuthal wave numbers of \(m\geq 10\) (Barani et al., 2019).
We assume an injection of 100 keV electrons given by a Gaussian centred at a radial distance \(L_{c}\) with a radial spread \(\Delta L\): \[\delta f_{m}^{L}(L,t) = \delta f_{m}(0,L)e^{-im\Omega_{d}t} \tag{64}\] \[= e^{-\frac{(L-L_{c})^{2}}{\Delta L^{2}}}e^{-im\Omega_{d}t}.\] The symmetric perturbation is modelled as a compression of the magnetic field with a decay time \(\tau_{c}^{S}\), \[S(t)=\delta b\ e^{-t/\tau_{c}^{S}}. \tag{65}\] The perturbed solution for the distribution function \(\delta f_{m}^{L}+\delta f_{m}^{NL}\) following the Gaussian shaped injection and decaying symmetric ULF mode is given by \[\delta f_{m}^{L}+\delta f_{m}^{NL}=\delta f_{m}(0,L)e^{-im\Omega_{d}t}\left[1 -\frac{\delta b}{B_{0}}\bigg{(}1-e^{-t/\tau_{c}^{s}}\bigg{)}\left(im\Omega_{d }\tau_{c}^{S}+\frac{L(L-L_{c})}{\Delta L^{2}}\right)\right]. \tag{66}\] The nonlinear response given by (66) for the \(m=1\) mode is shown in Figure (10) for a symmetric ULF wave amplitude of \(\delta b=0.12B_{0}\). The top left panel corresponds to the linear response. After the injection of the particles at \(L_{c}=5\), the distribution function oscillates in time and gets sheared along \(L\). However, when we introduce a symmetric perturbation with a decay time that is smaller than the drift period (\(\tau_{c}^{S}<2\pi/\Omega_{d}\)), the distribution function splits at the injection point. This non-adiabatic behaviour is shown in the top right and bottom left panels of Figure (10). In comparison, an adiabatic decay of the ULF mode with \(\tau_{c}^{S}\geq 2\pi/\Omega_{d}\) has no impact on the distribution function, as shown in the bottom right panel of Figure (10). The physical process responsible for this mechanism is illustrated in Figure (14). A symmetric ULF compression with amplitude \(S(t)\) results in an \(E\times B\) differential gradient that is larger in amplitude at higher than at lower drift shells. Drift shells with negative (positive) gradients result in particles being driven inward (outward). If the ULF compression is adiabatic, particles phase-mix along \(L\); but if the compression is non-adiabatic and the \(E\times B\) drift decays or grows too fast (compared to the azimuthal drift period) for phase-mixing to occur, the net radial drift is inward. This net inward motion of particles is shown in Figures (11) and (12), for ULF symmetric amplitudes corresponding to 25% and 62% of the background field at \(L=5\). The inward moving particles increase in energy in order to conserve the first adiabatic invariant, whereas the outward moving particles lose energy. This process can result in the fast and irreversible acceleration of particles, as well as losses associated with shadowing, even though there is no drift-resonance with the ULF modes. These results demonstrate that the inclusion of higher order effects can lead to non-diffusive and irreversible radial transport on fast timescales. Such a process cannot be modelled with quasi-linear radial diffusion.
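A direct evaluation of Equation (66) reproduces the splitting qualitatively. The sketch below is a minimal illustration, assuming an \(L\)-independent drift frequency and an arbitrary \(\sim\)100 minute drift period; all parameter values are illustrative. It prints the real part of \(\delta f\) across the injection region for a non-adiabatically decaying symmetric mode: the perturbation is enhanced inside \(L_{c}\) and changes sign at the outer edge, i.e. the injected population splits.

```python
import numpy as np

def df_total(L, t, m, omega_d, L_c, dL, db_over_B0, tau_s):
    """Linear Gaussian injection plus the nonlinear correction, Eq. (66)."""
    lin = np.exp(-((L - L_c) / dL) ** 2) * np.exp(-1j * m * omega_d * t)
    corr = 1.0 - db_over_B0 * (1.0 - np.exp(-t / tau_s)) * (
        1j * m * omega_d * tau_s + L * (L - L_c) / dL ** 2)
    return lin * corr

T_d = 100.0 * 60.0                  # assumed drift period ~100 min [s]
omega_d = 2.0 * np.pi / T_d
L = np.linspace(3.5, 6.5, 7)
# t = 5 drift periods so the ballistic phase factor is exactly unity
vals = df_total(L, t=5 * T_d, m=1, omega_d=omega_d,
                L_c=5.0, dL=1.0, db_over_B0=0.12, tau_s=0.05 * T_d)
print(np.round(vals.real, 3))       # enhanced for L < L_c, sign flip beyond L_c
```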
## 4 Discussion

### When can we use quasi-linear radial diffusion?

A drift kinetic description of ULF wave interaction with energetic particles is a convenient methodology to define the regime of validity of quasi-linear radial diffusion problems. In comparison, the derivation in terms of particle trajectories (Falthammar, 1965; Elkington et al., 1999; Lejosne, 2019) is mathematically more transparent than the one provided in Section 3.3, but since it does not require computation of the perturbed orbits, it does not distinguish explicitly between the fast perturbed part and the slow background part of the distribution function.

Figure 10: Impact on the perturbed distribution function of 100 keV electrons injected at \(L=5\) by a symmetric perturbation of amplitude \(\delta b=0.12B_{0}\) at \(L=5\), on timescales less than one drift period. The colour scale denotes the perturbed distribution amplitude.

The procedure to derive the radial diffusion coefficient is identical to the one pursued for other quasi-linear theories in laboratory and astrophysical plasmas (Kennel & Engelmann, 1966; Diamond et al., 2010; Schekochihin, 2017). Quasi-linear theories require temporal and spatial scale separation of the distribution function in terms of a slow ensemble-averaged background component and a fast perturbed component. The fast component can evolve on timescales comparable to the periods of the electromagnetic fluctuations responsible for the wave-particle interactions. For instance, for seed electrons of 10-100 keV interacting with high frequency whistler waves, the quasi-linear theory of Kennel & Engelmann (1966) is explicitly clear that the perturbed component evolves on timescales of the order of the whistler period, and thus of the Larmor period as well, since \(\omega\simeq\Omega_{s}\). The diffusive evolution of the distribution function requires a large number of interactions with whistler waves and is therefore computed on timescales averaged over a large number of whistler wave periods. The perturbed part is computed linearly, and thus quasi-linear theory assumes that nonlinear effects such as trapping and mode-mode coupling associated with large amplitudes can be neglected.

For a quasi-linear theory of radial transport to be consistent, one needs to preserve the scale separation defined by Equations (25) and (27). The background distribution function is not only independent of magnetic local time, and thus of \(\varphi\); it also cannot change significantly on timescales comparable to the drift period.

Figure 11: Same as Figure 10 but with a symmetric perturbation of amplitude \(\delta b=0.25B_{0}\) at \(L=5\).

A radial diffusion coefficient that becomes comparable to the drift frequency (\(D_{LL}\simeq\Omega_{d}\)) indicates that a collection of particles can be carried across one drift shell (\(\sqrt{\langle\Delta L^{2}\rangle}\simeq 1\)) during a single drift period. This argument stems from the fact that dimensionally the radial diffusion coefficient scales as \(D_{LL}\simeq\langle\Delta L^{2}\rangle/t\), and that the inverse of the diffusion coefficient gives a characteristic time for transport across one drift-shell. Taking into account that the derivation of the quasi-linear diffusion coefficients requires a short decorrelation time of the ULF wave amplitude, and the observational fact that ULF waves are long-lived and coherent (Hartinger et al., 2013), it is inconceivable that diffusive scattering across drift-shells can occur over a single drift period.

Footnote 25: This heuristic argument is to some degree arbitrary, but for lack of a better alternative, provides a reasonable and reliable constraint on radial diffusion coefficients.
The determination of accurate radial diffusion coefficients is not merely of academic interest: it has important consequences for space weather models and for radiation belt studies focused on distinguishing between local and global acceleration processes (Green & Kivelson, 2004). Current global magnetospheric models accounting for radial diffusion rely on \(D_{LL}\) coefficients that become comparable to, and for large geomagnetic activity larger than, the drift frequencies of energetic electrons trapped in the Earth's radiation belts (Brautigam & Albert, 2000; Ozeke et al., 2014). For instance, the radial diffusion coefficient of Ozeke et al. (2014) can become as large as \(10^{2}\) in units of day\({}^{-1}\) for Kp \(>5\), which corresponds to the drift period of electrons with energies of 1 MeV. Additionally, derived radial diffusion coefficients assume that the ULF wave correlation \(\langle\delta{\bf B}(t)\delta{\bf B}(t+\tau)\rangle\) is homogeneous in time and space along the particle's orbit.

Figure 12: Same as Figure 10 but with a symmetric perturbation of amplitude \(\delta b=0.62B_{0}\) at \(L=5\).

However, ULF waves are sustained by a wide range of processes that are not co-located, ranging from Kelvin-Helmholtz instabilities (Mills & Wright, 1999), pressure pulses in the solar wind (Takahashi & Ukhorskiy, 2007), foreshock transients (Hartinger et al., 2013), and magnetospheric substorms (Volwerk, 2016), to unstable plasma distributions (Southwood et al., 1969). As a consequence, ULF waves are not homogeneously distributed in the magnetosphere (Murphy et al., 2020), and unless the ULF waves decay very fast compared to the drift period, quasi-linear radial diffusion coefficients accounting for non-homogeneous statistics have to be derived.

Footnote 26: Osmane & Lejosne (2021) show that spatially homogeneous ULF waves with finite correlation time comparable to the drift period result in sub-diffusive radial transport and the slowing down of radial diffusion. The inclusion of non-homogeneous effects in radial diffusion is, to the best of our knowledge, currently missing.

The above mentioned limitations of quasi-linear radial diffusion do not imply that ULF waves cannot sustain transport on timescales comparable to the drift period. Rather, what is argued is that current quasi-linear radial diffusion models have clear limitations and should not be used beyond their range of validity. Radial transport coefficients encoding the impact of ULF waves on fast timescales require models that are not quasi-linear. A drift kinetic approach to radial transport is not confined to theoretical or modelling studies. With GPS flux measurements calibrated by Van Allen Probes' instruments, it is now possible to quantify radial transport observationally on timescales of the order of a single drift period for electrons with energies less than 1 MeV (Morley et al., 2016; Morley et al., 2017; Kalliokoski et al., 2023).

Figure 13: Cut of the linear and nonlinear perturbed distribution function at \(\varphi=0\) in Figure 12. The non-adiabatic symmetric perturbation splits the distribution function by pushing particles inward and outward.
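The heuristic bound above can be made explicit with the numbers quoted in this section; the following sketch (arithmetic only, no new data) compares the one-drift-shell transport time \(1/D_{LL}\) implied by a \(10^{2}\) day\(^{-1}\) coefficient with the \(\simeq 15\) minute drift period of a 1 MeV electron.

```python
# Arithmetic check of the heuristic D_LL <~ Omega_d constraint, using only
# the values quoted in the text.
D_LL_per_day = 1e2                               # Ozeke et al. (2014) at high Kp
transport_time_min = 24.0 * 60.0 / D_LL_per_day  # ~ time to diffuse one L-shell
drift_period_min = 15.0                          # ~1 MeV electron, cf. Section 3.2
print(f"1/D_LL = {transport_time_min:.1f} min vs. drift period = {drift_period_min:.0f} min")
```

The two timescales come out comparable, which is precisely the regime where the quasi-linear derivation ceases to be self-consistent.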
### Fast radial transport

#### 4.2.1 Distinguishing between drift resonant and non-resonant interactions

The scale separation described in Section (3.1) forms the basis used to derive a quasi-linear theory of radial transport, but it is also appropriate to quantify the linear and nonlinear responses of the distribution function that occur on fast timescales comparable to a few drift periods. Section (3.2) described three different types of linear responses associated with a ULF wave of frequency \(\omega_{m}\), growth or damping rate \(\gamma_{m}\), and azimuthal wave number \(m\). Two of these responses are non-resonant and one corresponds to drift resonance of particles drifting in the Earth's magnetic field with frequency \(\Omega_{d}\simeq\omega_{m}/m\). The first type of non-resonant response is a modulation of the distribution function at the frequency of the ULF wave \(\omega_{m}\), and the second type of non-resonant response is an oscillation of the distribution function at the drift frequency \(\Omega_{d}\). While both resonant and non-resonant responses to a ULF wave are a function of the local gradient in the background distribution function, the resonant response is energy dependent and the perturbed distribution is amplified by up to one order of magnitude; it is therefore distinguishable from non-resonant responses. Models of ULF drift resonance predict that satellites should observe the largest modulations in particle flux at energies corresponding to the resonant energy, with smaller modulation at lower and higher energies (Southwood & Kivelson, 1981). Equation (41) confirms this signature of resonance but also demonstrates that non-resonant, as well as resonant, particles can oscillate at the ULF wave frequency. _In situ_ observations of distribution functions or fluxes oscillating at a ULF frequency \(\omega_{m}\) should therefore not be taken as a signature of drift-resonance unless the response is localised in energy spectrograms.

Figure 14: Explanation of the nonlinear mechanism presented in Section 3.4.2. A symmetric ULF compression with amplitude \(S(t)\) results in an \(E\times B\) differential gradient that is larger in amplitude at higher than at lower drift shells. Drift shells with negative (positive) gradients result in particles being driven inward (outward). If the ULF compression is adiabatic, particles phase-mix along \(L\), but if the compression is non-adiabatic and the \(E\times B\) drift decays too quickly for phase-mixing to occur, the net drift is inward.

For drift resonance, the timescales associated with the resonant interaction and its width are a function of the growth rate, and we stress here that seeing comparable modulation across multiple energy levels for a monochromatic ULF wave spectrum is an indication that the interaction is non-resonant. In the study of Claudepierre et al. (2013), fluxes of energetic electrons between 20 and 500 keV are identified as unambiguous signatures of localised drift-resonant interaction with a ULF wave. However, no analysis is provided to quantify the radial gradient of the distribution function for each respective energy flux. As shown in this report, the modulation of particle fluxes at the ULF wave frequency does not require drift-resonance and can be observed for non-resonant particles as well. The difference in amplitude between fluxes can instead be explained in terms of differences in the radial gradients of the respective energetic fluxes.
The localised modulation in time can be explained by a ULF wave that is being damped at a rate \(\gamma_{m}\), and the spatially localised modulation seen on one Van Allen Probe but missed by the second probe can be an indication that the radial gradient of the distribution function is highly spatially localised. Large and localised radial gradients of the distribution function have been reported in case studies. For instance, Hartinger et al. (2020) points out that at \(L=4.5\) and \(L=6.6\) the reported radial PSD gradients are 30-300 times larger at values corresponding to energies of 200 keV compared to 1 MeV. Consequently, residual flux oscillations in this particular case would be 30-300 times larger for electrons with energies of 200 keV than for 1 MeV. Thus, characterising flux oscillations without accounting for radial gradients, known to vary by several orders of magnitude, can lead to erroneous interpretations of wave-particle processes.

As shown in Figures (8) and (9), ULF waves in the Pc4 and Pc5 range can be resonant with electrons of energies ranging between 100 keV and a few MeV, yet signatures of drift resonance for the most energetic MeV populations are rare. Hartinger et al. (2020) addresses this inconsistency between observations and theoretical assumptions. On the basis of the theoretical study of Southwood & Kivelson (1981), in order for drift resonances to be observed, one requires finite radial gradients in the background distribution function. Drift resonant interactions could still occur but would be masked by small radial gradients in the background distribution function. While we are in agreement with the conclusions of Hartinger et al. (2020), that drift resonance requires observable gradients to be detected, our analysis of the resonant response provides one additional constraint. Drift resonant signatures result in an amplification of the particle response, as shown in Equation (41), that is localised in time and can be comparable to a single drift period. Moreover, if the ULF wave damps quickly, that is, on timescales comparable to a few drift periods, the resonant exchange could be too weak to be observed or to be distinguishable from the non-resonant one. Keeping in mind the conclusions of Hartinger et al. (2020) regarding the importance of radial gradients, our analysis provides an additional explanation as to why observations of drift resonant signatures have been rare when sought by a few spacecraft. Drift-resonance is a transient process, and a detection by one spacecraft can be entirely missed by another spacecraft sampling the same orbit on timescales larger than a few drift periods.

Even though phase-space structures in the radiation belts are not necessarily indicative of violation of the third adiabatic invariant, and thus of acceleration, their observed signatures can be used to test the validity of radial transport models or as diagnostics for electric fields or particle injections. In Section 3.2 we showed that phase-mixing of trapped electrons can result in the formation of structures known as zebra stripes. Zebra stripes are transient structured peaks and valleys observed on spectrograms of inner radiation belt electrons with energies ranging between tens and hundreds of keV. The zebra stripes that are measured _in situ_ are also characterised by energy peaks and dips that vary as the inverse of the radial distance, i.e., \(E_{c}\sim 1/L\) (Sauvaud et al., 2013; Lejosne & Roederer, 2016; Lejosne & Mozer, 2020).
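The \(E_{c}\sim 1/L\) organisation of the stripes follows directly from phase-mixing: for mildly relativistic equatorial electrons the drift frequency scales roughly as \(\Omega_{d}\propto E_{c}L\) (cf. the drift-period sketch in Section 3.3), so the phase \(m\Omega_{d}t\) is constant along \(E_{c}L=\mathrm{const}\) contours. The toy sketch below (assumed scaling constant, illustrative grid and time) counts stripe crossings at two radial distances, showing the striping becoming denser in energy at larger \(L\), exactly as \(E_{c}\sim 1/L\) spacing implies.

```python
import numpy as np

# Phase-mixed perturbation df ~ cos(m*Omega_d(E,L)*t) on an (E, L) grid,
# with Omega_d = c0*E*L (mildly relativistic equatorial scaling; c0 assumed).
E = np.linspace(50.0, 300.0, 240)          # kinetic energy [keV]
L = np.linspace(1.5, 3.0, 100)
EE, LL = np.meshgrid(E, L)
c0 = 2e-6                                  # [rad/s per keV], illustrative
t = 5e4                                    # ~a few drift periods later [s]
stripes = np.cos(1 * c0 * EE * LL * t)     # m = 1 phase-mixed pattern

for row in (0, -1):
    crossings = np.count_nonzero(np.diff(np.sign(stripes[row])) != 0)
    print(f"L = {L[row]:.1f}: {crossings} stripe crossings along E")
# constant-phase ridges satisfy E*L = const, i.e. stripe energies ~ 1/L
```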
Since the zebra stripes can be produced on timescales of the order of a few drift periods, a radial diffusion mechanism should be immediately rejected. Our analysis also shows that zebra stripes can form without drift resonance with ULF waves, as a result of the phase-mixing process described for non-resonant particles. The phase-mixing process described in this report is triggered by particle injections into, or losses from, the radiation belts, and the requirement for an electric field that sustains drift-resonance, as in Ukhorskiy et al. (2014), is therefore unnecessary. The requirement of a drift resonant interaction to produce zebra stripes is also more constraining than a non-resonant phase-mixing mechanism, since resonance requires ULF fluctuations with a narrow set of parameters and a finite radial gradient in \(f_{0}\).

Footnote 27: On the basis of an Occam's razor argument we would favor a phase-mixing mechanism of non-resonant particles to explain zebra stripe formation (Popper, 2005).

How can we distinguish between zebra stripe formation mechanisms? We note that the first phase-mixing mechanism, described in Section 3.2.1, requires injection or losses of particles but no electric field. The second type, appearing as the ballistic term in Equation (39), requires ULF fluctuations and a finite radial gradient in the distribution function. The third type, described by Ukhorskiy et al. (2014) but appearing as the drift-resonant term in Equation (39), requires ULF fluctuations that can resonate with a wide range of energies, and also a finite radial gradient in the distribution function. For all three types the formation and shearing occur on the same timescales. In order to distinguish between these phase-mixing mechanisms one needs to measure radial gradients in the phase-space density and determine whether the amplitude of the ULF fluctuations can account for the amplitude of the stripe structures observed. If such a test proves inconclusive, a phase-mixing process requiring injection, such as the one detailed in Zhao & Li (2013), might be favoured. If future observational studies demonstrate that injections or losses of particles in the inner belts correlate with phase-mixed structures, one could use zebra stripes as proxies for injections and losses. Similarly, if the phase-mixing process is primarily driven by ULF fluctuations, the appearance of zebra stripes could be used as a proxy to extract properties of electric fields in the inner belts.

### Nonlinear Parker mechanism

The first radial transport model resulting in irreversible acceleration of particles was presented by Parker (1960) and did not require drift resonance processes. In the Parker (1960) scenario, magnetically confined particles experience non-adiabatic transport as a result of asymmetric magnetic field perturbations. Since particles in different MLT sectors of the same drift shell sense a different perturbation, they collectively experience a net radial transport. The mechanism presented in Section 3.4.2 is a higher order generalisation of the Parker (1960) mechanism that likewise does not require drift-resonance with ULF waves. This nonlinear mechanism is also the product of non-adiabatic perturbations, but it does not require asymmetric magnetic fluctuations. Rather, the only two ingredients required for this nonlinear process to result in irreversible radial transport are 1. a large amplitude symmetric perturbation \(\delta B/B_{0}\simeq 10\%\) decaying or growing non-adiabatically, and 2.
opposite radial gradients in the distribution function, or put differently, a localised minimum or maximum of the distribution function along the radial distance. While particles on the same drift-shell sense the same electromagnetic field and radial drift speed, particles on different drift shells drift at different speeds, and the combined inward and outward transport in the presence of opposite gradients results in irreversible acceleration, as more particles are pushed inward than outward. If the waves decorrelate very slowly (adiabatically) compared to the drift period, particles will phase-mix radially, and instead of a net injection inward, a plateau along the radial distance will form.

Are symmetric ULF fluctuations observed in the Earth's radiation belts? In a recent observational study, Takahashi et al. (2022) provides the first description of symmetric compressional ULF fluctuations with magnetic field amplitudes comparable to the background magnetic field. The symmetric ULF waves are excited outside of the plasmasphere, and are localised in MLT and radial distance. The large amplitude (\(\delta B/B_{0}\geq 0.1\)) and compressional nature of the fluctuations described by Takahashi et al. (2022) are consistent with those used for the acceleration process presented in Section 3.4. Moreover, the waves are observed in association with injections of particles, and thus symmetric fluctuations are associated with local radial enhancements of particles. Even though it is too speculative at this point to determine whether this mechanism occurs commonly in the radiation belts, we stress that the two ingredients required for its occurrence have both been observed in the radiation belts. Unlike radial diffusion, which operates on long timescales and requires a large number of drift-resonant interactions, fast and nonlinear acceleration mechanisms can be both seldom and more efficient.

## 5 Conclusion

In this report, we have presented a drift kinetic description of ULF radial transport in the Earth's radiation belts. The use of a drift kinetic formalism is particularly convenient to distinguish quasi-linear diffusion, occurring on slow timescales, from fast wave-particle interactions associated with linear or nonlinear processes. Theoretically, current global models of the Earth's magnetosphere account for ULF radial transport solely in terms of quasi-linear diffusion models. Our analysis demonstrates that linear and nonlinear processes occurring on timescales of the order of the drift period, and with a spatial dependence on magnetic local time, cannot be modelled in terms of quasi-linear diffusion. Observationally, fast and localised radial transport has been known for decades, but observations have been limited to extreme driving events or serendipitous satellite measurements (Li et al., 1993; Kanekal et al., 2016). In recent years, calibration of GPS electron flux measurements with Van Allen Probes' instruments is offering for the first time unprecedented spatial and temporal coverage of the Earth's radiation belts on timescales comparable to the azimuthal drift period (Morley et al., 2016; Morley et al., 2017; Kalliokoski et al., 2023). Thus, a modelling framework that distinguishes between fast and slow radial transport is not only of theoretical interest, but can also be tested for the first time with _in situ_ measurements for a wide range of geomagnetic driving conditions.
In the last two decades, dominant acceleration processes in the Earth's radiation belts have been categorised as belonging to local wave-particle interactions or to global ULF radial diffusion. The observational signature of local wave-particle processes in the phase-space density consists of localised enhancements, whereas ULF radial diffusion results in the flattening of the phase-space density along the radial distance (Green & Kivelson, 2004; Reeves et al., 2013). When including higher order terms in the radial transport equation, we found that seed electrons of \(50-100\) keV injected into the outer belts can experience additional betatron acceleration in the presence of symmetric ULF waves with amplitudes comparable to those reported by Takahashi et al. (2022). This impulsive nonlinear process requires no drift resonance, yet it results in a localised enhancement of the phase-space density on timescales that are much shorter than the drift period. This theoretical result is therefore of particular interest to observational studies of the radiation belts, since ULF waves are also able to produce localised signatures otherwise attributed to small-scale wave-particle interactions. With growing satellite coverage and the capacity to measure electron fluxes on timescales comparable to the drift period, the binary quasi-linear framework developed in the past decades needs to be revisited.

The main focus of this paper has been on the radial transport of energetic electrons in the Earth's radiation belts. However, a drift kinetic description based on the work of Hazeltine (1973) can also be used to describe energetic ring current protons (\(>100\) keV) with Larmor frequencies \(\Omega_{p}\sim 1-10\) Hz responding to ULF fluctuations \(\omega\sim 1\) mHz (Murphy et al., 2014), and energetic electrons in a wide range of planetary environments, such as those of Jupiter or Saturn (Lejosne & Kollmann, 2020). The main limitations of our paper are that it focuses solely on equatorially trapped particles, and that it ignores boundary effects that are known observationally to act as a sink for energetic electron fluxes (Millan & Thorne, 2007). A growing number of _in situ_ experiments are showing that energetic electrons can be depleted on timescales comparable to a few drift periods (Turner et al., 2012; Jaynes et al., 2018; Olifer et al., 2018). While such sudden losses can in theory be explained by local wave-particle interactions (Zhang et al., 2022), in some events the small-scale waves appear insufficient to account for the losses (Albert, 2014). Since ULF waves can effectively transport energetic electrons on fast timescales, it is worth investigating their net impact when it comes to particle losses. The nonlinear Parker scenario described in Section 3.4.2, in association with a sudden and symmetric reduction of the magnetic field, will result in the outward transport of particles to the magnetopause boundary. Future studies will quantify the role of radial transport in losses occurring on timescales comparable to the drift period, and therefore too fast to be explained by radial diffusion.

## Acknowledgments

Support for AO was provided by the Academy of Finland profiling action Matter and Materials (grant # 318913). OA gratefully acknowledges financial support from the University of Exeter, the University of Birmingham, and also from the United Kingdom Research and Innovation (UKRI) Natural Environment Research Council (NERC) Independent Research Fellowship NE/V013963/1.
We are grateful to Yohei Kawazura, Solene Lejosne, Lucile Turc, Jay Albert, Richard Horne, Leonid Olifer, Hannu Koskinen and Jacob Bortnik for discussions of this work.

## Appendix A Derivation of the Quasi-linear Equation

In this Appendix we provide a detailed derivation of the quasi-linear equation (31). For equatorial particles with a conserved first adiabatic invariant \(\mu\) interacting with a Mead field, the kinetic equation takes the form: \[gB_{0}\frac{\partial f}{\partial t}+\frac{3\mu B_{0}}{q\gamma r^{2}}\frac{ \partial f}{\partial\varphi}+\sum_{m}e^{im\varphi}\left[\frac{\mu A_{m}}{q \gamma r}+i\frac{r\dot{A}_{m}}{7m}\right]\frac{\partial f}{\partial\varphi}=- \left[\frac{r\dot{S}}{2}+\sum_{m}e^{im\varphi}\left(\frac{8r^{2}\dot{A}_{m}}{21} -im\frac{\mu A_{m}}{q\gamma}\right)\right]\frac{\partial f}{\partial r}\] with the function \(g(r,\varphi,t)=1-S(t)/B_{0}-\sum_{m}e^{im\varphi}rA_{m}(t)/B_{0}\). After decomposing the perturbed fluctuations along the azimuthal angle in Fourier space, \(f(r,\varphi,t)=f_{0}(r,t)+\sum_{m}e^{im\varphi}\delta f_{m}(r,t)\), the kinetic equation takes the following form: \[g\frac{\partial f_{0}}{\partial t} = -\sum_{m}e^{im\varphi}\left[\frac{\partial\delta f_{m}}{\partial t }+im\Omega_{d}\delta f_{m}+\left(\frac{8r^{2}\dot{A}_{m}}{21B_{0}}-im\frac{\mu A _{m}}{qB_{0}\gamma}+\frac{r\dot{S}}{2B_{0}}\delta_{m0}\right)\frac{\partial f _{0}}{\partial r}\right]\] (A1) \[- \sum_{m}\sum_{n}e^{i(m+n)\varphi}\left[\left(i\frac{m\mu A_{n}}{q B_{0}\gamma r}-\frac{mr\dot{A}_{n}}{7nB_{0}}\right)\delta f_{m}+\left(\frac{8r^{2} \dot{A}_{n}}{21B_{0}}-in\frac{\mu A_{n}}{qB_{0}\gamma}+\frac{r\dot{S}}{2B_{0}} \delta_{n0}\right)\frac{\partial\delta f_{m}}{\partial r}\right]\] \[+ \sum_{m}\sum_{n}e^{i(m+n)\varphi}\left(\frac{S}{B_{0}}\delta_{n0} +\frac{A_{n}r}{B_{0}}\right)\frac{\partial\delta f_{m}}{\partial t},\] with the Kronecker delta, \[\delta_{mn}=\begin{cases}1,&\text{if }m=n,\\ 0,&\text{if }m\neq n.\end{cases}\] (A2) The first term in brackets in Equation (A1) contains the linear term, and the following two brackets with the double sums contain the nonlinear terms.
We solve this equation with the aid of the Fourier Convolution theorem: \[\frac{1}{2\pi}\int_{-\pi}^{\pi}\mathcal{F}(\varphi)\mathcal{G}( \varphi)e^{-ip\varphi}\ d\varphi = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left(\sum_{m}\mathcal{F}_{m}e^{im \varphi}\right)\left(\sum_{n}\mathcal{G}_{n}e^{in\varphi}\right)e^{-ip\varphi} \ d\varphi\] (A3) \[= \sum_{m}\sum_{n}\mathcal{F}_{m}\mathcal{G}_{n}\frac{1}{2\pi}\int _{-\pi}^{\pi}d\varphi e^{i(m+n-p)\varphi}\] \[= \sum_{m}\sum_{n}\mathcal{F}_{m}\mathcal{G}_{n}\delta_{m+n,p}\] \[= \sum_{m}\mathcal{F}_{m}\mathcal{G}_{p-m},\] which gives us \[\left(1-\frac{S}{B_{0}}-\frac{A_{p}r}{B_{0}}\right)\frac{\partial f_ {0}}{\partial t} = -\left[\frac{\partial\delta f_{p}}{\partial t}+ip\Omega_{d}\delta f _{p}+\left(\frac{8r^{2}\dot{A}_{p}}{21B_{0}}-ip\frac{\mu A_{p}}{qB_{0}\gamma}+ \frac{r\dot{S}}{2B_{0}}\delta_{p0}\right)\frac{\partial f_{0}}{\partial r}\right]\] (A4) \[- \sum_{m}\left[\left(i\frac{m\mu A_{p-m}}{qB_{0}\gamma r}-\frac{mr \dot{A}_{p-m}}{7B_{0}(p-m)}\right)\delta f_{m}-\left(\frac{S}{B_{0}}\delta_{p-m,0}+\frac{A_{p-m}r}{B_{0}}\right)\frac{\partial\delta f_{m}}{\partial t}\right]\] \[- \sum_{m}\left(\frac{8r^{2}\dot{A}_{p-m}}{21B_{0}}-i(p-m)\frac{\mu A _{p-m}}{qB_{0}\gamma}+\frac{r\dot{S}}{2B_{0}}\delta_{p-m,0}\right)\frac{ \partial\delta f_{m}}{\partial r}.\] In order to obtain the quasi-linear equation we first set \(p=0\) which corresponds to the spatial average of the kinetic equation, \[\left(1-\frac{S}{B_{0}}-\frac{A_{0}r}{B_{0}}\right)\frac{\partial f _{0}}{\partial t} = -\left[\frac{\partial\delta f_{p=0}}{\partial t}+\left(\frac{8r^ {2}\dot{A}_{0}}{21B_{0}}+\frac{r\dot{S}}{2B_{0}}\right)\frac{\partial f_{0}}{ \partial r}\right]\] (A5) \[- \sum_{m}\left[\left(i\frac{m\mu A_{-m}}{qB_{0}\gamma r}+\frac{r \dot{A}_{-m}}{7B_{0}}\right)\delta f_{m}-\left(\frac{S}{B_{0}}\delta_{m,0}+ \frac{A_{-m}r}{B_{0}}\right)\frac{\partial\delta f_{m}}{\partial t}\right]\] \[- \sum_{m}\left(\frac{8r^{2}\dot{A}_{-m}}{21B_{0}}+im\frac{\mu A_{-m }}{qB_{0}\gamma}+\frac{r\dot{S}}{2B_{0}}\delta_{m,0}\right)\frac{\partial \delta f_{m}}{\partial r}.\] and then perform the time average defined in Equation (27) to find Equation (31), \[\frac{\partial f_{0}}{\partial t} = -\sum_{m}\left(\frac{im\mu}{qB_{0}\gamma r}\langle A_{m}^{*} \delta f_{m}\rangle+\frac{r}{7B_{0}}\langle\dot{A}_{m}^{*}\delta f_{m}\rangle -\frac{r}{B_{0}}\bigg{\langle}A_{m}^{*}\frac{\partial\delta f_{m}}{\partial t }\bigg{\rangle}+\frac{8r^{2}}{21B_{0}}\frac{\partial}{\partial r}\langle\dot{A }_{m}^{*}\delta f_{m}\rangle+\frac{im\mu}{qB_{0}\gamma}\frac{\partial}{ \partial r}\langle A_{m}^{*}\delta f_{m}\rangle\right)\] (A6) \[= -\sum_{m}\left[\frac{im\mu}{qB_{0}\gamma r}\frac{\partial}{ \partial r}\left(r\langle A_{m}^{*}\delta f_{m}\rangle\right)+\frac{8}{21} \frac{1}{rB_{0}}\frac{\partial}{\partial r}\langle r^{3}\dot{A}_{m}^{*}\delta f _{m}\rangle-\frac{r}{B_{0}}\langle\frac{\partial}{\partial t}(A_{m}^{*}\delta f _{m})\rangle\right]\] The right-hand side of (A6) describes the slow evolution of the background distribution due to the effect of fluctuations. 
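The convolution identity (A3) underpinning the mode-coupling terms can be verified numerically. The sketch below builds two periodic functions from random Fourier coefficients (modes above \(N/2\) left empty so that the product does not alias on the discrete grid) and compares the \(p\)-th coefficient of their product with \(\sum_{m}\mathcal{F}_{m}\mathcal{G}_{p-m}\).

```python
import numpy as np

# Numerical check of the convolution identity in Equation (A3): the p-th
# Fourier coefficient of the product F(phi)G(phi) equals sum_m F_m G_{p-m}.
rng = np.random.default_rng(1)
N = 64
F_m = np.zeros(N, dtype=complex)
G_m = np.zeros(N, dtype=complex)
F_m[:N // 2] = rng.standard_normal(N // 2) + 1j * rng.standard_normal(N // 2)
G_m[:N // 2] = rng.standard_normal(N // 2) + 1j * rng.standard_normal(N // 2)

phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
basis = np.exp(1j * np.outer(phi, np.arange(N)))   # e^{i m phi} on the grid
F, G = basis @ F_m, basis @ G_m

p = 10
lhs = np.mean(F * G * np.exp(-1j * p * phi))           # (1/2pi) int F G e^{-ip phi}
rhs = sum(F_m[m] * G_m[p - m] for m in range(p + 1))   # sum over m + n = p
print(np.allclose(lhs, rhs))                           # True
```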
## Appendix B Derivation of the Nonlinear Perturbed Equation (32)

In order to obtain an equation for the perturbed part of the distribution function for Fourier modes \(m\neq 0\), we subtract Equation (A5) from (A4), which results in Equation (32): \[\frac{\partial\delta f_{m}}{\partial t}+im\Omega_{d}\delta f_{m}=\frac{A_{m}r} {B_{0}}\frac{\partial f_{0}}{\partial t}-\left(\frac{8r^{2}\dot{A}_{m}}{21B_{0 }}-im\frac{\mu A_{m}}{qB_{0}\gamma}\right)\frac{\partial f_{0}}{\partial r}- \sum_{m^{\prime}}\mathcal{Q}[A_{m-m^{\prime}};\delta f_{m^{\prime}}].\] (B7) with the nonlinear term given by \[\mathcal{Q}[S;A_{m-m^{\prime}};\delta f_{m^{\prime}}] = \left[\left(i\frac{m^{\prime}\mu A_{m-m^{\prime}}}{qB_{0}\gamma r }-\frac{m^{\prime}r\dot{A}_{m-m^{\prime}}}{7B_{0}(m-m^{\prime})}\right) \delta f_{m^{\prime}}-\left(\frac{S}{B_{0}}\delta_{m,m^{\prime}}+\frac{A_{m-m^ {\prime}}r}{B_{0}}\right)\frac{\partial\delta f_{m^{\prime}}}{\partial t}\right]\] (B8) \[+ \left(\frac{8r^{2}\dot{A}_{m-m^{\prime}}}{21B_{0}}-i(m-m^{\prime} )\frac{\mu A_{m-m^{\prime}}}{qB_{0}\gamma}+\frac{r\dot{S}}{2B_{0}}\delta_{m,m^{ \prime}}\right)\frac{\partial\delta f_{m^{\prime}}}{\partial r}.\] The linear wave-particle interaction only depends on the anti-symmetric magnetic field fluctuation \(A_{m}\). However, the nonlinear perturbation is also a function of the symmetric magnetic field fluctuation \(S(t)\). The traditional quasilinear assumption consists of ignoring the nonlinearities by setting \(\mathcal{Q}=0\), and thus computing the fast linear response due to anti-symmetric ULF waves. It is however possible, as shown in Section 3.4.2, to derive the fast nonlinear response on timescales less than a drift period, and a nonlinear quasi-linear theory for long timescales, by accounting for the nonlinear terms associated with the symmetric ULF perturbations.

## Appendix C Justification for neglecting the temporal variation of the background distribution in the linear response (32)

We note that the linear equation in (B7) contains a term proportional to \(\frac{A_{m}r}{B_{0}}\frac{\partial f_{0}}{\partial t}\). In the quasi-linear limit of short autocorrelation this term introduces an additional term in the diffusion equation that results in the following correction: \[\left(1+\sum_{m}\frac{8}{3}\frac{r^{2}|A_{m}|^{2}}{B_{0}^{2}}\right)\frac{ \partial f_{0}}{\partial t}=L^{2}\frac{\partial}{\partial L}\left(\frac{D_{L} L}{L^{2}}\frac{\partial f_{0}}{\partial L}\right)\] (C9) and thus, in the limit of small ULF wave amplitude for the Mead field, \(|\delta B_{m}|^{2}=r^{2}|A_{m}|^{2}\ll B_{0}^{2}\), the correction modifies the diffusion only by a relative amount much smaller than one. One can also give a dimensional argument for neglecting the first term on the right-hand side of Equation (B7) when computing the linear response under quasi-linear assumptions. The diffusion coefficient \(D_{LL}\) has units of one over time, and is bounded by the drift frequency \(\Omega_{d}\) of a particle. With \(D_{LL}\ll\Omega_{d}\), and thus \(D_{LL}\ll 1\) in normalised units, the diffusion equation requires that \(\frac{\partial f_{0}}{\partial t}\simeq D_{LL}\frac{\partial^{2}f_{0}}{ \partial L^{2}}\). If the time and spatial variations of the background distribution are slow and determined by non-dimensional small parameters \(\varepsilon_{t}\ll 1\) and \(\varepsilon_{L}\ll 1\), respectively, then \(f_{0}=f_{0}(\varepsilon_{t}t,\varepsilon_{L}L)\) inserted into the diffusion equation gives the following scaling: \(\varepsilon_{t}\simeq D_{LL}\varepsilon_{L}^{2}\).
Therefore, the time variation of the background is smaller than the radial gradient of the background distribution in the linear response by a factor of \(\frac{\partial f_{0}/\partial t}{\partial f_{0}/\partial L}\simeq D_{LL}\varepsilon_{L}\ll 1\). We note that even if the diffusion coefficient is artificially increased to values comparable to the drift frequency, thereby implying that transport across one drift shell is possible within a single drift period, the short autocorrelation time limit \(\Omega_{d}\tau_{c}<1\) would nonetheless uphold the above dimensional analysis and justify neglecting the partial time variation of \(f_{0}\) in the linearised Equation (B7).

## Appendix D List of Symbols

\begin{tabular}{c l}
\(a_{m}\) & Wave mode amplitude \\
\(A_{m}(t)\) & ULF asymmetric fluctuation amplitude \\
\(\mathbf{B}\) & Magnetic field \\
\(B_{0}\) & Earth's magnetic field dipole magnitude \\
\(B_{E}\) & Earth's magnetic field dipole moment \\
\(B^{EK}\) & Asymmetric magnetic field model of Elkington et al. (1999) \\
\(c\) & Speed of light \\
\(C_{i}\) & Correlator \\
\(D\) & Root mean square of antisymmetric field perturbation amplitude \(A_{m}\) \\
\(D_{LL}\) & Quasi-linear radial diffusion coefficient \\
\(\delta\mathbf{B}\) & Magnetic field perturbation \\
\(\delta\mathbf{E}\) & Electric field perturbation \\
\(\mathbf{E}\) & Electric field \\
\(E_{c}\) & Relativistic kinetic energy \\
\(f\) & Distribution function \\
\(\langle f\rangle\) & Gyro-averaged distribution function \\
\(\delta f_{m}^{L}\) & Linear perturbation of the distribution function \\
\(\delta f_{m}^{NL}\) & Non-linear perturbation of the distribution function \\
\(f_{0}\) & Background distribution function \\
\(\mathcal{J}\) & Second adiabatic invariant \\
\(I_{1}\) \& \(I_{2}\) & Nonlinear criteria associated with the symmetric ULF perturbations \\
\(I_{3}\) \& \(I_{4}\) & Nonlinear criteria associated with the anti-symmetric ULF perturbations \\
\(l\) & Characteristic scale size \\
\(L\) & Normalised radial distance from the Earth's midplane \\
\(L^{*}\) & Magnetic drift shell and third adiabatic invariant \\
\(m_{s}\) & Rest mass of particle species \(s\) \\
\(m\) & Wave number \\
\(\mathcal{Q}\) & Nonlinear term \\
\(\mathbf{p}_{\parallel}\) & Relativistic momentum along the local magnetic field direction \\
\(\mathbf{p}_{\perp}\) & Relativistic momentum perpendicular to the local magnetic field direction \\
\(q_{s}\) & Charge of particle species \(s\) \\
\(\mathbf{r}\) & Position \\
\(R_{E}\) & Earth's radius \\
\(S(t)\) & Azimuthally symmetric fluctuation amplitude \\
\(s\) & Label for particle species, \(s=i,e\) \\
\(t\) & Time \\
\(t_{eq}\) & Timescale to reach stationary state \\
\(v\) & Characteristic speed \\
\(\alpha\) & Pitch angle \\
\(\epsilon\) & Nondimensional small parameter \\
\(\gamma\) & Lorentz factor \\
\(\gamma_{m}\) & Wave mode growth rate \\
\(\mu\) & First adiabatic invariant \\
\(\tilde{\mu}\) & First adiabatic invariant correction \\
\(\varphi\) & Azimuthal angle \\
\(\rho\) & Larmor radius \\
\(\tau_{C}\) & Correlation/decay time for the anti-symmetric ULF perturbation \\
\(\tau_{C}^{s}\) & Correlation/decay time for the symmetric ULF perturbation \\
\(\tau_{D}\) & Drift period \\
\(\theta\) & Polar angle \\
\(\theta_{g}\) & Gyrophase \\
\(\chi\) & Gaussian white noise \\
\(\Phi\) & Third adiabatic invariant and magnetic flux \\
\(\omega\) & Frequency \\
\(\omega_{m}\) & ULF wave mode frequency \\
\(\Omega_{d}\) & Azimuthal drift frequency \\
\(\Omega_{s}\) & Larmor frequency for species \(s\) \\
\end{tabular}
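As a closing numerical illustration of the radial diffusion equation discussed in Appendix C (Equation C9, dropping the small correction factor), the following minimal explicit finite-difference sketch evolves a background distribution under \(\partial f_{0}/\partial t=L^{2}\,\partial_{L}(D_{LL}L^{-2}\,\partial_{L}f_{0})\). The power-law form of \(D_{LL}\), the grid, and the initial profile are illustrative assumptions only, not values from this paper.

```python
import numpy as np

# Explicit FTCS sketch of df0/dt = L^2 d/dL (D_LL/L^2 df0/dL).
# D_LL ~ D0 * L^10 is a common empirical scaling, used here purely as an example.
L = np.linspace(3.0, 7.0, 201)            # drift shells (illustrative range)
dL = L[1] - L[0]
D0 = 1e-9                                 # [1/s] placeholder amplitude
D_LL = D0 * L**10

f0 = np.exp(-0.5 * ((L - 5.0) / 0.3)**2)  # illustrative initial distribution

dt = 0.4 * dL**2 / D_LL.max()             # respect the explicit stability limit
for _ in range(5000):
    # Flux (D_LL/L^2) * df0/dL evaluated at cell interfaces.
    Dmid = 0.5 * (D_LL[1:] + D_LL[:-1]) / (0.5 * (L[1:] + L[:-1]))**2
    flux = Dmid * np.diff(f0) / dL
    f0[1:-1] += dt * L[1:-1]**2 * np.diff(flux) / dL
    f0[0], f0[-1] = f0[1], f0[-2]         # zero-gradient boundaries

print(f0.max())                           # peak decreases as the profile diffuses
```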
2310.11592
Characterization of diamond-turned optics for SCALES
High-contrast imaging has been used to discover and characterize dozens of exoplanets to date. The primary limiting performance factor for these instruments is contrast, the ratio of exoplanet to host star brightness that an instrument can successfully resolve. Contrast is largely determined by wavefront error, consisting of uncorrected atmospheric turbulence and optical aberrations downstream of AO correction. Single-point diamond turning allows for high-precision optics to be manufactured for use in astronomical instrumentation, presenting a cheaper and more versatile alternative to conventional glass polishing. This work presents measurements of wavefront error for diamond-turned aluminum optics in the Slicer Combined with an Array of Lenslets for Exoplanet Spectroscopy (SCALES) instrument, a 2-5 micron coronagraphic integral field spectrograph under construction for Keck Observatory. Wavefront error measurements for these optics are used to simulate SCALES' point spread function using physical optics propagation software poppy, showing that SCALES' contrast performance is not limited by wavefront error from internal instrument optics.
Isabel J. Kain, Phil Hinz, Marius Doetz, Benjamin Bulla, Renate Kupke, Daren Dillon, Andrew Skemer, Deno Stelter, Michael Gonzales, Nicholas MacDonald, Aditi Gangadharan, Cristian Rodriguez, Christopher Ratliff, Mackenzie R. Lach, Steph Sallum
2023-10-17T21:25:33Z
http://arxiv.org/abs/2310.11592v1
# Characterization of diamond-turned optics for SCALES

###### Abstract

High-contrast imaging has been used to discover and characterize dozens of exoplanets to date. The primary limiting performance factor for these instruments is contrast, the ratio of exoplanet to host star brightness that an instrument can successfully resolve. Contrast is largely determined by wavefront error, consisting of uncorrected atmospheric turbulence and optical aberrations downstream of AO correction. Single-point diamond turning allows for high-precision optics to be manufactured for use in astronomical instrumentation, presenting a cheaper and more versatile alternative to conventional glass polishing. This work presents measurements of wavefront error for diamond-turned aluminum optics in the Slicer Combined with an Array of Lenslets for Exoplanet Spectroscopy (SCALES) instrument, a 2-5 micron coronagraphic integral field spectrograph under construction for Keck Observatory. Wavefront error measurements for these optics are used to simulate SCALES' point spread function using physical optics propagation software poppy, showing that SCALES' contrast performance is not limited by wavefront error from internal instrument optics.

Keywords: optics, exoplanets, imaging spectroscopy, spectrographs

## 1 Introduction

The Slicer Combined with an Array of Lenslets for Exoplanet Spectroscopy (SCALES) instrument is an infrared (\(2-5\mu m\)) integral field spectrograph being built for the Keck II Adaptive Optics system. Designed for high-contrast imaging of exoplanets, SCALES will be able to directly spectroscopically characterize the atmospheres of exoplanets and brown dwarfs as cold as 300 K. These cold objects represent an older demographic of exoplanets compared to the few dozen planets characterized using existing high-contrast imaging instruments, which operate at shorter wavelengths and are limited to young giant planets that are self-luminous with residual heat from recent formation.

High-contrast imaging involves suppressing starlight from the planet's host, which outshines the planet itself by many orders of magnitude. The intensity of the stellar halo (and thus the best achievable contrast) is driven by the wavefront error (WFE) of the incoming signal. WFE can be imparted by atmospheric turbulence (which can be compensated by adaptive optics) and by aberrations in the optical surfaces within a telescope and instrument. Optical aberrations may create quasi-static artifacts in the PSF that are difficult to correct for (and may even look identical to planets), degrading performance and decreasing an instrument's sensitivity to high-contrast targets. Characterization of optics is crucial for understanding an instrument's eventual sensitivity.

Characterization of SCALES optics is also motivated by our use of diamond-turned aluminum mirrors, a technique which has not historically had comparable performance to conventional polishing for use in astronomical instruments. The SCALES instrument is designed with a 6061-T651 aluminum bench and mounts wherever possible, and its optics are constructed of RSA 6061 aluminum, producing an instrument of near-uniform coefficient of thermal expansion (CTE). By using diamond-turned aluminum-substrate optics, the instrument can be aligned at room temperature and maintain its alignment when cooled to the 100 K operating temperature.
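For intuition on why a CTE-matched, all-aluminum design preserves alignment, here is a back-of-the-envelope sketch of the uniform contraction from room temperature to the operating temperature. The constant CTE used is a typical room-temperature handbook value for 6061 aluminum and the bench dimension is an illustrative placeholder, so the result is a rough estimate only.

```python
# Rough estimate of thermal contraction for an all-aluminum bench cooled
# from room temperature to the operating temperature.  A constant CTE is a
# simplification: the true CTE of aluminum drops substantially below ~150 K.
alpha_al = 23.6e-6        # [1/K] room-temperature CTE of 6061 aluminum (approx.)
T_warm, T_cold = 293.0, 100.0
bench_length_mm = 500.0   # illustrative optical-bench dimension

shrink_mm = alpha_al * (T_warm - T_cold) * bench_length_mm
print(f"~{shrink_mm:.2f} mm contraction ({100 * shrink_mm / bench_length_mm:.2f}%)")
# ~2.28 mm (~0.46%).  Because optics, mounts, and bench shrink by (nearly)
# the same fraction, the layout scales uniformly and alignment is preserved.
```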
Bare aluminum is difficult to polish to an optical quality appropriate for astronomical instrumentation given the softness of the material, though polishing recipes exist which trade excellent surface finish for less dependable surface form[1]; instead, single-point diamond turning can be used, where the surface of the optic is turned into the desired shape with a diamond-tipped cutting tool. This technique is useful for making aspheric optics, though tooling grooves left over from manufacturing may impart unwanted scattering and diffraction effects.

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Optic} & Mechanical & \multirow{2}{*}{Center thickness} & \multirow{2}{*}{Parent radius of curvature} & \multirow{2}{*}{Conic} & Off-axis \\ & aperture & & & & distance \\ & diameter [mm] & [mm] & [mm] & & [mm] \\ \hline \hline OAP1.1 & 35.14 & 28.01 & 401.60 \(\pm\) 0.1\% & -1.0 & 52.87 \(\pm\) 0.2 \\ OAP1.2 & 50.19 & 34.52 & 803.10 \(\pm\) 0.25\% & -1.0 & 220.85 \(\pm\) 0.2 \\ OAE & 40.15 & 28.52 & 391.27 \(\pm\) 0.25\% & -0.7 & 61.76 \(\pm\) 0.2 \\ FM1 & 30.12 & 16.00 & – & – & – \\ FM2 & 45.17 & 21.00 & – & – & – \\ FM3 & 15.33 & 10.50 & – & – & – \\ FM4 & 60.23 & 15.00 & – & – & – \\ FM5 & 90.35 & 21.00 & – & – & – \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of foreoptics specifications (specified for 293 K), including flat fold mirrors (FMs), off-axis parabolas (OAPs), and an off-axis ellipse (OAE). All optics are manufactured by son-x.

The overall SCALES instrument design is presented in Skemer et al. 2022[2], and the optical design is presented in Kupke et al. 2022[3]. SCALES comprises four major optical subsystems, which are annotated in Figure 2: a set of foreoptics that relay the input beam into the spectrograph, an integral field spectrograph with low-resolution (R\(\sim\)35-200) and medium-resolution (R\(\sim\)3,000-6,000) modes, and an imaging channel. The foreoptics are fed by the Keck AO system and place a focal plane image on a lenslet array, and a flat optic (FM3) mounted to a tip-tilt mechanism manufactured by Physik Instruments (PI) is used to steer the image between different resolution modes of the IFU. In the low-resolution mode, the grid of micro-pupils from the lenslet array is sent directly through the spectrograph. The medium-resolution IFU uses a "slenslit" architecture, where the lenslet spots are sent first through an image slicer which reformats a nine-by-nine lens area of the lenslet array into a pseudo-slit which can be dispersed across the full detector width. The slicer output injects the reformatted beam back into the main spectrograph. The SCALES wavefront is spatially sampled at the lenslet array, meaning any optical aberrations downstream of the lenslet array won't impact the PSF. Only aberrations of the foreoptics, the optical subsystem between SCALES' entrance window and the lenslet array, have an effect on contrast performance. The optical specifications of these foreoptics are outlined in Table 1. This paper characterizes the low- and high-spatial-frequency wavefront error contributions and thermal stability of all received SCALES foreoptics, and simulates the corresponding impact on contrast performance. Section 2 presents measurements for surface figure (2.1) and surface roughness (2.2) of all received optics, and discusses the rejection of one optic after failing thermal stability testing (2.3).
Finally, wavefront error measurements are used in Section 3 to calculate contrast performance for the SCALES instrument. All measurements are summarized in Section 4.

## 2 Characterization of Diamond-Turned Optics

Wavefront error covers a range of spatial frequencies, which are divided into different regimes by their effects on overall optical performance. Low-spatial-frequency errors are deviations that span the entire surface of an optic, i.e. the lower-order Zernike terms, which reshape the core of an optical system's point spread function (PSF). Mid-spatial-frequency errors, sometimes referred to as waviness or ripple, scatter light out of the core of the PSF without changing its shape, displacing it instead into the rest of the focal plane image, pushing light into the Airy rings or introducing hazes or patchy illumination several \(\lambda\)/D from the core of the PSF. High-spatial-frequency errors correspond to small-scale surface texture, and scatter light at large angles out of the beam path.

Figure 1: Top: Optical aberrations can create speckles, which can be indistinguishable from exoplanets. Bottom: OAP1.1 during warm testing, fixed to a low-stress mount designed and built at UCO.

While spatial frequency is typically specified in cycles per millimeter, its impact on the PSF is better defined by cycles per aperture. For example, a 0.1 cycles per millimeter aberration across a 1 centimeter diameter optic is 1 cycle per aperture, which might look like tip-tilt error. The same spatial frequency across a 2 meter diameter optic is 200 cycles per aperture, and instead scatters light into the outskirts of the PSF. Because the optics presented in this work are of similar sizes (ranging from 30-90 mm in diameter), we stick with a conventional definition in terms of cycles per millimeter. This work refers to low-spatial-frequency error as surface figure, and high-spatial-frequency error as surface roughness. We define low spatial frequencies as 0.0018-0.45 mm\({}^{-1}\), mid spatial frequencies as 0.45-1 mm\({}^{-1}\), and high spatial frequencies as 1-530 mm\({}^{-1}\), following the sensitivity ranges of our laboratory instruments as per industry convention.

Figure 2: Labeled optical diagram of SCALES from Kupke et al. 2022[3]. SCALES accepts an f/35 beam from the Keck adaptive optics system, which then passes through the foreoptics and forms an image on the lenslet array. The tip-tilt mechanism steers the beam between the low-resolution IFU (where the beam passes through the lenslet array and into the spectrograph) and the medium-resolution IFU (where the beam passes through a small section of the lenslet array, is sent through a slicer, then is sent through the spectrograph). The eight foreoptics, which are measured in this work, are outlined in blue.

The tolerances for propagated wavefront error through the SCALES foreoptics are detailed in Kupke et al. 2022[3]. To keep distortions imparted by SCALES foreoptics well below the error level of the Keck AO system, a maximum wavefront error of 24 nm RMS surface figure and 5 nm RMS surface roughness (with a goal of 3 nm RMS surface roughness) for each optic were established. At the time of submission, we have received two foreoptics (including OAP1.1 and two versions of FM3, one made of RSA 443 and one made of RSA 6061), while six out of the eight foreoptics (OAP1.2, OAE, FM1, FM2, FM4, FM5) are still being machined and coated. OAP1.1 has been received, tested, and accepted.
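Before turning to the individual optics, the bookkeeping between cycles per millimeter and cycles per aperture, and the band definitions quoted above, can be captured in a small helper. The function names below are ours, written purely for illustration.

```python
def cycles_per_aperture(freq_mm: float, aperture_mm: float) -> float:
    """Convert a spatial frequency in cycles/mm to cycles per aperture."""
    return freq_mm * aperture_mm

def wfe_band(freq_mm: float) -> str:
    """Classify a spatial frequency using the band edges adopted in this work."""
    if freq_mm < 0.0018:
        return "below measurable range"
    if freq_mm <= 0.45:
        return "low (surface figure)"
    if freq_mm <= 1.0:
        return "mid (waviness/ripple)"
    if freq_mm <= 530.0:
        return "high (surface roughness)"
    return "above measurable range"

# Example from the text: 0.1 cycles/mm looks like tip-tilt on a 10 mm optic
# but scatters into the PSF outskirts on a 2 m primary.
print(cycles_per_aperture(0.1, 10.0))    # 1.0 cycle per aperture
print(cycles_per_aperture(0.1, 2000.0))  # 200.0 cycles per aperture
print(wfe_band(0.1))                     # low (surface figure)
```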
An initial version of FM3 (the RSA 443 substrate version) failed during testing (see Section 2.3). The optic was remade by son-x with an RSA 6061 substrate, but for the sake of scheduling was integrated directly with the tip-tilt mechanism before we carried out the tests detailed in this work. Testing of the remade RSA 6061 FM3 will run concurrently with testing of the tip-tilt mechanism. The following describes the tests carried out on the optics received so far, and the same test procedure will be applied to the remaining optics as they are received. ### Surface figure Low-spatial-frequency error refers to the overall surface figure of an optic, which deforms the point spread function of an optical system. The surface form of SCALES foreoptics must meet specifications at both ambient and operating temperatures, so surface form must remain stable and within specifications through repeat cryogenic cycles. All optics are cryo-annealed before machining to relieve internal material stresses and prevent later deformation. Once parts are machined, coated, and shipped to UCSC, surface figure measurements are taken with a Zygo Verifire(tm)Fizeau interferometer, which has a spatial frequency range of 0.0018-0.45 mm\({}^{-1}\). A baseline measurement of surface form at ambient temperature is taken for each optic using the setup shown in Figure 3(a). However, the SCALES instrument will undergo dozens of cryocycles within its lifetime, which requires stable surface figure over repeat cryocycles. To measure elastic deformation, the RSA 443 version of FM3 (and, in future work, every other flat optic) was continuously monitored throughout two full cryocycles inside a retrofitted liquid nitrogen dewar inherited from the Keck Interferometer (Figure 3(b)), with the temperature of each optic stabilizing at SCALES' operating temperature of \(<\)100 K for at least an hour. This monitoring was possible only with flat optics, as the powered optics would have required an optical setup that the dewar did not have space for; however, powered optics were still thermally cycled inside the dewar, just without monitoring measurements being taken throughout. After the dewar was allowed to return to ambient temperature, the optic was then remeasured directly to detect any plastic deformation. Any optic that showed any change in surface form underwent at least one additional cryocycle. By combining before-and-after comparative measurements and monitoring throughout cryocycles, we are able to characterize both plastic and elastic deformation of each optic. The RMS wavefront error from surface figure for all optics and surface figure stability over repeat cryocycles is summarized in Table 2. Figure 3 shows measurements of OAP1.1 before (left) and after (right) it underwent one thermal cycle inside the cryostat. While the clear aperture of the optic shows a change in surface form of \(\leq\)1.4 nm RMS, the surface form within the 15 mm beam footprint showed no change, and we decided to accept this optic without going through a second cryocycle. Additionally, this optic was very difficult to align, and the error bar on our alignment is approximately the same magnitude as the observed change. Figure 4: Test setups for measuring low-spatial frequency error with the Zygo interferometer. Left: setup used to characterize powered optics warm and outside vacuum (calibration sphere is omitted for flat optics). Right: setup used to monitor flat optics during cryocycling. Figure 3: Surface form stability over one cryocycle of OAP1.1. 
Both measurements are taken at ambient temperature.

### Surface roughness

Surface roughness refers to the texture of the optical surface, measured as the RMS of the surface such that the distribution of surface irregularities is Gaussian about the mean. This high-spatial-frequency error scatters light at high angles out of the beam path, reducing the efficiency of the optical system and degrading overall performance. The surface roughness of each optic is measured with a Veeco Wyko white light interferometric profiler, which is sensitive to spatial frequencies between 1-530 mm\({}^{-1}\). A 602 x 451 \(\mu m\) measurement is taken of an unblemished spot within the clear aperture of each optic. Figure 6 shows an example measurement, and final values are reported in Table 2. Of the two optics measured so far, both meet the surface roughness specification of \(<\)3 nm RMS.

### Dimensional instability of RSA 443 under cryogenic conditions

The first optic received for testing was FM3 (see Figure 2 for an optical diagram of SCALES), a flat mirror which was to be mounted to a cryogenic piezoelectric tip-tilt stage and used to steer between the low- and medium-resolution modes of the SCALES IFU. To avoid a CTE differential between the optic and the ceramic stage of the tip-tilt mechanism, the mirror substrate was initially chosen to be RSA 443 (where all other optics are RSA 6061), a polycrystalline matrix of 40% silicon and 60% aluminum. The substrate is coated with electroless nickel (nickel-phosphorous) before machining, since RSA 443 is too soft to yield excellent surface finish after diamond turning.[4] The nickel coating and aluminum substrate have closely matching CTEs, and bimetallic bending was not expected. We stepped through the measurement procedure for surface figure and surface roughness outlined in Subsections 2.1 and 2.2. Initial measurements of the mirror satisfied tolerances, with a measured surface roughness of 1.63 nm RMS and surface figure error of 23.3 nm RMS (Figure 6). However, cryocycling the mirror inside the dewar caused plastic deformation of the mirror, and the surface figure worsened with each cryocycle. The surface figure creep over two full cryocycles is shown in Figure 9; the slight power and astigmatism (which are artifacts from manufacturing) visible in initial warm measurements intensify with each thermal cycle. The initial hope was that after a sufficiently high number of cryocycles, the material would stabilize and no longer experience plastic deformation, at which point the optic would be resurfaced and recoated to return its surface form to specifications. Since cryocycles in the liquid nitrogen dewar take about a week, and because plastic deformation post-cryocycle was our primary concern, we decided to rapidly cryocycle the mirror by suspending it over a bath of liquid nitrogen. Since the calculated radiative cooling timescale of the optic was on the order of weeks, we attached screws to the anchor points on the mirror and allowed these to dangle in the LN2 for thermal contact.

Figure 7: Measurements of the failed version of FM3 through two thermal cycles. All measurements are taken inside the cryostat at vacuum pressure. Warm measurements are taken at ambient temperature, cold measurements are taken once the optic has settled near 77 K for several hours, and rewarmed measurements are taken once the optic warms back up to ambient temperature. The optic has been removed from and replaced inside the dewar between the bottom left and top right panels, but has otherwise been subjected to no changes – the change in RMS can be attributed to differences in vacuum pressure affecting the curvature of the cryostat window.

To minimize condensation on the surface of the mirror, we flowed dry nitrogen gas across the face of the optic through the entire rapid cryocycle. Temperature sensors were anchored to various parts of the mirror to track cooling time and avoid dunking it completely. Surface form was remeasured after each rapid cryocycle (Figure 9). While plastic deformation asymptotically leveled off across 16 cryocycles, it did not fully stabilize, and any amount of creep was deemed unacceptable. With sufficient prior cryocycling, RSA 443 optics could be appropriate for certain cryogenic applications, though it is unknown whether complete dimensional stability is possible. The culprit for this perpetual creep was eventually identified as the material properties of the mirror substrate, RSA 443 (though bimetallic bending from a CTE mismatch between the optic substrate and coating is not ruled out). The lack of dimensional stability under thermal changes for a similar alloy has been reported in the literature by Caleta et al. 2017[5] (Figure 5, Table 2). They found that samples of AlSi40, an alloy with similar composition to RSA 443, showed strong deformation during the first \(\sim\)6 thermal cycles from ambient to 77 K (-196\({}^{\circ}\)C).

Figure 8: Surface form stability over multiple cryocycles of the failed RSA 443 optic (FM3). All measurements in this figure were taken warm and outside the dewar (setup shown in Figure 3(a)).

Figure 9: RMS figure error over 16 cryocycles, including both full multi-day cooldowns in a liquid nitrogen dewar (orange) and rapid cooldowns where the mirror was suspended over a bath of liquid nitrogen (blue).

Further thermal cycling that reached minimum temperatures of 233 K (-40\({}^{\circ}\)C) showed dimensional creep that asymptotically leveled off (though never completely stabilized) after \(\sim\)20 cryocycles. Their findings included that AlSi40 alloys with finer particle sizes (like those yielded by the melt-spinning process involved in the manufacturing of RSA 443) tend to have poor dimensional stability, but that coarser-grained alloys show excellent stability even when subjected to cryogenic cycling. RSA 443 was also known to be dimensionally unstable,[6] but this was not publicly well-documented. While the cause of plastic deformation of RSA 443 is not fully characterized, the elastic deformation is thought to be caused by a CTE mismatch between the aluminum matrix and silicon particles.[6] Ultimately, we find that RSA 443 is an unacceptable substrate for this instrument. son-x remade the part using an RSA 6061 substrate, and the tip-tilt mechanism was partially remanufactured by Physik Instruments to accommodate the change in material from RSA 443 to RSA 6061.

## 3 Simulating the SCALES PSF and limitations on contrast performance

This work characterizing SCALES' WFE is motivated by its impact on contrast performance. Using poppy (Physical Optics Propagation in PYthon),[7] we constructed a high-fidelity Fresnel physical optics propagation model that begins with the Keck primary and terminates at the focus placed on the lenslet array by the foreoptics.
This assembly includes a representation of each existing optic, with the foreoptics represented by real measurements of WFE across spatial scales, and all other optics (Keck telescope and AO optics) are represented by unaberrated surfaces of specified size and optical power. Representations of WFE from the SCALES foreoptics between 0.0018-70 mm\({}^{-1}\) are created from lab measurements and inserted into poppy. A low spatial frequency WFE component (0.0018-0.45 mm\({}^{-1}\)) for each optic is built from Zygo measurements (Section 2.1). A mid-high spatial frequency component is constructed using a power law model fit to the WFE power spectral density (PSD) constructed from all measurements; for OAP1.1 (the one optic where all measurements have been completed), we found a best fit power law index of \(\alpha\) = -1.5 (see Figure 6). The upper limit of spatial frequency included in this model is bounded by the sampling of the poppy model, not by the sensitivity of our lab measurements, since the optical profiler used to take surface roughness measurements is sensitive to spatial frequencies from 1-530 mm\({}^{-1}\). However, neglecting high spatial frequency error in our model does not significantly change the resulting simulated PSF - high spatial frequency error effectively lowers the throughput of an optical system, scattering light at wide angles out of the PSF. While this slightly degrades contrast performance, it does not shape the PSF. This simulation also considers the residual uncorrected wavefront error from the Keck AO system, which is quantified in the Keck AO Error Budget.[8] We ignore error sources that vary quickly in time and thus average out over realistic exposure times (we specifically neglect atmospheric fitting error, bandwidth error, high-order measurement error, and high-order aliasing error). Power-law representations with a negative index of 3 (by convention) of the residual wavefront error from the current Keck AO system (189 nm RMS) and a future Keck AO upgrade, HAKA (High order Advanced Keck Adaptive optics, 162 nm RMS) are incorporated into the simulation. HAKA introduces a 60x60 actuator high-order deformable mirror into the Keck AO system (compared to the current 20x20 actuator DM), and involves upgrading the real time controller (RTC) and the Shack-Hartmann wavefront sensor. HAKA is expected to be commissioned at around the same time as SCALES, which is slated for delivery in summer of 2025. Improved AO performance with HAKA will primarily drive down error occurring on short timescales (with greatest improvements in atmospheric error and bandwidth error) and at close separations, which this simulation is agnostic to. Since this work neglects the improvements in short-timescale error terms, the primary difference between this representation of current and future Keck AO WFE lies in uncorrectable telescope and AO system aberrations, which will be additionally driven down by improved phasing of the Keck primary mirror segments[9, 10] and more advanced controls that improve the nulling of aberrations that originate within the AO system[11]. Thus the true difference in science performance between current and future Keck AO will be more pronounced than what is shown here. The modeled SCALES PSF is shown in Figure 10. To compare the impact of different sources of WFE on contrast, three wavefront error sources are considered: * **SCALES**: The Keck telescope and AO system are assumed to be perfect, and only measured WFE from the SCALES foreoptics is included. 
* **SCALES + AO**: WFE from SCALES foreoptics and the current Keck AO system are both considered. The WFE contribution from the current AO system for a 5th magnitude natural guide star, neglecting quickly varying error terms, is 189 nm RMS.
* **SCALES + HAKA**: WFE from SCALES foreoptics and the proposed future Keck AO upgrade, HAKA (High order All-sky Keck Adaptive optics), are both considered. HAKA integrates an additional high-order deformable mirror into the AO bench, and the projected residual WFE for a 5th magnitude natural guide star (neglecting quickly varying error terms) is decreased to 162 nm RMS.

While poppy does not have the option to directly model the vector vortex coronagraph that will be integrated into SCALES, we insert a Lyot-style coronagraph with a 6\(\lambda\)/D diameter circular top-hat occulter (3\(\lambda\)/D inner working angle) and a Lyot stop designed to match the shape of the Keck pupil[12] to approximate the starlight suppression of a more sophisticated coronagraph. The top panel of Figure 10 shows the unsuppressed PSF, and the middle panel shows the PSF with the majority of on-axis light blocked by the coronagraph. Directly imaged exoplanets are often too faint to be visible even with starlight suppression from a coronagraph, and are only revealed after post-processing. To create a rough approximation of reference differential imaging (RDI), a template PSF is created by averaging together five simulated PSFs (which show stochastic differences due to the way WFE is represented in the model), and this template is then subtracted from a single realization of the simulation to reveal a starlight-subtracted PSF, shown in the bottom panel.

Figure 10: SCALES PSF simulated in poppy at \(\lambda=2\mu m\), showing three wavefront error cases: only considering WFE from the SCALES foreoptics (left column); considering WFE from SCALES and 189 nm RMS of uncorrected WFE from the current Keck AO system (center column); and considering WFE from SCALES and 162 nm RMS WFE from HAKA, an upcoming upgrade to the Keck AO system. The top panel shows the PSF placed by the foreoptics on the lenslet array (see Figure 2) with no coronagraph. The middle panel shows the PSF with a Lyot-style coronagraph with a \(6\lambda\)/D circular occulter and a Lyot stop designed to match the shape of the Keck pupil[12]. The bottom panel shows a rough approximation of reference differential imaging (RDI), where five realizations of the simulated PSF are averaged together and subtracted from one simulated PSF to reveal a starlight-subtracted PSF. A 3-15 \(\lambda\)/D annulus is overlaid onto the subtracted PSFs, and the RMS fractional flux contained in that annulus is annotated. Each horizontal panel shares the same color scale.

The starlight-subtracted images outside of a 3-15 \(\lambda\)/D annulus are masked. These subtracted PSFs are annotated with the RMS flux contained within that annulus, which is where aberrations from mid-spatial-frequency WFE become apparent, and where SCALES will look for exoplanets. The flux values in the final PSF are not in physical units, but represent fractional brightness compared to the entrance pupil, which has an integrated flux of 1. The subtracted SCALES WFE PSF has an RMS flux of 9.63e-9; the SCALES + Keck AO WFE PSF has an RMS flux of 3.36e-7; and the SCALES + HAKA WFE PSF has an RMS flux of 1.64e-7.
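The rough RDI approximation and annulus statistic described above reduce to a few lines of array arithmetic. The sketch below assumes a stack of independently simulated PSF realizations is already in hand (e.g., from repeated poppy runs with re-drawn WFE screens); all array contents here are random placeholders, not simulation outputs.

```python
import numpy as np

def rdi_residual(psf_stack: np.ndarray, science_psf: np.ndarray) -> np.ndarray:
    """Subtract a template (mean of reference PSFs) from one science PSF."""
    template = psf_stack.mean(axis=0)      # average of e.g. five realizations
    return science_psf - template

def annulus_rms(image: np.ndarray, r_in: float, r_out: float,
                lam_over_d_pix: float) -> float:
    """RMS of pixels inside an annulus bounded in units of lambda/D."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny / 2, x - nx / 2) / lam_over_d_pix
    mask = (r >= r_in) & (r <= r_out)
    return float(np.sqrt(np.mean(image[mask] ** 2)))

# Illustrative usage with fake data standing in for simulated PSFs:
rng = np.random.default_rng(1)
stack = rng.normal(1e-7, 2e-8, size=(5, 256, 256))  # placeholder realizations
residual = rdi_residual(stack[1:], stack[0])
print(annulus_rms(residual, r_in=3, r_out=15, lam_over_d_pix=4.0))
```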
These subtracted PSFs demonstrate that the speckle noise floor is dominated by uncorrected wavefront error from the Keck AO system, not WFE contributions from SCALES foreoptics, even when the near-future HAKA upgrade is considered. The contrast curves in Figure 11 are derived from the suppressed PSFs in the bottom panel of Figure 10. The suppressed PSF is convolved with a Gaussian with a FWHM of 1.22\(\lambda\)/D. 1\(\sigma\) contrast is calculated by taking the azimuthal standard deviation at a range of separations from the center of the Gaussian-smoothed PSF and normalizing by the peak brightness of an unsuppressed simulated PSF. The 3\(\lambda/D\) inner working angle of the coronagraph included in the poppy model is masked. Predicting instrument contrast is notoriously tricky, and developing an accurate contrast prediction is not the purpose of this work. It is also difficult to quantify whether this contrast curve is optimistic or pessimistic, though we note a number of generalizations we include in our modeling. The coronagraph included in the poppy model is a hard-edged circular occulter, which successfully suppresses on-axis light but introduces diffraction effects around the hard edges of the mask. Other more sophisticated coronagraphs avoid this issue and show better contrast performance, including the vector vortex coronagraph that will be installed in SCALES. Additionally, our modeling takes into account telescope and instrument optics, but does not consider any realistic observing scenario. Normally, contrast can be deepened by extending integration time to collect more signal, and by doing some combination of spectral differential imaging (SDI), angular differential imaging (ADI), and reference differential imaging (RDI). We do a quick approximation of RDI by subtracting a template PSF (built from five realizations of the PSF simulation averaged together) from a single simulated PSF, but a real observation would combine several post-processing techniques. For a SCALES contrast curve that approximates a real observation, see Sallum et al 2023.[13] The measurements and simulations presented in this work are not tailored specifically for accurate contrast prediction, which requires much more intensive modeling. Instead, this is meant to comparatively demonstrate the limiting wavefront error sources contributing to contrast performance. Figure 10 and 11 demonstrate that SCALES' contrast performance is dominated by wavefront error from the Keck AO system, and that the wavefront error imparted by SCALES foreoptics is not the limiting factor in determining contrast performance. Figure 11: Simulated 5\(\sigma\) contrast for three wavefront error cases: SCALES foreoptics only; SCALES foreoptics plus 189 nm RMS of uncorrected WFE representing the summed WFE of the current Keck AO system; and SCALES foreoptics plus 162 nm RMS of uncorrected WFE representing an upcoming upgrade to the Keck AO system, HAKA. All AO WFE estimates are taken from KAON 1303 [8], and assume a 5th magnitude natural guide star. This simulation neglects error terms that vary on short timescales and thus average down during realistic exposure times, and thus is dominated by uncorrectable static and dynamic telescope aberrations. These contrast curves are not intended to be contrast predictions, but instead demonstrate that SCALES’ contrast is not limited by internal optics, but by performance of the AO system. For a dedicated simulation of SCALES + HAKA contrast performance, see Sallum et al. 2023 [13]. 
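For reference, the azimuthal-standard-deviation contrast calculation described above can be sketched as follows. The smoothing-kernel conversion, pixel sampling, annulus width, and placeholder input array are our assumptions for illustration, not the exact pipeline used in this work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_curve(psf: np.ndarray, peak_unsuppressed: float,
                   lam_over_d_pix: float, r_max: float = 15.0):
    """Contrast vs. separation: azimuthal std of the smoothed PSF,
    normalized by the peak of an unsuppressed PSF."""
    # A FWHM of 1.22 lambda/D corresponds to a Gaussian sigma of FWHM/2.355.
    sigma_pix = 1.22 * lam_over_d_pix / 2.355
    smoothed = gaussian_filter(psf, sigma_pix)

    ny, nx = psf.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny / 2, x - nx / 2) / lam_over_d_pix

    seps = np.arange(3.0, r_max, 0.5)          # start at the 3 lambda/D IWA
    contrast = [smoothed[(r >= s - 0.25) & (r < s + 0.25)].std()
                / peak_unsuppressed for s in seps]
    return seps, np.asarray(contrast)

# Illustrative usage with a fake suppressed PSF standing in for a simulation:
psf = np.abs(np.random.default_rng(2).normal(0, 1e-8, (256, 256)))
seps, c = contrast_curve(psf, peak_unsuppressed=1.0, lam_over_d_pix=4.0)
print(seps[0], c[0])
```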
## 4 Summary of Results

* Diamond-turned aluminum-substrate optics demonstrate excellent WFE (a range of 1.5-3.0 nm RMS surface roughness and 16-29 nm RMS reflected WFE) appropriate for use in astronomical instrumentation.
* The only RSA 6061 optic tested cryogenically thus far, OAP1.1, shows an upper bound of \(\leq\)1.4 nm RMS of plastic deformation. We were unable to monitor elastic deformation during cryocycling due to the size constraints of our cryostat. Future updates will present typical values for elastic and plastic deformation during cryocycling for the full suite of aluminum foreoptics.
* RSA 443 is inappropriate for use in the SCALES instrument. The first cryocycle of an RSA 443 optic showed plastic deformation of 117 nm RMS and elastic deformation of 173 nm RMS; after 16 thermal cycles, plastic deformation asymptotically decreased but never fell below 1 nm RMS of creep per thermal cycle.
* Contrast performance of the SCALES instrument will be limited by the performance of the Keck AO system, not by WFE introduced by internal instrument optics.
* All measurements are presented in Table 2.

\begin{table} \begin{tabular}{c|c|c c c} \hline \hline \multirow{2}{*}{Optic} & Surface roughness WFE (nm RMS) & \multicolumn{3}{c}{Surface figure WFE (nm RMS)} \\ & & 300 K & 77 K & \(\Delta\)RMS\({}^{a}\) \\ \hline \hline OAP1.1 & 2.5 & 27.6 & – & 1.4 \\ OAP1.2 & – & – & – & – \\ OAE & – & 16\({}^{b}\) & – & – \\ FM1 & 3.12\({}^{b}\) & 20\({}^{b}\) & – & – \\ FM2 & – & – & – & – \\ FM3 (RSA 443) & 1.63 & 23.3 & 140.5 & 117.2 \\ FM3 (RSA 6061) & – & 18\({}^{b}\) & – & – \\ FM4 & – & – & – & – \\ FM5 & – & – & – & – \\ \hline \end{tabular} \({}^{a}\Delta\)RMS refers to the difference in RMS surface figure between the first and second cryocycles, i.e. plastic deformation, with both measurements being taken inside the cryostat at 77 K. \({}^{b}\) These measurements were taken by son-x before the optics were coated, and will be remeasured at UCSC once they are coated and shipped. \end{table} Table 2: Summary of measurements. Missing measurements will be filled in as finished optics are shipped to UCSC.

## 5 Future Work

At the time of publication, we have received and tested OAP1.1 and an initial version of FM3 that failed during testing. FM3 was remade with a different substrate, but for the sake of scheduling was integrated directly into the tip-tilt mechanism before running the tests described in this work. We will repeat the outlined tests on the six remaining optics (FM1, FM2, FM4, FM5, OAP1.2, and OAE) once they are received, and we will complete testing of FM3 alongside testing of the tip-tilt stage. Tooling marks from single-point diamond turning have been known to cause diffraction effects [14]. We intend to measure IR diffraction behavior of all optics once received using a ThorLabs SLS202L 0.45-5.5 \(\mu m\) broadband source and a ThorLabs 180C 2.9-5.5 \(\mu m\) power sensor. This measurement was carried out on the RSA 443 version of FM3 that later failed in testing, but subsequent optics were manufactured with a different groove spacing by request of the SCALES team, and thus the diffraction behavior of other optics is expected to be different.

## 6 Acknowledgements

We are grateful to the Heising-Simons Foundation, the Alfred P. Sloan Foundation, and the Mt. Cuba Astronomical Foundation for their generous support of our efforts.
This project also benefited from work conducted under NSF Grant 2216481 and the NSF Graduate Research Fellowship Program. We thank the generous individual donors who have supported the SCALES project and our student researchers. The specific work conducted in this paper was made possible by Barton Robinson, Philip Rice and the Webster Fellowship program. We thank Rebecca Jensen-Clem and Peter Wizinowich for their technical correspondence.
2307.14025
Topologically Regularized Multiple Instance Learning to Harness Data Scarcity
In biomedical data analysis, Multiple Instance Learning (MIL) models have emerged as a powerful tool to classify patients' microscopy samples. However, the data-intensive requirement of these models poses a significant challenge in scenarios with scarce data availability, e.g., in rare diseases. We introduce a topological regularization term to MIL to mitigate this challenge. It provides a shape-preserving inductive bias that compels the encoder to maintain the essential geometrical-topological structure of input bags during projection into latent space. This enhances the performance and generalization of the MIL classifier regardless of the aggregation function, particularly for scarce training data. The effectiveness of our method is confirmed through experiments across a range of datasets, showing an average enhancement of 2.8% for MIL benchmarks, 15.3% for synthetic MIL datasets, and 5.5% for real-world biomedical datasets over the current state-of-the-art.
Salome Kazeminia, Carsten Marr, Bastian Rieck
2023-07-26T08:14:18Z
http://arxiv.org/abs/2307.14025v2
# Topologically-Regularized Multiple Instance Learning for Red Blood Cell Disease Classification

###### Abstract

Diagnosing rare anemia disorders using microscopic images is challenging for skilled specialists and machine-learning methods alike. Due to thousands of disease-relevant cells in a single blood sample, this constitutes a complex multiple-instance learning (MIL) problem. While the spatial neighborhood of red blood cells is not meaningful per se, the topology, i.e., the geometry of blood samples as a whole, contains informative features to remedy typical MIL issues, such as vanishing gradients and overfitting when training on limited data. We thus develop a topology-based approach that extracts multi-scale topological features from bags of single red blood cell images. The topological features are used to regularize the model, enforcing the preservation of characteristic topological properties of the data. Applied to a dataset of 71 patients suffering from rare anemia disorders with 521 microscopic images of red blood cells, our experiments show that topological regularization is an effective method that leads to more than 3% performance improvements for automated classification of rare anemia disorders based on single-cell images. This is the first approach that uses topological properties for regularizing the MIL process.

Keywords: Topological regularization · Multiple instance learning · Red blood cell classification

## 1 Introduction

The shape of human red blood cells is known to change depending on their volume, with normally discoid concave shapes becoming star-shaped or spherical in different physiological conditions. An improperly formed cell membrane in hereditary hemolytic anemias results in anomalous morphologies and reduced deformability. These factors contribute to conditions such as hereditary spherocytosis, sickle cell disease, and thalassemia, where irregularly-shaped cells appear. For example, in sickle cell anemia, a genetic mutation causes red blood cells to take on a sickle shape, which is crescent-shaped rather than discoid. This shape can make it difficult for the cells to pass through small blood vessels, leading to reduced oxygen flow and a range of health complications [1]. Close monitoring of blood samples is essential for the diagnosis, progression tracking, and severity estimation of hereditary hemolytic anemias [10]. However, identifying hallmark cells in a patient's sample is challenging, and the presence of a small number of anomalous cells does not necessarily indicate an underlying condition, making diagnosis even more complicated.

The _manual_ annotation of blood samples for supervised model training is challenging and expensive, and suffers from a large degree of intra-expert variability. This necessitates the use of (weakly) supervised methods for supporting experts. Multiple Instance Learning (MIL) is one such weakly supervised learning method that can automate the analysis of blood samples. MIL is a form of weakly-supervised learning in which each training example is a bag containing several instances. The goal is to learn a classifier that can accurately label new bags based on their contained instances [12]. However, due to the difficulty of obtaining a large amount of training data for rare diseases, MIL models suffer from a high risk of overfitting [19]. In this context, our experiments show that standard regularization techniques in deep learning, e.g., early stopping or \(L_{1}/L_{2}\) regularization, turn out to be insufficient.
Therefore, more domain-specific, expressive regularization constraints are necessary to address this challenge (see Fig. 1). One such approach is to add a constraint that encourages the model to focus on the most informative instances in each bag. This can be achieved by incorporating attention mechanisms into the model, which allow the model to selectively focus on the most relevant instances [7]. Following the same intuition, Sadafi et al. [14] could significantly improve MIL performance compared to basic MIL methods, i.e., methods using average or max-pooling techniques. Another approach is to incorporate domain knowledge or prior information into the model, which can help to guide the learning process. For example, leveraging anomalous cells, Kazeminia et al. [10] proposed an anomaly detection-based pooling technique.

Figure 1: Topological regularization is a beneficial addition to all MIL models as it aids in mitigating overfitting.

The attention mechanism, an influential component of both [14] and [10], is a learning-based technique and is thus dependent on the amount of training data. This dependence restricts the robustness of both solutions as regularizers of the learning scheme and limits the attainable performance. We here suggest imbuing the model with _multi-scale characteristic shape information_, measured using geometrical-topological descriptors, to overcome this issue. Specifically, working in a MIL setting, we propose to preserve geometrical-topological information of bags in the latent space when compared to the image space during training. Our results demonstrate that this serves as a powerful way to _regularize models_, improving more than 3% over the state-of-the-art for classifying rare anemia disorders.

## 2 Background

Our work is based on recent advances in topological machine learning. We employ _persistent homology_, a technique for calculating multi-scale geometrical-topological information from data. Persistent homology uses a metric such as the Euclidean distance to calculate multi-scale shape information of point clouds, including information about connected components, cycles, and higher-dimensional voids. Such information is collected in a set of _persistence diagrams_, i.e., multi-scale topological descriptors (see Fig. 2). Despite their origins in computational topology, persistence diagrams can be shown to carry a large amount of geometrical information [3, 16], making them a useful shape descriptor. Recent work in computational topology showed that persistent homology can be integrated with deep-learning models, leading to a new class of hybrid models that are capable of capturing geometrical and topological aspects of data. Such models have shown exceptional performance as regularization terms [5, 13, 17, 18] in different applications. The reader is referred to a recent survey for more details on the integration into modern machine-learning models [9].

## 3 Method

Our study presents a unified framework, incorporating four different Multiple Instance Learning (MIL) models, with the topological signature calculator playing a crucial role in summarizing the topological features of a bag (see Fig. 3). The framework comprises various components, including an instance detector, deep encoder, topological signature calculator, pooling techniques, and classifier heads. The use of topological signatures is a critical aspect of our model, as we calculate them for both the instance images and the latent space.
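As a concrete illustration of the 0-dimensional persistent homology used throughout, the following self-contained numpy sketch computes the persistence pairings of a point cloud via Kruskal-style edge insertion with union-find. It mirrors the standard construction (the pairings coincide with minimum-spanning-tree edges) and is our own illustration, not the authors' implementation.

```python
import numpy as np
from itertools import combinations

def persistence_pairs_0d(points: np.ndarray):
    """0-dim persistence of a Vietoris-Rips filtration on a point cloud.
    Returns (i, j, distance) triples: inserting edge (i, j) at its length
    merges two components, i.e. destroys a 0-dim feature."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    edges = sorted(combinations(range(n), 2), key=lambda e: dist[e])

    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    pairs = []
    for i, j in edges:                      # process edges by increasing length
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj                 # merge: one component dies here
            pairs.append((i, j, float(dist[i, j])))
    return pairs                            # n-1 pairs: the MST edges

# Toy usage: two well-separated clusters yield one long-lived 0-dim feature.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.1, (10, 2)),
                   rng.normal(0, 0.1, (10, 2)) + np.array([5.0, 0.0])])
print(max(d for _, _, d in persistence_pairs_0d(cloud)))  # ~5: clusters merge
```

These (i, j) index pairs play the role of the pairings that the regularizer below uses to select entries of the pairwise-distance matrices of the image and latent spaces.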
_Input data._ Our framework is tailored for analyzing microscopic images in a MIL setting, where bags \(B_{1},...,B_{M}\) represent sets of blood sample images containing red blood cells as instances \(I_{1},...,I_{N}\in B_{m\in M}\). Notably, the framework is customized to meet the unique demands of this particular setting, in which instances are distributed across independent spatial locations within the image. This is different from cases where tiling techniques are employed to capture instances [15]. In our scenario, a pre-trained object detector, such as a mask R-CNN [8], is used to identify an instance and extract instance-level features \(H_{1},...,H_{N}\) from the region of interest (see Fig. 3). An encoder is then used to extract class-related features of the instances and map them to the latent space \(F_{1},...,F_{N}\).

_Topological features._ We calculate multi-scale topological features via persistent homology based on the Vietoris-Rips complex [6] derived from the Euclidean distances of the latent space created from each bag. The Vietoris-Rips complex serves to characterize the multi-scale topological information of the encoded bags. The outcome of this computation is a set of \(d\)-dimensional diagrams and pairings \(\pi\) that signify the relevant point indices involved in the creation and destruction of topological features. Given that calculating high-dimensional topological features can significantly increase computation time, focusing on low-dimensional features is often more practical. Previous work showed that even 0-dimensional features, i.e., _connected components_, already contain a wealth of information that is crucial for data analysis [2, 6]. Therefore, we proceed by analyzing only the 0-dimensional topology of the object. To obtain a topology-based regularization, we let ourselves be inspired by the method developed by Moor et al. [13], which addresses the challenge of backpropagating topology-based loss terms (or topological information in general). Specifically, this approach identifies the most salient topological features--represented by a pair of point indices--and subsequently maps them back to the distances in the space, represented as a matrix \(A^{Z}\) and \(A^{X}\) for the hidden and image spaces, respectively.

Figure 2: A simplified illustration of persistent homology (PH). Using a multi-scale approximation of a point cloud, PH results in a _persistence diagram_ that tracks topological features. In this example, the feature consists of a single cycle (shown in red), whose appearance and disappearance during the approximation process is tracked. In practice, the persistence diagram will contain more points, i.e., topological features, that characterize the input point cloud from various topological perspectives.

The corresponding topological features in both matrices are then compared to define a loss. Essentially, the loss measures to what extent topological features in one space are being preserved when mapping to a different space. To avoid biases originating from looking at a specific space, we measure the difference between topological features when mapping from the image space to the hidden space \(L_{X\to Z}\), as well as their respective opposite direction \(L_{Z\to X}\), i.e., mapping from the hidden space to the image space. Intuitively, our objective is to assess the dissimilarity between topological features in both spaces based on matrices of pairwise distances.
Therefore, the final topological loss \(L_{\text{topo}}\) is defined as: \[L_{\text{topo}}:=L_{X\to Z}+L_{Z\to X}, \tag{1}\] where \[L_{X\to Z}:=\frac{1}{2}\left\|A^{X}\left[\pi^{X}\right]-A^{Z}\left[\pi^{X}\right]\right\|^{2}, \tag{2}\] and \[L_{Z\to X}:=\frac{1}{2}\left\|A^{Z}\left[\pi^{Z}\right]-A^{X}\left[\pi^{Z}\right]\right\|^{2}. \tag{3}\]

_Framework._ The remaining part of our framework consists of the pooling strategy and classifier heads. The pooling strategy aggregates predictions across instances within a bag by summarizing instance-level features into a bag-level representation. This allows models to consider the global features of a bag rather than just the individual instances. Our framework includes average pooling, max pooling, and an improved version of attention-based pooling [14] for limited training data. Furthermore, we have included the anomaly-aware pooling technique that Kazeminia et al. [10] recently proposed as the state of the art (see Fig. 3).

Figure 3: Overview of our topologically-regularized MIL framework: A pre-trained mask R-CNN detects single red blood cells in microscopic brightfield images and provides general features (\(H_{n}\)). Then a deep encoder maps cell images to latent vectors (\(F_{n}\)). Our topological regularizer calculates the topological signatures of cells in the image space and in the latent space; their difference is used as a topological loss \(L_{topo}\). We tested our topological regularizer on basic MIL with (a) average pooling and (b) max pooling, (c) attention-based MIL [14], and (d) anomaly-aware MIL [10]. \(L_{MIL}\) is the general MIL loss function on bag classification and \(L_{SIC}\) is the auxiliary loss function used in [14, 10] for instance classification.

_Loss terms._ We adopted a dual classifier head approach that comprised a bag classification head and a single-cell classification head. The bag classification head is trained using a cross-entropy loss function \(L_{\mathrm{MIL}}\), calculated as the difference between the predicted bag label and the corresponding ground truth label for the bag. Conversely, the single-cell classification head is trained using a cross-entropy loss function \(L_{\mathrm{SIC}}\) that utilizes noisy instance labels obtained by repeating the bag label for all instances. This approach allowed us to incorporate bag-level and instance-level information into our models and evaluate their contributions to classification performance. The final classification loss \(L_{\mathrm{class}}\) is defined as: \[L_{\mathrm{class}}=(1-\beta)L_{\mathrm{MIL}}+\beta L_{\mathrm{SIC}}, \tag{4}\] where \(\beta=0\) for average and max pooling strategies. However, for attention and anomaly-aware pooling techniques, it decreases with the epoch number. The final loss \(L_{\mathrm{total}}\) is the weighted sum of \(L_{\mathrm{class}}\) and our topological regularization term \(L_{\mathrm{topo}}\): \[L_{\mathrm{total}}=L_{\mathrm{class}}+\lambda L_{\mathrm{topo}}, \tag{5}\] where \(\lambda\) is a hyperparameter to adjust the influence of the topological loss.

## 4 Experiments

_Data._ We applied our method to microscopic images of blood samples from hereditary hemolytic anemia patients. The dataset includes 3630 microscopic images of blood samples obtained from 71 patients who underwent various treatments at different times.
The data is distributed among several classes: Sickle Cell Disease (SCD) with 13 patients and 170 samples; Thalassemia with 3 patients and 25 samples; Hereditary Xerocytosis with 9 patients and 56 samples; and Hereditary Spherocytosis (HS) with 13 patients and 89 samples, in addition to a healthy control group consisting of 33 individuals and 181 samples.

_Training setting._ We partitioned the dataset into three folds. We reserved one fold for testing during each experiment iteration while the others were employed for training. In exploring anomaly-aware MIL, we adopted the same configuration used in [10], which entailed recalibrating the Gaussian mixture model on the control distribution every five epochs. We utilized the Adam optimizer with a learning rate of \(5\times 10^{-4}\) for optimization. We set \(\beta=0.95^{\mathrm{epoch}}\) and \(\lambda=5\times 10^{-3}\).

_Evaluation._ We employed three standard evaluation metrics: Accuracy, F1-Score (weighted macro), and Area Under the Receiver Operating Characteristic Curve (AUROC). Our results (Table 1) show a significant impact of topology-based regularization on the performance of MIL baselines. This is not restricted to max and average pooling techniques but also applies to the attention-based and anomaly-aware pooling techniques proposed in previous state-of-the-art studies [14, 10]. Moreover, this substantial performance increase is not limited to the average value of results, as the proposed regularization method also reduces the error margin. These findings show the increased robustness afforded by our proposed regularization strategy. Remarkably, the utilization of topological regularization in combination with average pooling in MIL showed superior performance, which reflects the inherent ambiguity present within the explored dataset. Specifically, blood samples may contain a ratio of deformed cells so low that it falls below the threshold specified for identifying a disorder. Such data introduce a significant challenge for attention-based and anomaly-aware pooling techniques, as they do not conform to the fundamental assumptions of these mechanisms. In contrast, the average pooling technique, which has previously struggled with issues of vanishing gradients and inadequate training, is poised to leverage a topologically structured latent space to identify instances relevant to the disorder and fine-tune its classifier weights to incorporate their respective ratios. However, average pooling provides less interpretability of classifier performance than the anomaly-aware pooling technique. This limitation makes it potentially less preferable for applications in the medical environment where accurate interpretation of classifier performance is crucial. We evaluated the effectiveness of our proposed method by conducting additional tests on the anomaly-aware MIL approach, which was the second-best-performing method in our experiments.
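For concreteness, the training objective of Eqs. (4)-(5) with the schedule quoted above assembles as follows; the function and argument names are ours, and the sketch works with plain floats or autograd tensors alike.

```python
def total_loss(l_mil, l_sic, l_topo, epoch, pooling="anomaly", lam=5e-3):
    """Combine the loss terms of Eqs. (4)-(5).  beta decays as 0.95**epoch
    for attention/anomaly-aware pooling and is zero for average/max pooling."""
    beta = 0.95 ** epoch if pooling in ("attention", "anomaly") else 0.0
    l_class = (1 - beta) * l_mil + beta * l_sic   # Eq. (4)
    return l_class + lam * l_topo                 # Eq. (5)

# Example: early in training the instance head dominates, later the bag head.
print(total_loss(0.7, 1.2, 0.3, epoch=1))    # beta ~ 0.95
print(total_loss(0.7, 1.2, 0.3, epoch=100))  # beta ~ 0.006
```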
Our results (Table 1) show a significant impact of topology-based regularization on the performance of MIL baselines. This is not restricted to max and average pooling but also applies to the attention-based and anomaly-aware pooling techniques proposed in previous state-of-the-art studies [14, 10]. Moreover, this substantial performance increase is not limited to the average value of the results, as the proposed regularization method also reduces the error margin. These findings show the increased robustness afforded by our proposed regularization strategy. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Average pooling} & \multicolumn{2}{c}{Anomaly pooling [10]} & \multicolumn{2}{c}{Attention pooling [14]} & \multicolumn{2}{c}{Max pooling} \\ \hline Topological regularization & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ \\ \hline Accuracy & 72.25\(\pm\)7.0 & **81.29\(\pm\)2.5** & 77.85\(\pm\)3.7 & 79.50\(\pm\)1.2 & 73.72\(\pm\)3.8 & 77.76\(\pm\)1.6 & 64.33\(\pm\)5.8 & 71.44\(\pm\)5.6 \\ F1-Score & 70.47\(\pm\)7.4 & **80.28\(\pm\)3.1** & 76.69\(\pm\)4.0 & 77.01\(\pm\)1.8 & 72.38\(\pm\)3.8 & 74.69\(\pm\)1.6 & 62.96\(\pm\)5.0 & 68.77\(\pm\)5.4 \\ AUROC & 89.88\(\pm\)2.7 & **93.72\(\pm\)4.4** & 89.05\(\pm\)4.3 & 90.89\(\pm\)2.5 & 91.58\(\pm\)3.0 & 91.88\(\pm\)2.5 & 84.83\(\pm\)2.8 & 89.73\(\pm\)3.1 \\ \hline \hline \end{tabular} \end{table} Table 1: Topological regularization improves classification performance for all pooling strategies. We applied it on different MIL methods with average/max pooling, attention-based pooling [14], and anomaly-aware pooling [10]. Numbers show the average classification performance along with the standard deviation from 3 cross-validation folds and 3 runs. Best performance is indicated by bold text. Additionally, for each pooling method, we compared the classification performance without (✗) and with (✓) topological regularization, and the winner is underlined for clarity. Remarkably, the combination of topological regularization with average pooling in MIL showed a superior level of performance, which we attribute to the inherent ambiguity present within the explored dataset. Specifically, blood samples may contain a ratio of deformed cells so low that it falls below the threshold specified for identifying a disorder. Such data introduce a significant challenge for attention-based and anomaly-aware pooling techniques, as they violate the fundamental assumptions of these mechanisms. In contrast, the average pooling technique, which has previously struggled with issues of vanishing gradients and inadequate training, is poised to leverage a topologically structured latent space to identify instances relevant to the disorder and fine-tune its classifier weights to incorporate their respective ratios. However, average pooling provides less interpretability of classifier performance than the anomaly-aware pooling technique. This limitation makes it potentially less preferable for applications in the medical environment, where accurate interpretation of classifier performance is crucial. We evaluated the effectiveness of our proposed method by conducting additional tests on the anomaly-aware MIL approach, which was the second-best-performing method in our experiments. With insufficient training data, the anomaly-aware MIL method may fail to detect anomalous cells and may assign anomaly scores inconsistently. Our topological regularizer resolves this issue by penalizing mappings that place topologically similar instances far apart from each other (Fig. 4). Figure 4: Adding topological regularization helps the model to detect more disease-relevant cells. An exemplary illustration of how our regularization method enhances the performance of anomaly-aware MIL. The application of topological regularization in anomaly-aware MIL (left) results in uniform high anomaly scores for deformed cells, whereas the absence of regularization (right) leads to non-uniform scores. To analyze the impact of regularization in more detail, we visualized the distance matrix heatmap for instances within a bag, following the convergence of anomaly-aware MIL, both with and without topological regularization (Fig. 5). Our findings demonstrate that incorporating topological regularization successfully distinguishes more anomalies by creating higher distances in the latent space and enhances the overlap between the distance patterns in the image and latent spaces. The observed performance improvement provides empirical evidence of the functionality of our proposed approach. Figure 5: Heatmaps depicting distances between instance images (left), instance latent vectors estimated by topologically-regularized anomaly-aware MIL (middle), and instance latent vectors estimated by anomaly-aware MIL (right), all of which belong to the same bag. Our topological regularization ensures that the model preserves the topological characteristics of the image space in the latent spaces, resulting in latent spaces that resemble the original high-dimensional image space more closely.
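A comparison in the spirit of Fig. 5 can be sketched as follows; the random image-space features and the random projection standing in for the learned encoder are illustrative assumptions, and the correlation between the two distance patterns serves as a crude proxy for the 'overlap' discussed above.

```python
import numpy as np

def distance_matrix(x):
    """Pairwise Euclidean distances between the rows of x."""
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

rng = np.random.default_rng(0)
cells_img = rng.normal(size=(12, 64))          # stand-in image features of one bag
proj_mat = rng.normal(size=(64, 8)) / np.sqrt(8)
cells_lat = cells_img @ proj_mat               # stand-in latent vectors (random projection)

D_img, D_lat = distance_matrix(cells_img), distance_matrix(cells_lat)

# Correlation between the upper-triangular distance patterns: values near 1
# mean the latent space mirrors the image-space geometry, as L_topo encourages.
iu = np.triu_indices(12, 1)
overlap = np.corrcoef(D_img[iu], D_lat[iu])[0, 1]
print(f"distance-pattern correlation: {overlap:.3f}")
```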
_CO2 Emission Related to Experiments._ All experiments were conducted for 228 hours on our institute's infrastructure with a carbon efficiency of 0.432 kgCO\({}_{2}\)eq/kWh on A100 PCIe 40/80GB hardware (TDP 250W), resulting in 24.62 kgCO\({}_{2}\)eq total emissions with no direct offset. Calculations utilized the Machine Learning Impact calculator [11]. ## 5 Conclusion Our work introduces the first approach for preserving topological characteristics of the image space while learning latent representations with deep neural networks for MIL. Our topological regularization, based on persistent homology, enhances the performance of data-hungry MIL models, especially when dealing with limited training data. This is demonstrated through our evaluation of the technique for classifying rare anemia disorders, which showed that preserving topological features significantly improves performance. For future work, we will investigate other ways to describe image geometry and topology, using cubical complexes that can operate directly on images. We also plan to analyze the geometrical and topological properties of the spaces of bags, drawing on recent developments in metric geometry, such as the Gromov-Hausdorff distance used to characterize shapes in previous studies [4]. #### Acknowledgements The Helmholtz Association supports the present contribution under the joint research school "Munich School for Data Science - MUDS". C.M. has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 866411). The CoMMiTMenT study was funded by the European Seventh Framework Program under grant agreement number 602121 (CoMMiTMenT) and by the European Union's Horizon 2020 Research and Innovation Programme. The MemSID (NCT02615847) clinical trial was funded by the Foundation for Clinical Research Hematology, which supported the clinical trial at the Division of Hematology, University Hospital Zurich, and, partially, by the following foundations: Baugarten Zurich Genossenschaft und Stiftung, the Ernst Goehner Stiftung, the Rene und Susanna Braginsky Stiftung, the Stiftung Symphasis and the Botnar Foundation. B.R. is supported by the Bavarian state government with funds from the Hightech Agenda Bavaria. Further funding for the analysis of the obtained data was obtained from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement number 675115-RELEVANCE-H2020-MSCA-ITN-2015/H2020-MSCA-ITN-2015. **Author Contributions:** Conceptualization: Salome Kazeminia and Bastian Rieck. Implementation and experiments: Salome Kazeminia. Data curation: Ario Sadafi, Asya Makhro, and Anna Bogdanova. Writing: Salome Kazeminia, Bastian Rieck, and Carsten Marr.
2302.11783
A Semantics for Counterfactuals in Quantum Causal Models
We introduce a formalism for the evaluation of counterfactual queries in the framework of quantum causal models, generalising Pearl's semantics for counterfactuals in classical causal models, thus completing the last rung in the quantum analogue of Pearl's "ladder of causation". To this end, we define a suitable extension of Pearl's notion of a 'classical structural causal model', which we denote analogously by 'quantum structural causal model', and a corresponding extension of Pearl's three-step procedure of abduction, action, and prediction. We show that every classical (probabilistic) structural causal model can be extended to a quantum structural causal model, and prove that counterfactual queries that can be formulated within a classical structural causal model agree with their corresponding queries in the quantum extension -- but the latter is more expressive. Counterfactuals in quantum causal models come in different forms: we distinguish between active and passive counterfactual queries, depending on whether or not an intervention is to be performed in the action step. This is in contrast to the classical case, where counterfactuals are always interpreted in the active sense. Another distinctive feature of our formalism is that it breaks the connection between causal and counterfactual dependence that exists in the classical case: quantum counterfactuals allow for counterfactual dependence without causal dependence. This distinction between classical and quantum causal models may shed light on how the latter can reproduce quantum correlations that violate Bell inequalities while being faithful to the relativistic causal structure.
Ardra Kooderi Suresh, Markus Frembs, Eric G. Cavalcanti
2023-02-23T05:00:14Z
http://arxiv.org/abs/2302.11783v2
# A Semantics for Counterfactuals in Quantum Causal Models ###### Abstract We introduce a formalism for the evaluation of counterfactual queries in the framework of quantum causal models, by generalising the three-step procedure of abduction, action, and prediction in Pearl's classical formalism of counterfactuals [1]. To this end, we define a suitable extension of Pearl's notion of a 'classical structural causal model', which we denote analogously by _'quantum structural causal model'_. We show that every classical (probabilistic) structural causal model can be extended to a quantum structural causal model, and prove that counterfactual queries that can be formulated within a classical structural causal model agree with their corresponding queries in the quantum extension - but the latter is more expressive. Counterfactuals in quantum causal models come in different forms: we distinguish between _active_ and _passive_ counterfactual queries, depending on whether or not an intervention is to be performed in the action step. This is in contrast to the classical case, where counterfactuals are always interpreted in the active sense. As a consequence of this distinction, we observe that quantum causal models break the connection between causal and counterfactual dependence that exists in the classical case: _(passive) quantum counterfactuals allow counterfactual dependence without causal dependence_. This illuminates an important distinction between classical and quantum causal models, which underlies the fact that the latter can reproduce quantum correlations that violate Bell inequalities while being faithful to the relativistic causal structure. ## 1 Introduction The world of alternative possibilities has been pondered upon and analyzed routinely, in many fields of study including but not limited to social [2] and public policy [3], psychiatry [4], economy [5], weather and climate change [6], artificial intelligence [7], philosophy and causality [8, 9]. For example, questions involving counterfactuals can have important social and legal implications, such as "Given that the patient has died after treatment, would they have survived had they been given a different treatment?" The status of counterfactual questions also figures centrally in debates about quantum mechanics [10], where results such as Bell's theorem [11] and the Kochen-Specker theorem [12] have been interpreted as requiring the abandonment of "counterfactual definiteness", encapsulated in Peres' famous dictum "unperformed experiments have no results" [13]. So can this assertion be used by a lawyer as an argument to dismiss a medical malpractice lawsuit as meaningless? Presumably not. Since the world is fundamentally quantum, dismissing _all_ counterfactual questions as meaningless in quantum theory seems too strong. Here, we seek to delineate what questions can be unambiguously answered when unambiguously formulated, and to provide some direction for resolving the ambiguity that is inherent in counterfactual questions that are not so carefully constructed. The semantics of counterfactuals has a controversial history. One of the early accounts is due to David Lewis [14]. He proposed to evaluate counterfactuals by a similarity analysis of possible worlds, where "a counterfactual 'If it were that A, then it would be that C' is (non-vacuously) true if and only if some (accessible) world where both A and C are true is more similar to our actual world, overall, than is any world where A is true but C is false" [14].
This analysis is inevitably vague, as it requires an account of "similarity" among possible worlds, which Lewis attempts to resolve via a system of priorities. The goal is to identify _closest worlds_ as possible worlds in which things are kept more or less the same as in our actual world, except for some 'minimal changes', required to make the antecedent of a given counterfactual true. A recent approach, due to Judea Pearl, proposes to define counterfactuals in terms of a sufficiently well-specified causal model for a given situation, denoted by a (classical) _structural causal model_ [1]. In Pearl's approach, the 'minimal changes' required to make the antecedent of a counterfactual true are conceptualised in terms of an _intervention_, which breaks the causal connections into the variable being intervened upon while fixing it to the required counterfactual value. Structural causal models feature at the top of a hierarchy of progressively sophisticated models that can answer progressively sophisticated questions, which Pearl has dubbed the "ladder of causation" [15] (see Fig. 1). As is well known, however, the classical causal model framework of Pearl fails to reproduce quantum correlations while maintaining faithfulness to the relativistic causal structure--as vividly expressed by Bell's theorem [11] and recent 'fine-tuning theorems' [16, 17, 18]. The program of _quantum causal models_ aims to extend the classical causal model framework, while maintaining compatibility with relativistic causality. One of the aims of our work is to complete the last rung in the quantum analogue of Pearl's "ladder of causation", by proposing a framework to answer counterfactual queries in quantum causal models. A key distinction from the classical case is that, due to the indeterminism inherent in quantum causal models, counterfactual queries do not always have truth values (unlike in Lewis' and Pearl's accounts). Another difference is that an intervention is not always required in order to make the antecedent of a counterfactual true. This leads to a richer semantics for counterfactuals in the quantum case, which contains Pearl's classical structural causal model as a special case, as we show. Finally, an important distinction regards the connection between counterfactual dependence and causal dependence. In Pearl's account, counterfactual dependence requires causal dependence. Following an informal definition by David Hume, Lewis proposed an analysis of causal dependence starting from his notion of counterfactual dependence. In contrast, in quantum causal models there can be counterfactual dependence among events without causal dependence. This fact sheds new light on the nature of the compatibility with relativistic causality that is offered by quantum causal models. The rest of the paper is organised as follows. In Sec. 2, we review the basic ingredients to Pearl's ladder of causation (see Fig. 1), as well as his three-step procedure for evaluating counterfactuals based on the notion of (classical) structural causal models. In Sec. 3, we highlight the issues in accommodating quantum theory within this framework, in the light of Bell's theorem and the assumption of "no fine-tuning" [16, 17]. The framework of quantum causal models aims to resolve this discrepancy. We introduce some key notions and notation of the latter in Sec. 4, which will set the stage for our definition of quantum counterfactuals and their semantics based on a novel notion of quantum structural causal models in Sec. 5. In Sec.
6, we show that Pearl's classical formalism of counterfactuals naturally embeds into our framework; conversely, in Sec. 7 we elaborate on how our framework generalizes Pearl's formalism, by distinguishing passive from active counterfactuals in quantum causal models. This results in a difference between causal and counterfactual dependence in quantum causal models, which pinpoints a remarkable departure from classical counterfactual reasoning, and is discussed using the pertinent example of the Bell scenario in Sec. 7.3. Sec. 8 reflects on some of the key assumptions to our notion of quantum counterfactuals in the context of recent developments on quantum statistical inference. Sec. 9 concludes. ## 2 The Classical Causal Model Framework This section contains the minimal background on classical causal models and the evaluation of counterfactuals, required for the generalization to the quantum case in Sec. 5. We will review a modest fraction of the framework outlined in much more detail in Ref. [1]; readers familiar with the latter may readily skip this section. In his book on causality [1], Judea Pearl identifies a hierarchy of progressively sophisticated models that are capable of answering progressively sophisticated causal queries. This hierarchy is often depicted as the three-rung 'Ladder of Causation', with different rungs corresponding to the different levels of causal queries (see Fig. 1). Figure 1: A depiction of "The Ladder of Causation". [Republished with permission from Ref. [15].] At the bottom of the ladder is the level of _association_ ('level 1'), related to observations and statistical relations. It answers questions such as "how would seeing \(X\) change my belief in \(Y\)?" The second rung is the level of _intervention_ ('level 2'), which considers questions such as "If I take aspirin, will my headache be cured?" Levels 1 and 2 are related to _Bayesian networks_ and _causal Bayesian networks_, respectively, and we will formally define these in the coming subsections. The final rung in the ladder of causation is the level of _counterfactuals_ ('level 3'), associated with activities such as imagining, retrospecting, and understanding. It considers questions such as "Was it the aspirin that stopped my headache?", "Had I not taken the aspirin, would my headache not have been cured?" etc. In other words, counterfactuals deal with 'why'-questions. Pearl argues that the levels of intervention and counterfactuals are particularly important for human intelligence and understanding, as they are crucial for our internal modeling of the world and the effects of our actions. In contrast, he argues that current artificial intelligence (AI)--however impressive--is still restricted to level 1 of the hierarchy. Considering the coming age of quantum computation, one of the motivations of our work is to extend Pearl's analysis to the framework of quantum causal models, which may have applications for future quantum AI. ### Level 1 - Bayesian networks In Pearl's framework, level 1 of the ladder of causation (Fig. 1) is the level of association, which encodes statistical data in the form of a probability distribution \(P(\mathbf{v})=P(v_{1},\cdots,v_{n})\) over random variables \(\mathbf{V}=\{V_{i}\}_{i=1}^{n}\).1 The latter are assumed to take values in a finite set, whose elements are denoted by the corresponding lowercase letters \(v_{i}\)'s.
The proposition '\(V_{i}=v_{i}\)' describes the _event_ where the random variable \(V_{i}\) takes the value \(v_{i}\), and \(P(v_{i})\coloneqq P(V_{i}=v_{i})\) denotes the probability that this event occurs. Footnote 1: Throughout, we will use boldface notation to indicate tuples of variables. Statistical independence conditions in a probability distribution can be conveniently represented graphically using directed acyclic graphs (DAGs), which in this context are also known as _Bayesian networks_. The nodes in a Bayesian network \(G\) represent the random variables \(\mathbf{V}=\{V_{i}\}_{i=1}^{n}\), while arrows ('\(V_{j}\to V_{k}\)') in \(G\) impose a 'kinship' relation: we call \(\mathrm{Pa}(V_{i})=\mathrm{Pa}_{i}\coloneqq\{V_{j}\in\mathbf{V}\mid(V_{j}\to V_{ i})\in G\}\) the "parents" and \(\mathrm{Ch}(V_{i})=\mathrm{Ch}_{i}\coloneqq\{V_{j}\in\mathbf{V}\mid(V_{i}\to V_{ j})\in G\}\) denotes the "children" of the node \(V_{i}\). For example, in Fig. 2, \(V_{1}\) is the parent node of \(V_{2}\) and \(V_{3}\); \(V_{4}\) is a child node of \(V_{2}\), \(V_{3}\) and \(V_{6}\). **Definition 1** (Classical Markov condition).: _A joint probability distribution \(P(\mathbf{v})=P(v_{1},\cdots,v_{n})\) is said to be Markov relative to a DAG \(G\) with nodes \(\mathbf{V}=\{V_{i}\}_{i=1}^{n}\) if and only if there exist conditional probability distributions \(P(v_{i}|pa_{i})\) for each \(V_{i}\in\mathbf{V}\) such that,_ \[P(\mathbf{v})=\prod_{i=1}^{n}P(v_{i}|pa_{i})\;. \tag{1}\] In general, a probability distribution will be Markov relative to different Bayesian networks, corresponding to different ways it can be decomposed into conditional distributions. Moreover, a Bayesian network will have many distributions which are Markov with respect to it. Note that at this level (level 1), the DAG \(G\) representing a Bayesian network does not carry causal meaning, but is merely a convenient representation of statistical conditional independences. ### Level 2 - Causal Bayesian networks and classical causal models At level 2 of the hierarchy are _causal (Bayesian) networks_. In contrast to Bayesian networks, the arrows between nodes \(\mathbf{V}=\{V_{i}\}_{i=1}^{n}\) in a causal Bayesian network _do_ encode causal relationships. In particular, the parents \(\mathrm{Pa}(V_{i})\) of a node \(V_{i}\) are now interpreted as _direct causes_ of \(V_{i}\). Figure 2: A directed acyclic graph (DAG) with nodes \(\mathbf{V}=\{V_{1},\cdots,V_{6}\}\) representing random variables, and arrows representing (causal) statistical dependencies. **Definition 2** (Classical Causal Model).: _A classical causal model is a pair \((G,P)\), consisting of a directed acyclic graph \(G\) encoding a causal Bayesian network, and a probability distribution \(P\) that is Markov with respect to \(G\), according to Def. 1._ Moreover, a causal network is an oracle for interventions. The effect of an intervention is modeled as a "mini-surgery" in the graph that cuts all incoming arrows into the node being intervened upon and sets it to a specified value. 
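To make Definition 1 concrete before formalizing such interventions, here is a minimal sketch of the Markov factorization for the DAG of Fig. 2; the binary variables and randomly drawn conditional probability tables are our own illustrative assumptions, not taken from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Parents of each node in the DAG of Fig. 2 (binary variables for simplicity)
parents = {1: (), 2: (1,), 3: (1,), 4: (2, 3, 6), 5: (4,), 6: ()}

# Random conditional probability tables P(V_i = 1 | pa_i)
cpt = {i: rng.random(2 ** len(pa)) for i, pa in parents.items()}

def p_node(i, v, assignment):
    idx = sum(assignment[p] << k for k, p in enumerate(parents[i]))
    p1 = cpt[i][idx]
    return p1 if v == 1 else 1 - p1

def p_joint(assignment):      # Eq. (1): P(v) = prod_i P(v_i | pa_i)
    return np.prod([p_node(i, assignment[i], assignment) for i in parents])

total = sum(p_joint(dict(zip(parents, vs)))
            for vs in itertools.product([0, 1], repeat=6))
print(f"sum over all assignments: {total:.6f}")   # ~1.0: P is normalised
```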
Formally, given a classical causal model \((G,P)\), we define the _do-intervention_ \(\text{do}(\mathbf{X}=\mathbf{x})\) on a subset of nodes \(\mathbf{X}\subset\mathbf{V}\) as the submodel \((G_{\mathbf{x}},P_{\mathbf{x}})\), where \(G_{\mathbf{x}}\) is the modified DAG with the same nodes as \(G\), but with all incoming arrows \(V_{j}\to V_{i}\) for \(V_{i}\in\mathbf{X}\) removed from \(G\), and where \(P_{\mathbf{x}}\) arises from \(P\) by setting the values at \(\mathbf{X}\) to \(\mathbf{x}\). More precisely, letting \(\mathbf{V_{x}}=\mathbf{V}\backslash\mathbf{X}\), \[P_{\mathbf{x}}(\mathbf{v})=\prod_{V_{i}\in\mathbf{X}}\delta_{v_{i},x_{i}}\prod_{V_{i}\in\mathbf{V_{x}}}P(v_{i}|pa_{i})\;. \tag{2}\] For example, if we perform a do-intervention \(\text{do}(V_{2}=v_{2})\) on the classical causal model \((G,P)\) with DAG \(G\) in Fig. 2, then \(G_{v_{2}}\) is the DAG shown in Fig. 3, and the truncated factorization formula for the remaining variables reads \[P_{v_{2}}(v_{1},v_{3},v_{4},v_{5},v_{6})=P(v_{1})P(v_{3}|v_{1})P(v_{4}|V_{2}=v_{2},v_{3},v_{6})P(v_{5}|v_{4})P(v_{6})\;. \tag{3}\] ### Level 3 - Structural causal models and the evaluation of counterfactuals At level 3 of the hierarchy are (classical) _structural causal models_. Such models consist of a set of nodes \((\mathbf{V},\mathbf{U})\), distinguished into _endogenous_ variables \(\mathbf{V}\) and _exogenous_ variables \(\mathbf{U}\), together with a set of functions \(\mathbf{F}\) that encode structural relations between the variables. The term "exogenous" indicates that any causes of such variables lie outside the model; they can be thought of as local 'noise variables'. **Definition 3** (Classical Structural Causal Model).: _A (classical) structural causal model (CSM) \(M\) is a triple \(M=(\mathbf{U},\mathbf{V},\mathbf{F})\), where \(\mathbf{V}=\{V_{1},\ldots,V_{n}\}\) is a set of endogenous random variables, \(\mathbf{U}=\{U_{1},\cdots,U_{n}\}\) is a set of exogenous random variables and \(\mathbf{F}=\{f_{1},\ldots,f_{n}\}\) is a set of functions of the form \(f_{i}:U_{i}\times\mathrm{Pa}_{i}\to V_{i}\), where \(\mathrm{Pa}_{i}:=\{V_{j}\in\mathbf{V}\mid V_{j}\subset f_{i}^{-1}(V_{i})\}\subset\mathbf{V}\)._ Every structural causal model \(M\) is associated with a directed graph \(G(M)\), which represents the causal structure of the model as specified by the relations \(G(M)\ni(V_{j}\xrightarrow{f_{i}}V_{i})\Leftrightarrow V_{j}\in\mathrm{Pa}_{i}\). Here, we will restrict CSMs to those defining directed acyclic graphs. For example, the causal model of Fig. 2 can be extended to a CSM with causal relations as depicted in Fig. 4. In analogy with the do-interventions for causal Bayesian networks in Sec. 2.2, we define do-interventions in a CSM \(M=(\mathbf{U},\mathbf{V},\mathbf{F})\). Let \(\mathbf{X}\subset\mathbf{V}\) with corresponding exogenous variables \(\mathbf{U}(\mathbf{X})\subset\mathbf{U}\) and functions \(\mathbf{F}(\mathbf{X})\subset\mathbf{F}\), and let \(\mathbf{V_{x}}=\mathbf{V}\backslash\mathbf{X}\), \(\mathbf{U_{x}}=\mathbf{U}\backslash\mathbf{U}(\mathbf{X})\) and \(\mathbf{F_{x}}=\mathbf{F}\backslash\mathbf{F}(\mathbf{X})\). Then the do-intervention \(\text{do}(\mathbf{X}=\mathbf{x})\) defines a submodel \(M_{\mathbf{x}}=\{\mathbf{U_{x}},(\mathbf{V_{x}},\mathbf{X}=\mathbf{x}),\mathbf{F_{x}}\}\). In terms of the causal graph \(G(M)\), the action \(\text{do}(\mathbf{X}=\mathbf{x})\) removes all incoming arrows to the nodes \(X_{i}\), thus generating a new graph \(G(M_{\mathbf{x}})\). The submodel \(M_{\mathbf{x}}\) represents a _minimal change_ to the original model \(M\) such that \(\mathbf{X}=\mathbf{x}\) is true while keeping the values of the exogenous variables fixed--which are thought of as "background conditions". In turn, we can use \(M_{\mathbf{x}}\) to analyze counterfactual statements with antecedent \(\mathbf{X}=\mathbf{x}\).
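The following is a minimal sketch of a CSM and the effect of a do-intervention on its structural equations; the three-variable model (with XOR/AND equations and binary noise) is our own toy example, chosen only to show how a potential response is computed by solving the modified equations.

```python
# Minimal sketch (our own toy example) of a CSM and a do-intervention:
# three binary endogenous variables with X -> Y -> Z, plus exogenous noise.

def solve(u, do=None):
    """Solve the structural equations; `do` maps variable names to fixed values."""
    v = {}
    f = {
        "X": lambda: u["uX"],
        "Y": lambda: v["X"] ^ u["uY"],   # Y = X XOR noise
        "Z": lambda: v["Y"] & u["uZ"],
    }
    for name in ("X", "Y", "Z"):         # topological order
        v[name] = do[name] if do and name in do else f[name]()
    return v

u = {"uX": 1, "uY": 0, "uZ": 1}
print(solve(u))                 # actual world: X=1, Y=1, Z=1
print(solve(u, do={"X": 0}))    # potential response under do(X=0): Y_x(u), Z_x(u)
```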
Figure 3: The directed acyclic graph from Fig. 2 after a do-intervention on node \(V_{2}\). The effect of this do-intervention is graphically represented by removing all the arrows into \(V_{2}\). **Definition 4** (Counterfactual).: _Let \(M=\{\mathbf{U},\mathbf{V},\mathbf{F}\}\) be a structural causal model, and let \(\mathbf{X},\mathbf{Y}\subseteq\mathbf{V}\). The counterfactual statement "\(\mathbf{Y}\) would have been \(\mathbf{y}\), had \(\mathbf{X}\) been \(\mathbf{x}\), in a situation specified by the background variables \(\mathbf{U}=\mathbf{u}\)" is denoted by \(\mathbf{Y}_{\mathbf{x}}(\mathbf{u})=\mathbf{y}\), where \(\mathbf{Y}_{\mathbf{x}}(\mathbf{u})\) is the potential response of \(\mathbf{Y}\) to the action \(\mathrm{do}(\mathbf{X}=\mathbf{x})\), that is, the solution for \(\mathbf{Y}\) of the modified set of equations \(\mathbf{F}_{\mathbf{x}}\) in the submodel \(M_{\mathbf{x}}\). \(\mathbf{X}=\mathbf{x}\) is called the antecedent and \(\mathbf{Y}=\mathbf{y}\) is the consequent of the counterfactual._ Note that given any complete specification \(\mathbf{U}=\mathbf{u}\) of the exogenous variables, every counterfactual statement of the form above has a truth value. Denoting a "causal world" by the pair \((M,\mathbf{u})\), we can say that a counterfactual has a truth value in every causal world where it can be defined. This is the case even when the background variables determine \(\mathbf{X}\) to have a value different from the value specified in the antecedent, since the counterfactual is evaluated relative to the modified submodel \(M_{\mathbf{x}}\). **Definition 5** (Probabilistic structural causal model).: _A probabilistic structural causal model (PSM) is defined by a pair \((M,P(\mathbf{u}))\), where \(M=(\mathbf{U},\mathbf{V},\mathbf{F})\) is a structural causal model (see Def. 3) and \(P(\mathbf{u})\) is a probability distribution defined over the exogenous variables \(\mathbf{U}\) of \(M\)._ Since every endogenous variable \(V_{i}\in\mathbf{V}\) is a function of \(U_{i}\) and its parent nodes, \(f_{i}:U_{i}\times\mathrm{Pa}_{i}\to V_{i}\), the distribution \(P(\mathbf{u})\) in a PSM \((M,P(\mathbf{u}))\) defines a probability distribution over every subset \(\mathbf{Y}\subseteq\mathbf{V}\) by \[P(\mathbf{y})\coloneqq P(\mathbf{Y}=\mathbf{y})=\sum_{\mathbf{u}|\mathbf{Y}(\mathbf{u})=\mathbf{y}}P(\mathbf{u})\;. \tag{4}\] In particular, the _probability of the counterfactual_ "\(\mathbf{Y}\) would have been \(\mathbf{y}\), had \(\mathbf{X}\) been \(\mathbf{x}\), given \(\mathbf{U}\) is \(\mathbf{u}\)" can be computed using the submodel \(M_{\mathbf{x}}\) as \[P(\mathbf{Y}_{\mathbf{x}}=\mathbf{y})=\sum_{\mathbf{u}|\mathbf{Y}_{\mathbf{x}}(\mathbf{u})=\mathbf{y}}P(\mathbf{u})\;. \tag{5}\] More generally, the probability of a counterfactual query might be conditioned on prior observations '\(\mathbf{e}\)'.
In this case, we first update the probability distribution \(P(\mathbf{u})\) in the PSM to obtain a modified probability distribution \(P(\mathbf{u}\mid\mathbf{e})\) conditioned on observed data \(\mathbf{e}\), and then use this updated probability distribution to evaluate the probability for the counterfactual as in Eq. (5). Combining the above steps, one arrives at the following theorem, proved in Ref. [1]: **Theorem 1** (Pearl [1]).: _Given a probabilistic structural causal model (PSM) \((M,P(\mathbf{u}))\) (see Def. 5), and subsets \(\mathbf{X},\mathbf{Y},\mathbf{E}\subset\mathbf{V}\), the probability for the counterfactual "would \(\mathbf{Y}=\mathbf{y}\), had \(\mathbf{X}=\mathbf{x}\), given that \(\mathbf{E}=\mathbf{e}\)", is denoted by \(P(\mathbf{Y}_{\mathbf{x}}|\mathbf{e})\) and can be evaluated systematically by a three-step procedure:_ * _Step 1:_ **Abduction**: using the observed data \(\mathbf{E}=\mathbf{e}\), use Bayesian inference to update the probability distribution \(P(\mathbf{u})\) corresponding to the PSM \((M,P(\mathbf{u}))\) to obtain \(P(\mathbf{u}|\mathbf{e})\). * _Step 2:_ **Action**: perform a do-intervention \(\mathrm{do}(\mathbf{X}=\mathbf{x})\), by which the values of \(\mathbf{X}\subset\mathbf{V}\) are specified independent of their parent nodes. The resultant model is denoted as \(M_{\mathbf{x}}\). * _Step 3:_ **Prediction**: in the modified model \((M_{\mathbf{x}},P(\mathbf{u}|\mathbf{e}))\), compute the probability of \(\mathbf{Y}\) as per Eq. (5). Figure 4: A classical structural causal model (CSM) with endogenous nodes \(\mathbf{V}=\{V_{1},\cdots,V_{6}\}\) and exogenous nodes \(\mathbf{U}=\{U_{1},\cdots,U_{6}\}\). As an example, consider the situation where \(\mathbf{X}=\mathbf{x}\) and \(\mathbf{Y}=\mathbf{y}\) are observed, that is, \(\mathbf{E}=(\mathbf{X},\mathbf{Y})\).2 Footnote 2: Note that \(\mathbf{X},\mathbf{E}\) and \(\mathbf{Y},\mathbf{E}\) in Thm. 1 are not necessarily disjoint. We evaluate the probability of the counterfactual "\(\mathbf{Y}\) would have been equal to \(\mathbf{y}^{\prime}\) had \(\mathbf{X}\) been \(\mathbf{x}^{\prime}\)" as: \[P(\mathbf{Y}_{\mathbf{x}^{\prime}}=\mathbf{y}^{\prime}|\mathbf{X}=\mathbf{x},\mathbf{Y}=\mathbf{y})=\sum_{\mathbf{u}|\mathbf{Y}_{\mathbf{x}^{\prime}}(\mathbf{u})=\mathbf{y}^{\prime}}P(\mathbf{u}|\mathbf{X}=\mathbf{x},\mathbf{Y}=\mathbf{y})=\sum_{\mathbf{u}|\mathbf{Y}_{\mathbf{x}^{\prime}}(\mathbf{u})=\mathbf{y}^{\prime}}\frac{P(\mathbf{X}=\mathbf{x},\mathbf{Y}=\mathbf{y}|\mathbf{u})P(\mathbf{u})}{P(\mathbf{X}=\mathbf{x},\mathbf{Y}=\mathbf{y})}\;, \tag{6}\] where we used Eq. (5) in the first and Bayes' theorem in the second step.3 Footnote 3: Note that 'counterfactual definiteness' implies the existence of a joint probability distribution over all variables ('hidden variable model'). In this case, an alternative expression for the probability of the counterfactual in Eq. (6) reads \[P(\mathbf{Y}_{\mathbf{x}^{\prime}}=\mathbf{y}^{\prime}|\mathbf{X}=\mathbf{x},\mathbf{Y}=\mathbf{y})=\frac{P(\mathbf{Y}_{\mathbf{x}^{\prime}}=\mathbf{y}^{\prime},\mathbf{X}=\mathbf{x},\mathbf{Y}=\mathbf{y})}{P(\mathbf{X}=\mathbf{x},\mathbf{Y}=\mathbf{y})}\;. \tag{7}\] In the quantum case such a distribution does not generally exist [12]. Nevertheless, by resorting to Eq. (6)—thereby avoiding Eq. (7)—Bayesian inference can be generalized to the quantum case [19] (see also Sec. 8).
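To illustrate Theorem 1 end to end, here is a small runnable sketch of abduction, action, and prediction in a two-variable toy PSM; the structural equations and the priors over the exogenous variables are our own illustrative choices, not an example from the paper.

```python
import itertools

# Toy PSM (our illustration): X = uX, Y = X OR uY, with independent priors
# P(uX=1) = 0.7 and P(uY=1) = 0.1 over the exogenous variables.
P_U = {(ux, uy): (0.7 if ux else 0.3) * (0.1 if uy else 0.9)
       for ux, uy in itertools.product([0, 1], repeat=2)}

f_Y = lambda x, uy: x | uy                    # structural equation for Y

# Step 1 (abduction): condition on the evidence X=1, Y=1.
post = {u: p for u, p in P_U.items() if u[0] == 1 and f_Y(1, u[1]) == 1}
Z = sum(post.values())
post = {u: p / Z for u, p in post.items()}

# Step 2 (action): replace X's equation by do(X=0).
# Step 3 (prediction): propagate the posterior through the modified model.
prob = sum(p for (ux, uy), p in post.items() if f_Y(0, uy) == 0)
print(f"P(Y_x'=0 | X=1, Y=1) = {prob:.3f}")   # 0.900 with these priors
```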
In temporal metaphors, step 1 explains the past (the exogenous variables \(\mathbf{U}\)) in light of the current evidence \(\mathbf{e}\); step 2 minimally bends the course of history to comply with the hypothetical antecedent; and step 3 predicts the future based on our new understanding of the past and our newly established condition. ## 3 Quantum violations of classical causality Classical causal models face notorious difficulties in explaining quantum correlations. Firstly, Bell's theorem [11, 20, 21] can be interpreted in terms of classical causal models, thus proving that such models cannot reproduce quantum correlations (in particular, those that violate a Bell inequality) while maintaining relativistic causal structure and the assumption of "free choice". The latter is the assumption that experimentally controllable parameters like measurement settings can always be chosen via "free variables", which can be understood as variables that have no _relevant causes_ in a causal model for the experiment. That is, they share no common causes with, nor are caused by, any other variables in the model. Thus, "free variables" can be modeled as exogenous variables. For concreteness, consider the standard Bell scenario with a causal structure represented in the DAG in Fig. 5, where variables \(A\) and \(B\) denote the outcomes of experiments performed by two agents, Alice and Bob. Variables \(X\) and \(Y\) denote their choices of experiment, which are assumed to be "free variables" and thus have no incoming arrows. Since Alice and Bob perform measurements in space-like separated regions, no relativistic causal connection is allowed between \(X\) and \(B\) nor between \(Y\) and \(A\). In this scenario, Reichenbach's principle of common cause [22, 23]--which is a consequence of the classical causal Markov condition--implies the existence of common causes underlying any correlations between the two sides of the experiment. \(\Lambda\) denotes a complete specification of any such common causes. As we are assuming a relativistic causal structure, those must be in the common past light cone of Alice's and Bob's experiments. Marginalizing over the common cause variable \(\Lambda\), the causal Markov condition applied to the DAG in Fig. 5 implies the factorization: \[P(AB|XY)=\sum_{\Lambda}P(\Lambda)P(A|X\Lambda)P(B|Y\Lambda)\;. \tag{8}\] A model satisfying Eq. (8) is also called a _local hidden variable model_. Importantly, local hidden variable models satisfy the Bell inequalities [11, 20], which have been experimentally violated by quantum correlations [24, 25, 26, 27].4 It follows that no classical causal model can explain quantum correlations under the above assumptions. Figure 5: A Directed Acyclic Graph (DAG) depicting the standard Bell scenario. More recently, Wood and Spekkens [16] showed that certain Bell inequality violations cannot be reproduced by any classical causal model that satisfies the assumption of "no fine-tuning". This is the requirement that any conditional independence between variables in the model be explained as arising from the structure of the causal graph, rather than from fine-tuned model parameters. This assumption is essential for causal discovery--without it, it is generally not possible to experimentally determine which of a number of candidate graphs faithfully represents a given situation. This result was later generalized to arbitrary Bell and Kochen-Specker inequality violations in Refs. [17, 18].
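The factorization of Eq. (8) can be probed numerically: the sketch below samples a deterministic local hidden variable model and evaluates the CHSH combination of correlators, which no model of the form of Eq. (8) can push beyond 2 (whereas quantum correlations reach \(2\sqrt{2}\)). The four-valued \(\Lambda\) and the random response functions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random local-hidden-variable model of the form of Eq. (8): a shared
# variable Lambda with 4 values, and deterministic +/-1 response functions.
p_l = rng.dirichlet(np.ones(4))        # P(Lambda)
A = rng.choice([-1, 1], size=(2, 4))   # A(x, lambda)
B = rng.choice([-1, 1], size=(2, 4))   # B(y, lambda)

def E(x, y):                           # correlator <AB | xy> under Eq. (8)
    return sum(p_l[l] * A[x, l] * B[y, l] for l in range(4))

chsh = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(f"CHSH value: {chsh:.3f}  (LHV bound: 2; "
      f"quantum correlations reach 2*sqrt(2) ~ {2*np.sqrt(2):.3f})")
```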
These results motivate the search for a generalization of classical causal models that accommodates quantum correlations and allows for causal discovery, while maintaining faithfulness to relativistic causal structure. Ref. [23] considers modifications of Reichenbach's principle of common cause [22]--which is implied by the causal Markov condition in the special case of the common cause scenario in Fig. 5, as assumed in Bell's theorem [11, 20]. The authors of Ref. [23] argue that one could maintain the _principle of common cause_--the requirement that correlations between two causally disconnected events should be explained via common causes--by relaxing the condition that a full specification of those common causes factorizes the probabilities for the events in question, as by Eq. (8). Using the Leifer-Spekkens formalism for _quantum conditional states_, they instead propose that Eq. (8) should be replaced by the requirement that the _channels_ between the common cause and Alice and Bob's labs factorize--or more precisely, the Choi-Jamiolkowski operators corresponding to those channels. This is essentially the type of resolution of Bell's theorem that is provided by _quantum causal models_, to which we now turn. After introducing structural quantum causal models in Sec. 4.1 and quantum counterfactuals queries in Sec. 5, in Sec. 7.3 we will revisit the Bell scenario from the perspective of counterfactuals in quantum causal models. ## 4 Quantum causal models In recent years a growing number of papers have addressed the problem of generalizing the classical causal model formalism to accommodate quantum correlations, in a way that is compatible with relativistic causality and faithfulness. This has led to the development of various frameworks for quantum causal models. The more developed of those are the frameworks by Costa and Shrapnel [28] and Barrett, Lorenz and Oreshkov [29]. In this work, we use a combination of the notation and features of both of these formalisms. **Quantum nodes and quantum interventions.** Recall that in a classical causal model, a node represents a locus for potential interventions. In order to generalize this to the quantum case, we start by introducing a _quantum node_\(A\), which is associated with two Hilbert spaces \(\mathcal{H}_{A^{\text{in}}}\) and \(\mathcal{H}_{A^{\text{out}}}\), corresponding to the incoming system and the outgoing system, respectively. An intervention at a quantum node \(A\) is represented by a _quantum instrument_\(\mathcal{I}_{A}^{z}\) (see Fig. 6). This is a set of trace-non-increasing completely positive (CP) maps from the space of linear operators on \(\mathcal{H}_{A^{\text{in}}}\) to the space of linear operators on \(\mathcal{H}_{A^{\text{out}}}\), \[\mathcal{I}_{A}^{z}=\{\mathcal{M}_{A}^{a|z}:\mathcal{L}(\mathcal{H}_{A^{\text {in}}})\rightarrow\mathcal{L}(\mathcal{H}_{A^{\text{out}}})\}_{a}\;, \tag{9}\] such that \(\mathcal{M}_{A}=\sum_{a}\mathcal{M}_{A}^{a|z}\) is a completely positive, trace-preserving (CPTP) map--i.e. a _quantum channel_.5 Here, \(z\) is a label for the (choice of) instrument, and \(a\) labels the classical outcome of the instrument, which occurs with probability \(P_{z}(a)=\text{Tr}[\mathcal{M}_{A}^{a|z}(\rho_{A^{\text{in}}})]\) for an input state \(\rho_{A^{\text{in}}}\in\mathcal{L}(\mathcal{H}_{A^{\text{in}}})\); consequently, the state on the output system conditioned on the outcome of the intervention is given by \(\mathcal{M}_{A}^{a|z}(\rho_{A^{\text{in}}})/P_{z}(a)\). 
For simplicity, we consider finite-dimensional systems only. Footnote 5: We sometimes write \(\mathcal{M}_{A}^{|z}\) for this CPTP map to indicate that it is associated with the instrument \(\mathcal{I}_{A}^{z}\). Note however that a given CPTP map will in general be associated with many different instruments. Using the Choi-Jamiolkowski (CJ) isomorphism,6 we represent a quantum instrument \(\mathcal{I}_{A}^{z}=\{\mathcal{M}_{A}^{a|z}\}_{a}\) in terms of a positive operator-valued measure \(a\mapsto\tau_{A}^{a|z}\). More precisely, every completely positive map \(\mathcal{M}_{A}^{a|z}\) is represented by a positive semi-definite operator \(\tau_{A}^{a|z}\in\mathcal{L}(\mathcal{H}_{A^{\text{out}}}\otimes\mathcal{H}_{ A^{\text{in}}})\) given by Footnote 6: Here, we follow the notation in Ref. [28]. This differs from the one used in Refs. [30, 31, 29], which applies a basis-independent version of the Choi-Jamiolkowski isomorphism, by identifying the Hilbert space associated with outgoing systems with its dual (see also Ref. [32]). \[\tau_{A}^{a|z}=\sum_{i,j}\mathcal{M}_{A}^{a|z}(|i\rangle\!\langle j|)_{A^{ \text{out}}}^{T}\otimes|j\rangle\langle i|_{A^{\text{in}}}\;. \tag{10}\] In a slight abuse of notation, we will write \(\mathcal{I}^{z}_{A}=\{\mathcal{M}^{a|z}_{A}\}_{a}\stackrel{{ CJ}}{{\leftrightarrow}}\{\tau^{a|z}_{A}\}_{a}\) also for the representation of an instrument in terms of positive operators under the Choi-Jamiolkowski isomorphism. Note that the fact that \(\mathcal{M}_{A}=\sum_{a}\mathcal{M}^{a|z}_{A}\) is trace-preserving imposes the following trace condition on \(\tau^{|z}_{A}=\sum_{a}\tau^{a|z}_{A}\) (cf. Ref. [33]), \[\mathrm{Tr}_{A^{\mathrm{out}}}[\tau^{|z}_{A}]=\mathbb{I}_{A^{\mathrm{in}}}\;. \tag{11}\] **Quantum process operators.** In a quantum causal model we will distinguish between two types of quantum operations: quantum interventions, which are local to a quantum node, and a quantum process operator, which acts between quantum nodes and contains information about the causal (influence) relations between the nodes in the model. To motivate the general definition (Def. 6 below), we first consider the simplest case: for a single quantum node \(A\), a quantum process operator is any operator \(\sigma_{A}\in\mathcal{L}(\mathcal{H}_{A^{\mathrm{in}}}\otimes\mathcal{H}_{A^ {\mathrm{out}}})\) such that the pairing7 Footnote 7: With Ref. [29], we will adopt the shorthand \(\mathrm{Tr}_{A}[\cdots]=\mathrm{Tr}_{A^{\mathrm{in}}A^{\mathrm{out}}}[\cdots]\). \[\mathrm{Tr}_{A}[\sigma_{A}\tau^{a|z}_{A}]=\mathrm{Tr}_{A^{\mathrm{in}}A^{ \mathrm{out}}}[\sigma_{A}\tau^{a|z}_{A}]\eqcolon P_{z}(a)\in[0,1]\;, \tag{12}\] defines a probability for every positive semi-definite operator \(\tau^{a|z}_{A}\), and satisfies the normalisation condition \[\sum_{a}P_{z}(a)=\mathrm{Tr}_{A}[\sigma_{A}\tau^{|z}_{A}]=1\;, \tag{13}\] for every quantum channel (CPTP map) \(\tau^{|z}_{A}\). Consequently, given a process operator \(\sigma_{A}\), we may interpret \(P_{z}(a)\) as the probability to obtain outcome \(a\) when performing an instrument \(z\). As a generalisation of the Born rule (on the composite system \(\mathcal{H}_{A^{\mathrm{in}}}\otimes\mathcal{H}_{A^{\mathrm{out}}}\)), Eq. (12) in particular implies that \(\sigma_{A}\) is positive, hence, corresponds to a completely positive map \(\mathcal{E}\colon\mathcal{L}(\mathcal{H}_{A^{\mathrm{out}}})\to\mathcal{L}( \mathcal{H}_{A^{\mathrm{in}}})\). 
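As a concrete (and deliberately simple) numerical illustration of Eqs. (10)-(13), the sketch below builds the CJ operators of a measure-and-reprepare instrument on a single qubit node and pairs them with a process operator that feeds a fixed state into the node and discards its output; both choices, and the (out \(\otimes\) in) ordering following Eq. (10), are our own illustrative assumptions, and the pairing of Eq. (12) reproduces the usual Born rule.

```python
import numpy as np

d = 2
ket = lambda k: np.eye(d)[:, [k]]        # computational basis column vector

# CJ operator (Eq. (10)) of the CP map M^a(X) = <a|X|a> |a><a|
# (measure in the computational basis, then re-prepare |a>):
def tau(a):
    proj = ket(a) @ ket(a).T.conj()
    return np.kron(proj.T, proj)         # H_out (x) H_in ordering

# Single-node process operator: feed the state rho into the node and
# discard its output, i.e. sigma_A = I_out (x) rho_in.
rho = np.array([[0.75, 0.25], [0.25, 0.25]])   # some qubit state
sigma = np.kron(np.eye(d), rho)

for a in range(d):
    p = np.trace(sigma @ tau(a)).real    # pairing of Eq. (12)
    print(f"P(a={a}) = {p:.3f}  vs Born rule {rho[a, a]:.3f}")
```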
More generally, it will be useful to introduce a notation for the positive semi-definite operator \(\rho^{\mathcal{E}}\) corresponding to a bipartite channel of the form \(\mathcal{E}:\mathcal{L}(\mathcal{H}_{A^{\mathrm{out}}})\to\mathcal{L}(\mathcal{H}_{B^{\mathrm{in}}})\): \[\rho^{\mathcal{E}}_{B|A}=\rho^{\mathcal{E}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\coloneqq\sum_{i,j}\mathcal{E}(|i\rangle\!\langle j|)_{B^{\mathrm{in}}}\otimes|i\rangle_{A^{\mathrm{out}}}\langle j|\;. \tag{14}\] Note that \(\rho^{\mathcal{E}}_{B|A}\) is distinguished from the representation of the Choi matrices corresponding to quantum instruments in Eq. (10) by an overall transposition, indicating the different roles played by instruments and processes in the inner product of Eq. (12). In particular, we have \(\sigma_{A}=\rho^{\mathcal{E}}_{A|A}=\rho^{\mathcal{E}}_{A^{\mathrm{in}}|A^{\mathrm{out}}}\) for some channel satisfying the normalisation condition in Eq. (13). Generalizing this idea to finitely many quantum nodes, a quantum process operator is defined as follows. **Definition 6** (Process operator).: _A (quantum) process operator over quantum nodes \(\mathbf{A}=\{A_{1},\cdots,A_{n}\}\) is a positive semi-definite operator \(\sigma_{\mathbf{A}}=\sigma_{A_{1},\cdots,A_{n}}\in\mathcal{L}(\otimes_{i=1}^{n}\mathcal{H}_{A_{i}^{\mathrm{in}}}\otimes\mathcal{H}_{A_{i}^{\mathrm{out}}})_{+}\), which satisfies the normalisation condition,_ \[\mathrm{Tr}_{A_{1}\cdots A_{n}}[\sigma_{A_{1},\cdots,A_{n}}(\tau^{|z_{1}}_{A_{1}}\otimes\cdots\otimes\tau^{|z_{n}}_{A_{n}})]=1\;, \tag{15}\] _for any choice of quantum channels \(\tau^{|z_{1}}_{A_{1}},\cdots,\tau^{|z_{n}}_{A_{n}}\) at nodes \(A_{1},\cdots,A_{n}\).8_ Footnote 8: Every process operator satisfies a trace condition analogous to Eq. (11): \(\mathrm{Tr}_{A_{1}^{\mathrm{in}}\cdots A_{n}^{\mathrm{in}}}[\sigma_{A_{1},\cdots,A_{n}}]=\mathbb{I}_{A_{1}^{\mathrm{out}}}\otimes\cdots\otimes\mathbb{I}_{A_{n}^{\mathrm{out}}}\); hence, \(\sigma_{A_{1}\cdots A_{n}}\) defines a CPTP map \(\mathcal{L}(\mathcal{H}_{A_{1}^{\mathrm{out}}}\otimes\cdots\otimes\mathcal{H}_{A_{n}^{\mathrm{out}}})\to\mathcal{L}(\mathcal{H}_{A_{1}^{\mathrm{in}}}\otimes\cdots\otimes\mathcal{H}_{A_{n}^{\mathrm{in}}})\). Yet, the converse is generally not true. Figure 6: A quantum node \(A\) is associated with an incoming (\(\mathcal{H}_{A^{\mathrm{in}}}\)) and outgoing Hilbert space (\(\mathcal{H}_{A^{\mathrm{out}}}\)). It can be intervened on via a quantum instrument \(\mathcal{I}^{z}_{A}=\{\mathcal{M}^{a|z}_{A}\}_{a}\), resulting in an outcome '\(a\)' corresponding to a completely-positive (CP) map \(\mathcal{M}^{a|z}_{A}\). Comparing with Eq. (12), we define the probability of obtaining outcomes \(\mathbf{a}=\{a_{1},\cdots,a_{n}\}\) when performing interventions \(\{\{\tau_{A_{1}}^{a_{1}|z_{1}}\}_{a_{1}},\cdots,\{\tau_{A_{n}}^{a_{n}|z_{n}}\}_{a_{n}}\}\) at quantum nodes \(\mathbf{A}=\{A_{1},\cdots,A_{n}\}\) by \[P_{\mathbf{z}}(\mathbf{a})=\mathrm{Tr}_{A_{1}\cdots A_{n}}[\sigma_{A_{1},\cdots,A_{n}}(\tau_{A_{1}}^{a_{1}|z_{1}}\otimes\cdots\otimes\tau_{A_{n}}^{a_{n}|z_{n}})]\;. \tag{16}\] Eq. (16) defines a generalization of the Born rule (on the composite system \(\bigotimes_{i=1}^{n}\mathcal{H}_{A_{i}^{\mathrm{in}}}\otimes\mathcal{H}_{A_{i}^{\mathrm{out}}}\)) [34, 35]. **Quantum causal models.** With the above ingredients, we obtain quantum generalizations of the causal Markov condition in Def. 1 and thereby of classical causal models (causal networks) in Def. 2.
**Definition 7** (Quantum causal Markov condition).: _A quantum process operator \(\sigma_{\mathbf{A}}=\sigma_{A_{1},\cdots,A_{n}}\) is Markov for a given DAG \(G\) if and only if there exist operators \(\rho_{A_{i}|\mathrm{Pa}(A_{i})}\) for each quantum node \(A_{i}\) of \(G\) such that9_ Footnote 9: Here and below, we implicitly assume the individual operators \(\rho_{A_{i}|\mathrm{Pa}(A_{i})}\) to be ‘padded’ with identities on all nodes not explicitly involved in \(\rho_{A_{i}|\mathrm{Pa}(A_{i})}\) such that the multiplication of operators is well-defined. \[\sigma_{\mathbf{A}}=\prod_{i=1}^{n}\rho_{A_{i}|\mathrm{Pa}(A_{i})}\;, \tag{17}\] _and \([\rho_{A_{i}|\mathrm{Pa}(A_{i})},\rho_{A_{j}|\mathrm{Pa}(A_{j})}]=0\) for all \(i,j\in\{1,\cdots,n\}\)._ **Definition 8** (Quantum causal model).: _A quantum causal model is a pair \((G,\sigma_{\mathbf{A}})\), consisting of a DAG \(G\), whose vertices represent quantum nodes \(\mathbf{A}=\{A_{1},\cdots,A_{n}\}\), and a quantum process operator that is Markov with respect to \(G\), according to Def. 7._ ### Quantum structural causal models Recall that in the classical case, counterfactuals are evaluated relative to a classical structural causal model (CSM) \(M=\langle\mathbf{U},\mathbf{V},\mathbf{F}\rangle\) (see Def. 3), which associates an exogenous variable \(U_{i}\in\mathbf{U}\) and a function \(\mathbf{F}\ni f_{i}:U_{i}\times\mathrm{Pa}(V_{i})\to V_{i}\) to every node \(V_{i}\in\mathbf{V}\). Given a CSM, we thus have full information about the underlying process, and any uncertainty arises solely from our lack of knowledge about the values of the variables at exogenous nodes, which is encoded in the probability distribution \(P(\mathbf{u})\) of the probabilistic structural causal model (PSM) \((M,P(\mathbf{u}))\). In order to define a notion of quantum structural causal models, we find it useful to introduce the lack of knowledge on exogenous nodes directly in terms of a special type of quantum instruments,10 Footnote 10: Here, our formalism diverges from the one in Ref. [29], which assigns the lack of knowledge about exogenous degrees of freedom as part of the process operator \(\sigma\), and which does not distinguish between different state preparations. This is a change in perspective in so far as we will assume knowledge about how states are prepared. \[\{\tau_{\Lambda}^{\lambda}\}_{\lambda}:=\{P(\lambda)(\rho_{\Lambda^{\mathrm{out}}}^{\lambda})^{T}\otimes\mathbb{I}_{\Lambda^{\mathrm{in}}}\}_{\lambda}\;. \tag{18}\] Quantum instruments of this form discard the input to the node \(\Lambda\) and with probability \(P(\lambda)\) prepare the state \(\rho^{\lambda}\) in the output. In other words, \(\{\tau^{\lambda}_{\Lambda}\}_{\lambda}\) is a discard-and-prepare instrument. Ignoring the outcome of this instrument, one obtains the channel \(\tau^{\rho}_{\Lambda}=\sum_{\lambda}\tau^{\lambda}_{\Lambda}\), corresponding to the preparation of state \(\rho=\sum_{\lambda}P(\lambda)\rho^{\lambda}\) in the output of node \(\Lambda\). Note that the outcome and output of a discard-and-prepare instrument are independent of the input state \(\rho_{\Lambda^{\text{in}}}\).
In order to avoid carrying around arbitrary input states in formulas below (as required for normalization), we will therefore adopt the convention \[\{\widehat{\tau}^{\lambda}_{\Lambda}\}_{\lambda}:=\{P(\lambda)(\rho^{\lambda}_{\Lambda^{\text{out}}})^{T}\otimes\frac{1}{\dim(\mathcal{H}_{\Lambda^{\text{in}}})}\mathbb{I}_{\Lambda^{\text{in}}}\}_{\lambda}\,, \tag{19}\] such that \(\operatorname{Tr}_{\Lambda^{\text{in}}}[\widehat{\tau}^{\lambda}_{\Lambda}]=\operatorname{Tr}_{\Lambda^{\text{in}}}[\tau^{\lambda}_{\Lambda}\rho_{\Lambda^{\text{in}}}]\) for any state \(\rho_{\Lambda^{\text{in}}}\). **Definition 9**.: _(no-influence condition). Let \(\rho^{U}_{CD|AB}\) be the Choi-Jamiolkowski (CJ) representation of the channel corresponding to the unitary transformation \(U:\mathcal{H}_{A}\otimes\mathcal{H}_{B}\rightarrow\mathcal{H}_{C}\otimes\mathcal{H}_{D}\). We say that system \(A\) does not influence system \(D\) (denoted as \(A\nrightarrow D\)) if and only if there exists a quantum channel \(\mathcal{M}:\mathcal{L}(\mathcal{H}_{B})\rightarrow\mathcal{L}(\mathcal{H}_{D})\) with corresponding CJ representation \(\rho^{\mathcal{M}}_{D|B}\) such that \(\operatorname{Tr}_{C}[\rho^{U}_{CD|AB}]=\rho^{\mathcal{M}}_{D|B}\otimes\mathbb{I}_{A}\).11_ Footnote 11: We remark that the labels \(A,B,C,D\) refer to arbitrary systems, not necessarily nodes in a quantum causal model. Within a quantum causal model, two of those labels, say \(A\) and \(C\), may refer to output and input Hilbert spaces of the same node. Given these preliminaries, we define a quantum version of the structural causal models in Def. 3. **Definition 10** (Quantum structural causal model).: _A quantum structural causal model (QSM) is a triple \(M_{Q}=((\mathbf{A},\mathbf{\Lambda},S),\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}},\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}})\), specified by:_ 1. _a set of quantum nodes, which are split into_ * _a set of endogenous nodes_ \(\mathbf{A}=\{A_{1},\cdots,A_{n}\}\)_,_ * _a set of exogenous nodes_ \(\mathbf{\Lambda}=\{\Lambda_{1},\cdots,\Lambda_{n}\}\)_,_ * _and a sink node_ \(S\)_;_ 2. _a unitary_ \(\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}\) _that satisfies the no-influence conditions_ \[\{\Lambda_{j}\nrightarrow A_{i}\}_{j\neq i}\] (20) _according to Def._ 9_; and_ 3. _a set of discard-and-prepare instruments_ \(\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}}\) _for every exogenous node_ \(\Lambda_{i}\in\mathbf{\Lambda}\)_._ Note that in general we need to include an additional _sink node_ \(S\), in order for the process operator \(\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}\) to be unitary. \(S\) contains any excess information that is discarded in the process (cf. Ref. [29]). We emphasize the subtle, but conceptually crucial difference between Def. 4.5 in Ref. [29] and our Def. 10. The former specifies the input states on ancillary nodes directly, as part of a 'unitary process with inputs', while the latter encodes input states in terms of discard-and-prepare instruments, acting on an arbitrary input state. This will enable us to use classical Bayesian inference on the outcomes of instruments at exogenous nodes in the (abduction step of the) evaluation of quantum counterfactuals in Thm. 3 below, while this is not possible using Def. 4.5 in Ref. [29], but instead requires a generalisation of Bayesian inference to the quantum case (see Sec. 8). Following Ref.
[29], we define a notion of _structural compatibility_ of a process operator \(\sigma_{\mathbf{A}}\) with a graph \(G\). **Definition 11**.: _[Compatibility of a quantum process operator with a DAG] A quantum process operator \(\sigma_{\mathbf{A}}=\sigma_{A_{1}\cdots A_{n}}\) over nodes \(\mathbf{A}=\{A_{1},\cdots,A_{n}\}\) is said to be structurally compatible with a DAG \(G\) if and only if there exists a quantum structural causal model (QSM) \(M_{Q}=((\mathbf{A},\mathbf{\Lambda},S),\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}},\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}})\) that recovers \(\sigma_{\mathbf{A}}\) as a marginal,_ \[\sigma_{\mathbf{A}}=\operatorname{Tr}_{S^{\text{in}}\mathbf{\Lambda}}[\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}(\widehat{\tau}^{\rho_{1}}_{\Lambda_{1}}\otimes\cdots\otimes\widehat{\tau}^{\rho_{n}}_{\Lambda_{n}})]\,, \tag{21}\] _where \(\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}\) satisfies the no-influence relations_ \[\{A_{j}\nrightarrow A_{i}\}_{A_{j}\notin\operatorname{Pa}(A_{i})}\,, \tag{22}\] _with \(\operatorname{Pa}(A_{i})\) defined by \(G\)._ Similar to Thm. 4.10 in Ref. [29], one shows that a process operator \(\sigma_{\mathbf{A}}\) is structurally compatible with \(G\) if and only if it is Markov for \(G\). **Theorem 2** (Equivalence of quantum compatibility and Markovianity).: _For a DAG \(G\) with nodes \(\mathbf{A}=\{A_{1},\cdots,A_{n}\}\) and a quantum process operator \(\sigma_{\mathbf{A}}\), the following are equivalent:_ 1. \(\sigma_{\mathbf{A}}\) _is structurally compatible with_ \(G\)_._ 2. \(\sigma_{\mathbf{A}}\) _is Markov for_ \(G\)_._ Proof.: The difference between our definition of 'structural compatibility' in Def. 11 and that of 'compatibility' in Def. 4.8 in Ref. [29] is that the latter applies to a "unitary process with inputs" (see Def. 4.5 in Ref. [29]), while Def. 11 applies to a QSM as defined in Def. 10. Yet, we show that \(\sigma_{\mathbf{A}}\) is compatible with \(G\) if and only if it is structurally compatible with \(G\). The result then follows from the proof of Thm. 4.10 in Ref. [29]. First, let \(\sigma_{\mathbf{A}}\) be compatible with \(G\); then by Def. 4.8 in Ref. [29] there exists a unitary process \(\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}\) that satisfies the no-influence conditions \(\{A_{j}\nrightarrow A_{i}\}_{A_{j}\notin\operatorname{Pa}(A_{i})}\) and \(\{\Lambda_{j}\nrightarrow A_{i}\}_{j\neq i}\), and states \(\rho_{\Lambda_{1}}\otimes\cdots\otimes\rho_{\Lambda_{n}}\) such that \(\sigma_{\mathbf{A}}\) is recovered as a marginal, \[\sigma_{\mathbf{A}}=\operatorname{Tr}_{S^{\mathrm{in}}\mathbf{\Lambda}^{\mathrm{out}}}\left[\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}(\rho^{T}_{\Lambda_{1}}\otimes\cdots\otimes\rho^{T}_{\Lambda_{n}})\right]\,, \tag{23}\] where we traced over the inputs of exogenous nodes \(\Lambda_{i}\). Choosing discard-and-prepare measurements \(\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}}\) such that \(\tau^{\rho_{\Lambda_{i}}}_{\Lambda_{i}}:=\sum_{\lambda_{i}}\tau^{\lambda_{i}}_{\Lambda_{i}}\) (cf. Eq. (19)), \(((\mathbf{A},\mathbf{\Lambda},S),\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}},\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}})\) defines a QSM (cf. Def. 10): in particular, \(\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}\) satisfies Eq. (20). Moreover, \(\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}\) also satisfies Eq. (22), and Eq. (23) implies Eq. (21).
From this it follows that \(\sigma_{\mathbf{A}}\) is structurally compatible with \(G\). Conversely, if \(\sigma_{\mathbf{A}}\) is structurally compatible with \(G\), it admits a QSM \(((\mathbf{A},\mathbf{\Lambda},S),\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}},\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}})\), from which we extract the unitary process operator \(\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}\) satisfying the no-influence conditions in Eq. (20) and Eq. (22), and which recovers \(\sigma_{\mathbf{A}}\) as a marginal in Eq. (23) for inputs \(\rho_{\Lambda_{i}}=\operatorname{Tr}_{\Lambda^{\mathrm{in}}}[\widehat{\tau}^{\rho_{\Lambda_{i}}}_{\Lambda_{i}}]=\sum_{\lambda_{i}}\operatorname{Tr}_{\Lambda^{\mathrm{in}}}[\widehat{\tau}^{\lambda_{i}}_{\Lambda_{i}}]\), as a consequence of Eq. (21). It then follows that \(\sigma_{\mathbf{A}}\) is compatible with \(G\). Theorem 2 establishes that for every process operator that is Markov for a graph \(G\), there exists a QSM over \(G\) that reproduces that process. Note however that this does not necessarily give us information about which QSM _correctly_ describes a given physical process. This requires that the outcomes of instruments at the exogenous nodes correspond to stable events (cf. Ref. [36]), e.g. due to decoherence. The evaluation of counterfactuals will be relative to a QSM, and different QSMs compatible with the same process \(\sigma_{\mathbf{A}}\) will in general give different answers to the same counterfactual query. This situation is analogous to the classical case. The question of determining _which_ (classical or quantum) structural causal model correctly describes a given physical realisation of a process is an important question, but beyond the scope of this work. Finally, we need the following notion (cf. Eq. (19) in Ref. [35]). Given a particular set of outcomes \(\boldsymbol{\lambda}=(\lambda_{1},\cdots,\lambda_{n})\) at the exogenous instruments, we define a _conditional process operator_ as follows, \[\sigma^{\boldsymbol{\lambda}}_{\mathbf{A}}=\frac{\operatorname{Tr}_{S^{\mathrm{in}}\mathbf{\Lambda}}\left[\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}(\widehat{\tau}^{\lambda_{1}}_{\Lambda_{1}}\otimes\cdots\otimes\widehat{\tau}^{\lambda_{n}}_{\Lambda_{n}})\right]}{P(\lambda_{1},\cdots,\lambda_{n})}\,. \tag{24}\] This allows us to calculate the conditional probability '\(P_{\mathbf{z}}(\mathbf{a}|\boldsymbol{\lambda})\)' to obtain a set of outcomes \(\mathbf{a}=(a_{1},\cdots,a_{n})\) for a set of instruments \(\mathbf{z}=(z_{1},\cdots,z_{n})\) at endogenous nodes, given a set of outcomes \(\boldsymbol{\lambda}\) for the exogenous instruments: \[P_{\mathbf{z}}(\mathbf{a}|\boldsymbol{\lambda})=\operatorname{Tr}_{\mathbf{A}}[\sigma^{\boldsymbol{\lambda}}_{\mathbf{A}}\tau^{\mathbf{a}|\mathbf{z}}_{\mathbf{A}}]\qquad\qquad\text{with}\qquad\tau^{\mathbf{a}|\mathbf{z}}_{\mathbf{A}}=\tau^{a_{1}|z_{1}}_{A_{1}}\otimes\cdots\otimes\tau^{a_{n}|z_{n}}_{A_{n}}\,. \tag{25}\] Assuming that a QSM correctly describes a given physical scenario, and in particular that the events associated with \(\boldsymbol{\lambda}\) can be thought of as well-decohered, stable events, we can think of Eq. (24) as representing the _actual_ process realised in a given run of the experiment, where our (prior) ignorance about which process is actually realised is encoded in the subjective probabilities \(P(\lambda_{1},\cdots,\lambda_{n})\).
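Before turning to counterfactuals, the following sketch illustrates Eqs. (24) and (25) in the simplest possible setting: a single endogenous node wired directly to one exogenous discard-and-prepare instrument, so that (with the out \(\otimes\) in ordering of Eq. (10), and no sink node needed) the conditional process operator takes the product form \(\mathbb{I}_{A^{\mathrm{out}}}\otimes\rho^{\lambda}_{A^{\mathrm{in}}}\). The specific preparations are our own illustrative choices.

```python
import numpy as np

d = 2
ket = lambda k: np.eye(d)[:, [k]]
proj = lambda k: ket(k) @ ket(k).T

# Exogenous instrument (Eq. (18)): prepare |0><0| or |+><+| with P(lambda).
plus = (ket(0) + ket(1)) / np.sqrt(2)
preparations = {0: (0.6, proj(0)), 1: (0.4, plus @ plus.T)}

# Toy causal structure: Lambda's output is wired straight into node A, whose
# output is discarded, so sigma^lambda_A = I_out (x) rho^lambda_in (Eq. (24)).
def sigma_given(lam):
    return np.kron(np.eye(d), preparations[lam][1])

tau = lambda a: np.kron(proj(a).T, proj(a))   # measure-and-reprepare, Eq. (10)

for lam, (p_lam, _) in preparations.items():
    probs = [np.trace(sigma_given(lam) @ tau(a)).real for a in range(d)]
    print(f"lambda={lam} (P={p_lam}): P(a|lambda) = {np.round(probs, 3)}")

# Marginal process operator, mixing the conditional processes with P(lambda):
sigma = sum(p * sigma_given(lam) for lam, (p, _) in preparations.items())
print("P(a) =", [round(np.trace(sigma @ tau(a)).real, 3) for a in range(d)])
```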
## 5 Counterfactuals in Quantum Causal Models

Classically, a counterfactual query has the form "Given evidence \(\mathbf{e}\), would \(\mathbf{Y}\) have been \(\mathbf{y}\) had \(\mathbf{Z}\) been \(\mathbf{z}\)?". In Pearl's formalism, the corresponding counterfactual statement can be assigned a truth value given a full specification \(\mathbf{U}=\mathbf{u}\) of the background conditions in a structural causal model. In that formalism, probabilities only arise out of our lack of knowledge about exogenous variables, and one can define the probability for the counterfactual to be true as the probability that \(\mathbf{u}\) lies in the range of values where the counterfactual is evaluated as true. In contrast, in quantum causal models, a counterfactual statement will in general not have a truth value! This is the case even if we are given maximal information about the process (represented as a unitary process) and maximal information about the events at the exogenous nodes (represented as a full specification of the exogenous variables '\(\boldsymbol{\Lambda}=\boldsymbol{\lambda}\)' in a quantum structural causal model12). Footnote 12: Here we are assuming that maximal information about an event corresponding to the preparation of a quantum state is given by a (pure) quantum state. This of course assumes that quantum mechanics is “complete” in the sense that there are no hidden variables that would further specify the outcomes of instruments. While this is admittedly an important assumption, it is the natural assumption to make in the context of quantum causal models—which aim to maintain compatibility with relativistic causality [37].

In order to avoid the implicit assumption of 'counterfactual definiteness' inherent to the notion of a _probability of a counterfactual_ as in the classical case (see Def. 4), we seek a notion of _counterfactual probabilities_ in the quantum case. More precisely, we define a standard quantum counterfactual query as follows.

**Definition 12** (standard quantum counterfactual query).: _Let \(M_{Q}=\langle(\mathbf{A},\boldsymbol{\Lambda},S),\rho^{U}_{\mathbf{A}S|\mathbf{A}\boldsymbol{\Lambda}},\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}}\rangle\) be a quantum structural causal model. Then a standard quantum counterfactual query, denoted by \(P^{\mathbf{a}|\mathbf{z}}_{\mathbf{z}^{\prime}}(\mathbf{c}^{\prime}|\mathbf{b}^{\prime})\), is the probability that outcomes \(\mathbf{c}^{\prime}\) would have obtained for a subset of nodes \(\boldsymbol{C}\), had instruments \(\boldsymbol{z}^{\prime}=(z^{\prime}_{1},\cdots,z^{\prime}_{n})\) been implemented and outcomes \(\mathbf{b}^{\prime}\) obtained at a set of nodes \(\boldsymbol{B}\) (disjoint from \(\boldsymbol{C}\)), given the evidence that a set of instruments \(\boldsymbol{z}=(z_{1},\cdots,z_{n})\) has been implemented and outcomes \(\mathbf{a}=(a_{1},\cdots,a_{n})\) obtained._

Note that to obtain an unambiguous answer, one needs to specify all the instruments in all the nodes, both actual and counterfactual. Def. 12 may not look general enough to accommodate all types of counterfactuals one can envisage, but we will discuss later how the answer to seemingly different types of counterfactual queries can be obtained from the answer to a standard query after suitable interpretation. At times there will be ambiguity in how to interpret some counterfactual queries, and the task of interpretation will be to reduce any counterfactual query to the appropriate standard query--we will return to this later.
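To fix intuition for this notation (our illustration, not part of the original definition): in a model with endogenous nodes \(\mathbf{A}=\{A,B\}\), taking \(\mathbf{B}=\{A\}\) and \(\mathbf{C}=\{B\}\), a standard query reads

\[P^{\mathbf{a}|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime}\,|\,a^{\prime}):\quad\text{``given that }(a,b)\text{ was observed under }\mathbf{z}=(z_{A},z_{B}),\text{ what is the probability of }b^{\prime}\text{ at }B,\text{ had }\mathbf{z}^{\prime}=(z^{\prime}_{A},z^{\prime}_{B})\text{ been performed and }a^{\prime}\text{ obtained at }A\text{?''}\]

Example 1 in Sec. 7.1 instantiates exactly this form.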
We now proceed to show how we can answer a quantum counterfactual query.

### Evaluation of counterfactuals

The evaluation of a standard counterfactual query within a quantum structural causal model proceeds through a _three-step process of abduction, action and prediction_, in analogy with the classical case.

**Abduction.** We infer what the past must have been, given information we have at present, that is, we want to update our information about the instrument outcomes \(\lambda_{i}\) at the exogenous nodes \(\Lambda_{i}\), given that outcomes \(a_{i}\) have been observed upon performing instruments \(z_{i}\) at nodes \(A_{i}\).13 Since we are talking about jointly measured variables, we can perform Bayesian updating to calculate the conditional probability14 Footnote 13: In the language of Ref. [36], we treat the outcomes \(\lambda_{i}\) at exogenous nodes \(\Lambda_{i}\) as “stable facts”. Footnote 14: Here, we assume that \(P_{\mathbf{z}}(\mathbf{a})>0\) as we interpret \(\mathbf{a}\) as an actually observed event. \[P_{\mathbf{z}}(\boldsymbol{\lambda}|\mathbf{a})=\frac{P_{\mathbf{z}}(\mathbf{a}|\boldsymbol{\lambda})P(\boldsymbol{\lambda})}{P_{\mathbf{z}}(\mathbf{a})}=\frac{\operatorname{Tr}_{\mathbf{A}}\left[\sigma^{\boldsymbol{\lambda}}_{\mathbf{A}}\tau^{\mathbf{a}|\mathbf{z}}_{\mathbf{A}}\right]P(\lambda_{1},\cdots,\lambda_{n})}{\operatorname{Tr}_{\mathbf{A}}\left[\sigma_{\mathbf{A}}\tau^{\mathbf{a}|\mathbf{z}}_{\mathbf{A}}\right]}\;. \tag{26}\] We then define an updated process operator, which is a weighted sum of the conditional process operators \(\sigma^{\boldsymbol{\lambda}}_{\mathbf{A}}\) (see Eq. (24)) with the updated probability distribution \(P_{\mathbf{z}}(\boldsymbol{\lambda}|\mathbf{a})\) for exogenous variables: \[\sigma^{\mathbf{a}|\mathbf{z}}_{\mathbf{A}}=\sum_{\lambda_{1},\cdots,\lambda_{n}}P_{\mathbf{z}}(\boldsymbol{\lambda}|\mathbf{a})\sigma^{\boldsymbol{\lambda}}_{\mathbf{A}}\;. \tag{27}\]

**Action.** Next, we modify the instruments at endogenous nodes to \(\{\tau^{a^{\prime}_{i}|z^{\prime}_{i}}_{A_{i}}\}_{a^{\prime}_{i}}\), as required by the antecedent of the counterfactual query. We highlight an important distinction from the classical case: unlike in Pearl's formalism, we do not need to modify the process itself, since an 'arrow-breaking' intervention at a node \(A\) can always be emulated via some appropriate discard-and-prepare instrument, for example, by the instrument \[\tau^{\mathrm{do}(\rho)}_{A}:=\left(\rho_{A^{\mathrm{out}}}\right)^{T}\otimes\mathbb{I}_{A^{\mathrm{in}}}\;. \tag{28}\] Deciding what instruments are appropriate for a given counterfactual query not in standard form is part of the interpretational task we will return to in Sec. 7 below. For a standard quantum counterfactual query, this is unambiguous since the counterfactual instruments are defined as part of the query (see Def. 12).

**Prediction.** Finally, we calculate \(P^{\mathbf{a}|\mathbf{z}}_{\mathbf{z}^{\prime}}(\mathbf{c}^{\prime}|\mathbf{b}^{\prime})\), using the updated process operator in Eq. (27) with the instruments specified in the counterfactual (henceforth marked with a prime). For \(P^{\mathbf{a}|\mathbf{z}}_{\mathbf{z}^{\prime}}(\mathbf{b}^{\prime})\neq 0\), we set \[P^{\mathbf{a}|\mathbf{z}}_{\mathbf{z}^{\prime}}(\mathbf{c}^{\prime}|\mathbf{b}^{\prime})=\frac{\operatorname{Tr}_{\mathbf{A}}\left[\sigma^{\mathbf{a}|\mathbf{z}}_{\mathbf{A}}(\tau^{\mathbf{b}^{\prime}|\mathbf{z}^{\prime}_{\mathbf{B}}}_{\mathbf{B}}\otimes\tau^{\mathbf{c}^{\prime}|\mathbf{z}^{\prime}_{\mathbf{C}}}_{\mathbf{C}}\otimes\tau^{\mathbf{z}^{\prime}}_{\mathbf{A}\setminus\mathbf{B}\mathbf{C}})\right]}{\operatorname{Tr}_{\mathbf{A}}\left[\sigma^{\mathbf{a}|\mathbf{z}}_{\mathbf{A}}(\tau^{\mathbf{b}^{\prime}|\mathbf{z}^{\prime}_{\mathbf{B}}}_{\mathbf{B}}\otimes\tau^{\mathbf{z}^{\prime}}_{\mathbf{A}\setminus\mathbf{B}})\right]}\;, \tag{29}\] where \(\tau^{\mathbf{b}^{\prime}|\mathbf{z}^{\prime}_{\mathbf{B}}}_{\mathbf{B}}=\bigotimes_{B_{j}\in\mathbf{B}}\tau^{b^{\prime}_{j}|z^{\prime}_{j}}_{B_{j}}\), \(\tau^{\mathbf{c}^{\prime}|\mathbf{z}^{\prime}_{\mathbf{C}}}_{\mathbf{C}}=\bigotimes_{C_{k}\in\mathbf{C}}\tau^{c^{\prime}_{k}|z^{\prime}_{k}}_{C_{k}}\), \(\tau^{\mathbf{z}^{\prime}}_{\mathbf{A}\setminus\mathbf{B}\mathbf{C}}=\bigotimes_{A_{i}\in\mathbf{A}\setminus\mathbf{B}\mathbf{C}}\tau^{z^{\prime}_{i}}_{A_{i}}\) and \(\tau^{\mathbf{z}^{\prime}}_{\mathbf{A}\setminus\mathbf{B}}=\bigotimes_{A_{i}\in\mathbf{A}\setminus\mathbf{B}}\tau^{z^{\prime}_{i}}_{A_{i}}\), with \(\tau^{z^{\prime}_{i}}_{A_{i}}:=\sum_{a^{\prime}_{i}}\tau^{a^{\prime}_{i}|z^{\prime}_{i}}_{A_{i}}\) the instrument summed over its outcomes. For \(P^{\mathbf{a}|\mathbf{z}}_{\mathbf{z}^{\prime}}(\mathbf{b}^{\prime})=0\), we set \(P^{\mathbf{a}|\mathbf{z}}_{\mathbf{z}^{\prime}}(\mathbf{c}^{\prime}|\mathbf{b}^{\prime})=\ast\) for counterfactuals with impossible antecedent ('counterpossibles').15 Footnote 15: We will not be concerned with the interpretation of the precise value assigned to a counterpossible (for a debate on this issue, see e.g. Ref. [38, 39]). We merely fix a value such that Def. 12 is well-defined; we discuss their disambiguation in Sec. 7.2.

In summary, we obtain the following generalization of Thm. 1.

**Theorem 3**.: _Given a quantum structural causal model (QSM) \(M_{Q}=\langle(\mathbf{A},\mathbf{\Lambda},S),\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}},\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}}\rangle\) (see Def. 10), and disjoint subsets \(\mathbf{B},\mathbf{C}\subseteq\mathbf{A}\), the counterfactual probability \(P^{\mathbf{a}|\mathbf{z}}_{\mathbf{z}^{\prime}}(\mathbf{c}^{\prime}|\mathbf{b}^{\prime})\) (see Def. 12) can be evaluated systematically by the three-step procedure outlined in Eqs. (26)-(29)._

If a counterfactual query can be interpreted as a standard quantum counterfactual query, then it will have an unambiguous answer as above. In Sec. 7, we will discuss the task of interpreting a general quantum counterfactual query that is not already in standard form. Before doing so, we proceed by proving how the present formalism extends Pearl's classical formalism.

## 6 From classical to quantum structural causal models

Having defined a notion of quantum structural causal models (QSM) in Def. 10, it is an important question to ask in what sense this definition extends that of a probabilistic structural causal model (PSM) in Def. 5 and, in particular, that of a classical structural causal model (CSM) in Def. 3. In this section, we show that QSMs indeed provide a generalization of PSMs--by extending an arbitrary PSM \(\langle M,P(\mathbf{u})\rangle\) to a QSM \(M_{Q}\). In order to do so, we need to take care of two crucial physical differences between Def. 3 and Def. 10.
First, note that the structural relations \(\mathbf{F}\) in a CSM \(M=(\mathbf{U},\mathbf{V},\mathbf{F})\) are generally not reversible, while unitary evolution in QSMs postulates an underlying reversible process. We therefore need to lift a generic CSM to a reversible CSM, whose structural relations are given in terms of bijective functions, yet whose independence conditions coincide with those of the original CSM. Second, while classical information (in a CSM) can be copied, quantum information famously cannot. We therefore need to find a mechanism to encode classical copy operations into a QSM. This will require us to introduce auxiliary systems, which also need to preserve the no-influence conditions required between exogenous variables in Def. 10, (ii). The next theorem asserts that an extension of a CSM to a QSM satisfying these constraints always exists.

**Theorem 4**.: _Every PSM \(\langle M,P(\mathbf{u})\rangle\), consisting of a CSM \(M=(\mathbf{U},\mathbf{V},\mathbf{F})\) and a probability distribution \(P(\mathbf{u})\) over exogenous variables, can be extended to a QSM \(M_{Q}=\langle(\mathbf{V}^{\prime\prime},\mathbf{\Lambda}^{\prime\prime},S^{\prime\prime}),\rho^{W}_{\mathbf{V}^{\prime\prime}S^{\prime\prime}|\mathbf{V}^{\prime\prime}\mathbf{\Lambda}^{\prime\prime}},\{\tau^{u_{i}}_{\Lambda^{\prime\prime}_{i}}\}_{u_{i}}\rangle\) such that_ \[P(\mathbf{v})=\sum_{\mathbf{u}}\prod_{i=1}^{n}\delta_{v_{i},f_{i}(pa_{i},u_{i})}P(u_{i}) \tag{30}\] \[=\sum_{\mathbf{u}}\operatorname{Tr}_{S^{\prime\prime\mathrm{in}}\mathbf{\Lambda}^{\prime\prime}\mathbf{V}^{\prime\prime}}\big[\rho^{W}_{\mathbf{V}^{\prime\prime}S^{\prime\prime}|\mathbf{V}^{\prime\prime}\mathbf{\Lambda}^{\prime\prime}}(\tau^{v_{1}}_{V^{\prime\prime}_{1}}\otimes\cdots\otimes\tau^{v_{n}}_{V^{\prime\prime}_{n}})\otimes(\tau^{u_{1}}_{\Lambda^{\prime\prime}_{1}}\otimes\cdots\otimes\tau^{u_{n}}_{\Lambda^{\prime\prime}_{n}})\big]\;. \tag{31}\] _In particular, \(M_{Q}\) preserves the independence conditions between variables \(\mathbf{V}\) in \(M\) (as defined by \(\mathbf{F}\)),_ \[\rho^{W}_{\mathbf{V}^{\prime\prime}S^{\prime\prime}|\mathbf{V}^{\prime\prime}\mathbf{\Lambda}^{\prime\prime}}=\prod_{i=1}^{n}\rho^{W_{i}}_{V^{\prime\prime}_{i}S^{\prime\prime}_{i}|\operatorname{Pa}(V^{\prime\prime}_{i})\Lambda^{\prime\prime}_{i}}\;. \tag{32}\]

Proof.: _(Sketch)_ The proof consists of several parts:
1. we find a binary extension of the CSM \(M\),
2. we extend the binary CSM to a binary, reversible CSM, where all functional relations are bijective,
3. we encode classical copy operations in a QSM using CNOT-gates (see the sketch at the end of this section),
4. by promoting classical variables to quantum nodes, and by linearly extending bijective functions between classical variables to isometries, we construct a QSM \(M_{Q}\), which extends the PSM \(\langle M,P(\mathbf{u})\rangle\) as desired.
For details of the proof, see App. A.

We will see in Sec. 7 that a QSM admits different types of counterfactual queries, some of which are genuinely quantum, that is, they do not arise in a CSM. Nevertheless, Thm. 4 implies that counterfactual queries arising in a PSM \(\langle M,P(\mathbf{u})\rangle\) coincide with the corresponding queries in its quantum extension \(M_{Q}\).

**Corollary 1**.: _The evaluation of a counterfactual in a (PSM) \(\langle M,P(\mathbf{u})\rangle\) coincides with the evaluation of the corresponding do-interventional counterfactual (see also Sec. 7) in its quantum extension \(M_{Q}\)._

Proof.: Given a distribution over exogenous nodes, Thm. 4 assures that do-interventions in Eq. (2) yield the same prediction--whether evaluated via Eq. (6) in \(M\) or as a do-interventional counterfactual via Eq. (29) in \(M_{Q}\). This leaves us with the update step in Pearl's analysis of counterfactuals (cf. Thm. 1). More precisely, we need to show that the Bayesian update in Eq. (26) does not affect the distribution over the space of additional ancillae \(\mathbf{T}^{\prime}\) and \(\boldsymbol{\Lambda}^{\prime}\) in the proof of Thm. 4. This is a simple consequence of the way distributions \(P(\mathbf{u})\) over exogenous nodes in \(M\) are encoded in \(M_{Q}\). First, the distribution over copy ancillae \(\Lambda^{\prime}_{i}\) is given by a \(\delta\)-distribution peaked on the state \(|0\rangle\langle 0|_{\Lambda_{i}}\) (see Eq. (88) in App. A). In other words, we have full knowledge of the initialization of the copy ancillae, hence, the update step in Eq. (26) is trivial in this case. Second, let \(P(\mathbf{u}^{\prime})=P(\mathbf{u},\mathbf{t}^{\prime})\) be any distribution over exogenous nodes in the binary, reversible extension \(M^{\prime}\) of \(M\) (see (i) and (ii) in App. A) such that \(P(\mathbf{u})=\sum_{\mathbf{t}^{\prime}\in\mathbf{T}^{\prime}}P(\mathbf{u},\mathbf{t}^{\prime})\), that is, \(P(\mathbf{u})\) arises from \(P(\mathbf{u}^{\prime})\) by marginalisation under the discarding operation \(\pi\) (see (ii) in App. A).16 But since the variables \(T^{\prime}_{i}\) in \(U^{\prime}_{i}=T^{\prime}_{i}\times U_{i}\) are related only to the sink node \(S^{\prime}_{i}\) via \(f^{\prime}_{i}\) (see Eq. (77) in App. A), we have \(P_{\mathbf{z}}(\mathbf{a}|\mathbf{u}^{\prime})=P_{\mathbf{z}}(\mathbf{a}|\mathbf{u},\mathbf{t}^{\prime})=P_{\mathbf{z}}(\mathbf{a}|\mathbf{u})\). The marginalised updated distribution thus reads Footnote 16: A canonical choice for \(P(\mathbf{u}^{\prime})\) is the product distribution of \(P(\mathbf{u})\) and the uniform distribution over \(\mathbf{T}^{\prime}\), \(P(\mathbf{u}^{\prime})=\frac{1}{|\mathbf{T}^{\prime}|}P(\mathbf{u})\). \[\sum_{\mathbf{t}^{\prime}\in\mathbf{T}^{\prime}}P_{\mathbf{z}}(\mathbf{u},\mathbf{t}^{\prime}|\mathbf{a})=\sum_{\mathbf{t}^{\prime}\in\mathbf{T}^{\prime}}\frac{P_{\mathbf{z}}(\mathbf{a}|\mathbf{u},\mathbf{t}^{\prime})P(\mathbf{u},\mathbf{t}^{\prime})}{P_{\mathbf{z}}(\mathbf{a})}=\sum_{\mathbf{t}^{\prime}\in\mathbf{T}^{\prime}}\frac{P_{\mathbf{z}}(\mathbf{a}|\mathbf{u})P(\mathbf{u},\mathbf{t}^{\prime})}{P_{\mathbf{z}}(\mathbf{a})}=\frac{P_{\mathbf{z}}(\mathbf{a}|\mathbf{u})P(\mathbf{u})}{P_{\mathbf{z}}(\mathbf{a})}=P_{\mathbf{z}}(\mathbf{u}|\mathbf{a})\;. \tag{33}\] In other words, Bayesian inference in Eq. (26) commutes with marginalisation.

Thm. 4 and Cor. 1 show that our definition of QSMs in Def. 10 generalizes that of CSMs in Def. 3. What is more, this generalization is proper: a QSM cannot generally be thought of as a CSM, while also keeping the relevant independence conditions between the variables of the model. Indeed, casting a QSM to a CSM is to specify a local hidden variable model for the QSM, yet a general QSM will not admit a local hidden variable model.17 In short, the _counterfactual probabilities_ defined by a QSM can generally not be interpreted as _probabilities of counterfactuals_. We leave a more careful analysis of this subtle distinction for future work. Nevertheless, in the next section, we will see an instance of this distinction, namely we will identify counterfactual queries in the quantum case that do not have an analog in the classical case. Footnote 17: The existence of a joint probability distribution in Eq. (7) is guaranteed under the assumption of 'counterfactual definiteness', which is violated in quantum theory (cf. Ref. [12]).
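The reversible lifting in steps (i)-(iii) of the proof sketch of Thm. 4 can be made concrete in a few lines. The sketch below is ours, not the paper's construction: it uses an AND gate as a stand-in for a generic non-bijective structural function and checks that the lifted map is a permutation, hence extends to a unitary.

```python
import numpy as np

# Classical mechanism: v = f(pa, u) = pa AND u  (irreversible: f is not injective).
f = lambda pa, u: pa & u

# Reversible lift on three bits: (pa, u, t) -> (pa, u, t XOR f(pa, u)).
# With the ancilla t initialised to 0, the third register carries v = f(pa, u).
def lift(pa, u, t):
    return pa, u, t ^ f(pa, u)

# As a matrix: an 8x8 permutation (a Toffoli gate for f = AND), which extends
# linearly to a unitary on the corresponding qubit registers.
U = np.zeros((8, 8))
for pa in (0, 1):
    for u in (0, 1):
        for t in (0, 1):
            i = 4 * pa + 2 * u + t
            pa2, u2, t2 = lift(pa, u, t)
            U[4 * pa2 + 2 * u2 + t2, i] = 1.0

assert np.allclose(U @ U.T, np.eye(8))  # permutation => unitary
# Copying a classical bit is the special case f(b, u) = b, i.e. a CNOT
# (b, t) -> (b, t XOR b): this is how copy operations are encoded in the QSM.
```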
## 7 Interpretation of counterfactual queries

In this section, we emphasize some crucial differences between the semantics of counterfactuals in classical and quantum causal models. Recall that in order to compute the probability of a counterfactual in a classical structural causal model (CSM), a do-intervention has to be performed in at least one of the nodes. Indeed, there is no other way for the antecedent of the counterfactual query to be true otherwise, since a specification of the values of exogenous variables completely determines the values of endogenous variables, and thus determines the antecedent to have its actual value. CSMs are inherently deterministic. In contrast, in a quantum structural causal model (QSM) the probability of obtaining a different outcome can be nonzero even without a do-intervention, since even maximal knowledge of the events at the exogenous nodes does not, in general, determine the outcomes of endogenous instruments. QSMs are inherently probabilistic. As a consequence, we will distinguish between two kinds of counterfactuals in the quantum case, namely, _passive_ and _active_ counterfactuals, which we define and discuss examples of in Sec. 7.1. In Sec. 7.2, we provide an argument for the disambiguation of passive from active counterfactuals, when faced with an ambiguous (classical) counterfactual query. Moreover, as a consequence of the richer semantics of quantum counterfactuals, in Sec. 7.3 we show how (passive) counterfactuals break the equivalence of causal and counterfactual dependence in the classical setting. We discuss this explicitly in the case of the Bell scenario.

### Passive and active counterfactuals

Thm. 3 in Sec. 5 outlines a three-step procedure to evaluate counterfactual probabilities in quantum causal models. Note that, unlike in its classical counterpart (Thm. 1), an arrow-breaking do-intervention is not necessary in order to make the antecedent of the counterfactual true. Counterfactual queries can therefore be evaluated without a do-intervention on the underlying causal graph, and, in particular, without changing the instruments performed at quantum nodes at all. Indeed, in the setting of Def. 12 the antecedent has a nonzero probability of occurring while keeping both exogenous and endogenous instruments fixed, if there exists \(\boldsymbol{\lambda}\in\boldsymbol{\Lambda}\) that is compatible with the evidence \(\mathbf{a}\) and that gives nonzero probability for the antecedent \(\mathbf{b}^{\prime}\), \[P_{\mathbf{z}}(\boldsymbol{\lambda}|\mathbf{a})>0\quad\wedge\quad P_{\mathbf{z}^{\prime}}(\mathbf{b}^{\prime}|\boldsymbol{\lambda})>0\;, \tag{34}\] where \(\mathbf{a}\) denotes the outcomes of performed instruments \(\mathbf{z}\), \(\mathbf{z}^{\prime}_{\mathbf{B}}=\mathbf{z}_{\mathbf{B}}\) and \(\mathbf{b}^{\prime}\) represents the antecedent of the counterfactual. The background variables \(\boldsymbol{\lambda}\) that satisfy Eq. (34) are compatible with the actual observation \(\mathbf{a}\) as well as the counterfactual observation \(\mathbf{b}^{\prime}\) at a subset of nodes, without changing the instruments at any node. Crucially, unlike in the classical case, we will see that this may be the case even if the antecedent \(\mathbf{b}^{\prime}\) is incompatible with the observed values \(\mathbf{a}\).
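For fixed instruments, both the condition in Eq. (34) and the three-step procedure of Sec. 5.1 reduce to classical bookkeeping over the exogenous outcomes \(\boldsymbol{\lambda}\). The following sketch is ours, not the authors' code; it takes as input the outcome distributions \(P_{\mathbf{z}}(\cdot|\boldsymbol{\lambda})\), which are computable from the process operator as in Eq. (25), and the names `p_actual`, `p_cf`, `evidence`, `antecedent` and `consequent` are our own.

```python
import numpy as np

def counterfactual(p_lam, p_actual, p_cf, evidence, antecedent, consequent):
    """Abduction-action-prediction (Eqs. (26)-(29)) over classical exogenous outcomes.
    p_lam    : np.ndarray, prior P(lambda)
    p_actual : p_actual[l] maps joint outcomes o to P_z(o | lambda = l)   (actual z)
    p_cf     : p_cf[l] maps joint outcomes o to P_{z'}(o | lambda = l)    (counterfactual z')
    evidence, antecedent, consequent : predicates on a joint outcome o."""
    # 1. Abduction (Eq. (26)): Bayesian update of P(lambda) on the observed evidence.
    likelihood = np.array([sum(p for o, p in p_actual[l].items() if evidence(o))
                           for l in range(len(p_lam))])
    post = np.asarray(p_lam) * likelihood
    post /= post.sum()                  # assumes P_z(evidence) > 0 (cf. footnote 14)
    # 2./3. Action and prediction (Eqs. (27)-(29)): replace the instruments by z'
    # and condition on the antecedent.
    num = sum(post[l] * p for l in range(len(p_lam))
              for o, p in p_cf[l].items() if antecedent(o) and consequent(o))
    den = sum(post[l] * p for l in range(len(p_lam))
              for o, p in p_cf[l].items() if antecedent(o))
    return num / den if den > 0 else None  # None stands in for '*' (counterpossible)
```

A passive reading simply passes the same distributions twice, `p_cf = p_actual`; an active or do-interventional reading swaps in the distributions induced by the modified instruments. Applied to the examples below, this reproduces the values derived there.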
This motivates the following distinction of quantum counterfactuals.

**Definition 13**.: _Let \(M_{Q}=\langle(\mathbf{A},\mathbf{\Lambda},S),\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}},\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}}\rangle\) be a quantum structural causal model. A counterfactual \(P^{\mathbf{a}|\mathbf{z}}_{\mathbf{z}^{\prime}}(\mathbf{c}^{\prime}|\mathbf{b}^{\prime})\) in Def. 12 is called a passive counterfactual if \(\mathbf{z}^{\prime}_{\mathbf{B}}=\mathbf{z}_{\mathbf{B}}\), that is, if no intervention is performed on the nodes specified by the antecedent; otherwise it is called an active counterfactual._ _The special case of an active counterfactual where \(\mathbf{z}^{\prime}\) specifies a do-intervention, \(\tau^{\mathrm{do}(\rho)}_{A}=(\rho_{A^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{A^{\mathrm{in}}}\) (see Eq. (28)), will also be called a do-interventional counterfactual._

In the following, we discuss two examples of passive, active and do-interventional counterfactuals.

_Example 1_.: Consider the causal graph in Fig. 8 and a compatible QSM \(M_{Q}=\langle(\mathbf{A},\Lambda),\rho^{U}_{AB|A\Lambda},\{\tau^{\lambda}\}_{\lambda}\rangle\), where \(\mathbf{A}=\{A,B\}\) represent endogenous nodes, \(\Lambda\) represents an exogenous node with the following discard-and-prepare instrument, \[\{\tau^{\lambda}\}_{\lambda=0,1}=\Big{\{}\frac{1}{2}(([0]_{\Lambda^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{\Lambda^{\mathrm{in}}}),\frac{1}{2}(([1]_{\Lambda^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{\Lambda^{\mathrm{in}}})\Big{\}}\;, \tag{35}\] such that \(\tau^{\frac{1}{2}\mathbb{I}}_{\Lambda}=\sum_{\lambda=0,1}\tau^{\lambda}_{\Lambda}\) prepares the maximally mixed state, and we assume identity channels between pairs of nodes, \[\rho^{U}_{AB|A\Lambda}=\rho^{\mathrm{id}}_{B|A}\rho^{\mathrm{id}}_{A|\Lambda}=\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\rho^{\mathrm{id}}_{A^{\mathrm{in}}|\Lambda^{\mathrm{out}}}\;. \tag{36}\]

Figure 8: A quantum causal model with endogenous nodes \(\mathbf{A}=\{A,B\}\), and exogenous node \(\Lambda\).

With respect to the model \(M_{Q}\), we will calculate counterfactual probabilities of the form \(P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime}|a^{\prime}=-)\), where we fix actual instruments \(\mathbf{z}=(z_{A}=1,z_{B})\) at endogenous nodes with \[\mathcal{I}^{z_{A}=1}_{A}=\{([+]_{A^{\mathrm{out}}})^{T}\otimes[+]_{A^{\mathrm{in}}},([-]_{A^{\mathrm{out}}})^{T}\otimes[-]_{A^{\mathrm{in}}}\}\;,\qquad\mathcal{I}^{z_{B}}_{B}=\{\tau^{b|z_{B}}_{B}\}_{b}\;, \tag{37}\] but consider different counterfactual instruments \(\mathbf{z}^{\prime}=(z^{\prime}_{A},z^{\prime}_{B})\), corresponding to (i) passive, (ii) do-interventional, and (iii) active counterfactual queries. To this end, we first calculate the probabilities in Eq. (34), which requires the respective (conditional) process operators (cf. Eq. (24)). These are given as \[\sigma_{AB}=\operatorname{Tr}_{\Lambda}\left[\rho^{U}_{AB|A\Lambda}\tau^{\frac{1}{2}\mathbb{I}}_{\Lambda}\right]=\operatorname{Tr}_{\Lambda}\left[\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\rho^{\mathrm{id}}_{A^{\mathrm{in}}|\Lambda^{\mathrm{out}}}(\tfrac{1}{2}\mathbb{I}_{\Lambda^{\mathrm{out}}}\otimes\tfrac{1}{2}\mathbb{I}_{\Lambda^{\mathrm{in}}})\right]=\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes\tfrac{1}{2}\mathbb{I}_{A^{\mathrm{in}}}\;. \tag{38}\] The conditional process operators, conditioned on outcomes \(\lambda\in\{0,1\}\) of the instrument in Eq. (35), read \[\sigma^{\lambda=0}_{AB}=\frac{\operatorname{Tr}_{\Lambda}\left[\rho^{U}_{AB|A\Lambda}\tau^{\lambda=0}_{\Lambda}\right]}{P(\lambda=0)}=\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes[0]_{A^{\mathrm{in}}}\;, \tag{39}\] \[\sigma^{\lambda=1}_{AB}=\frac{\operatorname{Tr}_{\Lambda}\left[\rho^{U}_{AB|A\Lambda}\tau^{\lambda=1}_{\Lambda}\right]}{P(\lambda=1)}=\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes[1]_{A^{\mathrm{in}}}\;. \tag{40}\] If instrument \(\mathcal{I}^{1}_{A}\) yields outcome \(a=+\), we find the updated process operator (cf. Eq. (27)) to take the form \[\sigma^{a=+}_{AB}=\sum_{\lambda}P_{\mathbf{z}}(\lambda|a=+)\sigma^{\lambda}_{AB}=\frac{1}{2}\sigma^{\lambda=0}_{AB}+\frac{1}{2}\sigma^{\lambda=1}_{AB}=\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes\frac{1}{2}\mathbb{I}_{A^{\mathrm{in}}}=\sigma_{AB}\;. \tag{41}\] In this particular case, we see that the actual outcomes do not give us any information about the exogenous variables, as expressed in the following conditional probabilities, \[P_{\mathbf{z}}(\lambda=0|a=+)=\frac{1}{2}\quad\wedge\quad P_{\mathbf{z}}(a^{\prime}=-|\lambda=0)=\frac{1}{2}\;, \tag{42}\] \[P_{\mathbf{z}}(\lambda=1|a=+)=\frac{1}{2}\quad\wedge\quad P_{\mathbf{z}}(a^{\prime}=-|\lambda=1)=\frac{1}{2}\;. \tag{43}\] Note that Eq. (34) is thus satisfied.

1. passive case: _"given that \(a=+\) occurred in the actually performed instrument \(\mathcal{I}^{1}_{A}\), what is the probability that \(b^{\prime}\) would have obtained using the instrument \(\mathcal{I}^{z^{\prime}_{B}}_{B}\), had it been that \(a^{\prime}=-\), using the instrument \(\mathcal{I}^{1}_{A}\)?"_ Since \(z^{\prime}_{A}=z_{A}=1\), this is a passive counterfactual, hence, no action step, that is, no intervention is needed. The prediction step (cf. Eq. (29)) thus yields the answer to our passive counterfactual as \[P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime}|a^{\prime}=-)=\frac{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime},a^{\prime}=-)}{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(a^{\prime}=-)}=\frac{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime},a^{\prime}=-)}{\big{(}1-P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(a^{\prime}=+)\big{)}} \tag{44}\] \[=2\,\operatorname{Tr}_{AB}[\sigma^{a=+|\mathbf{z}}_{AB}(\tau^{a^{\prime}=-|1}_{A}\otimes\tau^{b^{\prime}|z^{\prime}_{B}}_{B})] \tag{45}\] \[=2\,\operatorname{Tr}_{AB}[(\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes\frac{1}{2}\mathbb{I}_{A^{\mathrm{in}}})(([-]_{A^{\mathrm{out}}})^{T}\otimes[-]_{A^{\mathrm{in}}}\otimes\tau^{b^{\prime}|z^{\prime}_{B}}_{B})] \tag{46}\] \[=\operatorname{Tr}_{B}[[-]_{B^{\mathrm{in}}}\tau^{b^{\prime}|z^{\prime}_{B}}_{B}]\;. \tag{47}\] The counterfactual probability thus depends on the instrument \(z^{\prime}_{B}\); hence, it differs from \(\ast\) in Eq. (29).

2. do-interventional case: _"given that \(a=+\) occurred in the actually performed instrument \(\mathcal{I}^{1}_{A}\), what is the probability that \(b^{\prime}\) would have obtained using the instrument \(\mathcal{I}^{z^{\prime}_{B}}_{B}\), had it been that \(a^{\prime}=-\), using the instrument \(\tau^{\mathrm{do}([-])}_{A}\)?"_. Here, instead of \(\mathcal{I}^{1}_{A}\), we perform the do-intervention \[\mathcal{I}^{z^{\prime}_{A}=2}_{A}=\tau^{\mathrm{do}([-])}_{A}=([-]_{A^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{A^{\mathrm{in}}}\;, \tag{48}\] which discards the input and prepares the state \([-]\) at the output of \(A\). The updated process operator is the same as in Eq. (41). By setting \(\mathbf{z}^{\prime}=(z^{\prime}_{A}=2,z^{\prime}_{B})\) the counterfactual probability evaluates to \[P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime}|a^{\prime}=-)=\frac{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime},a^{\prime}=-)}{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(a^{\prime}=-)} \tag{49}\] \[=\operatorname{Tr}_{AB}[(\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes\frac{1}{2}\mathbb{I}_{A^{\mathrm{in}}})(([-]_{A^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{A^{\mathrm{in}}}\otimes\tau^{b^{\prime}|z^{\prime}_{B}}_{B})] \tag{50}\] \[=\operatorname{Tr}_{B}[[-]_{B^{\mathrm{in}}}\tau^{b^{\prime}|z^{\prime}_{B}}_{B}]\;, \tag{51}\] where we used that \(P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(a^{\prime}=-)=1\).

3. active case: we ask _"given that \(a=+\) occurred in the actually performed instrument \(\mathcal{I}^{1}_{A}\), what is the probability that \(b^{\prime}\) would have obtained using the instrument \(\mathcal{I}^{z^{\prime}_{B}}_{B}\), had it been that \(a^{\prime}=-\), using the instrument \(\mathcal{I}^{3}_{A}\)?"_ This is an active counterfactual whenever \(\mathcal{I}^{1}_{A}\neq\mathcal{I}^{3}_{A}\neq\mathcal{I}^{2}_{A}=\tau^{\mathrm{do}([-])}_{A}\). Specifically, for \[\mathcal{I}^{3}_{A}=\{([+]_{A^{\mathrm{out}}})^{T}\otimes[\phi]_{A^{\mathrm{in}}},([-]_{A^{\mathrm{out}}})^{T}\otimes[\overline{\phi}]_{A^{\mathrm{in}}}\}\;,\qquad\text{where}\qquad|\phi\rangle=\cos(\phi)|+\rangle+\sin(\phi)|-\rangle\;, \tag{52}\] we find the counterfactual probability to be \[P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime}|a^{\prime}=-)=\frac{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime},a^{\prime}=-)}{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(a^{\prime}=-)} \tag{53}\] \[=2\operatorname{Tr}_{AB}[(\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes\frac{1}{2}\mathbb{I}_{A^{\mathrm{in}}})(([-]_{A^{\mathrm{out}}})^{T}\otimes[\overline{\phi}]_{A^{\mathrm{in}}}\otimes\tau^{b^{\prime}|z^{\prime}_{B}}_{B})] \tag{54}\] \[=\operatorname{Tr}_{B}[[-]_{B^{\mathrm{in}}}\tau^{b^{\prime}|z^{\prime}_{B}}_{B}]\;, \tag{55}\] where we used that \(P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(a^{\prime}=-)=P^{a=+|\mathbf{z}}_{\mathbf{z}}(a^{\prime}=-)\) since \(\sigma^{a=+}_{AB}=\sigma_{AB}\) by Eq. (41).

_Example 2_.: Consider the same setup as in Ex. 1, but with a different instrument at the exogenous node \(\Lambda\), \[\mathcal{I}^{2}_{\Lambda}=\{\tau^{\lambda}\}_{\lambda=+,-}=\left\{\frac{1}{2}(([+]_{\Lambda^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{\Lambda^{\mathrm{in}}}),\frac{1}{2}(([-]_{\Lambda^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{\Lambda^{\mathrm{in}}})\right\}, \tag{56}\] for which \(\tau^{\frac{1}{2}\mathbb{I}}_{\Lambda}=\sum_{\lambda=+,-}\tau^{\lambda}_{\Lambda}\) also. Of course, this yields the same marginalised process operator \(\sigma_{AB}\) as in Eq. (38). Yet, in contrast to Ex. 1, the conditional process operators for outcomes \(\lambda\in\{+,-\}\) now read \[\sigma^{\lambda=+}_{AB}=\frac{\operatorname{Tr}_{\Lambda}\left[\rho^{U}_{AB|A\Lambda}\tau^{\lambda=+}_{\Lambda}\right]}{P(\lambda=+)}=\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes[+]_{A^{\mathrm{in}}}\;, \tag{57}\] \[\sigma^{\lambda=-}_{AB}=\frac{\operatorname{Tr}_{\Lambda}\left[\rho^{U}_{AB|A\Lambda}\tau^{\lambda=-}_{\Lambda}\right]}{P(\lambda=-)}=\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes[-]_{A^{\mathrm{in}}}\;, \tag{58}\] such that the updated process operator for the instrument in Eq. (56) is given by \[\sigma^{a=+}_{AB}=\sum_{\lambda}P_{\mathbf{z}}(\lambda|a=+)\sigma^{\lambda}_{AB}=\sigma^{\lambda=+}_{AB} \tag{59}\] \[=2\operatorname{Tr}_{\Lambda}[\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\rho^{\mathrm{id}}_{A^{\mathrm{in}}|\Lambda^{\mathrm{out}}}\tfrac{1}{2}(([+]_{\Lambda^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{\Lambda^{\mathrm{in}}})]=\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes[+]_{A^{\mathrm{in}}}\neq\sigma_{AB}\;. \tag{60}\] Hence, we obtain the following conditional probabilities in this case, \[P_{\mathbf{z}}(\lambda=+|a=+)=1\quad\wedge\quad P_{\mathbf{z}}(a^{\prime}=-|\lambda=+)=0\;, \tag{61}\] \[P_{\mathbf{z}}(\lambda=-|a=+)=0\quad\wedge\quad P_{\mathbf{z}}(a^{\prime}=-|\lambda=-)=1\;. \tag{62}\] Again, we compute (i) the passive, (ii) the do-interventional, and (iii) the active counterfactual probabilities for the counterfactual queries in Ex. 1.

1. passive case: note that we are dealing with a counterpossible, that is, \(P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(a^{\prime}=-)=0\) (with \(z^{\prime}_{A}=z_{A}=1\)), hence, by convention \(P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime}|a^{\prime}=-)=*\). In order to accommodate the antecedent of the counterfactual, we thus need to perform an intervention, that is, we need to consider the active case.

2. do-interventional case: using the do-intervention in Eq. (48), we compute \[P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime}|a^{\prime}=-)=\frac{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime},a^{\prime}=-)}{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(a^{\prime}=-)} \tag{63}\] \[=\operatorname{Tr}_{AB}[(\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes[+]_{A^{\mathrm{in}}})(([-]_{A^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{A^{\mathrm{in}}}\otimes\tau^{b^{\prime}|z^{\prime}_{B}}_{B})] \tag{64}\] \[=\operatorname{Tr}_{B}[[-]_{B^{\mathrm{in}}}\tau^{b^{\prime}|z^{\prime}_{B}}_{B}]\,. \tag{65}\]

3. active case: using the instrument in Eq.
(52), we compute \[P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime}|a^{\prime}=-)=\frac{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime},a^{\prime}=-)}{P^{a=+|\mathbf{z}}_{\mathbf{z}^{\prime}}(a^{\prime}=-)} \tag{66}\] \[=\frac{\operatorname{Tr}_{AB}[(\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes[+]_{A^{\mathrm{in}}})(([-]_{A^{\mathrm{out}}})^{T}\otimes[\overline{\phi}]_{A^{\mathrm{in}}}\otimes\tau^{b^{\prime}|z^{\prime}_{B}}_{B})]}{\operatorname{Tr}_{AB}[(\rho^{\mathrm{id}}_{B^{\mathrm{in}}|A^{\mathrm{out}}}\otimes[+]_{A^{\mathrm{in}}})(([-]_{A^{\mathrm{out}}})^{T}\otimes[\overline{\phi}]_{A^{\mathrm{in}}}\otimes\tau^{z^{\prime}_{B}}_{B})]} \tag{67}\] \[=\operatorname{Tr}_{B}[[-]_{B^{\mathrm{in}}}\tau^{b^{\prime}|z^{\prime}_{B}}_{B}]\,. \tag{68}\]

Comparing the two examples, we see that while the passive counterfactual in Ex. 2 is a counterpossible, that is, it has an impossible antecedent (and is thus assigned a conventional value \(*\)), the same passive counterfactual evaluated in Ex. 1 yields a counterfactual probability that is not a counterpossible. This is in stark contrast to the classical case, where - as a consequence of the intrinsic determinism of structural causal models - a passive interpretation of a counterfactual query would always result in a counterpossible. Note also that both examples have the same state (a maximally mixed state) prepared at the exogenous node, showing that different contexts for the state preparations of the same mixed state can result in different evaluations for a quantum counterfactual. Note also that in both examples we obtained the same counterfactual probabilities in the do-interventional and active cases. In general, passive, active and do-interventional counterfactuals (a special case of the latter) yield different counterfactual probabilities. As we will see, this has important consequences.

### Disambiguation of counterfactual queries: the principle of minimality

Note that classical counterfactuals evaluated with respect to a probabilistic structural causal model \(\langle M,P(\mathbf{u})\rangle\) correspond to do-interventional (quantum) counterfactual queries when evaluated with respect to the quantum extension \(M_{Q}\) (cf. Cor. 1). In fact, a classical counterfactual query in Def. 4 is always defined in terms of a do-intervention, since this is the only way to make the antecedent true. In this sense, we may say that classical counterfactual queries naturally embed into our formalism as do-interventional counterfactuals. Yet, the richer structure of quantum counterfactuals, as seen in the previous section, may sometimes allow for a different interpretation of a classical counterfactual query; in particular, the antecedent of a quantum counterfactual can sometimes be true without an intervention. This leaves a certain ambiguity if we want to interpret a classical counterfactual as a quantum counterfactual query according to Def. 12: for the latter, one must specify a counterfactual instrument; in particular, one must decide whether to interpret the classical counterfactual query passively or actively (do-interventionally). For example, again referring to the scenario represented in Fig.
9 and defined at the end of the previous subsection, consider the query: _Given that \(a=1\), what is the probability that \(b^{\prime}=1\), had it been that \(a^{\prime}=0\)?_ The antecedent of this counterfactual could be satisfied without any changes in the model or the instruments at the nodes, since there is a nonzero probability that the instrument \(\tau_{A}\) produces the outcome \(a^{\prime}=0\). It could, however, also be satisfied by e.g. a do-intervention of the form \(\tau_{A}^{[0]}=([0]_{A^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{A^{\mathrm{in}}}\). This ambiguity does not occur in a classical structural causal model (CSM), since in that case all the variables are determined by a complete specification of the exogenous variables. Consequently, the only way the antecedent of a counterfactual like the one above could be realized while keeping the background variables fixed, is via some modification of the model.18 Pearl justifies the do-intervention as "the minimal change (to a model) necessary for establishing the antecedent" [1]. Footnote 18: We remark that, contrary to Pearl, a counterfactual may also be interpreted as a _backtracking counterfactual_, where the background conditions are not necessarily kept fixed. A semantics for backtracking counterfactuals within a classical SCM has recently been proposed in Ref. [40]. To decide whether a (classical) counterfactual query should be analyzed as passive or active when interpreted with respect to a QSM, we propose a _principle of minimality_, motivated by the minimal changes from actuality required in Pearl's analysis. If the antecedent of a counterfactual can be established with _no_ change to a model - as in a passive reading of the counterfactual - this is by definition the minimal change. **Definition 14** (Principle of minimality).: _Whenever it is ambiguous whether a (classical) counterfactual query should be interpreted as a passive or active counterfactual in a QSM (as by Def. 13), it should be interpreted passively if it is not a counterpossible, that is, if its antecedent is not impossible (as by Eq. (34))._ Lewis' account of counterfactuals invokes a notion of _similarity_ among possible worlds [14]. For Lewis, one should order the closest possible worlds by some measure of similarity, based on which a counterfactual is declared true in a world \(w\) if the consequent of the counterfactual is true in all the closest worlds to \(w\) where the antecedent of the counterfactual is true. Arguably, a world in which the model and instruments are the same, but where a counterfactual outcome occurred, is closer to the actual world than a world where a different instrument was used. ### Causal dependence and counterfactual dependence in the Bell scenario A conceptually important consequence of our semantics for counterfactuals (especially of the disambiguation of passive from active counterfactual queries when going from the classical to the quantum) is that, unlike in the case of Pearl's framework, counterfactual dependence does not imply causal dependence. We establish this claim using the pertinent example of a Bell scenario, as shown in Fig. 9. _Example 3_ (Bell scenario).: Consider the causal scenario in Fig. 
9, with instruments \[\mathcal{I}_{A}=\{([0]_{A^{\mathrm{out}}})^{T}\otimes[0]_{A^{\mathrm{in}}},([1]_{A^{\mathrm{out}}})^{T}\otimes[1]_{A^{\mathrm{in}}}\}\;, \tag{69}\] \[\mathcal{I}_{B}=\{([0]_{B^{\mathrm{out}}})^{T}\otimes[0]_{B^{\mathrm{in}}},([1]_{B^{\mathrm{out}}})^{T}\otimes[1]_{B^{\mathrm{in}}}\}\;, \tag{70}\] \[\mathcal{I}_{C}=\left\{([\Phi_{*}]_{C^{\mathrm{out}}})^{T}\otimes\mathbb{I}_{C^{\mathrm{in}}}\right\}, \tag{71}\] where the output of \(C\) factorises as \(C^{\mathrm{out}}=C^{\mathrm{out}}_{A}\otimes C^{\mathrm{out}}_{B}\) and where \(|\Phi_{*}\rangle=\frac{1}{\sqrt{2}}(|0\rangle_{C^{\mathrm{out}}_{A}}|0\rangle_{C^{\mathrm{out}}_{B}}+|1\rangle_{C^{\mathrm{out}}_{A}}|1\rangle_{C^{\mathrm{out}}_{B}})\) is a Bell state. The unitary \(\rho^{U}_{AB|C}\) is given by \[\rho^{U}_{AB|C}=\rho^{\mathrm{id}}_{A|C^{\mathrm{out}}_{A}}\rho^{\mathrm{id}}_{B|C^{\mathrm{out}}_{B}}\;. \tag{72}\] Now, consider the counterfactual _"Given that \(a=b=0\), what is the probability that \(b=1\) had it been that \(a=1\)?"_. This yields a well-defined (classical) counterfactual as by Def. 4. If we want to interpret it as a quantum counterfactual \(P^{a=b=0|\mathbf{z}}_{\mathbf{z}^{\prime}}(b^{\prime}=1|a^{\prime}=1)\) as by Def. 12, we also need to specify a counterfactual instrument \(\mathbf{z}^{\prime}\). On the one hand, interpreting it as a do-interventional counterfactual with \(\tau^{\mathrm{do}([1])}_{A}\), we obtain \[P^{a=b=0|\mathbf{z}}_{\mathrm{do}(a^{\prime}=1)}(b^{\prime}=1|a^{\prime}=1)=P^{a=b=0|\mathbf{z}}_{\mathrm{do}(a^{\prime}=0)}(b^{\prime}=1|a^{\prime}=0)=\frac{1}{2}\;, \tag{73}\] which shows that there is no interventional counterfactual dependence between \(A\) and \(B\).

Figure 9: Causal graph of a Bell scenario. \(C\) is a common cause of \(A\) and \(B\).

On the other hand, interpreting it as a passive counterfactual query (with \(\mathbf{z}^{\prime}=\mathbf{z}\)) evaluates to \[P^{a=b=0|\mathbf{z}}_{\mathbf{z}}(b=1|a=1)=1. \tag{74}\] In other words, it would have been the case that \(b=1\) with certainty had it been the case that \(a=1\). In Pearl's classical semantics, counterfactual dependence of the type in Eq. (74) would imply that \(A\) is a cause of \(B\).19 Nevertheless, the quantum structural causal model we used to derive this result has by construction no causal dependence from \(A\) to \(B\). This shows that _(passive) counterfactual dependence does not imply causal dependence in quantum causal models_. Footnote 19: In Lewis’s account [41], such counterfactual dependence also implies causal dependence. The difference is that while for Pearl counterfactuals are analyzed in terms of causation, for Lewis causation is analyzed in terms of counterfactuals.

Note also that in the passive reading, the counterfactual antecedent corresponds to an event - in the technical sense of a CP map - that was included in the actual instrument. A counterfactual antecedent interpreted as a do-intervention is indeed a _different event_ altogether - distinct from any event in the actual instrument. This fact is obscured in the classical case since in Pearl's formalism we identify the incoming and outgoing systems, and it is implicitly assumed that we can in principle perform ideal non-disturbing measurements of the variables involved. Classically, the event \(X=x\) can ambiguously correspond to "a non-disturbing measurement of \(X\) has produced the outcome \(x\)" or "the variable \(X\) was set to the value \(X=x\)", with the distinction between those attributed to the structural relations in the model, i.e.
to a surgical excision of causal arrows that leaves the identity of the variables otherwise intact. In a quantum causal model, on the other hand, a do-intervention corresponds to a related but _different_ event in an otherwise intact model.

## 8 Generalisations and related work: quantum Bayes' theorem

Thm. 4 and Cor. 1 show that our formalism for counterfactuals in quantum causal models (see Sec. 5) is a valid generalization of Pearl's formalism in the classical case (see Sec. 2). In this section, we review the key assumptions of our formalism, discuss possible generalizations, and draw parallels with related work on quantum Bayesian inference. Recall that our notion of a 'quantum counterfactual' in Def. 12 is evaluated with respect to a quantum structural causal model (QSM) (see Def. 10). A QSM \(M_{Q}\) reproduces a given physical process operator \(\sigma_{\mathbf{A}}\) over observed nodes \(\mathbf{A}\), that is, \(\sigma_{\mathbf{A}}\) arises from coarse-graining of ancillary (environmental) degrees of freedom in \(M_{Q}\) (cf. Eq. (21)). As such, \(M_{Q}\) encodes additional information that is not present in \(\sigma_{\mathbf{A}}\): namely, (i) it assumes an underlying unitary process \(\rho^{U}_{\mathbf{A}S|\mathbf{A}\mathbf{\Lambda}}\), and (ii) it incorporates partial knowledge about the preparation of ancillary states at exogenous nodes in the form of preparation instruments \(\{\tau^{\lambda_{i}}_{\Lambda_{i}}\}_{\lambda_{i}}\) (cf. Eq. (19)), acting on an arbitrary input state. Together, this allowed us to reduce the abduction step in our formalism to classical Bayesian inference: in particular, note that for any choice of instruments \(\mathbf{z}\) at endogenous nodes, the quantum process operator reduces to a classical channel \(P_{\mathbf{z}}(\mathbf{a}|\boldsymbol{\lambda})\) (cf. Eq. (26)). We remark that this situation (of a unitary background process with ancillas prepared in a fixed basis) arises naturally in the context of quantum circuits, future quantum computers, and thus supposedly in the context of future quantum AI. Nevertheless, for other use cases it might be less clear how to model our background knowledge on a physical process \(\sigma_{\mathbf{A}}\) in terms of a QSM, thus prompting relaxations of the assumptions baked into Def. 10. First, one may want to drop our assumption of a unitary background process. This assumption closely resembles Pearl's classical formalism, which models any uncertainty about a stochastic physical process as a probabilistic mixture of deterministic processes. Yet, one might argue that assuming a unitary (deterministic) background process is too restrictive (or even fundamentally unwarranted) and that one should allow for arbitrary convex decompositions of a quantum stochastic process (CPTP map). To this end, note that knowledge about stable facts [36] that leads to a preferred convex decomposition of the process operator \(\sigma_{\mathbf{A}}=\sum_{\boldsymbol{\lambda}}P(\boldsymbol{\lambda})\sigma^{\boldsymbol{\lambda}}_{\mathbf{A}}\) (into valid process operators \(\sigma^{\boldsymbol{\lambda}}_{\mathbf{A}}\)) is all that is necessary to perform (classical) Bayesian inference (cf. Eq. (26) and Eq. (27)). A more radical generalization could arise by taking our information about exogenous variables to be inherently quantum.
That is, without information in the form of stable facts regarding the distribution over outcomes of preparation instruments in a QSM, our knowledge about exogenous variables merely takes the form of a generic quantum state \(\rho_{\mathbf{\Lambda}}=\rho_{\Lambda_{1}}\otimes\cdots\otimes\rho_{\Lambda_{n}}\).20 In this case, inference can no longer be described by (classical) Bayes' theorem but requires a quantum generalization. Much recent work has been devoted to finding a generalization of Bayes' theorem to the quantum case, which has given rise to various different proposals for a quantum Bayesian inverse - see Ref. [42, 43, 19] for a recent categorical (process-diagrammatic) definition and Ref. [44] for an attempt at an axiomatic derivation. Once a definition for the Bayesian inverse has been fixed - and provided it exists21 - we can perform a generalized abduction step in Sec. 5.1 and, consequently, obtain a generalised formalism for counterfactuals. We defer a more careful analysis of counterfactuals arising in this way and their comparison to our formalism for future study. Footnote 21: Ref. [19] characterises the existence of a Bayesian inverse in the categorical setting for finite-dimensional \(C^{*}\)-algebras.

## 9 Discussion

We defined a notion of counterfactual in quantum causal models and provided a semantics to evaluate counterfactual probabilities, generalizing the three-step procedure of evaluating probabilities of counterfactuals in classical causal models due to Pearl [1]. The third level in Pearl's ladder of causality (see Fig. 1) had thus far remained an open gap in the generalization of Pearl's formalism to quantum causal models; here, we fill this gap. To this end, we introduce the notion of a quantum structural causal model, which takes inspiration from Pearl's notion of a classical structural causal model, yet differs from the latter in many ways: a quantum structural model is fundamentally probabilistic; it does not assign truth values to all counterfactuals (and in this sense violates 'counterfactual definiteness'22); non-trivial events at the nodes are inherently associated with some form of intervention - any non-trivial instrument causes some disturbance. Despite these differences, we prove that every classical structural causal model admits an extension to a quantum structural causal model, which preserves the relevant independence conditions and yields the same probabilities for counterfactual queries arising in the classical case. Thus, quantum structural causal models and the evaluation of counterfactuals therein subsume Pearl's classical formalism. Footnote 22: We leave a detailed exploration of ‘counterfactual definiteness’ in quantum causal models for future work.

On the other hand, quantum structural causal models have a richer structure than their classical counterparts. We identify different types of counterfactual queries arising in quantum causal models, and explain how they are distinguished from counterfactual queries in classical causal models. Based on this distinction, we evaluate these different types of quantum counterfactual queries in the Bell scenario and show that counterfactual dependence does not generally imply causal dependence in this case. In this way, our analysis provides a new way of understanding how quantum causal models generalize Reichenbach's principle of common cause to the quantum case [22, 45, 23]: a quantum common cause allows for counterfactual dependence without causal dependence, unlike a classical common cause.
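Both readings of the Bell-scenario query can also be checked numerically. The sketch below is ours, not the authors' code; it hard-codes the instruments of Eqs. (69)-(71) and reproduces the values in Eqs. (73) and (74).

```python
import numpy as np

ket0, ket1 = np.eye(2, dtype=complex)                    # computational basis
phi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # Bell state |Phi_*>
rho = np.outer(phi, phi.conj())

P = [np.outer(k, k.conj()) for k in (ket0, ket1)]        # projectors for Eqs. (69)-(70)

# Joint outcome probabilities under the actual instruments.
p = {(a, b): np.trace(np.kron(P[a], P[b]) @ rho).real for a in (0, 1) for b in (0, 1)}

# Passive counterfactual (Eq. (74)): C's instrument has a single outcome, so abduction
# is trivial; the prediction step conditions on the antecedent a' = 1 within the same
# instruments, and the perfect correlation gives probability 1.
passive = p[1, 1] / (p[1, 0] + p[1, 1])

# Do-interventional counterfactual (Eq. (73)): the do-instrument discards A's half of
# the Bell state, so B sees only its maximally mixed marginal, independent of a'.
rho_B = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)  # partial trace over A
do = np.trace(P[1] @ rho_B).real

print(passive, do)  # 1.0  0.5
```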
Our work opens up several avenues for future study. Of practical importance are applications of counterfactuals in quantum technologies including but not limited to quantum artificial intelligence and quantum cryptography. For example, questions such as "Given that certain outcomes were observed at receiver nodes in a quantum network, what is the probability that different outcomes would have been observed, had there been an eavesdropper in the network?" can be relevant in security protocols, where one wants to ensure that an intended message has been received without a security breach. It is well known that quantum theory violates 'counterfactual definiteness' in the sense of the phenomenon often referred to as 'quantum contextuality' [12]. The latter has been identified as a key distinguishing feature between classical and quantum physics, as well as a resource for quantum computation [46, 47, 48, 49]. It would thus be interesting to study contextuality from the perspective of the counterfactual semantics spelled out here. Finally, our analysis hinges on the classicality ('stable facts' in Ref. [36]) of background (exogenous) variables in the model, as it allows us to apply (classical) Bayes' inference on our classical knowledge about exogenous variables. In turn, considering the possibility of our 'prior' knowledge about exogenous variables to be genuinely quantum motivates a generalization of Bayes' theorem to the quantum case (see Sec. 8). We expect that combining our ideas with recent progress along those lines will constitute a fruitful direction for future research.

Acknowledgements. The authors acknowledge financial support through grant number FQXi-RFP-1807 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation, and ARC Future Fellowship FT180100317.
2301.08782
Estimation of mitral valve hinge point coordinates -- deep neural net for echocardiogram segmentation
Cardiac image segmentation is a powerful tool in regard to diagnostics and treatment of cardiovascular diseases. Purely feature-based detection of anatomical structures like the mitral valve is a laborious task due to specifically required feature engineering and is especially challenging in echocardiograms, because of their inherently low contrast and blurry boundaries between some anatomical structures. With the publication of further annotated medical datasets and the increase in GPU processing power, deep learning-based methods in medical image segmentation have become more feasible in recent years. We propose a fully automatic detection method for mitral valve hinge points, which uses a U-Net based deep neural net to segment cardiac chambers in echocardiograms in a first step, and subsequently extracts the mitral valve hinge points from the resulting segmentations in a second step. Results measured with this automatic detection method were compared to reference coordinate values, which resulted in median absolute hinge point coordinate errors of 1.35 mm for the x- (15-85 percentile range: [0.3 mm; 3.15 mm]) and 0.75 mm for the y-coordinate (15-85 percentile range: [0.15 mm; 1.88 mm]).
Christian Schmidt, Heinrich Martin Overhoff
2023-01-20T19:46:16Z
http://arxiv.org/abs/2301.08782v1
Estimation of mitral valve hinge point coordinates - deep neural net for echocardiogram segmentation

###### Abstract

Cardiac image segmentation is a powerful tool in regard to diagnostics and treatment of cardiovascular diseases. Purely feature-based detection of anatomical structures like the mitral valve is a laborious task due to specifically required feature engineering and is especially challenging in echocardiograms, because of their inherently low contrast and blurry boundaries between some anatomical structures. With the publication of further annotated medical datasets and the increase in GPU processing power, deep learning-based methods in medical image segmentation have become more feasible in recent years. We propose a fully automatic detection method for mitral valve hinge points, which uses a U-Net based deep neural net to segment cardiac chambers in echocardiograms in a first step, and subsequently extracts the mitral valve hinge points from the resulting segmentations in a second step. Results measured with this automatic detection method were compared to reference coordinate values, which resulted in median absolute hinge point coordinate errors of 1.35 mm for the x- (15-85 percentile range: [0.3 mm; 3.15 mm]) and 0.75 mm for the y-coordinate (15-85 percentile range: [0.15 mm; 1.88 mm]).

Medical image segmentation, echocardiography, deep learning, U-Net, mitral valve

## 1 Introduction

According to the World Health Organization (WHO), 17.9 million people died from cardiovascular diseases (CVDs) in 2019, which is 32% of all global deaths. Advances in medical imaging have significantly improved diagnostics and treatment of CVDs over the past years, with cardiac image segmentation playing an important role. Cardiac image segmentation is the process of partitioning an image by assigning a label to each pixel of the image in such a way that pixels of a certain anatomical structure share the same label. Anatomical and functional parameters such as left ventricle (LV) volume, left atrium (LA) volume, ejection fraction and mitral valve (MV) dimensions can be determined using segmented images. Direct segmentation of the two mitral valve leaflets (MVLs) with purely feature-based algorithms often fails because of low contrast in echocardiograms or lack of visualization of both MVLs at the same time, which is common in clinical settings. Therefore, we propose to assess MV hinge point coordinates by using deep learning (DL) segmentations of the LV and LA. In 2019 Leclerc et al. [19] published the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset in conjunction with an image segmentation challenge [https://www.creatis.insa-lyon.fr/Challenge/camus/](https://www.creatis.insa-lyon.fr/Challenge/camus/). This is to our knowledge the first large-scale, publicly available transthoracic echocardiography (TTE) dataset which includes ground truth segmentations of the LV and LA. In this paper, we use the CAMUS dataset (Fig. 1, Fig. 2) to segment the LV and LA in apical four (a4c) and two-chamber views (a2c) in a first step, and in a second step extract the mitral valve diameter and hinge point coordinates from the resulting segmentation. To the best of our knowledge, this is the first attempt at DL-based mitral valve measurement in transthoracic four- and two-chamber view echocardiograms.
## 2 Related Works Conventional machine learning techniques, e.g., active shape and atlas-based models [14, 15], showed good performance in cardiac image segmentation, but rely on user-based, manual feature extraction. In recent years, with the increased availability of high-performance GPUs and more access to image training data, deep learning (DL) models, which automatically learn features from image data, have outperformed conventional machine learning techniques. These DL segmentation approaches mainly consist of encoder-decoder convolutional neural networks (CNNs), in particular fully convolutional networks [14] and the U-Net architecture [13], using ResNet [15], Inception [21] or VGG [20] as popular encoder backbones, as these have performed best in the field of medical image segmentation. TTE is the most commonly performed imaging examination in cardiology because it is non-invasive, low-cost and highly accessible, yet up to 80% of annual publications between 2016 and 2019 [18] on DL-based cardiac image segmentation worked with magnetic resonance imaging (MRI) data [15, 16], mainly because of larger dataset availability. Computed tomography (CT) [17, 18] and echocardiography [19, 20, 21], despite their clinical importance, only played a subordinate role due to the lack of annotated datasets. As for MV measurement in particular, in clinical practice MV dimensions are either obtained manually, by a user selecting points on frozen frames throughout the cardiac cycle [1, 16], or by semi-automatic segmentation. E.g., [16] proposes an MV morphometry method which requires user-initialized selection of a region of interest and anatomical landmarks, followed by feature-based contour segmentation. Further feature-based (semi-)automatic methods for MV assessment often require vendor-specific software for analysis. In addition, they have high computational run times and have only been assessed in single-center studies with small patient numbers and little variety in MV conditions [16]. Figure 1: **Exemplary labelled TTE of the CAMUS dataset in a4c (left) and a2c view (right), showing the ventricles (LV and RV), atria (LA and RA) and the mitral valve (MV).** Figure 2: **Ground truth annotation of the LV and LA in a4c image (left), overlay of the annotation on the original echocardiogram (right).** ## 3 Proposed Solution ### Dataset The CAMUS dataset includes 450 annotated patient sub-datasets consisting of TTE apical four- and two-chamber views. Each patient sub-dataset consists of one cardiac cycle per view, but ground truth segmentations are only provided for the image frames at end-diastole (ED) and end-systole (ES). Image sizes vary between 400 x 700 pixels and 700 x 1000 pixels, with a spatial grid resolution of 0.3 mm along the \(x\)- and 0.15 mm along the \(y\)-axis. The dataset includes images of various acquisition settings, e.g., resolution, image contrast and transducer angle. Furthermore, some images are not properly centered; therefore, certain anatomical structures (ventricles, atria) are potentially not fully visible on them. No further data selection or preprocessing has been performed. This results in a heterogeneous dataset, which is a realistic representation of data acquired in clinical practice. Manual delineation of the LV, LA, and epicardium was performed by a cardiologist using a defined segmentation protocol. In particular, the LV delineation contour was to be terminated at the points where the MVLs are hinging.
Epicardium annotation is not relevant to mitral valve measurement and is therefore not considered in this work. ### Training the segmentation model We introduce a two-step method for estimating MV hinge point coordinates. This method uses a deep learning algorithm for segmentation of the left ventricle and left atrium and subsequent feature-based image processing for the estimation of MV hinge point coordinates. _Step 1: deep learning segmentation algorithm_ Since [10] demonstrated that the U-Net architecture showed slightly better segmentation accuracy on the CAMUS dataset than more sophisticated encoder-decoder networks, a U-Net with the VGG16 backbone was used to train our model. Model training was implemented in Python version 3.7.7 using Tensorflow 2.0.0 in conjunction with the Keras API. The Adam optimizer [13], a learning rate of \(10^{-3}\) and the categorical cross-entropy loss function were used for training. No data augmentation was performed on the dataset. The 450 patient sub-datasets were divided into three groups, 350 for training, 50 for validation, and 50 for testing, a roughly 80%/10%/10% split. Model training and validation were performed on an NVIDIA GeForce RTX 2060 and ran for 50 epochs, after which the validation accuracy stagnated or dropped, and training was terminated to avoid overfitting (Fig. 3). As a result of this first step, image pixels are assigned to the left ventricle (LV), left atrium (LA), or background. ### Extraction of mitral valve hinge points _Step 2: feature-based hinge point extraction_ A purely feature-based algorithm for mitral valve detection, which uses e.g. thresholding, edge detection, or histogram methods, is likely to perform insufficiently in regions of low pixel grayscale gradients, and thus does not detect both MVLs reliably. In numerous clinical cases, anatomical structures are not clearly visible on echocardiograms, due to low image contrast and blurry boundaries between anatomical structures. Figure 4: Apical four-chamber view (a4c) at end diastole (ED). The intersecting line between image plane and anterior mitral valve leaflet (aMVL) is clearly visible, whereas the posterior leaflet (pMVL) is barely visible. Figure 3: Dice coefficient of the left ventricle and left atrium for the validation dataset; training was stopped after 50 epochs. In particular, MVLs are hardly distinguishable from the background in many cases (Fig. 4). Therefore, we use the DL-generated segmentations of a4c and a2c echocardiograms (see section 3.2) to estimate MV hinge point coordinates. Figure 5 gives an overview of the individual steps in our proposed method. According to the contouring protocol in [11], the LV contour was to be terminated in the MV plane, at the MV hinge points. Using the resulting contact line between the LV and LA segmentations (Fig. 5c), we define the anteriormost point of the contact line as the hinge point of the anterior mitral valve leaflet (aMVL) and the posteriormost point of the contact line as the hinge point of the posterior mitral valve leaflet (pMVL). This second step results in \(x\)- and \(y\)-coordinates for the aMVL and pMVL.
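To make the second step concrete, the following is a minimal sketch of the hinge point extraction from a predicted label map. This is illustrative only, not the authors' released code; the label values (1 for LV, 2 for LA), the downward-neighbor contact test, and the anterior-is-left convention are assumptions based on the description above.

```python
import numpy as np

DX, DY = 0.3, 0.15  # CAMUS pixel spacing in mm along x and y (section 3.1)

def extract_hinge_points(seg):
    """Estimate (aMVL, pMVL) hinge point pixel coordinates from a label map.

    The contact line is taken as all LV pixels whose neighbor one row below
    belongs to the LA, since the LV contour terminates in the MV plane per
    the CAMUS annotation protocol.
    """
    lv, la = seg == 1, seg == 2          # assumed label convention
    contact = lv[:-1, :] & la[1:, :]     # LV pixel directly above an LA pixel
    ys, xs = np.nonzero(contact)
    if xs.size == 0:
        raise ValueError("LV and LA segments do not touch")
    amvl = np.array([xs.min(), ys[np.argmin(xs)]])  # anteriormost point
    pmvl = np.array([xs.max(), ys[np.argmax(xs)]])  # posteriormost point
    return amvl, pmvl

def mv_diameter_mm(amvl, pmvl):
    """MV annulus diameter as the hinge point distance, scaled to mm."""
    dx, dy = (amvl - pmvl) * np.array([DX, DY])
    return float(np.hypot(dx, dy))
```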
### Evaluation Metrics The segmentation CNN, in combination with the subsequent MV hinge point extraction, is to be considered a measurement tool for the pixel coordinates of the MV hinge points. In this method, each measurement \(\hat{z}\) is determined by \[\hat{z}=z+\Delta\hat{z}=z+\left(\Delta\hat{z}^{\text{(bias)}}+e^{(z)}\right)\] where \(z\) is the true value, \(\Delta\hat{z}^{\text{(bias)}}\) the systematic error and \(e^{(z)}\) the random error (DIN 1319-1/ISO 11843-1). Typically, a normal distribution of errors \(\Delta\hat{z}\) is assumed and characterized by its mean \(\mu\) and standard deviation \(\sigma\). To assess the normality of our error distributions, we performed Shapiro-Wilk tests for the \(\Delta\hat{z}\) data series of each subgroup (\(\Delta\hat{x}_{\text{aMVL}}\), \(\Delta\hat{y}_{\text{aMVL}}\), \(\Delta\hat{x}_{\text{pMVL}}\), \(\Delta\hat{y}_{\text{pMVL}}\)). Figure 5: **Overview of the proposed method: The original echocardiogram (a) is first segmented, using the CNN. The resulting contact line (c) between the segments of left ventricle and left atrium (b) is then used to extract the mitral valve hinge points (d).** These tests resulted in \(p\)-values of \(p<0.05\) for all but one data series, thus the assumption of normal distribution is rejected. Therefore, we characterize the distributions by their 15-, 50- (median), and 85-percentiles instead, as equivalents to \(\mu-\sigma\), \(\mu\) and \(\mu+\sigma\). To account for systematic errors \(\Delta\hat{z}^{\text{(bias)}}\) and subsequently evaluate only the random errors \(e^{(z)}\), the median deviations \(\Delta\hat{z}^{\text{(bias)}}\) (\(\Delta\hat{x}^{\text{(bias)}}_{\text{aMVL}}\), \(\Delta\hat{y}^{\text{(bias)}}_{\text{aMVL}}\), \(\Delta\hat{x}^{\text{(bias)}}_{\text{pMVL}}\), \(\Delta\hat{y}^{\text{(bias)}}_{\text{pMVL}}\)) are subtracted from the measured values \(\hat{z}\) (calibration). We then evaluate the 15-85 percentile range of the calibrated values (Fig. 6). ## 4 Results ### Segmentation accuracy The combined Dice Coefficient of all LVs (at ED and ES) is \(D_{\text{LV}}\) = 0.923, with segmentations at ED performing slightly better than at ES (\(D_{\text{LV, ED}}\) = 0.931, \(D_{\text{LV, ES}}\) = 0.915). Unlike the procedure described in [Lee19], we did not perform any post-processing (e.g. connected component analysis) on the segmentation result, yet are still in line with their best U-Net segmentation accuracy (\(D_{\text{LV, ED}}=0.939\), \(D_{\text{LV, ES}}=0.916\)). \begin{table} \begin{tabular}{|l|l|l|} \hline & \(D_{\text{LV, ED}}\) & \(D_{\text{LV, ES}}\) \\ \hline proposed & 0.931 & 0.915 \\ \hline [Lee19] & 0.939 & 0.916 \\ \hline \end{tabular} \end{table} Table 1: Dice Coefficients of LV U-Net segmentation. Figure 8: Boxplot diagrams of hinge point coordinate errors \(\Delta\hat{x}_{\text{aMVL}}\), \(\Delta\hat{y}_{\text{aMVL}}\), \(\Delta\hat{x}_{\text{pMVL}}\), \(\Delta\hat{y}_{\text{pMVL}}\) [px] of a4c, a2c images at ES and ED before calibration.
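The error characterization of section 3.4 can be sketched in a few lines of Python. This is a sketch with illustrative names, not the authors' code; `pred` and `ref` stand for predicted and ground truth coordinate series in mm.

```python
import numpy as np
from scipy import stats

def characterize_errors(pred, ref):
    """Median/percentile error statistics with median-bias calibration."""
    err = np.asarray(pred) - np.asarray(ref)
    # Shapiro-Wilk test: p < 0.05 rejects normality, motivating the use of
    # 15/50/85 percentiles instead of mean and standard deviation.
    _, p_normal = stats.shapiro(err)
    bias = np.median(err)        # systematic error Delta z^(bias)
    e_random = err - bias        # calibrated random errors e^(z)
    p15, p85 = np.percentile(e_random, [15, 85])
    return {"p_normal": p_normal, "bias": bias,
            "range_15_85": (p15, p85),
            "median_abs_error": float(np.median(np.abs(e_random)))}
```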
Figure 7: Sorted ground truth and predicted MV diameters (mm) before (left) and after (right) calibration of the systematic error \(\Delta\hat{z}^{\text{(bias)}}\). The predicted MV diameters were systematically underestimated by 13.8%. ### Mitral valve annulus diameter Our experimental measurements of the MV annulus diameters (Table 2) lie well within the range of empiric MV measurement data. E.g., in [Dwi14], 5-95 percentile ranges for the mitral annulus diameter of 22-38 mm are stated for women and 25-41 mm for men. While median diameter estimations for a2c and a4c images (Table 2) at ED conformed well with ground truth values, a significant systematic estimation bias was observed in a4c images at ES. Here, the median diameter was underestimated by 13.8% (Fig. 7). This systematic error can also be seen in the individual hinge point coordinates in section 4.3. ### MV hinge point coordinates Evaluation of the individual hinge point coordinate errors (Fig. 8) shows further systematic estimation errors \(\Delta\hat{z}^{\text{(bias)}}\). Since both the anterior and posterior hinge points are estimated too far inwardly (median) in the a4c view at ES, the corresponding underestimation of the diameters (see section 4.2) is explained. Figure 9 displays the individual systematic errors \(\Delta\hat{z}^{\text{(bias)}}\) for the aMVL and pMVL hinge points of each view type. The coordinate estimation is, on average, biased towards the bottom (0.5 mm) and the right (0.3 mm). The estimation accuracy in terms of absolute coordinate error distance in mm was much lower for the \(x\)- compared to the \(y\)-coordinate, as can be seen in Table 3. This is almost fully explained by the spatial resolution of the images, which is 0.3 mm along the \(x\)- and 0.15 mm along the \(y\)-axis, as described in section 3.1. This results in absolute median coordinate errors of 1.35 mm for all \(x\)-coordinates and 0.75 mm for all \(y\)-coordinates. When comparing the medians of absolute error distances \(\big{(}\big{|}e^{(z)}\big{|}\big{)}\) of the different views (a4c-ED, a4c-ES, a2c-ED, a2c-ES), estimation accuracy was approximately equal in the four subgroups. ### Impact of off-center images Looking at the correlation plot (Fig. 10) between predicted and ground truth \(x_{\text{aMVL}}\), \(x_{\text{pMVL}}\) coordinates of the MV hinge points in a4c views, a subdivision of data points into two groups can be observed. We suspect this is likely due to inaccurate centering of the displayed portion of the a4c view. Since most of the misaligned, atypical a4c images are heavily centered on the LV (Fig. 11), the LA is not properly displayed, which leads to lower segmentation accuracy and thus higher estimation errors of the MV hinge point coordinates. No similar phenomenon of data subdivision was observed in hinge point coordinates in the a2c view. \begin{table} \begin{tabular}{|l|l|l|} \hline & predicted & ground truth \\ \hline a4c-ED & 27.9 mm & 28.8 mm \\ \hline a4c-ES & 24.4 mm & 28.3 mm \\ \hline a2c-ED & 31.3 mm & 31.7 mm \\ \hline a2c-ES & 26.3 mm & 26.1 mm \\ \hline \end{tabular} \end{table} Table 2: Predicted and ground truth median MV annulus diameter values for a2c and a4c views at ED and ES.
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\(e^{x}_{\text{aMVL}}\)} & \multicolumn{2}{c|}{\(e^{y}_{\text{aMVL}}\)} & \multicolumn{2}{c|}{\(e^{x}_{\text{pMVL}}\)} & \multicolumn{2}{c|}{\(e^{y}_{\text{pMVL}}\)} \\ \cline{2-9} & signed & median(abs) & signed & median(abs) & signed & median(abs) & signed & median(abs) \\ \hline a4c-ED & \([-1.2;1.5]\) & 1.35 & \([-0.9;0.75]\) & 1.2 & \([-2.2;1.35]\) & 0.9 & \([-1.65;1.8]\) & 0.75 \\ a4c-ES & \([-2.3;1.95]\) & 1.65 & \([-0.75;0.9]\) & 0.53 & \([-2.55;2.3]\) & 1.8 & \([-1.8;0.83]\) & 0.53 \\ a2c-ED & \([-1.5;1.2]\) & 1.35 & \([-1.8;1.35]\) & 0.83 & \([-2.1;2.1]\) & 1.2 & \([-1.88;0.8]\) & 1.05 \\ a2c-ES & \([-1.8;2.8]\) & 1.65 & \([-1.35;0.9]\) & 0.45 & \([-2.6;2.1]\) & 1.2 & \([-1.2;0.75]\) & 0.75 \\ \hline \end{tabular} \end{table} Table 3: Results of the hinge point estimations [mm]. In addition to the signed 15-85 percentile ranges of \(e^{(z)}\) (\(e^{x}_{\text{aMVL}}\), \(e^{y}_{\text{aMVL}}\), \(e^{x}_{\text{pMVL}}\), \(e^{y}_{\text{pMVL}}\)), the median of absolute error distances \(\mathrm{median}\big{(}\big{|}e^{(z)}\big{|}\big{)}\) is stated. Figure 9: Systematic errors \(\Delta\hat{z}^{\text{(bias)}}\) [px] of each individual subgroup (a4c-ED, a4c-ES, a2c-ED, a2c-ES). On average (marked by blue x), hinge points are estimated about 0.3 mm too far posterior (in the image: right) and about 0.5 mm too far cranial (in the image: down). ## 5 Conclusion We demonstrated a two-step method for estimating MV hinge point coordinates using deep learning segmentations of the left ventricle and left atrium and subsequent feature-based image processing. With 15-85 percentile ranges of the random coordinate estimation errors \(e^{(z)}\) between [-0.9 mm; 0.75 mm] and [-2.55 mm; 2.3 mm] and absolute median coordinate errors of 1.35 mm and 0.75 mm respectively, the resulting estimations are satisfactory, but further improvements can be made. If the LV and LA can be adequately segmented by the neural net, the resulting segmentation mask can be used to reliably determine the MV hinge point coordinates. We used the CAMUS dataset in this work, which is quite heterogeneous and, as such, close to clinical practice, as described above. This is beneficial for the generalizability of the network. On the other hand, the heterogeneity (e.g., low-quality images, edge cases described in section 4.4) is detrimental to estimation accuracy. Depending on the use case, adjustments to the training dataset can be made. If generalizability is the highest priority, further low-quality and off-centered images should be added to adequately represent them in the training data. Otherwise, if estimation accuracy is the priority, low-quality images can be removed from the dataset, with instructions to the physician to record more appropriate images.
2310.10854
Assessing equation of state-independent relations for neutron stars with nonparametric models
Relations between neutron star properties that do not depend on the nuclear equation of state offer insights on neutron star physics and have practical applications in data analysis. Such relations are obtained by fitting to a range of phenomenological or nuclear physics equation of state models, each of which may have varying degrees of accuracy. In this study we revisit commonly-used relations and re-assess them with a very flexible set of phenomenological nonparametric equation of state models that are based on Gaussian Processes. Our models correspond to two sets: equations of state which mimic hadronic models, and equations of state with rapidly changing behavior that resemble phase transitions. We quantify the accuracy of relations under both sets and discuss their applicability with respect to expected upcoming statistical uncertainties of astrophysical observations. We further propose a goodness-of-fit metric which provides an estimate for the systematic error introduced by using the relation to model a certain equation-of-state set. Overall, the nonparametric distribution is more poorly fit with existing relations, with the I-Love-Q relations retaining the highest degree of universality. Fits degrade for relations involving the tidal deformability, such as the Binary-Love and compactness-Love relations, and when introducing phase transition phenomenology. For most relations, systematic errors are comparable to current statistical uncertainties under the nonparametric equation of state distributions.
Isaac Legred, Bubakar O. Sy-Garcia, Katerina Chatziioannou, Reed Essick
2023-10-16T22:06:44Z
http://arxiv.org/abs/2310.10854v2
# Assessing equation of state-independent relations for neutron stars with nonparametric models ###### Abstract Relations between neutron star properties that do not depend on the nuclear equation of state offer insights on neutron star physics and have practical applications in data analysis. Such relations are obtained by fitting to a range of phenomenological or nuclear physics equation of state models, each of which may have varying degrees of accuracy. In this study we revisit commonly-used relations and re-assess them with a very flexible set of phenomenological nonparametric equation of state models that are based on Gaussian Processes. Our models correspond to two sets: equations of state which mimic hadronic models, and equations of state with rapidly changing behavior that resemble phase transitions. We quantify the accuracy of relations under both sets and discuss their applicability with respect to expected upcoming statistical uncertainties of astrophysical observations. We further propose a goodness-of-fit metric which provides an estimate for the systematic error introduced by using the relation to model a certain equation-of-state set. Overall, the nonparametric distribution is more poorly fit with existing relations, with the I-Love-Q relations retaining the highest degree of universality. Fits degrade for relations involving the tidal deformability, such as the Binary-Love and compactness-Love relations, and when introducing phase transition phenomenology. For most relations, systematic errors are comparable to current statistical uncertainties under the nonparametric equation of state distributions. ## I Introduction While most neutron star (NS) properties depend sensitively on the unknown equation of state (EoS) of dense nuclear matter, some properties are interrelated in an approximate EoS-independent way [1]. The impact of EoS-independent relations ranges from enhancing our understanding of NS physics [2; 3; 4; 5; 6] to practical applications in analyses of data. For example, relations between the NS multipole moments [7; 8; 9; 4] have led to a generalization of the no-hair theorem for black holes to the three-hair relations for Newtonian NSs [8], while the so-called "I-Love-Q" relations [10; 11] have been attributed to the self-similarity of isodensity contours [3]. On the data analysis side, EoS-independent relations reduce the number of degrees of freedom [12; 13; 14; 15; 16] and enable consistency tests [17; 18; 19; 20; 11]. EoS-independent relations may include static or dynamic and macroscopic or microscopic quantities. One of the earliest such proposed relations is the one between the (complex) NS modes and their mass and radius, which can be used to translate gravitational wave (GW) observations from isolated NSs to constraints on the radius [21; 22; 23; 24; 25]. Additionally, relations including the NS tidal parameters can simplify analysis of GW data. In general, the signal emitted during the coalescence of two NSs depends on a list of tidal deformability parameters and the rotational quadrupole moment of each star. Relations between the different tidal parameters and the quadrupole moment [10; 11; 12; 26] reduce the number of free parameters to one per star, typically the so-called dimensionless tidal deformability \(\Lambda_{i}\), \(i\in\{1,2\}\). A relation between \(\Lambda_{1}\) and \(\Lambda_{2}\) (and the binary mass ratio) further reduces the number of free parameters to just one [27; 28; 13; 14; 29].
EoS-independent relations are typically constructed empirically by fitting a large number of EoS models, obtained either through phenomenological or theoretical nuclear models. Their applicability is therefore limited to the nuclear physics represented in the set of EoSs, while deviations from the relations may be a sign of new (relevant) physics. For example, an observed deviation from the relation between the frequency content of the post-merger GW signal from a NS coalescence and the tidal properties of the pre-merger signal that holds for hadronic matter [30; 31; 32; 33; 34; 5] can signal the presence of quark matter in the merger remnant [19; 35; 20; 36]. The breakdown of universal behavior in a catalog of observations can further be used to identify outliers that can be attributed to quark matter [37] or NS-black hole binaries [38]. Beyond relations breaking down outside their regime of validity, EoS-independent relations display different degrees of independence even within it, which furthermore varies across the NS parameter space. The set of EoSs the relation is fitted to sensitively impacts the degree of EoS-independence. A potential choice of such a set is EoS candidates from nuclear theory, and corresponds to evaluating the degree of independence present in existing nuclear models [10; 27; 39]. Nonetheless, the extent to which current nuclear models cover the entire range of possible behaviors of matter at high densities is unclear. More extended sets of EoSs can be obtained by considering phenomenological models, designed to mimic nuclear theory while maintaining some degree of flexibility at high densities. Examples of such phenomenological models include piecewise polytropes [40; 41] and spectral representations [42; 43; 44]. This approach leads to large sets of EoSs and statistical distributions on the EoS which can further be conditioned on astrophysical observations. Such studies directly quantify the impact of astrophysical constraints on the degree of EoS-independence compared to fully agnostic nuclear behavior. For example, Carson _et al._[45] considered spectral EoSs that have been conditioned on GW170817 [46; 47] and found that the degree of EoS-independence can be improved by more than 50% compared to an agnostic EoS set. Similar improvements have been reported in [48; 49; 50]. Though more generic than a set of selected nuclear models, parametric EoS representations are still limited in flexibility by the functional form of the EoS, which is usually not determined from first principles. This can lead to strong correlations between the EoS at different densities that are not an outcome of nuclear insight, but of the arbitrary functional form of the representation [51]. These correlations effectively cause many EoSs in the fitting set to share similar macroscopic and microscopic features, mimicking or strengthening true EoS-independence [51]. Figure 1 shows an example of such emerging EoS-independence in the radius \(R_{1.4}\) and dimensionless tidal deformability \(\Lambda_{1.4}\) of a \(1.4M_{\odot}\) NS, and the pressure at twice saturation density\({}^{1}\), \(p_{2.0}\). The \(R_{1.4}\)-\(\Lambda_{1.4}\) relation is an outcome of the so-called C-Love relation [1; 52] (discussed more later), while a correlation with \(p_{2.0}\) has been observed in several theoretical models [2]. Under the spectral EoS model, for both the heavy-pulsar-informed (dashed orange) and the astrophysically-informed (solid orange) EoS sets, there are correlations between these quantities.
Using the spectral parameterization, perfect knowledge of \(\Lambda_{1.4}\) would give an \(R_{1.4}\) uncertainty of \(\sim 1\,\)km, consistent with the error in the C-Love relation computed in [45]. Footnote 1: We define the saturation density as \(2.8\times 10^{14}\,\)g/cm\({}^{3}\). Figure 1 also shows the same relations obtained with a more flexible set of nonparametric EoSs based on Gaussian Processes [53; 54] that is only minimally informed by nuclear physics. The nonparametric EoSs are drawn from a collection of Gaussian Processes and explore a wide range of intra-density correlation lengths and strengths. As shown in Legred _et al._[51], this EoS set is extremely agnostic and intra-density correlations are only imposed by physical considerations such as causality and thermodynamic stability. Due to its flexibility, the set also inherently includes EoSs with phase-transition-like behavior, including nonmonotonic behavior in the speed of sound and multiple stable branches [55]. As expected, under the nonparametric EoSs, perfect knowledge of \(\Lambda_{1.4}\) yields an increased uncertainty in \(R_{1.4}\) of \(\sim 2\,\)km, larger than the nominal error of the \(C\)-Love relation. In this work, motivated by Fig. 1, we revisit common EoS-independent relations and assess them under nonparametric EoSs. Following Ref. [45], we evaluate EoS-independent relations separately against hadronic EoS sets as well as mixed hadronic and hybrid EoSs. Because of the difficulty in fitting the relation over an unstable branch of the \(M\)-\(R\) relation, we only study EoSs with a single stable branch, thus restricting to weak phase transitions. We also consider EoSs that are only required to be consistent with massive pulsar mass measurements, contrasted with a set required to be consistent with additional GW and X-ray measurements [56]. With a focus on the applicability of EoS-independent relations, we further revisit the issue of EoS-independence across the parameter space. In general, relations have wider applications in regions of the parameter space where data are most informative. A higher degree of EoS-independence in these regions will therefore expand their applicability. Figure 1: Marginalized pulsar-informed (dashed) and fully astrophysically-informed (solid) posterior distributions for \(R_{1.4}\), \(\Lambda_{1.4}\), and \(p_{2.0}\) when using nonparametric (blue) and spectral (orange) EoSs. Beyond pulsar masses, astrophysical distributions are additionally conditioned on mass-radius and mass-tidal deformability measurements; see Sec. II.3. The spectral EoS result shows less variability in \(R_{1.4}\) at a fixed value of \(\Lambda_{1.4}\) than the nonparametric one. This suggests that the degree of EoS-independence in \(R_{1.4}\)–\(\Lambda_{1.4}\) is linked to the flexibility of the EoS model. Similar conclusions hold for \(p_{2.0}\). For example, the relations that link the dimensionless tidal deformabilities of two NSs in a binary to each other are most useful for NSs with masses \(\lesssim 1.7M_{\odot}\), as GW observations are largely uninformative about the tidal properties of more massive NSs [57; 58]. In Sec. II, we propose a statistic to measure the goodness-of-fit of an EoS-independent relation, by comparing to a _tolerance factor_ which is chosen based on the application. Fitting via optimization of this metric allows more control over the precision of the EoS-independent relation as a function of NS mass.
With the extended EoS set and goodness-of-fit metric in hand, we revisit the following relations in Sec. III: * _I-Love-Q_[10; 11], Sec. III.1: a relation between the (normalized) moment of inertia \(I\), the tidal deformability \(\Lambda\), and the rotational quadrupole moment \(Q\) of a NS. The I-Love-Q relations remain highly universal, likely useful even with sensitivities more than ten times those of current GW detectors. * _C-Love_[1; 52], Sec. III.2: a relation between the compactness \(C=m/R\) and the tidal deformability \(\Lambda\) of a NS. Its main applicability is in translating GW tidal constraints to radii, given the NS mass \(m\). The C-Love relation is relatively non-universal; for nonparametric EoS distributions, it leads to systematic errors of \(\sim 30\%\) compared to statistical uncertainties at current sensitivity. This holds true for EoSs both with and without strong phase transitions. * _Binary-Love_[27], Sec. III.3: a relation between the dimensionless tidal deformabilities of two NSs in a binary, \(\Lambda_{1}\) and \(\Lambda_{2}\), given the mass ratio \(q\). Its main applicability is in reducing the number of parameters in GW analyses of NS binaries, though its EoS-independence breaks down for EoSs with phase transitions [59; 45]. The Binary-Love relation is similarly non-universal under the nonparametric EoS distribution, with systematic errors \(\sim 50\%\) of current statistical uncertainties. The Binary-Love relation's universality is further degraded for EoSs with phase transitions. * _\(R_{1.4}\)-Love_[14], Sec. III.4: a relation between the NS radius and the chirp mass and chirp tidal deformability of a NS binary, essentially combining the C-Love and Binary-Love relations above. The \(R_{1.4}\)-Love relation would likely introduce bias before the advent of next-generation detectors, with systematic errors becoming comparable to statistical uncertainties for a GW170817-like event that is \(\mathcal{O}(3-5)\) times louder. * \(\alpha_{c}\)-\(C\)[6], Sec. III.5: a relation between the EoS stiffness measure \(\alpha_{c}\equiv p_{c}/\epsilon_{c}\), where \(p_{c}\) and \(\epsilon_{c}\) are the central pressure and energy density respectively, and the compactness. The \(\alpha_{c}\)-\(C\) relation is a very poor fit to the nonparametric mixed distribution, with systematic errors greater than or equal to current statistical uncertainties. The relation is somewhat better fit by the parametric and hadronic nonparametric distributions. ## II Goodness-of-fit and quantifying EoS-independence In this section we formalize the discussion of EoS-independent relations by quantifying EoS-independence through a goodness-of-fit metric in Sec. II.1, introducing a tolerance factor for the fit in Sec. II.2, and describing the EoS sets we use in Sec. II.3. In general, an individual NS is characterized by an EoS \(\epsilon\) and the NS central density \(\rho_{c}\). Given two NS properties \(F(\epsilon,\rho_{c})\) and \(G(\epsilon,\rho_{c})\), which are each one-to-one\({}^{2}\) with \(\rho_{c}\), we define their relation \(G(\epsilon,F(\epsilon,\rho_{c}))\). Remarkably, for a number of property pairs the induced function \(G(\epsilon,F(\epsilon,\rho_{c}))\) is nearly independent of \(\epsilon\). These are the so-called universal, or EoS-independent, relations. Footnote 2: If \(F\) is not one-to-one with \(\rho_{c}\) (for example the mass \(m(\epsilon,\rho_{c})\) for EoSs with multiple stable branches and twin stars), then this construction works on each monotonic branch.
### Defining a goodness-of-fit metric Following [11], we fit an analytic phenomenological approximant to the EoS-independent relation \[\tilde{G}(F;\mathbf{\theta})\approx G(\bullet,F(\bullet,\rho_{c}))\,, \tag{1}\] where \(\bullet\) in place of the EoS \(\epsilon\) indicates this should hold regardless of the EoS and \(\mathbf{\theta}\) are \(N_{p}\) fitting parameters. Given a functional form for \(\tilde{G}(F;\mathbf{\theta})\) (typically in terms of simple functions such as polynomials and logarithms) and a particular EoS \(\epsilon\), we select \(\mathbf{\theta}\) such that a goodness-of-fit metric is minimized. A least-squares metric is\({}^{3}\) Footnote 3: Though this metric is not strictly a \(\chi^{2}\) statistic, as there is no statistical interpretation of the scatter which induces the \(\chi^{2}\), we use familiar notation since many conventional intuitions hold. For instance, \(\chi^{2}/N_{\rm dof}=1\) is a threshold for a good fit, and any value significantly smaller than 1 would be regarded as overfitting [60]. In our case, we expect the EoS-independent relations to overfit the “data”, \(\chi^{2}/N_{\rm dof}\ll 1\). A large value would be considered a poor fit. \[\chi^{2}_{\epsilon}(\mathbf{\theta})\equiv\sum_{i}^{N}\frac{\left[\tilde{G}(F_{i};\mathbf{\theta})-G(\epsilon,F_{i}(\epsilon,\rho_{c,i}))\right]^{2}}{\sigma_{i}^{2}}\,, \tag{2}\] where \(i\) iterates over individual stellar solutions (i.e., central densities), \(\sigma_{i}\) is a tolerance factor for the goodness-of-fit of data point \(i\), and \(N_{p}\) is the number of parameters in the fit. In what follows we use \(N=200\), which ensures smooth relations and that \(\chi^{2}/(N-N_{p})\), the \(\chi^{2}\) per number of degrees of freedom, is independent of \(N\). Unless otherwise stated, we fit each relation on a grid of NS central densities. We build a linear grid for each EoS in the central rest-mass density, \(\rho_{c}\), from \(1.0M_{\odot}\) to \(M_{\rm max}\), the maximum TOV mass. We use only EoSs with a single stable branch in the \(M-R\) relation. Both the choice of grid and the truncation are inputs, and represent a _de facto_ choice of relative significance weighting between mass scales, which may or may not be realistic depending on the true distribution of NS masses (equivalently, given an EoS, the distribution of central densities). For most EoSs, the spacing of central density favors higher masses; given the uncertainty in populations of NSs, we do not attempt to modify this distribution substantially. Implications of this choice are discussed further in Sec. IV. The tolerance factor \(\sigma_{i}\) can be freely chosen and, as its name suggests, quantifies the degree of deviation from EoS-independence we tolerate. Different choices for \(\sigma_{i}\) will result in different best-fit \(\mathbf{\theta}\) parameters and goodness-of-fit estimates. We discuss the tolerance factor extensively in the next section. Beyond a single EoS \(\epsilon\), we consider a (normalized) distribution on EoSs \(P(\epsilon)\), potentially conditioned on observations. The distribution-dependent goodness-of-fit is then defined as the distribution average of \(\chi^{2}_{\epsilon}\) over \(P(\epsilon)\), \[\chi^{2}(\mathbf{\theta})\equiv\int\chi^{2}_{\epsilon}(\mathbf{\theta})P(\epsilon)d\epsilon=\sum_{\epsilon}P_{\epsilon}\chi^{2}_{\epsilon}(\mathbf{\theta})\,, \tag{3}\] where \(P_{\epsilon}\) is the weight of each EoS in the distribution \(P(\epsilon)\).
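Schematically, Eqs. (2) and (3) amount to a weighted least-squares objective. A minimal sketch (all names illustrative, not from the authors' code) could read:

```python
import numpy as np

def chi2_single_eos(theta, G_fit, F_vals, G_vals, sigma):
    """Eq. (2): squared fit residuals relative to the tolerance factor."""
    return np.sum(((G_fit(F_vals, theta) - G_vals) / sigma) ** 2)

def chi2_distribution(theta, eos_draws, G_fit):
    """Eq. (3): weighted Monte Carlo average of chi^2 over EoS draws.

    eos_draws: iterable of (weight, F_vals, G_vals, sigma) tuples, with
    F_vals and G_vals tabulated on a grid of N = 200 central densities.
    """
    weights = [draw[0] for draw in eos_draws]
    chi2s = [chi2_single_eos(theta, G_fit, F, G, s)
             for _, F, G, s in eos_draws]
    return np.average(chi2s, weights=weights)
```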
EoSs are sampled for the Monte Carlo sum by directly sampling an EoS prior set for each distribution; we use the same prior distributions as [51]. In Eq. (3) the fitting parameters \(\mathbf{\theta}\) are shared among and fitted with all \(\chi^{2}_{\epsilon}(\mathbf{\theta})\); this is equivalent to seeking a set of parameters which are EoS-independent over \(P(\epsilon)\). In practice, we sample EoSs uniformly from the approximate support of \(P_{\epsilon}\), i.e. \(\{\epsilon|P_{\epsilon}>P_{th}\}\) for some threshold \(P_{th}\), and weight each EoS draw by \(P_{\epsilon}\). This allows us to better resolve the “tails” of the EoS distribution where \(\chi^{2}_{\epsilon}\) may be large. We sample 1000 draws from the given EoS set in order to approximate the integral, as we found reasonable convergence of the total \(\chi^{2}\) was achieved by this point for all EoS distributions (see Sec. II.3). ### Role of the tolerance factor Setting \(\sigma_{i}=1\) would be sufficient to uniquely specify a fitting problem for \(\mathbf{\theta}\) if the goal is simply to obtain a fit. However, in this case, no information about goodness-of-fit is contained in Eq. (3), because rescaling \(\sigma_{i}\to\alpha\sigma_{i}\) changes \(\chi^{2}\to\chi^{2}/\alpha^{2}\); any level of goodness-of-fit could be achieved by rescaling. In fact, no specific fit corresponds in any sense to the “best fit” possible, as a different (non-constant) \(\sigma_{i}\) would produce a different fit. This is analogous to a nonlinear change of variables producing a different fit. We instead select \(\sigma_{i}\) by considering the tolerance we have for error in the EoS-independent relation. This results in a \(\chi^{2}(\mathbf{\theta})\) that is simultaneously used during the fitting procedure and whose (dimensionless) numerical value can be interpreted as a goodness-of-fit. To clarify, we dig further into a common application of EoS-independent relations in inference, namely the computation of certain NS properties from others without knowledge of the EoS. The Binary-Love relations [1; 27] facilitate the computation of the tidal deformability of one NS, \(\Lambda_{1}\), from that of another, \(\Lambda_{2}\), given their mass ratio \(q\) [13]. The systematic error in the estimation of \(\Lambda_{1}\) due to the relation's error is \(\delta\Lambda_{sys}\). Whether this systematic error is tolerable in a GW analysis depends on the statistical measurement uncertainty \(\delta\Lambda_{stat}\). If \(\delta\Lambda_{sys}\gtrsim\delta\Lambda_{stat}\), then the application of the relation introduces an uncertainty comparable to the statistical uncertainty, which is undesirable. If, however, \(\delta\Lambda_{sys}\ll\delta\Lambda_{stat}\), then the relation may be useful as the statistical uncertainty dominates. This consideration motivates choosing the tolerance factor \(\sigma_{i}\) to be the approximate measurability of the quantity of interest. In doing so, the goodness-of-fit \(\chi^{2}(\mathbf{\theta})\) is a direct check of the relation between \(\delta\Lambda_{sys}\) and \(\delta\Lambda_{stat}\). Unless otherwise stated, throughout this work we use a fiducial estimate of \(\delta\Lambda_{stat}=210\), a constant motivated by the tidal measurement of GW170817, rescaling the symmetric 90% region to 1-\(\sigma\) [28]. Improvements in detector sensitivity mean that a GW170817-like event observed today would have a lower statistical uncertainty; per Eq.
(2), halving the statistical uncertainty in \(\Lambda\) would increase the \(\chi^{2}\) (i.e., decrease the goodness-of-fit) by a factor of 4. In certain cases the measurability of NS tidal deformability is a very poor estimate for the measurability of other NS properties. For example, for higher-mass NSs, the compactness will likely be better measured by non-GW techniques, such as X-ray pulse-profile modeling. In such cases, we approximate statistical uncertainty by assuming that the compactness, \(C(M)\), can be measured to within \(\delta C=0.02\), a constant representing the uncertainty from X-ray observations [61; 62; 63; 64; 65]. See Secs. III.3 and III.5 for more details of how we simultaneously incorporate separate estimates of NS measurability. Generically, the \(\chi^{2}\) value represents how poorly fit the relation is to the EoS distribution. Per Eq. (2), \(\chi^{2}_{\epsilon}\) represents the square error in the quantity predicted by the relation relative to the tolerance factor. Given a value for \(\chi^{2}\), the typical error in the underlying variable is \[\Delta G\sim\sigma_{G}(F)\sqrt{\chi^{2}/N_{\rm dof}}\,. \tag{4}\] Here, \(\sigma_{G}(F)\) represents the tolerance factor on the predicted quantity, evaluated at \(F\), used in evaluating the fit. This is to be taken as an order-of-magnitude estimate, and is useful for quickly diagnosing the error expected from applying an EoS-independent relation. For example, if \(\sigma_{G}(F)\) represents statistical measurement uncertainty, then a \(\chi^{2}\) value of \(10^{-4}\) indicates that systematic errors in parameters are of order \(0.01=1\%\) of statistical uncertainties. An alternative choice for the tolerance factor would be \[\sigma(F,\epsilon)=G(F)\,. \tag{5}\] This corresponds to constant tolerance for the _fractional_ error in the fit. This tolerance factor is independent of measurement uncertainty, and so the best fit bears a different interpretation. In many cases a constant relative tolerance may be preferred, especially when an observable varies over orders of magnitude. We give an example of a fit where a constant fractional error tolerance gives a seemingly better fit in Sec. III.2.1. The \(\chi^{2}\) in this case is a measure of the total fractional deviation in the relation. Nonetheless, there are subtleties to the interpretation of the \(\chi^{2}\) value within the fractional uncertainty approach. For example, assume we decided to try to identify EoS-independent relations for \(R(M)\) or \(\Lambda(M)\). Since \(R(M)\) is approximately constant for a large class of EoSs, we adopt a constant fit: \[\chi^{2}=\sum_{i}\frac{(R(M_{i})-\hat{R})^{2}}{\hat{R}_{0}^{2}}\,, \tag{6}\] with \(\hat{R}\) the universal predictor and \(\hat{R}_{0}=12\) km a crude estimate of \(\hat{R}\). Since \(R(M)\in[10,14]\) km for the majority of astrophysical EoSs, we would find \(\chi^{2}/N_{\rm dof}\sim(2/10)^{2}=0.04\). On the other hand, \(\Lambda(M)\) varies over orders of magnitude, with \(\Lambda_{1.4}\in[200,800]\); see Fig. 1. Then the goodness-of-fit will average to \(\chi^{2}/N_{\rm dof}\sim(300/500)^{2}\sim 0.36\). The radius is relatively EoS-independent by this metric under a fractional uncertainty approach; this contrasts with the use of measurement uncertainty as the tolerance factor, where both relations would be comparably poor. Therefore, the choice of tolerance factor sensitively impacts what the resulting goodness-of-fit represents. This is true even when only the fit parameters are of interest, as those will also depend on the tolerance.
The tolerance factors we use are coarse heuristics for potentially better-motivated choices. For example, a complete GW simulation study would allow a precise estimate of \(\sigma_{\Lambda}\) for a range of binary parameters and detector sensitivities. There are additional choices for the tolerance factor that we do not investigate, such as \(\sigma_{\Lambda}=\alpha\delta\Lambda_{stat}^{\beta}\), for some (potentially dimensionful) constant \(\alpha\) and exponent \(\beta\). Additionally, the tolerance factor may be designed to be agnostic to errors of the fit in certain mass ranges; if, for example, sub-solar-mass NSs cannot be formed astrophysically, then it is not necessary that the relation be well fit below \(1\,M_{\odot}\). This choice is degenerate, however, with a choice of which NSs the \(\chi^{2}\) is marginalized over; see the discussion in Sec. IV. ### EoS set The final ingredient of the EoS-independent relation fits is the EoS set and its distribution \(P(\epsilon)\). Since our goal is to assess EoS-independence for flexible EoS sets, we use the model-agnostic prior of Ref. [54], constructed to minimize the impact of nuclear theory input. EoSs are drawn from multiple Gaussian Processes sampling a range of covariance kernels (correlation scale and strength) between different densities. The final EoS set, not conditioned on any astrophysical or experimental data, predicts NSs with a very wide range of \(R\in(8,16)\) km. We condition this set against radio data [66; 67; 68] for the maximum NS mass, and refer to this as the _pulsar-informed set_. We also consider an _astrophysically-informed set_, obtained in [56] by further conditioning on X-ray [62; 63; 64; 65] and GW [46; 57] data. Due to the flexible construction, both the pulsar-informed and the astrophysically-informed sets contain EoSs with phase transitions, both strong and weak. We therefore further split each set into EoSs without (referred to as the _hadronic set_) and with (referred to as the _mixed-composition set_) phase transitions. In order to identify EoSs with phase transitions, we use the moment-of-inertia-based feature extraction procedure from [55]. This procedure can identify both strong and weak phase transitions, including phase transitions that do not result in multiple stable branches or have a large impact on the macroscopic observables. We set a high threshold for phase transitions, requiring a change in internal energy per particle of \(\Delta(E/N)\geq 30\) MeV; see Ref. [55]. As before, we also only use EoSs with a single stable branch in the \(M\)-\(R\) relation. Including EoSs with multiple stable branches would require choices in the construction of the \(\chi^{2}\) to weight each branch and exclude unstable branches, but would likely decrease the goodness-of-fit of the relations to the mixed-composition EoS set. Finally, for comparison, we repeat the same fits with piecewise-polytropic and spectral EoSs, using the pulsar-informed and astrophysically-informed distributions from Ref. [51]. The parametric EoS priors used here are not necessarily required to obey causality. We follow Ref. [69] in allowing the EoS prior to extend up to \(c_{s}\leq 1.1\), in order to allow an acausal model to represent a potential causal model which is not representable by the parametrization. In Legred _et al._[51], this choice was found to affect the distribution on the piecewise-polytrope EoS; it may additionally affect EoS-independence by allowing additional (unphysical) variation in the EoS.
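As a cartoon of the flexibility of this prior, the following sketch draws random curves from a squared-exponential Gaussian Process with per-draw correlation lengths and strengths. The real construction of Refs. [53; 54] additionally enforces causality and thermodynamic stability, which is omitted here; all names and ranges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_gp_curves(n_draws=5, n_grid=100):
    """Draw smooth random curves phi(x), x a stand-in for log density."""
    x = np.linspace(0.0, 1.0, n_grid)
    draws = []
    for _ in range(n_draws):
        ell = rng.uniform(0.05, 0.5)   # correlation length (varied per draw)
        s = rng.uniform(0.1, 1.0)      # correlation strength
        cov = s**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
        cov += 1e-10 * np.eye(n_grid)  # jitter for numerical stability
        draws.append(rng.multivariate_normal(np.zeros(n_grid), cov))
    return x, np.array(draws)
```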
## III EoS-independent relations We fit a set of proposed EoS-independent relations to different EoS distributions and evaluate their universality. Throughout, unless otherwise stated, we use a fixed tolerance factor value of \(\sigma_{\Lambda}=210\).\({}^{4}\) When \(\Lambda\) is not predicted by the fit but is instead the independent variable of the relation, we propagate the uncertainty through the relation to the dependent quantity. For example, Footnote 4: Simulations suggest that measurement uncertainty in \(\Lambda\) is approximately independent of the value of \(\Lambda\) (equivalently, the NS mass) and inversely proportional to the signal strength [70]. \[\sigma_{I}(\Lambda)=\frac{d\tilde{I}(\Lambda,\mathbf{\theta}_{f})}{d\Lambda}\Big{|}_{\Lambda}\sigma_{\Lambda}\,, \tag{7}\] where \(\mathbf{\theta}_{f}\) are fiducial parameters of the fit and \(\tilde{I}\) is the EoS-independent predictor of \(I\) from \(\Lambda\); the tolerance depends on \(\Lambda\) through the derivative of the predictor. When neither the independent nor the dependent variable is \(\Lambda\), we use a different strategy; see Secs. III.4 and III.5. For relations where \(\Lambda\) is indeed the dependent quantity and the tolerance factor is constant, this strategy results in optimization problems which are mathematically identical to previous work, e.g. [45]. Crucially, though, now the goodness-of-fit statistic can be interpreted as a measure of EoS-independence relative to observations. In this section we show plots for the nonparametric-mixed and spectral astrophysically-informed EoS distributions. We display additional plots for the piecewise-polytrope and hadronic-nonparametric EoS distributions, as well as fit parameters, in Appendix A. ### I-Love-Q We begin with the I-Love-Q relations [11] for the dimensionless quadrupole moment \(Q\), moment of inertia \(I\), and tidal deformability \(\Lambda\) of a NS. The existence of such relations, at least approximately, may not be surprising. In Newtonian gravity, for example, the quadrupole moment can be computed from the moment of inertia exactly. In GR, however, the definitions of these quantities do not coincide, which is to say the relationship of angular momentum, angular velocity, and the second multipole of the gravitational field is nontrivial for slowly-spinning compact objects [71]. We use a slightly modified form for the I-Love-Q relations compared to Ref. [11], which was shown by Ref. [45] to produce better behavior in the Newtonian limit: \[\hat{I}(\Lambda;a,b,K_{yx})=K_{yx}\Lambda^{\alpha}\frac{1+\sum_{i=1}^{3}a_{i}\Lambda^{-i/5}}{1+\sum_{i=1}^{3}b_{i}\Lambda^{-i/5}}\,, \tag{8}\] \[\hat{Q}(\Lambda;a,b,K_{yx})=K_{yx}\Lambda^{\alpha}\frac{1+\sum_{i=1}^{3}a_{i}\Lambda^{-i/5}}{1+\sum_{i=1}^{3}b_{i}\Lambda^{-i/5}}\,, \tag{9}\] \[\hat{I}(Q;a,b,K_{yx})=K_{yx}Q^{\alpha}\frac{1+\sum_{i=1}^{3}a_{i}Q^{-i/5}}{1+\sum_{i=1}^{3}b_{i}Q^{-i/5}}\,. \tag{10}\] These forms ensure that when \(a_{i}\) and \(b_{i}\) are zero, these relations limit to the Newtonian form. We display best-fit parameters in Table 6. We solve the TOV equations in the slow-rotation limit up to second order [71] to compute the dimensionless moment of inertia, quadrupole moment, and tidal deformability.\({}^{5}\) We then fit the parameters of each relation using a nonlinear least-squares algorithm. We display the _loss_, i.e., the best-fit \(\chi^{2}/N_{\text{dof}}\) value, of each fit for each EoS distribution in Table 1.
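For concreteness, the fitting form of Eqs. (8)-(10) and the corresponding least-squares fit can be sketched as follows. The exponent value, starting point, and grid variables are placeholders, not the published fit.

```python
import numpy as np
from scipy.optimize import least_squares

ALPHA = 0.4  # placeholder exponent for the chosen relation

def i_love_form(lam, theta):
    """Eq. (8)-style form: K * Lambda^alpha times a ratio of cubics
    in u = Lambda^(-1/5)."""
    K, a1, a2, a3, b1, b2, b3 = theta
    u = lam ** (-0.2)
    num = 1 + a1 * u + a2 * u**2 + a3 * u**3
    den = 1 + b1 * u + b2 * u**2 + b3 * u**3
    return K * lam**ALPHA * num / den

def residuals(theta, lam, I_tov, sigma):
    # Weighted residuals entering Eq. (2); sigma propagates the fiducial
    # tolerance sigma_Lambda = 210 through the relation, cf. Eq. (7).
    return (i_love_form(lam, theta) - I_tov) / sigma

# Example call, with lam_grid / I_grid tabulated from TOV solutions:
# fit = least_squares(residuals, x0=np.ones(7), args=(lam_grid, I_grid, sig))
```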
In this context \(N_{\text{dof}}\) represents the number of degrees of freedom in the data, which is the number of points fit (200) minus the number of fitted parameters. The loss measures the residuals in the fit relative to \(\sigma_{\Lambda}=210\), as described in Sec. II.2. Footnote 5: We thank Victor Guedes for the use of code to solve the TOV equations in the slow-rotation limit up to second order. The I-Love-Q relations hold independently of the EoS distribution to very high precision, with loss values less than \(3\times 10^{-3}\) for almost all relations. In particular, \(I(\Lambda)\), with losses of \(\lesssim 10^{-5}\), indicates that even with an \(\mathcal{O}(10)\) improvement in GW detector sensitivity, the systematic error of the relation will still be at the sub-percent level compared to statistical uncertainties. Nonetheless, the parametric EoS distributions display moderately better EoS-independence than the corresponding nonparametric distributions, typically by a factor of 3-10. Similarly, the hadronic nonparametric distribution is typically a factor of 2-3 better than the corresponding mixed-composition distributions. In all cases, the fits to pulsar-informed distributions show higher losses than the ones conditioned on all astrophysical data. For the spectral distributions, the difference is marginal, about a factor of 2, whereas for the nonparametric mixed distribution the difference is almost a factor of 20 for the relations involving \(\Lambda\). We display the fits for the nonparametric mixed-composition and spectral EoS distributions in Figs. 2 and 3, respectively, in each case conditioned on all astrophysical data. For other distributions, see Appendix A. The higher degree of EoS-independence in the spectral fit is apparent in the residuals, which are several times smaller than the nonparametric residuals. The \(I\)-\(Q\) relation shows the smallest loss in EoS-independence (by a factor of 10) when moving from the spectral pulsar-informed distribution to the equivalent nonparametric distribution. This indicates that the \(I\)-\(Q\) relation is fundamentally more EoS-independent than relations involving \(\Lambda\). This is potentially related to the discussion of emergent symmetries in Ref. [3], which demonstrated that the \(I\)-\(Q\) relation is indeed EoS-independent under the elliptical isodensity approximation, which is nearly true in astrophysically relevant NSs [3]. \begin{table} \begin{tabular}{|c|c|c|c|} \multicolumn{4}{c}{\(\chi^{2}/N_{\text{dof}}\)} \\ \hline EoS Dist. / Relation & \(I(\Lambda)\) & \(I(Q)\) & \(Q(\Lambda)\) \\ \hline GP-hadronic (astro) & \(4.6\times 10^{-5}\) & \(2.9\times 10^{-7}\) & \(6.9\times 10^{-4}\) \\ GP-hadronic (psr) & \(5.3\times 10^{-4}\) & \(6.2\times 10^{-7}\) & \(8.9\times 10^{-3}\) \\ GP-mixed (astro) & \(1.5\times 10^{-4}\) & \(5.0\times 10^{-7}\) & \(2.0\times 10^{-3}\) \\ GP-mixed (psr) & \(2.6\times 10^{-3}\) & \(7.9\times 10^{-7}\) & \(3.9\times 10^{-2}\) \\ SP (astro) & \(5.9\times 10^{-7}\) & \(8.4\times 10^{-8}\) & \(3.3\times 10^{-5}\) \\ SP (psr) & \(3.1\times 10^{-6}\) & \(1.2\times 10^{-7}\) & \(1.0\times 10^{-4}\) \\ PP (astro) & \(4.2\times 10^{-6}\) & \(2.7\times 10^{-7}\) & \(2.1\times 10^{-4}\) \\ PP (psr) & \(4.1\times 10^{-5}\) & \(1.3\times 10^{-6}\) & \(4.2\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 1: \(\chi^{2}/N_{\text{dof}}\) for the I-Love-Q relations for several EoS distributions. Here GP denotes the nonparametric (Gaussian Process) distributions, SP the spectral distributions, and PP the piecewise-polytrope distributions. We show results for each of the pulsar-informed (psr) and fully astrophysically-informed (astro) distributions.
### Binary-Love The _Binary-Love_ relation allows us to estimate the tidal deformability of one NS in a binary given its NS companion's deformability. The expression is given in terms of the symmetric deformability \(\Lambda_{s}\equiv(\Lambda_{1}+\Lambda_{2})/2\) and the antisymmetric deformability \(\Lambda_{a}=(\Lambda_{2}-\Lambda_{1})/2\), where \(\Lambda_{1}\) and \(\Lambda_{2}\) are the deformabilities of the two NSs [27]: \[\Lambda_{a}(\Lambda_{s},q;b,c)=F_{n}(q)\Lambda_{s}\frac{1+\sum_{i=1}^{3}\sum_{j=1}^{2}q^{j}b_{ij}\Lambda_{s}^{-i/5}}{1+\sum_{i=1}^{3}\sum_{j=1}^{2}q^{j}c_{ij}\Lambda_{s}^{-i/5}}\,. \tag{11}\] For the Binary-Love relation, we use a NS distribution truncated at \(0.8M_{\odot}\) rather than \(1.0M_{\odot}\); this is necessary to allow the relationship to be evaluated over a wider range of mass ratios \(q\). Figure 2: _Top, each panel_: EoSs drawn from the nonparametric mixed-composition EoS distribution conditioned on all astrophysical data (in blue), along with the best-fits (black dashed). From left to right we display the I-Love, I-Q, and Q-Love relations. _Bottom, each panel_: Residuals of the fit relative to each of the sampled EoSs. This represents a measure of the “error” of using the particular relation with the given EoS set. Figure 3: The same as Fig. 2 but with the spectral EoS distribution conditioned on all astrophysical data. We use identical axes ranges between the two figures. Worst-case residuals are of order 10 times smaller than for the nonparametric mixed-composition distribution seen in Fig. 2. Fit coefficients for the astrophysically-informed EoS sets are given in Table 7. Here, and in the rest of the paper, we solve the TOV equations only to first order in the small spin parameter, using the approach from [72]. We display the fit losses in Table 2, and additionally plot them in Fig. 4. The losses are noticeably higher than for any of the I-Love-Q relations for corresponding EoS sets, indicating the relation is less EoS-independent. The use of a lower mass cutoff inevitably leads to an increase in loss for the relation, as low-mass NSs have larger tidal deformabilities; however, raising the mass cutoff to \(M_{\rm min}=0.9M_{\odot}\) lowers losses by only a factor of \(\sim 2-3\). This indicates that the fits are indeed worse than the I-Love-Q relations. Nonetheless, the spectral EoS distribution fits are \(\sim\)50 times better than the nonparametric and piecewise-polytrope distributions. The better fit to the spectral distribution might be due to large correlations between density scales. Such correlations may reduce the variation in \(\Lambda\) across mass scales, making the relation between \(\Lambda(m_{1})\) and \(\Lambda(m_{2})\) more EoS-independent. The astrophysically-informed fits show improvement over pulsar-informed fits, with typical loss values 3-5 times better. The piecewise polytrope is by far the most improved distribution upon inclusion of more data, with losses decreasing by factors of more than 10. In all cases the hadronic distributions give improved fits relative to the mixed-composition distributions, typically by a factor of 10 in loss.
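A direct transcription of Eq. (11) is straightforward. In the sketch below, the form of the Newtonian prefactor \(F_{n}(q)\) and the value of the polytropic index follow Ref. [27] to the best of our reading and should be checked against it; the coefficient arrays are the published fit parameters, not reproduced here.

```python
import numpy as np

N_POLY = 0.743  # effective polytropic index (assumed, following Ref. [27])

def F_n(q):
    """Newtonian-limit prefactor of the Binary-Love relation (assumed form)."""
    qe = q ** (10.0 / (3.0 - N_POLY))
    return (1.0 - qe) / (1.0 + qe)

def lambda_a(lambda_s, q, b, c):
    """Eq. (11): antisymmetric deformability from Lambda_s and q.

    b, c: 3x2 arrays of fit coefficients b_ij, c_ij (i = 1..3, j = 1..2).
    """
    num = den = 1.0
    for i in range(1, 4):
        u = lambda_s ** (-i / 5.0)
        for j in range(1, 3):
            num = num + b[i - 1][j - 1] * q**j * u
            den = den + c[i - 1][j - 1] * q**j * u
    return F_n(q) * lambda_s * num / den
```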
For the worst-fit case, the nonparametric pulsar-informed mixed distribution, the fit quality (\(\chi^{2}/N_{\rm dof}=2.6\times 10^{-1}\)) may be poor enough to pose challenges for current-generation GW detectors, as it indicates systematic errors of order 60% in the predicted value of \(\Lambda_{a}\) relative to statistical uncertainties. Figure 5 shows the nonparametric mixed and spectral fits relative to sampled EoSs. The larger variation of the nonparametric EoS set relative to the spectral set is apparent. The large differences in fit quality for the nonparametric distributions with mixed and hadronic composition are consistent with observations of the Binary-Love relation (as presented here) failing to effectively describe EoSs with phase transitions [59; 45]. This could potentially lead analyses using Binary-Love relations to artificially downrank EoSs which support hybrid stars. This effect will likely be smaller than an e-fold in likelihood for any individual event, but such effects will multiply in a hierarchical analysis, leading to large errors after many events. See the discussion in Sec. IV. #### III.2.1 Changing fit quality with the tolerance factor The deteriorating quality of the fit at low values of \(\Lambda_{s}\) is apparent in Fig. 5, left panel. This is because assuming a constant tolerance factor for \(\Lambda\) upweights relative errors where \(\Lambda\) is large, i.e. the regime where small relative differences lead to very large \(\chi^{2}\) values. It is possible to use the tolerance factor to improve the fit quality at low \(\Lambda\). Instead of choosing an observational value for the tolerance factor \(\sigma(\Lambda_{s})\), we set \(\sigma(\Lambda_{s})=\Lambda_{a}(\Lambda_{s})\). Such a fit may be useful for tidal analyses of binaries containing a massive NS, as it gives constant relative uncertainty and therefore tolerates only small errors in \(\Lambda_{a}\) when \(\Lambda_{a}\) itself is small. We plot the fit achieved in Fig. 6, scaling the uncertainty by a factor of 0.5 for display purposes. We additionally plot the region encompassed by \(\pm\sigma(\Lambda_{s})\) and shade the region in between for the \(q=0.9\) fit. This demonstrates the role of tolerance factors and the flexibility they offer. ### C-Love Another established EoS-independent relation relates compactness to tidal deformability [52; 11]. This relation is useful for determining the radii of NSs with measured tidal deformabilities and masses from GW observations.
### C-Love

Another established EoS-independent relation relates compactness to tidal deformability [52; 11]. This relation is useful for determining the radii of NSs with measured tidal deformabilities and masses from GW observations. Such a relation is plausible: radius and tidal deformability are linked by definition

\[\Lambda=\frac{2}{3}k_{2}(m)\left(R/m\right)^{5}=\frac{2}{3}k_{2}C^{-5}\,, \tag{12}\]

though a truly EoS-independent description would require \(k_{2}\), the tidal Love number, to be either independent of the EoS or expressible only as a function of \(C\). The relation is given as follows, again using a fitting form from Ref. [45]

\[C=K_{C\Lambda}\frac{1+\sum_{i=1}^{3}a_{i}\Lambda^{-i/5}}{1+\sum_{i=1}^{3}b_{i}\Lambda^{-i/5}}\,. \tag{13}\]

As before, we propagate a constant \(\Lambda\) uncertainty to a \(C\) uncertainty. However, for high-compactness stars, GWs are expected to be a weakly informative probe, leading to poor fits for the high-compactness part of the relation. Moreover, X-ray probes of compactness can provide complementary constraints [62, 64]. We therefore hybridize two tolerance factors:

\[\sigma_{C}^{-2}=\sigma_{C,\text{x-ray}}^{-2}+\sigma_{C,\text{GW}}^{-2}\,. \tag{14}\]

The X-ray contribution \(\sigma_{C,\text{x-ray}}^{-2}\) is negligible for \(C\lesssim 0.16\), while the GW contribution \(\sigma_{C,\text{GW}}^{-2}\) is negligible for \(C\gtrsim 0.2\). This corresponds to a transition from X-ray to GW data dominating constraints near \(\Lambda=200-500\). The total tolerance factor is not representative of any particular measurement, but rather provides a holistic picture of the statistical uncertainty.

Results are shown in Table 3. We find the fit qualities to be \(\sim 100\) times poorer than for the I-Love-Q relations, even for the parametric distributions. Contrary to the Binary-Love case, the C-Love goodness-of-fit is relatively independent of conditioning on additional data, with the loss changing by a factor of \(\lesssim 2\) in all cases when additional astrophysical data are included. Also, for the nonparametric EoSs, the C-Love relation is not appreciably better fit to the hadronic distribution than to the mixed distribution. Similarly to the Binary-Love relations, the mixed-composition nonparametric distribution conditioned only on heavy pulsar mass measurements shows a loss of \(3.6\times 10^{-1}\), indicating systematic errors are already comparable to statistical uncertainties. The same holds true for the piecewise-polytrope distribution, though the piecewise-polytrope loss decreases by almost a factor of 10 upon the introduction of additional astrophysical data, while the nonparametric distribution decreases by only a factor of 3. This is consistent with the discussion in Sec. III.2, and indicates that the large variance in the piecewise-polytrope distribution that leads to large losses is not consistent with current astrophysical data from x-ray pulsars and gravitational waves.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\multicolumn{4}{c}{\(\chi^{2}/N_{\rm dof}\)} \\
\hline
EoS Dist. / Relation & \(q=0.55\) & \(q=0.75\) & \(q=0.9\) \\
\hline
GP-hadronic (astro) & \(6.6\times 10^{-3}\) & \(1.8\times 10^{-2}\) & \(9.1\times 10^{-3}\) \\
GP-hadronic (psr) & \(2.7\times 10^{-2}\) & \(1.0\times 10^{-1}\) & \(8.5\times 10^{-2}\) \\
GP-mixed (astro) & \(1.0\times 10^{-2}\) & \(5.2\times 10^{-2}\) & \(4.9\times 10^{-2}\) \\
GP-mixed (psr) & \(5.5\times 10^{-2}\) & \(2.6\times 10^{-1}\) & \(2.2\times 10^{-1}\) \\
SP (astro) & \(3.3\times 10^{-3}\) & \(6.5\times 10^{-3}\) & \(2.0\times 10^{-3}\) \\
SP (psr) & \(6.7\times 10^{-3}\) & \(1.0\times 10^{-2}\) & \(2.9\times 10^{-3}\) \\
PP (astro) & \(3.5\times 10^{-3}\) & \(1.6\times 10^{-2}\) & \(1.1\times 10^{-2}\) \\
PP (psr) & \(6.2\times 10^{-2}\) & \(2.2\times 10^{-1}\) & \(9.6\times 10^{-2}\) \\
\hline
\end{tabular}
\end{table} Table 2: \(\chi^{2}/N_{\rm dof}\) for the Binary-Love relations for several distributions on the EoS and binary mass ratios.

Figure 4: The costs shown in Table 2. The spectral costs are in general the lowest, especially for more equal-mass binaries.
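The hybridization of Eq. (14) simply adds inverse variances, as in the following sketch; the two component curves are schematic placeholders chosen only to reproduce the stated limiting behavior (X-ray information dominating at high compactness, GW information at low compactness), not the paper's actual models.

```python
# Sketch of Eq. (14): tolerance factors combine through inverse variances.
import numpy as np

def sigma_C(C, sig_xray, sig_gw):
    return (sig_xray(C)**-2 + sig_gw(C)**-2)**-0.5

# schematic component curves (placeholder shapes and scales):
sig_xray = lambda C: 0.01 * np.exp((0.16 - C) / 0.02)  # informative at large C
sig_gw   = lambda C: 0.01 * np.exp((C - 0.20) / 0.02)  # informative at small C

C = np.linspace(0.10, 0.30, 5)
print(sigma_C(C, sig_xray, sig_gw))  # transitions between the two regimes
```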
Figure 5: Similar to Fig. 2. _Left_: The Binary-Love relation fitted, along with all EoSs for the nonparametric EoS distribution with mixed composition when conditioning on all astrophysical data. We plot each fit for three different mass ratios, \(q=0.55\), \(q=0.75\), and \(q=0.9\). _Right_: The same for the spectral EoS distribution.

Finally, we also display the fits to the nonparametric mixed-composition and spectral EoS distributions conditioned on all astrophysical data in Fig. 7. The nonparametric EoSs have residuals larger by about a factor of 2, as in the previous examples. Fit parameters are given in Table 8.

The relatively large losses in the \(C\)-Love relation are consistent with the existence of doppelgangers [73, 74]: EoSs with similar \(\Lambda\) across the parameter space, \(\Delta\Lambda<30\), but different \(R\), with \(\Delta R\) up to 0.5 km. This phenomenon is due to variability in the EoS at densities below \(2\rho_{\rm nuc}\); the nonparametric EoS prior contains a wide range of low-density behaviors and thus produces EoSs with these features. Approximating the nonparametric EoS distribution with this relation may result in errors in compactness \(\Delta C\sim 0.02\), although typical errors are \(\Delta C\lesssim 0.01\). Choosing a fiducial NS radius of 10.5 km and a fiducial mass of 1.4 \(M_{\odot}\), this error can be translated to a maximal radius uncertainty of \(\sim 1\) km, with typical errors half that, in line with Refs. [73, 74]. The presence of these features is additionally consistent with Fig. 1; the nonparametric EoS distribution shows a less EoS-independent relation between \(R\) and \(\Lambda\). This indicates that independent radius and tidal deformability measurements will be required in order to effectively constrain the EoS at intermediate (\(\sim 1-2\rho_{\rm nuc}\)) densities.

### \(R_{1.4}\)-\(\tilde{\Lambda}\)

An additional relation between NS tidal properties and the radius was proposed by Ref. [14]. The relation leverages the relative insensitivity of \(\tilde{\Lambda}\), the leading order tidal parameter in the post-Newtonian expansion of the GW phase [70, 75], as a function of \(q\), together with the relation given in Eq. (12): \(\tilde{\Lambda}(R_{1.4},\mathcal{M}_{c})\). That is, we write the tidal deformability as an EoS-independent function of the typical star radius and the chirp mass, \(\mathcal{M}_{c}\equiv(m_{1}m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}\), of the binary. The relation is given, following Ref. [14], by

\[\frac{a}{\tilde{\Lambda}}\left(\frac{R_{1.4}}{\mathcal{M}_{c}}\right)^{6}=1\,. \tag{15}\]

For it to be useful, it should hold for some (perhaps narrow) range of mass ratios and chirp masses, and for a wide range of EoSs, for some constant \(a\). In practice, the relation is used to infer \(R_{1.4}\), so we use this to define the uncertainty. In this case (unlike all other examples), we can no longer propagate uncertainty only from \(\Lambda\) measurements, because the chirp mass is also uncertain.
We assume a fiducial uncertainty of \(\Delta R_{1.4}=1.0\) km, which represents a \(\sim\pm 8\%\) measurement of the radius of a NS, and use a typical \(R_{1.4}\) value of 12 km; we select a fiducial grid of \(\mathcal{M}_{c}\) for each EoS, induced by requiring both components to be below \(M_{\rm max}\) and above \(1.05M_{\odot}\). We find that, by additionally fixing the chirp mass, the loss of the fit may decrease by a factor of 5 for the spectral distribution, but by only a factor of 2 for the nonparametric distribution, and a factor of 1.5 for the piecewise-polytropic distribution. The \(\chi^{2}\) in this case is then

\[\chi^{2}(a)=\sum_{i}\sum_{j}P(\epsilon_{i})\frac{\left(\frac{1}{a}\tilde{\Lambda}_{(i,j)}^{1/6}\mathcal{M}_{c}^{(j)}-R_{1.4}^{(i)}\right)^{2}}{\Delta R_{1.4}^{2}}\,, \tag{16}\]

where \(R_{1.4}^{(i)}\) depends on the EoS \(\epsilon_{i}\), and \(\tilde{\Lambda}_{(i,j)}\) depends on the EoS and the binary parameters, but \(R_{1.4}\), the fiducial radius value, is independent of both EoS and binary parameters. In this case the optimal solution can be obtained analytically by differentiating the cost with respect to \(1/a\). The loss is then given by \(\chi^{2}(a_{*})\) and shown in Table 4. Fit parameters are given in Table 9. The spectral EoS distributions again show greater levels of EoS-independence than the nonparametric distribution, indicating a tighter relationship between \(R_{1.4}\) and \(\Lambda\) in the spectral model, consistent with Fig. 1. However, fits are typically poorer relative to the I-Love-Q relations and more consistent with the Binary-Love relations. Similar to the Binary-Love case, the nonparametric and piecewise-polytrope pulsar-informed distributions show nearly identical loss, \(\sim 1.3-1.7\times 10^{-1}\). The mixed composition distribution shows marginally worse fits, with losses about 1.3 times worse for the pulsar-informed distributions.

Figure 6: The Binary-Love fit to the mixed nonparametric, astrophysically-informed EoS distribution when applying a modified tolerance factor that favors better fits at low-\(\Lambda\) values. The best fit line is in dashed black, plotted over draws from the nonparametric distribution in blue. For comparison, we plot in dashed red the best-fit line for the uniform tolerance factor fit to the same distribution, the same as Fig. 5. We shade the \(\sigma(\Lambda_{s})/2\) area away from the best-fit \(q=0.9\) curve in pink for the uniform tolerance factor, and in gray for the modified, constant relative tolerance factor. The fit requires better agreement at low \(\Lambda_{a}\) in order to achieve low cost, and therefore appears better by eye than the fit in Fig. 5, especially on a log-log plot.

\begin{table}
\begin{tabular}{|c|c|}
\hline
EoS Dist. / Relation & \(C(\Lambda)\) \\
\hline
GP-hadronic (astro) & \(7.2\times 10^{-2}\) \\
GP-hadronic (psr) & \(2.2\times 10^{-1}\) \\
GP-mixed (astro) & \(1.2\times 10^{-1}\) \\
GP-mixed (psr) & \(3.6\times 10^{-1}\) \\
SP (astro) & \(1.6\times 10^{-2}\) \\
SP (psr) & \(2.6\times 10^{-2}\) \\
PP (astro) & \(6.4\times 10^{-2}\) \\
PP (psr) & \(4.7\times 10^{-1}\) \\
\hline
\end{tabular}
\end{table} Table 3: \(\chi^{2}/N_{\rm dof}\) for the C–Love relations for several distributions on the EoS.
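Since the cost (16) is quadratic in \(u=1/a\), the optimum is available in closed form: \(u_{*}=\sum wxR/\sum wx^{2}\) with \(x=\tilde{\Lambda}^{1/6}\mathcal{M}_{c}\) and weights \(w\propto P(\epsilon_{i})/\Delta R_{1.4}^{2}\). A minimal numpy sketch (placeholder arrays; the degrees-of-freedom counting is only indicative):

```python
# Sketch of the closed-form optimum for Eq. (16).
import numpy as np

def best_fit_a(Ltilde, Mc, R14, weights, dR=1.0):
    """Ltilde: (n_eos, n_mc); Mc: (n_mc,); R14, weights: (n_eos,)."""
    x = Ltilde**(1.0 / 6.0) * Mc[None, :]
    w = weights[:, None] / dR**2
    u = np.sum(w * x * R14[:, None]) / np.sum(w * x**2)   # u = 1/a at minimum
    chi2 = np.sum(w * (u * x - R14[:, None])**2)
    return 1.0 / u, chi2 / x.size                          # a*, loss ~ chi^2/N
```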
When conditioning on additional astrophysical data, the piecewise-polytrope distribution is better fit with the relation, improving by a factor of 2, while the nonparametric distributions improve by less than 25%. This distinction is likely due to relatively strong correlations between \(\sim\rho_{\rm nuc}\) and higher densities in the piecewise-polytrope distribution, compared to the nonparametric EoS distribution. See, e.g., Fig. 5 of [51]. These correlations cause astrophysical measurements to be highly informative at and below nuclear densities in the piecewise-polytrope case, and therefore likely rule out many of the configurations which lead to "doppelganger"-like behavior [73; 74]. This leads to less variation in the relation between \(R\) and \(\Lambda\) and therefore improves the quality of the fit. By contrast, there is still a range of low-density behavior within the nonparametric posterior [51], which likely increases the range of behaviors seen in the \(\Lambda-R\) relations of nonparametric EoSs. This variability would be associated with a lower degree of EoS-independence in the \(R_{1.4}\)-\(\tilde{\Lambda}\) relation.

Figure 7: _Left:_ The C–Love relation fitted along sampled EoSs for the nonparametric EoS distribution with mixed composition when conditioning on all astrophysical data. _Right:_ The same for the spectral parametrization. Relative errors are larger for the C–Love relation than for the I–Love–Q relation, and the nonparametric mixed distribution shows greater variability than the spectral distribution.

### \(\alpha_{c}\)-\(C\)

An EoS-independent relation between \(\alpha_{c}\equiv p_{c}/\epsilon_{c}\) and compactness \(C\) was proposed by Ref. [6]. The quantity \(\alpha_{c}\) is sensitive to the EoS only at the highest densities in a star, while the compactness depends on all densities in the star. Therefore we would expect EoS parametrizations which impose strong inter-density correlations to be most consistent with the relation. The expression to be fit is [6]

\[\ln(\alpha_{c})=\sum_{j=0}^{5}a_{j}\ln(C)^{j}\,. \tag{17}\]

We define a tolerance factor for this relation by propagating the uncertainty in \(\Lambda\) through the C-Love relation, and then through the \(\alpha_{c}\)-\(C\) relation, using fiducial parameters for the C-Love relation given by [45] and for the \(\alpha_{c}\)-\(C\) relation given by [6]. In Fig. 8 we display the fit and residuals of this relation to our nonparametric, mixed-composition, astrophysically-informed EoS distribution, and to the corresponding astrophysically-informed spectral EoS distribution. Fit coefficients are given in Table 5.

We show in Table 5 the losses for this relation for all of the distributions studied. This relation, like all others studied, shows higher losses than the I-Love-Q relations. Also similar to other relations, the nonparametric distributions show higher losses than the parametric distributions, typically by orders of magnitude. Likewise, the hadronic nonparametric distributions show improvements in loss compared to the mixed distributions, though the effects are less than an order of magnitude. In contrast to the other relations, however, the \(\alpha_{c}\)-\(C\) relations show losses greater than one for the nonparametric EoS distributions. This indicates that systematic errors are likely greater than statistical uncertainties for this relation.
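Fitting Eq. (17) is a polynomial regression of \(\ln\alpha_{c}\) in \(\ln C\). A minimal sketch with plain (unweighted) least squares; the tolerance-factor weighting described above is omitted here:

```python
# Sketch of fitting Eq. (17): degree-5 polynomial for ln(alpha_c) in ln(C).
import numpy as np

def fit_alpha_c(C, alpha_c, deg=5):
    coeffs = np.polyfit(np.log(C), np.log(alpha_c), deg)
    return coeffs[::-1]                          # a_0, ..., a_5 as in Eq. (17)

def alpha_c_model(C, a):
    return np.exp(sum(aj * np.log(C)**j for j, aj in enumerate(a)))
```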
Additionally, the large errors for the piecewise-polytrope and spectral distributions relative to other relations demonstrate that, even for these distributions, the EoS independence is questionable. The tolerance factor we use is conservative, though removing the component which models X-ray mass-radius measurability still gives loss values greater than one, which indicates this relation very poorly models the nonparametric EoS distributions even for just the purposes of GW observations. The appearance of EoS independence in, e.g., the spectral model, even though it is weak, is likely due to model-dependent correlations. Under the spectral distribution, strong correlations appear between density scales, which can lead to, e.g., the compactness (a function of the entire matter profile of the star) being correlated with the central pressure-energy density. These correlations are not present for the nonparametric EoS distributions, and are present to a weaker extent in the piecewise-polytropic EoS distributions.

## IV Discussion

In this paper, we tested the EoS-independence of relations between NS properties under multiple EoS models, including parametric and nonparametric distributions. In particular, we used a nonparametric EoS distribution, and evaluated the goodness-of-fit of the relations to subsets mimicking hadronic EoSs and to subsets mimicking mixed-composition EoSs. We found that effectively all relations are better fit by parametric models. Additionally, within the nonparametric distributions, relations are better satisfied by EoSs which do not show signs of phase transitions.

The I-Love-Q relation is qualitatively better than other proposed relations, with typical loss values of \(10^{-3}\) or below. In particular, the \(I\)-\(Q\) relation is very well fit by all EoS distributions. This could be expected based on Ref. [3], which indicated that the \(I\)-\(Q\) relation should indeed be mostly EoS independent due to the near self-similarity of isodensity contours and near EoS independence of the ellipticity profile of NSs. In fact, the best-fit relations we studied are \(I\)-\(Q\) relations under the spectral distributions, with prediction errors of \(|\Delta I|/I\sim|\Delta Q|/Q\sim 0.001\), in line with Refs. [11; 45]. The piecewise-polytrope and nonparametric distributions are worse fit, especially for relations involving \(\Lambda\). Nonetheless, even the worst-fit relation, \(Q(\Lambda)\), still has prediction errors at the percent level (\(\Delta Q/Q\lesssim 0.1\)). For the piecewise-polytrope model, this is qualitatively similar to the findings of Ref. [29]. Systematic errors of \(\sim 1-10\%\) are comparable to systematic errors from many other factors, such as detector calibration [76] and waveform modeling [77, 78, 79, 80, 81].

\begin{table}
\begin{tabular}{|c|c|}
\hline
EoS Dist. / Relation & \(\alpha_{c}(C)\) \\
\hline
GP-hadronic (astro) & \(2.7\times 10^{-1}\) \\
GP-hadronic (psr) & \(1.8\times 10^{0}\) \\
GP-mixed (astro) & \(1.4\times 10^{0}\) \\
GP-mixed (psr) & \(5.8\times 10^{0}\) \\
SP (astro) & \(2.9\times 10^{-2}\) \\
SP (psr) & \(7.2\times 10^{-2}\) \\
PP (astro) & \(1.5\times 10^{-1}\) \\
PP (psr) & \(2.9\times 10^{-1}\) \\
\hline
\end{tabular}
\end{table} Table 5: \(\chi^{2}/N_{\mathrm{dof}}\) for the \(\alpha_{c}-C\) relations for several distributions on the EoS. The quality of the fit decreases for all distributions except the piecewise-polytrope upon incorporating more astrophysical data, unlike the bulk of the relations we study.
At a comparable precision to the errors presented here, the quality of numerical solutions to the TOV equations may become important for stars containing sharp phase transitions [82]. Because the speed of sound of Gaussian process draws is analytically greater than zero, we do not have truly sharp transitions and are not subject to this concern; standard techniques for computing the tidal Love numbers are therefore sufficient. Nonetheless, improved accuracy in TOV solutions will likely be important in future analyses with much improved detector sensitivities.

In contrast, the other relations involving tidal deformability show worse fits, especially among nonparametric EoSs not informed by all astrophysical measurements. All of the C-Love, Binary-Love, and \(R_{1.4}\)-\(\tilde{\Lambda}\) relations show losses of order \(10^{-1}\) or more for the mixed-composition nonparametric EoS distribution. This indicates systematic errors from these relations are already of order the statistical uncertainties. All relations, though, do improve with the inclusion of additional astrophysical data, which indicates that the data have ruled out some EoS candidates inconsistent with the relations posed.

In fitting the Binary-Love relation, the inclusion of phase-transition EoSs appreciably worsens the fit to the nonparametric EoS distribution, increasing losses by a factor of 10 in the astrophysically-informed case. This is consistent with Ref. [45], which found that hybrid EoSs are poorly modeled with a Binary-Love relation. In particular, Carson _et al._ [45] found that hybrid EoSs would likely have residuals of order \(\Lambda_{a}\sim 50\) at \(\Lambda_{s}\sim 100\), which is consistent with the worst-case residuals we find in Fig. 5. However, the mixed-composition distributions are not universally worse fit among relations; the C-Love fit sees comparable losses among the two distributions, indicating this relation is essentially insensitive to the presence of a phase transition.

On the other hand, the \(\alpha_{c}\)-\(C\) relation is the only relation we studied with loss values greater than 1 for the nonparametric EoS distribution. A similar near-total loss of universality was observed for modes in hybrid stars [83], which could be a useful target for future work.

Figure 8: Similar to Fig. 2 but for the \(\alpha_{c}\)–\(C\) relation. Left: the relations between \(\alpha_{c}\) and \(C\) for the nonparametric EoS model with mixed composition conditioned on all astrophysical data. Right: The same for the spectral parametrization, conditioned on all astrophysical data.

The loss values are almost 100 times worse for the nonparametric distributions than for the spectral distributions, indicating that modeling systematics are likely responsible for the appearance of EoS independence in this relation. Nonetheless, the improvement of EoS independence in the hadronic nonparametric case, especially upon the inclusion of additional astrophysical data, may indicate that this relation does hold universally for certain classes of EoSs (e.g. hadronic EoSs), under certain assumptions (such as astrophysically reasonable compactness-mass-radius relations). For this reason, even relations which are not truly EoS independent may still be useful, depending on the use cases intended. These results are all dependent on the choice of tolerance factor; it is difficult to choose a completely realistic representation when many different potential sources of NS measurements exist.
Nonetheless, certain conclusions, such as the relatively poor fits to the nonparametric mixed distribution relative to the spectral distribution, are independent of the choice of tolerance factor. Additionally, the distribution of points (NSs) with which the relations are evaluated cannot be prescribed universally. A potentially more physical choice than uniform-in-central-density would be a distribution which is consistent with the known population of NS sources:

\[\chi^{2}=\int P(\epsilon)\pi(m)\chi^{2}(G;F,\epsilon)\,\mathrm{d}m\,\mathrm{d}\epsilon, \tag{18}\]

where \(\pi(m)\) is the distribution of NS masses, \(F\) is a generic NS property which serves as the independent variable for a relation, and \(G\) is the dependent variable of the relation. A mapping \(m\mapsto(F(m),G(m))\) must be chosen in the case that EoSs with multiple stable branches in the \(M\)-\(R\) relation are used. Then the loss would be equal to the expected failure of the EoS-independent relation to correctly model the next NS source detected. However, the population of NSs observable via GWs is still poorly known [84; 85]. Mathematically, such modifications to the analysis are equivalent to changes to the tolerance factor, though they have a different physical interpretation.

It is important to recognize the sensitivity of the loss to choices such as the distribution of NSs used in evaluating each EoS \(\chi^{2}\) and the tolerance factor chosen for each NS. As seen in, e.g., Fig. 5, the highest \(\chi^{2}\) contributions appear at high \(\Lambda\) values for relations involving \(\Lambda\), equivalent to larger residuals there (under the constant uncertainty model). There may not exist merging BNSs with symmetric tidal deformabilities as high as \(10^{4}\), or they may be exceedingly rare. However, Fig. 5 also demonstrates that at \(\Lambda\sim 10^{3}\), deviations from the EoS-independent relation of order 100 or larger are still possible within the nonparametric model. Therefore, we expect variation in the loss based on choices in the truncation of the population, though we do not expect the relationship between losses for the various models to change appreciably under different assumptions. Longer-term EoS-independence tests will likely have to carefully examine both of these factors in order to determine, with higher fidelity, the usefulness of EoS-independent relations to our understanding of NSs and the nuclear EoS.

## V Acknowledgments

We thank Victor Guedes for code to compute the quadrupole moment for NSs. We thank Lami Suleiman for comments on the manuscript. This work is supported by National Science Foundation grant No. PHY-2150027 as part of the LIGO Caltech REU Program. I.L. and K.C. acknowledge support from the Department of Energy under award number DE-SC0023101. K.C. acknowledges support from the Sloan Foundation. R.E. is supported by the Natural Sciences & Engineering Research Council of Canada (NSERC). The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. Software: This work makes use of scipy [86], numpy [87], and matplotlib [88]. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation.
The authors gratefully acknowledge the Italian Istituto Nazionale di Fisica Nucleare (INFN), the French Centre National de la Recherche Scientifique (CNRS), and the Netherlands Organization for Scientific Research for the construction and operation of the Virgo detector and the creation and support of the EGO consortium.

## Appendix A Additional Figures and Tables

In this appendix we display results for the piecewise-polytrope and nonparametric hadronic EoS distributions, in a form similar to the main-text results for the spectral and nonparametric mixed distributions. The \(\chi^{2}\) values for each fit are given in the main text tables. We use the astrophysically-informed EoS distributions, again because we do not find significant differences, modulo improvements of order no more than 10 to the fit quality upon conditioning.

For the I-Love-Q relation, we display the fits for the hadronic nonparametric distribution in Fig. 9, and for the piecewise-polytrope EoS distribution in Fig. 10. \(I\)-\(Q\) is still the best fit EoS-independent relation. We also give fitting coefficients in Table 6. We display the fits for the Binary-Love relation for the hadronic nonparametric distribution and piecewise-polytrope distribution in Fig. 11. We display the best-fit coefficients in Table 7. We display the fits for the C-Love relation for the hadronic nonparametric distribution and piecewise-polytrope distribution in Fig. 12. We display the best-fit coefficients in Table 8.

Figure 9: The same as Fig. 2, but with the hadronic nonparametric distribution conditioned on all astrophysical data.

Figure 10: The same as Fig. 2 but with the piecewise-polytrope EoS distribution conditioned on all astrophysical data.

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
 & \multicolumn{3}{c|}{GP-mixed (astro)} \\
\cline{2-4}
Coefficient & \(I(\Lambda)\) & \(Q(\Lambda)\) & \(I(Q)\) \\
\hline
\(\alpha\) & 2/5 & 1/5 & 2 \\
\(K_{yx}\) & 0.5356 & 0.0072 & 0.0072 \\
\(a_{1}\) & 1.7583 & 11.1589 & 11.1589 \\
\(a_{2}\) & 1.3883 & -37.6926 & -37.6926 \\
\(a_{3}\) & -5.6089 & 42.7718 & 42.7718 \\
\(b_{1}\) & -0.7071 & -2.5575 & -2.5575 \\
\(b_{2}\) & -0.9748 & 2.3251 & 2.3251 \\
\(b_{3}\) & 0.5105 & -7.3937 & -7.3937 \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{SP (astro)} \\
\hline
\(I(\Lambda)\) & \(Q(\Lambda)\) & \(I(Q)\) \\
\hline
2/5 & 1/5 & 2 \\
0.5139 & 0.0052 & 0.0052 \\
0.2486 & 12.1774 & 12.1774 \\
0.8249 & -37.0504 & -37.0504 \\
0.7629 & 43.0395 & 43.0395 \\
-1.0615 & -2.6161 & -2.6161 \\
2.2034 & 2.4074 & 2.4074 \\
-0.9326 & -7.6697 & -7.6697 \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{PP (astro)} \\
\hline
\(I(\Lambda)\) & \(Q(\Lambda)\) & \(I(Q)\) \\
\hline
2/5 & 1/5 & 2 \\
0.4192 & 0.0007 & 0.0007 \\
2.2881 & 30.3545 & 30.3545 \\
-3.7192 & -16.5079 & -16.5079 \\
-2.9633 & 43.3485 & 43.3485 \\
-3.8574 & -2.7902 & -2.7902 \\
16.8924 & 2.6775 & 2.6775 \\
-7.6431 & -8.7651 & -8.7651 \\
\hline
\end{tabular}
\end{table} Table 6: Table of coefficients for the I–Love–Q relations for the nonparametric mixed-composition, spectral, and piecewise-polytrope astrophysical posterior EoS distributions. See Eqs. (8), (9), and (10) respectively.

For the \(\tilde{\Lambda}\)-\(R_{1.4}\) relation we display the value of the coefficient \(a\) for all of the EoS sets in Table 9. We display the fits to the \(\alpha_{c}\)-\(C\) EoS-independent relation for the nonparametric hadronic and piecewise-polytrope distributions, both conditioned only on mass measurements of heavy pulsars and conditioned on all astrophysical data, in Fig. 13. The piecewise-polytropic distribution is the only one which is better fit by the \(\alpha_{c}\)-\(C\) relation after the inclusion of GW mass-tidal deformability and X-ray mass-radius measurements. This can be attributed to _a priori_ large values of \(\alpha_{c}\) in the cores of the most massive neutron stars under the piecewise-polytrope models.
2305.14884
Perturbative invariants of cusped hyperbolic 3-manifolds
We prove that a formal power series associated to an ideally triangulated cusped hyperbolic 3-manifold (together with some further choices) is a topological invariant. This formal power series is conjectured to agree to all orders in perturbation theory with two important topological invariants of hyperbolic knots, namely the Kashaev invariant and the Andersen--Kashaev invariant (also known as the state-integral) of Teichm\"uller TQFT.
Stavros Garoufalidis, Matthias Storzer, Campbell Wheeler
2023-05-24T08:33:29Z
http://arxiv.org/abs/2305.14884v2
# Perturbative Invariants of Cusped Hyperbolic 3-Manifolds ###### Abstract. We prove that a formal power series associated to an ideally triangulated cusped hyperbolic 3-manifold (together with some further choices) is a topological invariant. This formal power series is conjectured to agree to all orders in perturbation theory with two important topological invariants of hyperbolic knots, namely the Kashaev invariant and the Andersen-Kashaev invariant (also known as the state-integral) of Teichmuller TQFT. Key words and phrases:3-manifolds, knots, Kashaev invariant, Teichmuller TQFT, complex Chern-Simons theory, hyperbolic geometry, asymptotic expansions, perturbation theory, Feynman diagrams, Faddeev quantum dilogarithm, state-integrals, volume conjecture, ideal triangulations, Neumann-Zagier data, half-symplectic matrices, 2-3 Pachner moves, Fourier transform, pentagon. ## 1. Introduction This paper concerns the topological invariance of a formal power series associated to an ideally triangulated \(3\)-manifold \(M\) with a torus boundary component [11]. The series is defined by formal Gaussian integration of a finite dimensional integral (a so-called "state-integral") and it is expected to coincide to all orders with the asymptotic expansion of three important topological invariants of \(3\)-manifolds. The first is the Kashaev invariant of a hyperbolic knot [25], where Kashaev's famous volume conjecture is refined to an asymptotic statement to all orders in perturbation theory using the above formal power series. This was studied in detail in [21] based on extensive numerical computations, but it is only proven for a handful of hyperbolic knots. The second invariant is the Andersen-Kashaev state integral [2], whose asymptotic expansion for the simplest hyperbolic \(4_{1}\) knot was shown to agree with the above series in [2, Sec.12], and also observed numerically for the first three simplest hyperbolic knots in [20]. The state-integrals of [2] are finite-dimensional integrals whose integrand is a product of Faddeev quantum dilogarithms times an exponential of a quadratic form, assembled from an ideal triangulation of a \(3\)-manifold with torus boundary components. Andersen-Kashaev proved that these are topological invariants that are the partition function of the Teichmuller TQFT [2, 1], which is a \(3\)-dimensional version of a quantization of Teichmuller space [26, 17]. A third invariant is the Chern-Simons theory with complex gauge group \(\mathrm{SL}_{2}(\mathbb{C})\). This theory was introduced by Witten [38] and studied extensively by Gukov [23]. Although Chern-Simons theory with compact gauge group \(\mathrm{SU}(2)\) has an exact nonperturbative definition given by the so-called Witten-Reshetikhin-Turaev invariant [37, 33] and a well-defined perturbation theory involving Feynman diagrams with uni-trivalent vertices [3, 4], the same is not known for Chern-Simons theory with complex gauge group \(\mathrm{SL}_{2}(\mathbb{C})\). For reasons that are not entirely understood, the partition function of complex Chern-Simons theory for \(3\)-manifolds with torus boundary reduces to a finite-dimensional state-integral, as if some unknown localization principle holds. The corresponding state-integrals were introduced and studied by Hikami [24], Dimofte [10] and others [12]. 
Unfortunately, in those works the integration contour was not pinned down; this problem was finally dealt with in [2], which, among other things, implied the topological invariance of the state-integrals, and these were coined the partition function of Teichmuller TQFT. But ignoring the contour of integration, and focusing on a critical point of the action, which is a solution to a system of polynomial equations, allowed [11] to give a definition of the formal power series that are the main focus of our paper. Note, however, that the Feynman diagrams of [11] involve stable graphs of arbitrary valency, whereas a perturbative definition of Chern-Simons theory with complex gauge group would involve uni-trivalent graphs.

The above discussion points out several aspects of these formal power series and conjectural relations to perturbation theory of complex Chern-Simons theory and of Teichmuller TQFT. Aside from conjectures, this paper concerns a theorem, the topological invariance of the above formal power series. Let us briefly recall the key ingredients that go into the definition of the series, and discuss those in detail in later sections.

The first ingredient is an ideal triangulation \(\mathcal{T}\) with \(N\) ideal tetrahedra of a \(3\)-manifold \(M\) with torus boundary. Each tetrahedron has shapes \(z\in\mathbb{C}\setminus\{0,1\}\), \(z^{\prime}=1/(1-z)\) and \(z^{\prime\prime}=1-1/z\) attached to its three pairs of opposite edges, and the shapes satisfy a system of polynomial equations (so-called “gluing equations” [34] or Neumann-Zagier equations [29]) determined by the combinatorics of the triangulation, one for each edge and peripheral curve. After some choices are made (such as an ordering of the tetrahedra and their edges, a choice of shape for each tetrahedron, a choice of an edge to remove from the gluing equations, and a choice of peripheral curve to include), one obtains two matrices \(A\) and \(B\) with integer entries such that \((A\,|\,B)\) is the upper-half of a symplectic \(2N\times 2N\) matrix, as well as a vector \(\nu\in\mathbb{Z}^{N}\). In addition, we choose a solution \(z=(z_{1},\ldots,z_{N})\) of the gluing equations as well as a flattening \((f,f^{\prime\prime})\), i.e., an integer solution to the equation \(\nu=Af+Bf^{\prime\prime}\). The power series \(\Phi^{\Xi}(\hbar)\) depends on the tuple \(\Xi=(A,B,\nu,z,f,f^{\prime\prime})\), which we collectively call a NZ-datum.

The next ingredient that goes into the definition of \(\Phi^{\Xi}(\hbar)\) is an auxiliary formal power series

\[\psi_{\hbar}(x,z)\;:=\;\exp\Big{(}-\sum_{\begin{subarray}{c}k,\ell\in\mathbb{Z}_{\geq 0}\\ k+\frac{\ell}{2}>1\end{subarray}}\frac{B_{k}\,x^{\ell}\,\hbar^{k+\frac{\ell}{2}-1}}{\ell!\,k!}\operatorname{Li}_{2-k-\ell}(z)\Big{)}\;\in\;1+\hbar^{\frac{1}{2}}\,\mathbb{Q}(z)[x][\![\hbar^{\frac{1}{2}}]\!]\,,\]

where \(B_{k}\) denotes the \(k\)-th Bernoulli number (with \(B_{1}=-\frac{1}{2}\)); compare with the completion \(\widehat{\psi}_{\hbar}\) of \(\psi_{\hbar}\) in Equation (28) below, which includes the four excluded terms. The series \(\psi_{\hbar}\) is a version of the asymptotic expansion, with \(q=e^{\hbar}\), of the infinite Pochhammer symbol

\[(x;q)_{\infty}\;:=\;\prod_{j=0}^{\infty}(1-q^{j}x)\,. \tag{2}\]
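As a concrete illustration (my own, not from the paper), one can expand \(\psi_{\hbar}(x,z)\) to low order with a computer algebra system, using the fact that every term of the sum involves \(\operatorname{Li}_{n}(z)\) with \(n\leq 0\), a rational function obtained from \(\operatorname{Li}_{0}(z)=z/(1-z)\) by repeatedly applying \(z\frac{d}{dz}\):

```python
# Sketch (assumes sympy): low-order expansion of psi_hbar(x, z) from the
# Bernoulli-sum definition above, in the convention B_1 = -1/2.
import sympy as sp

x, z, s = sp.symbols('x z s')       # s stands for hbar**(1/2)
B = [sp.Integer(1), -sp.Rational(1, 2), sp.Rational(1, 6), sp.Integer(0)]

def Li(n):                           # Li_n(z) for n <= 0, a rational function
    f = z / (1 - z)                  # Li_0(z)
    for _ in range(-n):
        f = sp.together(z * sp.diff(f, z))
    return f

N = 4                                # truncate at order s**(N-1)
expo = sp.Integer(0)
for k in range(len(B)):
    for l in range(N + 2):
        if 2*k + l <= 2:             # the excluded terms: k + l/2 <= 1
            continue
        if 2*k + l - 2 >= N:
            continue
        expo -= B[k] * x**l * s**(2*k + l - 2) \
                / (sp.factorial(l) * sp.factorial(k)) * Li(2 - k - l)

psi = sp.series(sp.exp(expo), s, 0, N).removeO()
print(sp.simplify(psi.coeff(s, 1)))  # leading correction in hbar^(1/2)
```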
The series \(\Phi^{\Xi}(\hbar)\) itself is defined by formal Gaussian integration, reviewed in Section 3.1 below. For a vector of variables \(x=(x_{1},\ldots,x_{N})\) and an invertible symmetric matrix \(\Lambda\), the formal Gaussian integral of an integrable function \(f\) is given by

\[\langle f(x)\rangle_{x,\Lambda}\;:=\;\exp\Big{(}\frac{1}{2}\,\partial_{x}^{t}\,\Lambda^{-1}\,\partial_{x}\Big{)}f(x)\Big{|}_{x=0}\,, \tag{5}\]

the multivariable extension of the one-variable bracket (20) below. Applying this bracket to an integrable function \(f^{\Xi}_{\hbar}(x,z)\), a product of the series \(\psi_{\hbar}(x_{j},z_{j})\), one for each tetrahedron, times the exponential of an explicit quadratic form determined by the NZ-datum, with respect to a symmetric matrix \(\Lambda\) with entries in \(\mathbb{Q}(z)\) determined by \((A,B,z)\), defines

\[\Phi^{\Xi}(\hbar)\ =\ \langle f^{\Xi}_{\hbar}(x,z)\rangle_{x,\Lambda}\in\mathbb{Q}(z)\llbracket\hbar\rrbracket\,. \tag{7}\]

Using a solution \(z\) of the Neumann-Zagier equations that corresponds to the discrete faithful representation \(\rho^{\mathrm{geom}}:\pi_{1}(M)\to\mathrm{PSL}_{2}(\mathbb{C})\) of a cusped hyperbolic 3-manifold \(M\), one obtains a series \(\Phi^{\Xi}(\hbar)\) with coefficients in the invariant trace field of \(M\). Our main theorem is the following.

**Theorem 1.1**.: \(\Phi^{\Xi}(\hbar)\) _is a topological invariant of a cusped hyperbolic 3-manifold._

A corollary of the above theorem is that every coefficient of the above series is a topological invariant of a cusped hyperbolic 3-manifold. These topological invariants have a geometric origin, since they take values in the invariant trace field of the manifold, and hence they are algebraic, but not (in general) rational, numbers. This series seems to come from hyperbolic geometry (though a definition of these invariants in terms of the hyperbolic geometry of the 3-manifold is not known), and perhaps not from enumerative quantum field theory (such as the Gromov-Witten or any of the 4-dimensional known theories). The behavior of these series under finite cyclic coverings of cusped hyperbolic 3-manifolds is given in [19], and the formulas presented there (e.g., for the coefficient of \(\hbar^{2}\)) point to unknown connections with the spectral theory of hyperbolic 3-manifolds.

The proof of our theorem combines the ideas of the topological invariance of the Andersen-Kashaev state-integrals [2] with those of the Aarhus integral [4]. We briefly recall that the building block for the state-integral is the Faddeev quantum dilogarithm [15]: the state-integral is a finite-dimensional integral of a product of Faddeev quantum dilogarithms, one for each tetrahedron of an (ordered) ideal triangulation. Aside from elementary choices, two important parts in the proof of topological invariance of the state-integral are invariance under (a) the choice of a nondegenerate quad, and (b) 2-3 ordered Pachner moves, the latter connecting one ideal triangulation with another. In [2], (a) and (b) are dealt with via a Fourier transform and a pentagon identity for the Faddeev quantum dilogarithm. The power series \(\Phi^{\Xi}(\hbar)\) is given instead by a formal Gaussian integral (as opposed to an integral over Euclidean space), where the building block is the formal power series \(\psi_{\hbar}\) instead of the Faddeev quantum dilogarithm. The topological invariance of \(\Phi^{\Xi}(\hbar)\) under the choice of quad and under the 2-3 Pachner moves follows from a Fourier transform identity and a pentagon identity for \(\psi_{\hbar}\); see Theorems 3.4 and 3.6 below. Although these identities are, in a sense, limits of the corresponding identities for the Faddeev quantum dilogarithm (just as \(\psi_{\hbar}\) is a limit of the Faddeev quantum dilogarithm), making this precise would require additional analytic work; instead we give algebraic proofs of Theorems 3.4 and 3.6 using properties of formal Gaussian integration, together with holonomic properties of the involved formal power series.

Having discussed the similarities between the proof of Theorem 1.1 and the corresponding theorem for the state-integral of [2], we now point out some differences.
In [2], Andersen-Kashaev use ordered triangulations, and the state-integral is obtained by the push-forward of a distribution with variables at the faces and the tetrahedra of the ordered triangulation. Part of the distribution is a product of delta functions in linear forms of the face-variables. In our Theorem 1.1, and in the formal Gaussian integration, we carefully avoided the need to use delta functions, although such a reformulation of our results is possible, with additional effort.

We end this section by pointing out a wider context for the asymptotic series \(\Phi^{\Xi}(\hbar)\) and Theorem 1.1. It was clear from [11] that a NZ-datum \(\Xi=(A,B,\nu,z,f,f^{\prime\prime})\) depends on two square matrices \(A\) and \(B\) such that \(AB^{t}\) is symmetric, which may or may not come from topology, and that the series \(\Phi^{\Xi}(\hbar)\) is defined under the assumption that \(\det(B)\neq 0\) and \(\det(\Lambda)\neq 0\). Doing so, the proof of Theorem 1.1 shows that the series \(\Phi^{\Xi}(\hbar)\) is invariant under the moves that appear in [21, Sec.6]; see also Section 4.

## 2. Preliminaries

### The Faddeev quantum dilogarithm

In this subsection, we recall some basic properties of the Faddeev quantum dilogarithm, which are motivations for Theorems 3.4 and 3.6 below. At the same time, we will ignore additional properties of the Faddeev quantum dilogarithm that play no role in our paper, such as the fact that it is a meromorphic function with precise zeros, poles and residues.

The Faddeev quantum dilogarithm [15] \(\varphi:=\Phi_{\mathsf{b}}\) satisfies a key integral pentagon identity [16]

\[e^{2\pi ixy}\widetilde{\varphi}(x)\widetilde{\varphi}(y)=\int_{\mathds{R}}e^{\pi ix^{2}}\widetilde{\varphi}(x-z)\widetilde{\varphi}(z)\widetilde{\varphi}(y-z)\,\mathrm{d}z \tag{8}\]

where both sides are tempered distributions on \(\mathds{R}\) and \(\widetilde{\varphi}\) denotes the distributional (inverse) Fourier transformation

\[\widetilde{\varphi}(x):=\int_{\mathds{R}}e^{-2\pi ixy}\varphi(y)\,\mathrm{d}y\,. \tag{9}\]

It turns out that the inverse Fourier transform of \(\varphi^{\pm 1}\) is expressed in terms of \(\varphi\) as given in [2, Sec.13.2]

\[\begin{split}\int_{\mathds{R}}e^{-2\pi ixy}\varphi(y)\,\mathrm{d}y&=\zeta_{8}(q/\widetilde{q})^{\frac{1}{24}}e^{-\pi ix^{2}}\varphi(-x+c_{\mathsf{b}})\\ \int_{\mathds{R}}e^{-2\pi ixy}\varphi(y)^{-1}\,\mathrm{d}y&=\zeta_{8}^{-1}(\widetilde{q}/q)^{\frac{1}{24}}e^{\pi ix^{2}}\varphi(x-c_{\mathsf{b}})\end{split} \tag{10}\]

where \(q=e^{2\pi\mathsf{i}\mathsf{b}^{2}}\), \(\widetilde{q}=e^{-2\pi\mathsf{i}/\mathsf{b}^{2}}\), \(c_{\mathsf{b}}=\frac{i}{2}(\mathsf{b}+\mathsf{b}^{-1})\) and \(\zeta_{8}=e^{2\pi i/8}\). The Faddeev quantum dilogarithm satisfies the inversion formula

\[\varphi(x)\varphi(-x)=(\widetilde{q}/q)^{\frac{1}{24}}e^{\pi ix^{2}} \tag{11}\]

see for example [2, App.A]. In a certain domain, the Faddeev quantum dilogarithm is given as a ratio of two Pochhammer symbols

\[\Phi_{\mathsf{b}}(x)\;=\;\frac{(-q^{\frac{1}{2}}e^{2\pi\mathsf{i}\mathsf{b}x};q)_{\infty}}{(-\widetilde{q}^{\frac{1}{2}}e^{2\pi\mathsf{i}\mathsf{b}^{-1}x};\widetilde{q})_{\infty}}\,, \tag{12}\]

from which it follows that its asymptotic expansion as \(\mathsf{b}\to 0\) is given by replacing the denominator by \(1\) and the numerator by the asymptotic expansion of the Pochhammer symbol.

### Neumann-Zagier data

In this section, we discuss in detail Neumann-Zagier data following [11].
We start with a \(3\)-manifold \(M\) with a torus boundary component (all manifolds and their triangulations will be oriented throughout the paper) equipped with a concrete oriented ideal triangulation, that is, a triangulation such that each tetrahedron \(\Delta\) of \(\mathcal{T}\) comes with a bijection of its vertices with those of the standard \(3\)-simplex. (All triangulations that are used in the computer programs SnapPy [8] and Regina [7] are concrete.) Every concrete tetrahedron \(\Delta\) has shape parameters \((z,z^{\prime},z^{\prime\prime})\) assigned to pairs of opposite edges as in Figure 1, where the complex numbers \(z^{\prime}=1/(1-z)\) and \(z^{\prime\prime}=1-1/z\) satisfy the equations

\[z^{\prime\prime}+z^{-1}=1,\qquad zz^{\prime}z^{\prime\prime}=-1\,. \tag{13}\]

Figure 1. The shapes of an ideal tetrahedron.

If \(\mathcal{T}\) is a triangulation as above, an Euler characteristic argument shows that the number of tetrahedra equals the number of edges. Fix an ordering of the tetrahedra \(\Delta_{j}\) for \(j=1,\ldots,N\) and of the edges \(e_{1},\ldots,e_{N}\) of \(\mathcal{T}\), and assign a shape \(z_{j}\) to the tetrahedron \(\Delta_{j}\) for \(j=1,\ldots,N\). To describe the complete hyperbolic structure of \(M\) (when it exists) and its deformations, Thurston [34] introduced the gluing equations for the variables \(z=(z_{1},\ldots,z_{N})\) around each edge \(e_{i}\), for \(i=1,\ldots,N\). In logarithmic form, these equations have the form

\[\sum_{j=1}^{N}\big{(}G_{i,j}\log z_{j}+G^{\prime}_{i,j}\log z^{\prime}_{j}+G^{\prime\prime}_{i,j}\log z^{\prime\prime}_{j}\big{)}\;=\;2\pi i,\qquad(i=1,\ldots,N) \tag{14}\]

where \(G_{i,j}\) (and likewise \(G^{\prime}_{i,j}\) and \(G^{\prime\prime}_{i,j}\)) denotes the number of times that an edge of \(\Delta_{j}\) labelled \(z_{j}\) winds around the edge \(e_{i}\). Every peripheral curve \(\gamma\) in the boundary also gives rise to a gluing equation of the same form as (14), except the right hand side is \(0\) instead of \(2\pi i\). Choosing a symplectic basis for \(H_{1}(\partial M,\mathbb{Z})\), we can enhance the equations (14) by adding the peripheral equations

\[\sum_{j=1}^{N}\big{(}G_{N+c,j}\log z_{j}+G^{\prime}_{N+c,j}\log z^{\prime}_{j}+G^{\prime\prime}_{N+c,j}\log z^{\prime\prime}_{j}\big{)}\;=\;0,\qquad(c=1,2)\,. \tag{15}\]

It turns out that the \((N+2)\times N\) matrices \(G\), \(G^{\prime}\) and \(G^{\prime\prime}\) have both symmetry and redundancy. Any one of the edge equations is implied by the others, and we make a choice to remove one of them and replace it by one peripheral equation for a fixed peripheral curve, resulting in \(N\times N\) matrices \(\mathbf{G}\), \(\mathbf{G}^{\prime}\) and \(\mathbf{G}^{\prime\prime}\). Using the second Equation (13) in logarithmic form \(\log z_{j}+\log z_{j}^{\prime}+\log z_{j}^{\prime\prime}=\pi i\), we can eliminate one of the three shapes of each tetrahedron (this is a choice of quad). For example, eliminating the shape \(z_{j}^{\prime}\) for \(j=1,\ldots,N\) then results in a system of equations

\[\sum_{j=1}^{N}\big{(}A_{i,j}\log z_{j}+B_{i,j}\log z_{j}^{\prime\prime}\big{)}\;=\;\pi i\nu_{i}\qquad(i=1,\ldots,N) \tag{16}\]

where

\[A\;=\;\mathbf{G}-\mathbf{G}^{\prime},\qquad B\;=\;\mathbf{G}^{\prime\prime}-\mathbf{G}^{\prime} \tag{17}\]

are the Neumann-Zagier matrices [29] and \(\nu=(2,\ldots,2,0)^{t}-\mathbf{G}^{\prime}(1,\ldots,1)^{t}\in\mathbb{Z}^{N}\).
The Neumann-Zagier matrices \((A\,|\,B)\) have an important symplectic property: they are the upper part of a symplectic matrix over \(\mathbb{Z}[1/2]\) (and even a symplectic matrix over \(\mathbb{Z}\) if one divides the peripheral gluing equation by \(2\) while keeping the integrality of its coefficients) [29]. It follows that \(AB^{t}\) is symmetric, that \((A\,|\,B)\) has full rank, and that one can always choose a quad such that \(B\) is invertible; for this, see [11, Lem.A.3].

A further ingredient of a Neumann-Zagier datum is a flattening, that is, a triple \((f,f^{\prime},f^{\prime\prime})\) of vectors in \(\mathbb{Z}^{N}\) that satisfy the conditions

\[f+f^{\prime}+f^{\prime\prime}=(1,\ldots,1)^{t},\qquad\mathbf{G}f+\mathbf{G}^{\prime}f^{\prime}+\mathbf{G}^{\prime\prime}f^{\prime\prime}=(2,\ldots,2,0)^{t}\,. \tag{18}\]

Using our choice of quad, we can eliminate \(f^{\prime}\) and thus obtain a pair \((f,f^{\prime\prime})\) of vectors in \(\mathbb{Z}^{N}\) that satisfy the condition

\[Af+Bf^{\prime\prime}=\nu\,. \tag{19}\]

This defines all the terms that appear in a NZ-datum \(\Xi=(A,B,\nu,z,f,f^{\prime\prime})\). The definition of the series \(\Phi^{\Xi}(\hbar)\) requires a nondegenerate NZ datum \(\Xi\), that is, one that satisfies the condition \(\det(B)\neq 0\), which, as we discussed above, can always be achieved, as well as the condition \(\det(\Lambda)\neq 0\). We discuss this choice next, and connect it to the geometric representation of a hyperbolic 3-manifold \(M\).

We end this section with a comment regarding Neumann-Zagier data of 3-manifolds with several (as opposed to one) torus boundary components. Their ideal triangulations have an equal number \(N\) of tetrahedra and edges, and the edge gluing equations have the same shape (14) as above. If \(r\) denotes the number of boundary components, then there are \(2r\) peripheral equations (15) and, after a choice of one peripheral curve per boundary component, this leads to \((N+r)\times N\) matrices \(G\), \(G^{\prime}\) and \(G^{\prime\prime}\). The edge gluing equations have redundancy, and although it is not true that we can remove any \(r\) of them, it is shown in [18, Sec.4.6] that one can remove \(r\) of them and replace them by \(r\) peripheral equations so as to obtain \(N\times N\) matrices \(\mathbf{G}\), \(\mathbf{G}^{\prime}\) and \(\mathbf{G}^{\prime\prime}\) such that the corresponding matrices \((A\,|\,B)\) defined in (17) have the same symplectic properties as in the case of \(r=1\). Moreover, any two choices of removal of the \(r\) edge equations are related to each other by an invertible matrix in \(\operatorname{GL}_{r}(\mathbb{Z})\). Finally, flattenings satisfy Equations (18) where the right hand side of the second equation is the vector \((2,\ldots,2,0,\ldots,0)^{t}\in\mathbb{Z}^{N+r}\) with the first \(N\) coordinates equal to \(2\) and the remaining \(r\) coordinates equal to zero.

For simplicity in the presentation (related to the choice of peripheral curves and the flattenings), we will give the proof of Theorem 1.1 assuming that the 3-manifold \(M\) has one cusp. The proof applies verbatim to the general case of an arbitrary number of cusps.
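The combinatorial input of a NZ-datum can be inspected with SnapPy [8]. A sketch (assuming the package is installed); the conversion of SnapPy's conventions into the matrices \((A\,|\,B)\) and the vector \(\nu\) of (16)-(17) requires the additional bookkeeping described above (choice of quad, removed edge equation, and peripheral curve), which is not shown here.

```python
# Sketch (assumes SnapPy is installed): gluing data and shapes for the
# figure-eight knot complement. Each returned equation is a triple (a, b, c)
# encoding prod_j z_j^{a_j} (1 - z_j)^{b_j} = c with c = +-1.
import snappy

M = snappy.Manifold('4_1')
print(M.num_tetrahedra())                  # N = 2
for a, b, sign in M.gluing_equations(form='rect'):
    print(a, b, sign)                      # edge rows, then peripheral rows
print(M.tetrahedra_shapes('rect'))         # geometric solution z of (14)-(15)
```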
### Geometric aspects

In this section, we discuss some geometric aspects of Theorem 1.1 related to the choice of the shapes \(z\) in a Neumann-Zagier datum. Let us fix an ideal triangulation \(\mathcal{T}\). A solution \(z\in(\mathbb{C}\setminus\{0,1\})^{N}\) to the Neumann-Zagier equations gives rise, via a developing map, to a representation \(\rho_{z}:\pi_{1}(M)\to\mathrm{PSL}_{2}(\mathbb{C})\). For a detailed discussion, see the appendix of [6]. However, not every representation \(\rho:\pi_{1}(M)\to\mathrm{PSL}_{2}(\mathbb{C})\) is “seen” by \(\mathcal{T}\), that is, is in the image of the map \(z\to\rho_{z}\). What's more, if \(\rho\) is in the image of the above map and we do a 2-3 Pachner move on the triangulation, it may no longer be in the image of the map corresponding to the new triangulation. The reason is that the shapes of the two triangulations are related by a rational map, which may send a complex number different from 0 and 1 to 0, 1 or \(\infty\). To phrase the problem differently, every two ideal triangulations (each with at least two tetrahedra, as we will always assume) of a 3-manifold with non-empty boundary are related by a sequence of 2-3 Pachner moves [28, 32]. However, it is not known that the set of ideal triangulations that see the discrete faithful representation \(\rho^{\mathrm{geom}}:\pi_{1}(M)\to\mathrm{PSL}_{2}(\mathbb{C})\) is connected under 2-3 Pachner moves, nor is it known whether the set of nondegenerate NZ data is connected under 2-3 Pachner moves. A solution to these issues was found in [11], and this was used to prove the topological invariance of the 1-loop function, and was also used in [18] to prove the topological invariance of the 3D-index. Let us recall the geometric details here.

Every cusped hyperbolic 3-manifold \(M\) (complete, finite volume) has a canonical ideal cell decomposition whose cells are 3-dimensional convex ideal polytopes given by Epstein-Penner [13]. It is easy to see that every convex ideal polyhedron can be triangulated into ideal tetrahedra by connecting an ideal vertex to all other ideal vertices (thus reducing the problem to the ideal cone of an ideal polygon), and then further triangulating the ideal polygon into ideal triangles. Doing so, the triangulations of the common faces of the 3-dimensional convex ideal polytopes may not match, in which case one can pass from one triangulation of a polygonal face to another by adding flat tetrahedra. The question is whether every two such triangulations are related by a sequence of 2-3 moves. This is a combinatorial problem of convex geometry, which we summarize below. For a detailed discussion, the reader may consult the book [9] and references therein.

Fix a convex polytope \(P\) in \(\mathbb{R}^{d}\). One can consider the set of triangulations of \(P\). When \(d=2\), \(P\) is a polygon and it is known that every two triangulations are related by a sequence of flips. For general \(d\), flips are replaced by _geometric bistellar moves_. When \(d\geq 5\), it is known that the graph of triangulations (with edges given by geometric bistellar flips) is not connected, and has isolated vertices. For \(d=3\), it is not known whether the graph is connected. The situation is much better when one considers _regular triangulations_ of \(P\). In that case, the corresponding graph of regular triangulations is connected and, in fact, it is the edge graph (\(1\)-skeleton) of the _secondary polytope_ of \(P\). When \(d=3\) and \(P\) is convex and in general position, then the only geometric bistellar move is the 2-3 move where the added edge that appears in the move is an edge that connects two vertices of \(P\).
When \(d=3\) and \(P\) is not in general position, the same conclusion holds as long as one allows for tetrahedra that are flat, i.e., lie on a 2-dimensional plane. Returning to the Epstein-Penner ideal cell decomposition, let \(\mathcal{T}^{\mathrm{EP}}\) denote a regular ideal triangulation of the canonical ideal cell decomposition of \(M\). (For a detailed discussion of this set, see also [18, Sec.6].) The set \(\{\mathcal{T}^{\mathrm{EP}}\}\) of regular ideal triangulations is connected by 2-3 Pachner moves. Moreover, such triangulations see \(\rho^{\mathrm{geom}}\) since, by the geometric construction, the shapes are always nondegenerate, i.e., not equal to 0 or 1, and in fact always have nonnegative (though sometimes zero) imaginary part. Finally, we need to show that \(\det(\Lambda)\) is nonzero. This follows from the fact that \(\det(B)\det(\Lambda)=\det(-A+B\,\mathrm{diag}(1/(1-z_{j})))\) equals (up to multiplication by a monomial in \(z\) and \(z^{\prime\prime}\)) the 1-loop invariant of [11, Sec.1.3]. The nonvanishing of the latter follows from Thurston's hyperbolic Dehn surgery theorem [34], which implies that \(\rho^{\mathrm{geom}}\in X^{\mathrm{geom}}_{M}\cap P_{M}\) is an isolated smooth point of the geometric component \(X^{\mathrm{geom}}_{M}\) of the \(\mathrm{PSL}_{2}(\mathbb{C})\)-character variety of \(M\), intersected with the locus \(P_{M}\) of boundary-parabolic \(\mathrm{PSL}_{2}(\mathbb{C})\)-representations, i.e., representations \(\rho\) satisfying \(\mathrm{tr}(\rho(\gamma))^{2}=4\) for all peripheral elements \(\gamma\in\pi_{1}(M)\). Since the 1-loop invariant is the determinant of the Hessian of the defining NZ equations of \(\rho^{\mathrm{geom}}\), it follows that the 1-loop invariant is nonzero.

## 3. Formal Gaussian integration

### Basics on formal Gaussian integration

In this section, we review the basic properties of formal Gaussian integration, which is a combinatorial analogue of integration of analytic functions. This theory was introduced and studied in detail in [4], where it was used to define a universal perturbative finite type invariant of rational homology 3-spheres, and to identify it with the trivial connection contribution of Chern-Simons perturbation theory.

As a warm-up, the formal Gaussian integral of a monomial \(x^{n}\) in one variable with respect to the quadratic form \(\lambda\neq 0\) is defined by

\[\left\langle x^{n}\right\rangle_{x,\lambda}\;=\;\begin{cases}\lambda^{-n/2}(n-1)!!&n\ \mathrm{even}\\ 0&n\ \mathrm{odd}\,.\end{cases} \tag{20}\]

When \(\lambda>0\), the above bracket equals a normalized Gaussian integral

\[\left\langle x^{n}\right\rangle_{x,\lambda}\;=\;\frac{\int_{\mathbb{R}}\,e^{-\frac{1}{2}\lambda x^{2}}x^{n}\,dx}{\int_{\mathbb{R}}\,e^{-\frac{1}{2}\lambda x^{2}}\,dx}\,, \tag{21}\]

explaining the naming of formal Gaussian integration. The formal Gaussian integration can be extended by linearity to \(\left\langle f(x)\right\rangle_{x,\lambda}\) for any polynomial \(f(x)\), or even further to a power series \(f(x)=\sum_{n\geq 0}a_{n}x^{n}\) whose coefficients tend to zero in a local ring (such as the ring \(\mathbb{Q}[\![\hbar^{\frac{1}{2}}]\!]\)). The formal Gaussian integral (20) has a multivariable extension given in Equation (5), where \(x\) is a vector of variables and \(\Lambda\) is an invertible matrix over a field matching the size of \(x\).
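The one-variable bracket (20) can be checked mechanically: applying the operator form of (5) in one variable, \(\langle f\rangle_{x,\lambda}=e^{\frac{1}{2\lambda}\partial_{x}^{2}}f|_{x=0}\), to monomials reproduces \(\lambda^{-n/2}(n-1)!!\). A small sympy sketch (my own illustration):

```python
# Sketch: the operator form of the formal Gaussian bracket in one variable.
import sympy as sp

x, lam = sp.symbols('x lam')

def bracket(f):
    f = sp.expand(f)
    deg = sp.degree(f, x) if f.has(x) else 0
    out = sp.Integer(0)
    for m in range(deg // 2 + 1):
        out += sp.diff(f, x, 2*m).subs(x, 0) / (sp.factorial(m) * (2*lam)**m)
    return sp.simplify(out)

for n in range(7):
    print(n, bracket(x**n))   # lam**(-n/2) * (n-1)!! for even n, else 0
```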
Throughout this paper, the entries of \(\Lambda\) are elements of the field \(\mathbb{Q}(z)\) where \(z\) is a vector of variables, the integrable functions \(f_{\hbar}(x,z)\) to which we apply formal Gaussian integration with respect to \(x\) will be elements of \(\mathbb{Q}(z)[x][\![\hbar^{\frac{1}{2}}]\!]\), and the result of formal Gaussian integration will be an element of the ring \(\mathbb{Q}(z)[\![\hbar]\!]\) or \(\mathbb{Q}(z)[x][\![\hbar^{\frac{1}{2}}]\!]\). When we specialize to the case of a vector of algebraic numbers \(z\), then \(\mathbb{Q}(z)\) defines the corresponding number field.

Just like integration of sufficiently smooth functions satisfies certain invariance properties (such as change of variables, iterated integration, and even integration by parts [4]), so does formal Gaussian integration. The corresponding identities of formal Gaussian integration are combinatorial statements about polynomials or rational functions and often follow from the corresponding statements of integration of functions. We now give some elementary properties of formal Gaussian integration that we will use in our paper.

**Lemma 3.1**.: (a) For all invertible matrices \(\Lambda\) and \(P\) over \(\mathbb{Q}(z)\), we have

\[\left\langle f_{\hbar}(Px,z)\right\rangle_{x,P^{t}\Lambda P}\ =\ \left\langle f_{\hbar}(x,z)\right\rangle_{x,\Lambda}\,. \tag{22}\]

(b) For all invertible matrices \(\Lambda\) and vectors \(G\) over \(\mathbb{Q}(z)\), we have

\[\left\langle\exp(-G^{t}\Lambda x\hbar^{\frac{1}{2}})f_{\hbar}(x+G\hbar^{\frac{1}{2}},z)\right\rangle_{x,\Lambda}\ =\ \exp\left(\frac{G^{t}\Lambda G}{2}\hbar\right)\left\langle f_{\hbar}(x,z)\right\rangle_{x,\Lambda} \tag{23}\]

(c) If \(\Lambda=\left(\begin{smallmatrix}A&B\\ B^{t}&C\end{smallmatrix}\right)\), with \(\Lambda\) and \(A\) invertible and \(x=(x^{\prime},x^{\prime\prime})\) the corresponding decomposition of the variables, then for any \(F\), we have

\[\left\langle\exp(Fx^{\prime}\hbar^{\frac{1}{2}})f_{\hbar}(x^{\prime\prime},z)\right\rangle_{x,\Lambda}\ =\ \exp\left(\frac{FA^{-1}F^{t}}{2}\hbar\right)\left\langle\exp(-FA^{-1}Bx^{\prime\prime}\hbar^{\frac{1}{2}})f_{\hbar}(x^{\prime\prime},z)\right\rangle_{x^{\prime\prime},C-B^{t}A^{-1}B}\,. \tag{24}\]

Proof.: Part (a) follows from the fact that integration is unchanged under a linear change of variables. Part (b) follows from the fact that integration is unchanged under an affine change of variables. Part (c) follows from Fubini's theorem [4, Prop.2.13], combined with Equation (23).

The next lemma, which will be important in the application of \(q\)-holonomic methods in Section 3.2 and in the proofs of Theorems 3.4 and 3.6 below, concerns the behavior of formal Gaussian integration when \(z=(z_{1},\ldots,z_{r})\) is shifted to \(e^{\varepsilon\hbar}z:=(e^{\varepsilon_{1}\hbar}z_{1},\ldots,e^{\varepsilon_{r}\hbar}z_{r})\) for integers \(\varepsilon_{j}\).

**Lemma 3.2**.: For all invertible matrices \(\Lambda(z)\) over \(\mathbb{Q}(z)\) and all integrable functions \(f_{\hbar}\), we have

\[\left\langle f_{\hbar}(x,z)\right\rangle_{x,\Lambda(z)}\!|_{z\mapsto e^{\varepsilon\hbar}z}\ =\ \sqrt{\frac{\det\Lambda(e^{\varepsilon\hbar}z)}{\det\Lambda(z)}}\Big{\langle}\exp\bigg{(}\frac{x^{t}\big{(}\Lambda(z)-\Lambda(e^{\varepsilon\hbar}z)\big{)}x}{2}\bigg{)}f_{\hbar}(x,e^{\varepsilon\hbar}z)\Big{\rangle}_{x,\Lambda(z)}\,. \tag{25}\]

Proof.: The lemma follows from recentering the Gaussian after multiplying \(z\) by \(e^{\varepsilon\hbar}\).
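The identities of Lemma 3.1 are finite statements in each power of \(\hbar^{\frac{1}{2}}\) and can be verified symbolically. The following sketch (my own check, assuming sympy) verifies part (b) in one variable with the test function \(f(x)=x^{2}\), truncating everything at a fixed order:

```python
# Sketch: low-order check of Eq. (23) in one variable; s stands for hbar^(1/2).
import sympy as sp

x, lam, G, s = sp.symbols('x lam G s')
N = 6                                    # compare through order s**(N-1)

def bracket(f):
    f = sp.expand(f)
    deg = sp.degree(f, x) if f.has(x) else 0
    out = sp.Integer(0)
    for m in range(deg // 2 + 1):
        out += sp.diff(f, x, 2*m).subs(x, 0) / (sp.factorial(m) * (2*lam)**m)
    return out

f = lambda u: u**2
lhs_int = sp.expand(sp.series(sp.exp(-G*lam*x*s), s, 0, N).removeO() * f(x + G*s))
lhs = sp.series(bracket(lhs_int), s, 0, N).removeO()
rhs = sp.series(sp.exp(G**2*lam*s**2/2) * bracket(f(x)), s, 0, N).removeO()
print(sp.simplify(lhs - rhs))            # expect 0 up to the truncation order
```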
We will use these ideas, adapted to our needs, to prove fundamental identities between certain Gaussian integrals involving the building block \(\exp(\psi_{\hbar})\) of our series \(\Phi^{\Xi}(\hbar)\). Since \(\psi_{\hbar}\) is related to the infinite Pochhammer symbol given in Equation (2), its functional equations will be of fundamental importance. From its definition, it is clear that the infinite Pochhammer symbol satisfies a simple first order linear \(q\)-difference equation \[(x;q)_{\infty}\;=\;(1-x)(qx;q)_{\infty}\,. \tag{26}\] To convert this into an identity of formal \(\hbar\)-series where \(q=e^{\hbar}\), we use the fact that there is an action of the quantum plane (also known as \(q\)-Weyl [14, Ex. 1.7] algebra) on the space \(\mathbb{Q}(z)[x][\![\hbar^{\frac{1}{2}}]\!]\) of integrable functions by an action on the \(z\)-variable given by \[(Lf_{\hbar})(x,z)\;=\;f_{\hbar}(x,e^{\hbar}z),\qquad(Mf_{\hbar})(x,z)\;=\;e^{ \hbar}f_{\hbar}(x,e^{\hbar}z) \tag{27}\] where \(LM=qML\). The next lemma asserts that the completion \(\widehat{\psi}_{\hbar}(x,z)\) \[\begin{split}\widehat{\psi}_{\hbar}(x,z)\;:&=\; \exp\Big{(}-\frac{\operatorname{Li}_{2}(z)}{\hbar}-\frac{\operatorname{Li}_{1 }(z)x}{\hbar^{\frac{1}{2}}}+\frac{1}{2}\operatorname{Li}_{1}(z)-\frac{ \operatorname{Li}_{0}(z)x^{2}}{2}\Big{)}\psi_{\hbar}(x,z)\\ &=\;\exp\Big{(}-\sum_{k,\ell\in\mathbb{Z}_{\geq 0}}\frac{B_{k}\,x^{ \ell}\,\hbar^{k+\frac{\ell}{2}-1}}{\ell!\,k!}\operatorname{Li}_{2-k-\ell}(z) \Big{)}\\ &\in\exp\Big{(}-\frac{\operatorname{Li}_{2}(z)}{\hbar}-\frac{ \operatorname{Li}_{1}(z)x}{\hbar^{\frac{1}{2}}}+\frac{1}{2}\operatorname{Li}_ {1}(z)-\frac{\operatorname{Li}_{0}(z)x^{2}}{2}\Big{)}(1+\hbar^{\frac{1}{2}} \mathbb{Q}(z)[x][\![\hbar^{\frac{1}{2}}]\!])\end{split} \tag{28}\] of \(\psi_{\hbar}(x,z)\), as well as \(\psi_{\hbar}(x,z)\) itself, satisfy a corresponding first order linear \(q\)-difference equation, albeit with complicated coefficients. **Lemma 3.3**.: (a) We have: \[\widehat{\psi}_{\hbar}(x,e^{\hbar}z)\;=\;(1-ze^{x\hbar^{1/2}})\,\widehat{ \psi}_{\hbar}(x,z)\,. \tag{29}\] (b) We have: \[\begin{split}&\psi_{\hbar}(x,e^{\hbar}z)\;=\;(1-ze^{x\hbar^{1/2} })\,\psi_{\hbar}(x,z)\sqrt{\frac{1-e^{\hbar}z}{1-z}}\\ &\times\exp\Big{(}\frac{\operatorname{Li}_{2}(e^{\hbar}z)}{ \hbar}-\frac{\operatorname{Li}_{2}(z)}{\hbar}+\frac{\operatorname{Li}_{1}(e^ {\hbar}z)x}{\hbar^{\frac{1}{2}}}-\frac{\operatorname{Li}_{1}(z)x}{\hbar^{ \frac{1}{2}}}+\frac{1}{2}\operatorname{Li}_{0}(e^{\hbar}z)x^{2}-\frac{1}{2} \operatorname{Li}_{0}(z)x^{2}\Big{)}\,.\end{split} \tag{30}\] Note that the identity (29) takes place in a larger ring, which includes the symbols \(\exp(\frac{\operatorname{Li}_{2}(z)}{\hbar})\), \(\exp(\frac{\operatorname{Li}_{1}(z)x}{\hbar^{\frac{1}{2}}})\), \(\exp(\frac{\operatorname{Li}_{0}(z)x^{2}}{2})\) and \(\exp(\frac{\operatorname{Li}_{0}(z)x^{2}}{2})\) which one can adjoin in the differential field \(\mathbb{Q}(z)((\hbar))\) as is standard in differential Galois theory of linear differential equations [35]. The symbols \(\operatorname{Li}_{k}(z)\) for \(k=0,1,2\) can be interpreted as normalized solutions to the linear differential equations \(z\frac{d}{dz}\operatorname{Li}_{k}(z)=\operatorname{Li}_{k-1}(z)\) with \(\operatorname{Li}_{0}(z)=z/(1-z)\) and satisfy the usual properties \(\operatorname{Li}_{k}(e^{\hbar}z)=\sum_{r=0}^{\infty}\frac{1}{r!}\operatorname {Li}_{k-r}(z)\hbar^{r}\). On the other hand, the coefficients in identity (30) are elements of the ring \(\mathbb{Q}(z)[x][\![\hbar^{\frac{1}{2}}]\!]\). 
Proof.: Part (a) can be proved directly using the facts that for the Bernoulli polynomials \(B_{r}(x)\) we have \(B_{r}(1)=B_{r}+\delta_{r,1}\) and \(B_{r}(x)=\sum_{k=0}^{r}\binom{r}{k}B_{n-k}x^{k}\). Applying these identities we find that \[\exp\Big{(}-\sum_{k,\ell\in\mathbb{Z}_{\geq 0}}\frac{B_{k}x^{\ell} \hbar^{k+\frac{\ell}{2}-1}}{\ell!k!}\operatorname{Li}_{2-k-\ell}(ze^{\hbar}) \Big{)}\] \[=\ \exp\Big{(}-\sum_{r,\ell\in\mathbb{Z}_{\geq 0}}\sum_{k=0}^{r}B_{k} \binom{r}{k}\frac{x^{\ell}\hbar^{r+\frac{\ell}{2}-1}}{\ell!r!}\operatorname{Li }_{2-r-\ell}(z)\Big{)}\] \[=\ \exp\Big{(}-\sum_{r,\ell\in\mathbb{Z}_{\geq 0}}\frac{B_{r}(1)x^{ \ell}\hbar^{r+\frac{\ell}{2}-1}}{\ell!r!}\operatorname{Li}_{2-r-\ell}(z)\Big{)} \tag{31}\] \[=\ \exp\Big{(}-\operatorname{Li}_{1}(ze^{x\hbar^{1/2}})-\sum_{r, \ell\in\mathbb{Z}_{\geq 0}}\frac{B_{r}x^{\ell}\hbar^{r+\frac{\ell}{2}-1}}{ \ell!r!}\operatorname{Li}_{2-r-\ell}(z)\Big{)}\] \[=\ (1-ze^{x\hbar^{1/2}})\exp\Big{(}-\sum_{r,\ell\in\mathbb{Z}_{ \geq 0}}\frac{B_{r}x^{\ell}\hbar^{r+\frac{\ell}{2}-1}}{\ell!r!}\operatorname{ Li}_{2-r-\ell}(z)\Big{)}\,.\] Part (b) follows from (a), using Equation (28) and expanding in \(\hbar\) to find that \[\sqrt{\tfrac{1-e^{\hbar}z}{1-z}}\exp\big{(}\tfrac{\operatorname{Li}_{2}(e^{ \hbar}z)}{\hbar}-\tfrac{\operatorname{Li}_{2}(z)}{\hbar}+\tfrac{\operatorname {Li}_{1}(e^{\hbar}z)x}{\hbar^{\frac{1}{2}}}-\tfrac{\operatorname{Li}_{1}(z)x}{ \hbar^{\frac{1}{2}}}+\tfrac{1}{2}\operatorname{Li}_{0}(e^{\hbar}z)x^{2}- \tfrac{1}{2}\operatorname{Li}_{0}(z)x^{2}\big{)}\in\ \mathbb{Q}(z)[x][\hbar^{\frac{1}{2}}]\,. \tag{32}\] We end this section by discussing a useful relation between \(\psi_{\hbar}(x,z)\) and its specialization at \(x=0\). It is easy to see that the completion \(\widehat{\psi}_{\hbar}(x,z)\) of \(\psi_{\hbar}(x,z)\) is regular at \(x=0\) and satisfies \[\widehat{\psi}_{\hbar}(x,z)\ =\ \widehat{\psi}_{\hbar}(0,ze^{x\hbar^{1/2}})\,. \tag{33}\] This implies a corresponding statement \[\psi_{\hbar}(x,z)\ =\ \psi_{\hbar}\big{(}0,ze^{x\hbar^{1/2}}\big{)}C_{\hbar}(x,z), \tag{34}\] for \(\psi_{\hbar}\) where \[\begin{split} C_{\hbar}(x,z)&=\ \exp\Big{(}-\sum_{\ell \geq 3}\frac{\hbar^{\frac{\ell}{2}-1}}{\ell!}\operatorname{Li}_{2-\ell}(z)x^{ \ell}+\frac{1}{2}\sum_{\ell\geq 1}\frac{\hbar^{\frac{\ell}{2}}}{\ell!} \operatorname{Li}_{1-\ell}(z)x^{\ell}\Big{)}\\ &=\ \exp\Big{(}-\frac{\operatorname{Li}_{2}(ze^{x\hbar^{\frac{1}{2}}} )}{\hbar}+\frac{\operatorname{Li}_{2}(z)}{\hbar}-\frac{\log(1-z)}{\hbar^{ \frac{1}{2}}}x+\frac{z}{1-z}\frac{x^{2}}{2}\\ &\qquad\qquad-\frac{1}{2}(\log(1-ze^{z\hbar^{\frac{1}{2}}})- \log(1-z))\Big{)}\,.\end{split} \tag{35}\] ### Fourier transform of \(\psi_{\hbar}\) In this section, we prove two functional identities for \(\psi_{\hbar}(\bullet,z)\), \(\psi_{\hbar}(\bullet,z^{\prime})\) and \(\psi_{\hbar}(\bullet,z^{\prime\prime})\) where \(z^{\prime}:=1/(1-z)\) and \(z^{\prime\prime}:=1-1/z\) are related to the \(\mathbb{Z}/3\mathbb{Z}\)-symmetry of the shapes of a tetrahedron, and reflect the \(\mathbb{Z}/3\mathbb{Z}\)-symmetry of the dilogarithm function. The next theorem is a formal Gaussian integration version of the Fourier transform of the Faddeev quantum dilogarithm given in Equation (9). The proof, however, does not follow from the distributional identity (9), and instead uses \(q\)-holonomic ideas and the basics of formal Gaussian integration. This theorem and its Corollary 3.5 are used in Section 5 to show that the power series \(\Phi^{\Xi}(\hbar)\) are independent of the choice of a nondegenerate quad. 
**Theorem 3.4**.: _We have:_ \[\psi_{h}(x,z)\ =\ e^{-\frac{\hbar}{24}}\Big{\langle}\exp\Big{(}\Big{(}y+\frac{ xz}{1-z}\Big{)}\frac{\hbar^{\frac{1}{2}}}{2}\Big{)}\,\psi_{h}\Big{(}y+\frac{ xz}{1-z},\frac{1}{1-z}\Big{)}\Big{\rangle}_{y,1-z^{-1}} \tag{36}\] _in the ring \(\mathbb{Q}(z)[x][\![\hbar^{\frac{1}{2}}]\!]\)._ In fact, both sides of of Equation (36) lie in the subring \(\mathbb{Q}[z^{\pm},(1-z)^{-1}][x][\![\hbar^{\frac{1}{2}}]\!]\). Proof.: It it clear from the definition that both sides of Equation (36) are elements of the ring \(\mathbb{Q}(x)[z][\![\hbar]\!]\). First, we will prove the specialization of Equation (36) when \(x=0\), i.e., we will show that \[\psi_{h}(0,z)\ =\ e^{-\frac{\hbar}{24}}\Big{\langle}\exp\Big{(}\frac{y}{2} \hbar^{\frac{1}{2}}\Big{)}\,\psi_{h}\Big{(}y,\frac{1}{1-z}\Big{)}\Big{\rangle} _{y,1-z^{-1}}\in\mathbb{Q}(z)[[\hbar]]\,. \tag{37}\] To prove this, we will combine \(q\)-holonomic ideas with formal Gaussian integration. From Equation (29), we see that the left hand side of Equation (37) multiplied by \(\frac{\exp(-\operatorname{Li}_{2}(z)\hbar^{-1})}{\sqrt{1-z}}\) satisfies a simple first order \(q\)-difference equation. We want to show the same for the right hand side. To do this, consider the function \[I_{m,h}(w,z)\ =\ w^{m}e^{-\frac{\hbar}{24}+\frac{\pi^{2}}{6\hbar}}\exp\Big{(} \log(w)\Big{(}-\frac{\log(w)}{2\hbar}+\frac{\pi i}{\hbar}-\frac{\log(z)}{ \hbar}+\frac{1}{2}\Big{)}\Big{)}\,\widehat{\psi}_{h}(0,w) \tag{38}\] which is an element of a larger ring discussed in relation to \(\widehat{\psi}_{h}\) of Equation (28). Using Equation (34) and the symmetry \[\operatorname{Li}_{2}(z)\ =\ \operatorname{Li}_{2}\Big{(}\frac{1}{1-z}\Big{)}- \frac{1}{2}\log(1-z)^{2}+\pi i\log(1-z)-\log(z)\log(1-z)-\frac{\pi^{2}}{6} \tag{39}\] of the dilogarithm [39], it is easy to see that the right hand side of Equation (37) is given by \[\Big{\langle}\sqrt{-z}\exp\Big{(}\frac{\operatorname{Li}_{2}(z)}{\hbar}+(1-z^ {-1})\frac{w^{2}}{2}\Big{)}I_{0,h}\Big{(}\frac{1}{1-z}e^{wh^{1/2}},z\Big{)} \Big{\rangle}_{w,1-z^{-1}}\,. \tag{40}\] The function \(I_{m,h}\) satisfies the linear \(q\)-difference equations \[\begin{split} I_{m,h}(e^{h}w,z)&\ =\ -e^{mh}w^{-1}z^{-1}(1-w)I_{m,h}(w,z)\,,\\ I_{m,h}(w,e^{h}z)&\ =\ w^{-1}I_{m,h}(w,z)\,,\\ I_{m,h}(w,z)&\ =\ wI_{m,h}(w,z)\,,\end{split} \tag{41}\] which imply that \[(1-e^{-mh}z)I_{m,h}\Big{(}\frac{1}{1-z}e^{wh^{1/2}+h},z\Big{)}\ =\ I_{m}\Big{(}\frac{1}{1-z}e^{wh^{1/2}},e^{h}z\Big{)}. \tag{42}\] If we multiply both sides of (42) with the factors from Equation (40) and take the bracket of both sides and apply Lemma 3.1 and Lemma 3.2 to change coordinates and the Gaussian, we find that when \(m=0\) both sides of Equation (37) satisfy the same \(q\)-difference equation (29). Moreover, it is easy to see that both sides of Equation (40) are power series in \(\hbar\) with coefficients rational functions of \(z\) of nonpositive degree, thus we can work with the ring instead. In this case, the Newton polygon of this first order linear \(q\)-difference equation implies that it has a unique solution in determined by its value at \(z=\infty\). Therefore, the identity in Equation (37) follows from its specialization at \(z=\infty\). Since (43) this completes the proof of Equation (37). 
Going back to the general case of \(z\), Equation (36) follows from Equation (34) together with (37) using Lemma 3.2 to shift the Gaussian and Lemma 3.1 to change integration variable via the transformation \[w\mapsto w-\frac{1}{\hbar^{\frac{1}{2}}}\log\left(\frac{1-z}{1-ze^{\hbar^{1/2} }}\right)-\frac{x}{1-z^{-1}}\,. \tag{44}\] The detailed calculation is given in Appendix A. Theorem 3.4 implies the following identity relating \(\psi_{\hbar}(\bullet,z)\) and \(\psi_{\hbar}(\bullet,z^{\prime\prime})\). **Corollary 3.5**.: We have: \[\psi_{\hbar}(x,z)\ =\ e^{\frac{\hbar}{24}}\Big{\langle}\exp\Big{(}-\frac{x}{2} \hbar^{\frac{1}{2}}\Big{)}\,\psi_{\hbar}\Big{(}y-\frac{x}{1-z},1-z^{-1}\Big{)} \Big{\rangle}_{y,z-1} \tag{45}\] in the ring. Proof.: Apply Equation (36) to the \(\psi_{\hbar}\) that appears on the right hand side of the same Equation (36). Then apply a change of variables and Fubini's theorem of Lemma 3.1 to calculate the bracket for the variable that doesn't appear in the argument of the remaining \(\psi_{\hbar}\). ### Pentagon identity for \(\psi_{\hbar}\) In this section, we prove a pentagon identity for the functions \(\psi_{\hbar}(\bullet,z)\) where \(z\) takes the five values \[[z_{1}]+[z_{2}]\mapsto[z_{1}z_{0}^{-1}]+[z_{0}]+[z_{2}z_{0}^{-1}],\qquad z_{0} \ =\ z_{1}+z_{2}-z_{1}z_{2} \tag{46}\] what appear in the 5-term relation for the dilogarithm. The next theorem is a formal Gaussian integration version of the pentagon identity (8) of the Faddeev quantum dilogarithm and will be used in Section 6 to prove that the series \(\Phi^{\Xi}(\hbar)\) is independent of 2-3 Pachner moves. Let us denote \[\delta\ =\ 2+\mathrm{Li}_{0}(z_{1}z_{0}^{-1})+\mathrm{Li}_{0}(z_{0})+\mathrm{ Li}_{0}(z_{2}z_{0}^{-1})\ =\ \frac{(z_{1}+z_{2}-z_{1}z_{2})^{2}}{z_{1}z_{2}(1-z_{1})(1-z_{2})}\,. \tag{47}\] **Theorem 3.6**.: _We have:_ \[\begin{split}&\psi_{\hbar}(x,z_{1})\,\psi_{\hbar}(y,z_{2})\ =\ e^{-\frac{\hbar}{24}}\Big{\langle}\psi_{\hbar}\Big{(}-w-y+\frac{xz_{2}+yz_{1 }}{z_{0}},z_{1}z_{0}^{-1}\Big{)}\\ &\psi_{\hbar}\Big{(}w+x+y-\frac{xz_{2}+yz_{1}}{z_{0}},z_{0}\Big{)} \psi_{\hbar}\Big{(}-w-x+\frac{xz_{2}+yz_{1}}{z_{0}},z_{2}z_{0}^{-1}\Big{)} \Big{\rangle}_{w,\delta}\end{split} \tag{48}\] _in the ring \(\mathbb{Q}(z_{1},z_{2})[x,y][\![h^{\frac{1}{2}}]\!]\)._ Proof.: It is clear from the definition that both sides of Equation (48) are elements of the ring \(\mathbb{Q}(z_{1},z_{2})[x,y][\![h^{\frac{1}{2}}]\!]\). To prove this, we follow the same approach we used to prove Theorem 3.4. First, we prove the identity (48) when \(u=v=0\), i.e., we will show that \[\psi_{h}(0,z_{1})\,\psi_{h}(0,z_{2})\ =\ e^{-\frac{\hbar}{24}}\big{\langle}\psi_{ h}(-w,z_{1}z_{0}^{-1})\,\psi_{h}(w,z_{0})\,\psi_{h}(-w,z_{2}z_{0}^{-1}) \big{\rangle}_{w,\delta} \tag{49}\] in the ring \(\mathbb{Q}(z_{1},z_{2})[\![h]\!]\) and to prove this we will again use \(q\)-holonomic methods. To do so, consider the function \[\begin{split}& I_{m,\hbar}(z_{1},z_{2},z)\ =\ e^{\frac{\pi^{2}}{6\hbar}-\frac{\hbar}{24}}z^{m}\,\widehat{\psi}_{h}(0,z_{1 }z^{-1})\,\widehat{\psi}_{h}(0,z)\,\widehat{\psi}_{h}(0,z_{2}z^{-1})\\ &\times\exp\Big{(}-\frac{\log(z_{1})\log(z_{2})}{\hbar}+\frac{ \log(z)\log(z_{1})}{\hbar}+\frac{\log(z)\log(z_{2})}{\hbar}-\frac{\log(z)^{2} }{\hbar}\Big{)}\,,\end{split} \tag{50}\] which is again an element of a larger ring discussed in relation to \(\widehat{\psi}_{h}\) of Equation (28). 
Using Equation (34) and the five term relation for the dilogarithm [39], it is easy to see that the right hand side of Equation (49) is given by \[\Big{\langle}\frac{\sqrt{z_{1}z_{2}}(1-z_{1})(1-z_{2})}{z_{1}+z_{2}-z_{1}z_{ 2}}\exp\Big{(}\frac{\text{Li}_{2}(z_{1})}{\hbar}+\frac{\text{Li}_{2}(z_{2})}{ \hbar}+\frac{\delta}{2}z^{2}\Big{)}I_{0,\hbar}(z_{1},z_{2},z_{0}e^{wh^{1/2}}) \Big{\rangle}_{w,\delta}\,. \tag{51}\] The function \(I_{m,\hbar}\) satisfies the system of linear \(q\)-difference equations \[\begin{split}& I_{m,\hbar}(e^{\hbar}z_{1},z_{2},z)\ =\ z\,z_{2}^{-1}(1-z_{1}z^{-1})\,I_{m,\hbar}(z_{1},z_{2},z)\,,\\ & I_{m,\hbar}(z_{1},e^{\hbar}z_{2},z)\ =\ z\,z_{1}^{-1}(1-z_{2}z^{-1})\,I_{m, \hbar}(z_{1},z_{2},z)\,,\\ & I_{m,\hbar}(z_{1},z_{2},e^{\hbar}z)\ =\ (1-z)\,z_{1}z_{2}z^{-2}e^{(m-1)\hbar}I_{m, \hbar}(z_{1},z_{2},z)\,,\\ & I_{m+1,\hbar}(z_{1},z_{2},z)\ =\ z\,I_{m,\hbar}(z_{1},z_{2},z)\,, \end{split} \tag{52}\] which can be derived from Equation (29). In fact, the function \(I_{m,\hbar}\) is holonomic of rank 1, a fact that we will not use explicitly [31]. Therefore, we see that \[\begin{split}& I_{m,\hbar}(e^{\hbar}z_{1},z_{2},z)\ =\ z_{2}^{-1}I_{m+1,\hbar}(z_{1},z_{2},z)-z_{1}z_{2}^{-1}I_{m,\hbar}(z_{1},z_{2 },z)\,,\\ & I_{m,\hbar}(e^{\hbar}z_{1},z_{2},(e^{\hbar}z_{1}+z_{2}-e^{\hbar }z_{1}z_{2})e^{wh^{1/2}})\\ &\ =\ z_{2}^{-1}I_{m+1,\hbar}(z_{1},z_{2},(e^{\hbar}z_{1}+z_{2}-e^{ \hbar}z_{1}z_{2})e^{wh^{1/2}})-z_{1}z_{2}^{-1}I_{m,\hbar}(z_{1},z_{2},(e^{ \hbar}z_{1}+z_{2}-e^{\hbar}z_{1}z_{2})e^{wh^{1/2}})\,,\\ & I_{m,\hbar}(z_{1},z_{2},(z_{1}+z_{2}-z_{1}z_{2})e^{zh^{1/2}+ \hbar})\\ &\ =\ z_{1}z_{2}e^{(m-1)\hbar}I_{m-2,\hbar}(z_{1},z_{2},(z_{1}+z_{2}-z _{1}z_{2})e^{zh^{1/2}})-z_{1}z_{2}e^{(m-1)\hbar}I_{m-1,\hbar}(z_{1},z_{2},(z_{ 1}+z_{2}-z_{1}z_{2})e^{zh^{1/2}})\,.\end{split} \tag{53}\] Let us define \(\widehat{J}_{m,\hbar}\) and \(J_{m,\hbar}\) by the equation \[\begin{split}\widehat{J}_{m,\hbar}(z_{1},z_{2})&\ =\ \frac{1}{\sqrt{(1-z_{1})(1-z_{2})}}\exp\Big{(}-\frac{\text{Li}_{2}(z_{1})}{\hbar}- \frac{\text{Li}_{2}(z_{2})}{\hbar}\Big{)}J_{m,\hbar}(z_{1},z_{2})\\ &\ =\ \Big{\langle}\frac{\sqrt{z_{1}z_{2}(1-z_{1})(1-z_{2})}}{z_{1}+z_{2}-z _{1}z_{2}}\exp\Big{(}\frac{\delta}{2}z^{2}\Big{)}I_{m,\hbar}(z_{1},z_{2},z_{0}e^ {wh^{1/2}})\Big{\rangle}_{w,\delta}\,.\end{split} \tag{54}\] If we multiply both sides of the Equations (53) with the factors from Equation (51), take the bracket of both sides, and apply Lemma 3.2 to change coordinates and the Gaussian, we find that \(\widehat{J}_{\hbar,m}(z_{1},z_{2})\) satisfies the \(q\)-difference equations \[\begin{split}&\widehat{J}_{m,\hbar}(e^{\hbar}z_{1},z_{2})\;=\;z_ {2}^{-1}\widehat{J}_{m+1,\hbar}(z_{1},z_{2})-z_{1}z_{2}^{-1}\widehat{J}_{m, \hbar}(z_{1},z_{2})\,,\\ &\widehat{J}_{m,\hbar}(z_{1},e^{\hbar}z_{2})\;=\;z_{1}^{-1} \widehat{J}_{m+1,\hbar}(z_{1},z_{2})-z_{1}^{-1}z_{2}\widehat{J}_{m,\hbar}(z_{1},z_{2})\,,\\ &\widehat{J}_{m,\hbar}(z_{1},z_{2})\;=\;z_{1}z_{2}e^{(m-1)\hbar} \widehat{J}_{m-2,\hbar}(z_{1},z_{2})-z_{1}z_{2}e^{(m-1)\hbar}\widehat{J}_{m-1,\hbar}(z_{1},z_{2})\,.\end{split} \tag{55}\] From these relations we can derive the equations \[\begin{split} 0&\;=\;(z_{1}-z_{1}e^{(m+1)\hbar}+z_{1}^{2} e^{(m+1)\hbar}-z_{1})\widehat{J}_{m,\hbar}(z_{1},z_{2})\\ &\qquad+(z_{1}z_{2}e^{(m+1)\hbar}-z_{2}+qz_{1})\widehat{J}_{m, \hbar}(e^{\hbar}z_{1},z_{2})+z_{2}\widehat{J}_{m,\hbar}(e^{2\hbar}z_{1},z_{2} )\,,\\ 0&\;=\;(z_{2}-z_{2}e^{(m+1)\hbar}+z_{2}^{2}e^{(m+1) \hbar}-z_{2})\widehat{J}_{m,\hbar}(z_{1},z_{2})\\ 
&\qquad+(z_{2}z_{1}e^{(m+1)\hbar}-z_{1}+qz_{2})\widehat{J}_{m, \hbar}(z_{1},e^{\hbar}z_{2})+z_{1}\widehat{J}_{m,\hbar}(z_{1},e^{2\hbar}z_{2} )\,.\end{split} \tag{56}\] These \(q\)-difference equations give rise to equations for \(J_{m,\hbar}\) given by \[\begin{split} 0&\;=\;(z_{1}-z_{1}e^{\hbar}+z_{1}^{2} e^{\hbar}-z_{1})J_{m,\hbar}(z_{1},z_{2})\\ &\qquad+(z_{1}z_{2}e^{\hbar}-z_{2}+qz_{1})\sqrt{\frac{1-z_{1}}{1 -e^{\hbar}z_{1}}}\exp\Big{(}\frac{\operatorname{Li}_{2}(z_{1})-\operatorname{ Li}_{2}(e^{\hbar}z_{1})}{\hbar}\Big{)}J_{m,\hbar}(e^{\hbar}z_{1},z_{2})\\ &\qquad+z_{2}\sqrt{\frac{1-z_{1}}{1-e^{2\hbar}z_{1}}}\exp\Big{(} \frac{\operatorname{Li}_{2}(z_{1})-\operatorname{Li}_{2}(e^{2\hbar}z_{1})}{ \hbar}\Big{)}J_{m,\hbar}(e^{2\hbar}z_{1},z_{2})\,,\\ 0&\;=\;(z_{2}-z_{2}e^{\hbar}+z_{2}^{2}e^{\hbar}-z_ {2})J_{m,\hbar}(z_{1},z_{2})\\ &\qquad+(z_{2}z_{1}e^{\hbar}-z_{1}+qz_{2})\sqrt{\frac{1-z_{2}}{1 -e^{\hbar}z_{2}}}\exp\Big{(}\frac{\operatorname{Li}_{2}(z_{2})-\operatorname{ Li}_{2}(e^{\hbar}z_{2})}{\hbar}\Big{)}J_{m,\hbar}(z_{1},e^{\hbar}z_{2})\\ &\qquad+z_{1}\sqrt{\frac{1-z_{2}}{1-e^{2\hbar}z_{2}}}\exp\Big{(} \frac{\operatorname{Li}_{2}(z_{2})-\operatorname{Li}_{2}(e^{2\hbar}z_{2})}{ \hbar}\Big{)}J_{m,\hbar}(z_{1},e^{2\hbar}z_{2})\,.\end{split} \tag{57}\] These equations are also satisfied by the left hand side of Equation (49), which follows from repeated application of Equation (30). It is easy to see that both sides of Equation (49) are formal power series in \(\hbar\) with coefficients rational functions of \((z_{1},z_{2})\) of nonpositive degree with respect to both \(z_{1}\) and \(z_{2}\), thus their coefficients embed in \(\mathbb{Q}\llbracket z_{1}^{-1},z_{2}^{-1}\rrbracket\), and it suffices to prove (49) in the ring \(\mathbb{Q}\llbracket z_{1}^{-1},z_{2}^{-1}\rrbracket\llbracket\hbar\rrbracket\). In this case, we consider the linear \(q\)-difference equations (57) over the ring \(\mathbb{Q}\llbracket z_{1}^{-1},z_{2}^{-1}\rrbracket\llbracket\hbar\rrbracket\). Looking at the corresponding Newton polytopes, the leading monomials that appear as coefficients of this \(q\)-difference equation (57) at \(\infty\) are \(z_{1}^{2}(1+\operatorname{O}(z_{1}^{-1})\operatorname{O}(z_{2}^{0}))\) and \(z_{1}^{2}z_{2}(1+\operatorname{O}(z_{1}^{-1})\operatorname{O}(z_{2}^{0}))\) for the first equation, and \(z_{2}^{2}(1+\operatorname{O}(z_{1}^{0})\operatorname{O}(z_{2}^{-1}))\) and \(z_{1}z_{2}^{2}(1+\operatorname{O}(z_{1}^{0})\operatorname{O}(z_{2}^{-1}))\) for the second equation, respectively. The structure of these monomials and their Newton polytopes shows that there is a unique solution to these equations in \(\mathbb{Q}\llbracket z_{1}^{-1},z_{2}^{-1}\rrbracket\llbracket\hbar\rrbracket\) determined by the value at \(z_{1}=z_{2}=\infty\). Therefore, the identity in Equation (49) reduces to its specialization at \(z_{1}=z_{2}=\infty\). Since \[\begin{split}\psi_{\hbar}(0,\infty)\,\psi_{\hbar}(0,\infty)\;=\;e^ {\frac{\hbar}{6}}\quad\text{and}\\ e^{-\frac{\hbar}{24}}\big{\langle}\psi_{\hbar}(-w,0)^{2}\,\psi_{ \hbar}(w,\infty)\big{\rangle}_{w,\delta}\;=\;e^{-\frac{\hbar}{24}}\Big{\langle} \exp\Big{(}\frac{\hbar}{12}-\frac{z\hbar^{\frac{1}{2}}}{2}\Big{)}\Big{\rangle} _{w,\delta}\;=\;e^{\frac{\hbar}{6}}\,,\end{split} \tag{58}\] this completes the proof of Equation (49). 
Going back to the general case of \(x,y\), Equation (48) follows from Equation (34) together with (49) using Lemma 3.2 to shift the Gaussian and with a change of integration variable \[w\mapsto w+\frac{1}{\hbar^{\frac{1}{2}}}\log\left(\frac{z_{1}+z_{2}-z_{1}z_{2}}{z _{1}e^{x_{1}\hbar^{1/2}}+z_{2}e^{x_{2}\hbar^{1/2}}-z_{1}z_{2}e^{(x_{1}+x_{2}) \hbar^{1/2}}}\right)+x+y-\frac{xz_{1}+yz_{2}}{z_{0}}\,. \tag{59}\] The detailed calculation is given in Appendix B. ### Inversion formula for \(\psi_{h}\) In this section, we give an inversion formula for \(\psi_{h}\) analogous to the inversion formula (11) for the Faddeev quantum dilogarithm. Although we will not use this formula explicitly in our paper, we include it for completeness. **Lemma 3.7**.: We have: \[\psi_{h}(x,z)\frac{1-z^{-1}e^{-x\hbar^{1/2}}}{1-z^{-1}}\psi_{h}(- x,1/z)\ =\ \exp\Big{(}-\frac{x\hbar^{\frac{1}{2}}}{2}+\frac{\hbar}{12}\Big{)}, \tag{60}\] \[\widehat{\psi}_{h}(0,z)\widehat{\psi}_{h}(0,1/z)\ =\ \frac{\sqrt{-z}}{1-z}\exp\Big{(}\frac{\pi^{2}}{6\hbar}+\frac{1}{2 \hbar}\log(-z)^{2}+\frac{\hbar}{12}\Big{)}\,. \tag{61}\] Proof.: These formulas all follow from the well-known inversion formulas for the polylogarithms (see for example [30]): \[\begin{split}\operatorname{Li}_{2}(z)+\operatorname{Li}_{2}(1/z )&=\ -\frac{\pi^{2}}{6}-\frac{1}{2}\log(-z)^{2}\,,\\ \operatorname{Li}_{1}(z)-\operatorname{Li}_{1}(1/z)& =\ -\log(-z)\,,\\ \operatorname{Li}_{0}(z)+\operatorname{Li}_{0}(1/z)& =\ -1\,,\\ \operatorname{Li}_{-n}(z)+(-1)^{n}\operatorname{Li}_{-n}(1/z)& =\ 0\qquad(n>0)\,.\end{split} \tag{62}\] Using these relations, we have \[\begin{split}\widehat{\psi}_{h}(0,z)\widehat{\psi}_{h}(0,1/z)& =\ \exp\Big{(}-\sum_{k\in\mathbb{Z}_{\geq 0}}\frac{B_{k}\,\hbar^{k-1}}{k!}( \operatorname{Li}_{2-k}(z)+\operatorname{Li}_{2-k}(1/z))\Big{)}\\ &=\ \frac{\sqrt{-z}}{1-z}\exp\Big{(}\frac{\pi^{2}}{6\hbar}+\frac{1}{2 \hbar}\log(-z)^{2}+\frac{\hbar}{12}\Big{)}\,.\end{split} \tag{63}\] Similarly, we find that \[\begin{split}&\psi_{h}(x,z)\frac{1-z^{-1}e^{-x\hbar^{1/2}}}{1-z ^{-1}}\psi_{h}(-x,1/z)\\ &=\ \exp\left(-\!\!\!\!\sum_{\begin{subarray}{c}k,\ell\in\mathbb{Z}_{ \geq 0}\\ k+\frac{\ell}{2}>1\end{subarray}}\frac{B_{k}\,x^{\ell}\,\hbar^{k+\frac{\ell}{2 }-1}}{\ell!\,k!}\big{(}\operatorname{Li}_{2-k-\ell}(z)+(-1)^{k+\ell} \operatorname{Li}_{2-k-\ell}(1/z)\big{)}\right)\\ &=\ \exp\Big{(}-\frac{x\hbar^{\frac{1}{2}}}{2}+\frac{\hbar}{12} \Big{)}\,.\end{split} \tag{64}\] Notice that this is exactly the factor that appeared in the remark after Equation (1), which gave the relation between the \(\psi_{h}\) used in the current paper and the ones used in [11, Eq. 1.9]. Indeed, \(\frac{1-z^{-1}}{1-z^{-1}e^{-zh^{1/2}}}\psi_{h}(-x,1/z)^{-1}\) is exactly the series used in [11] where the factor \(\frac{1-z^{-1}}{1-z^{-1}e^{-xh^{1/2}}}\) amounts to swapping the sign of \(B_{1}\) as done there. ## 4. Elementary invariance properties In this section, we review some basic choices needed to define the the Neumann-Zagier data for a triangulation with \(N\) tetrahedra, namely: 1. an ordering of the \(N\) tetrahedra, 2. an ordering of the \(N\) edges, 3. an edge equation to remove, 4. a path to represent the meridian curve, 5. a flattening. A change of these choices changes the corresponding Neumann-Zagier data in a simple way, which we now describe. Fix a triangulation and the choices needed to define Neumann-Zagier data \[\Xi\ =\ (A,B,\nu,z,f,f^{\prime\prime})\,. 
\tag{65}\] Regarding choice (a), suppose that \(\sigma\in S_{N}\) is a permutation (and also the associated matrix) of our labelling of the tetrahedra. Then the Neumann-Zagier matrices transform as follows \[\Xi\ =\ (A,B,\nu,z,f,f^{\prime\prime})\mapsto(A\sigma,B\sigma,\nu, \sigma^{-1}z,\sigma^{-1}f,\sigma^{-1}f^{\prime\prime})\ =\ \Xi\cdot\sigma\,. \tag{66}\] This implies that the integrand \(f^{\Xi}_{\hbar}(x,z)\) of Equation (4) and the quadratic form \(\Lambda\) of Equation (6) satisfy \[f_{\Xi\cdot\sigma,\hbar}(\sigma^{-1}x,\sigma^{-1}z)\ =\ f^{\Xi}_{\hbar}(x,z) \qquad\Lambda^{\Xi\cdot\sigma}\ =\ \sigma^{t}\Lambda^{\Xi}\sigma\,, \tag{67}\] which combined with the fact that integration is invariant under a linear change of variables (see part (a) of Lemma 3.1), implies that \(\Phi^{\Xi\cdot\sigma}(\hbar)=\Phi^{\Xi}(\hbar)\). Choices (b), (c) and (d) are a special case of the following transformation of \(P\in\mathrm{GL}_{N}(\mathbb{Z})\) acting on Neumann-Zagier data via \[\Xi\ =\ (A,B,\nu,z,f,f^{\prime\prime})\mapsto(PA,PB,P\nu,z,f,f^{ \prime\prime})\ =\ P\cdot\Xi\,. \tag{68}\] It follows that the integrand and the quadratic form satisfy \[f^{P\cdot\Xi}_{\hbar}(x,z)\ =\ f^{\Xi}_{\hbar}(x,z),\qquad \Lambda^{P\cdot\Xi}\ =\ \Lambda^{\Xi}\,, \tag{69}\] which implies again that \(\Phi^{\Xi\cdot\sigma}(\hbar)=\Phi^{\Xi}(\hbar)\). Finally, if \(\Xi\) and \(\widetilde{\Xi}\) differ by a choice of flattening, then it is easy to see that \[f^{\widetilde{\Xi}}_{\hbar}(x,z)\ =\ e^{c\hbar}f^{\Xi}_{\hbar}(x,z), \qquad\Lambda^{\widetilde{\Xi}}\ =\ \Lambda^{\Xi} \tag{70}\] for some \(c\in\frac{1}{8}\mathbb{Z}\), which implies that \(\Phi^{\widetilde{\Xi}}(\hbar)=e^{c\hbar}\,\Phi^{\Xi}(\hbar)\). ## 5. Invariance under the choice of quad The definition of the series \(\Phi^{\Xi}(\hbar)\) requires some choices, some of which were described and dealt with in the previous Section 4. What remains to complete the proof of Theorem 1.1 is the independence of \(\Phi^{\Xi}(\hbar)\) under the choice of a nondegenerate quad, and the independence under the 2-3 Pachner moves that connect two ideal triangulations. In this section, we will prove that \(\Phi^{\Xi}(\hbar)\) is independent of the choice of a nondegenerate quad. Recall that to each pair of opposite edges of an ideal tetrahedron there is an associated variable called a shape variable. When defining Neumann-Zagier data (see Section 2.2), we must choose an edge for each tetrahedron, which associates one of these shapes. There are three possible choices, which leads to the action on \(\mathbb{Z}/3\mathbb{Z}\) on the each column of the Neumann-Zagier data. All in all, for a triangulation with \(N\) tetrahedra, this leads to \(3^{N}\) choices on of Neumann-Zagier data. On the other hand, the definition of the series \(\Phi^{\Xi}(\hbar)\) requires a choice of a nondegenerate quad, i.e., one for which \(\det(B)\neq 0\) (such quads always exist [11, Lem.A.3]). In this section, we will show that any of the \(3^{N}\) choices of quad with \(\det(B)\neq 0\) lead to the same \(\Phi^{\Xi}(\hbar)\) series. **Theorem 5.1**.: _The series \(\Phi^{\Xi}(\hbar)\) is independent of the choice of a nondegenerate quad._ Proof.: Fix two non-degenerate NZ data \(\Xi=(A,B,\nu,z,f,f^{\prime\prime})\) and \(\widetilde{\Xi}=(\widetilde{A},\widetilde{B},\widetilde{\nu},\widetilde{z}, \widetilde{f},\widetilde{f}^{\prime\prime})\) related by a quad change of the same ideal triangulation. The nondegeneracy assumption implies that \(\det(B)\neq 0\) and \(\det(\widetilde{B})\neq 0\). 
After reordering the tetrahedra (which does not change the \(\Phi^{\Xi}(\hbar)\) series, as follows from Section 4), we can assume that \(\widetilde{\Xi}\) is obtained by applying a change in quad that fixes the first \(N_{0}\) shapes \(z^{(0)}\), replaces the next \(N_{1}\) shapes \(z^{(1)}\) by \(z^{\prime(1)}\) and replaces the next \(N_{2}\) shapes \(z^{(2)}\) by \(z^{\prime(2)}\). (Recall that \(z^{\prime}=1/(1-z)\) and \(z^{\prime\prime}=1-1/z\)). This partitions the shapes \(z=(z^{(0)},z^{(1)},z^{(2)})\) into three sets of size \(N_{0},N_{1},N_{2}\) and the matrices \(A\) and \(B\) into three block matrices \(A_{i}\) and \(B_{i}\) of size \(N\times N_{i}\) for \(i=0,1,2\) \[A\;=\;(A_{0}\,|\,A_{1}\,|\,A_{2})\,,\qquad B\quad=\;(B_{0}\,|\,B_{1}\,|\,B_{2})\,, \tag{71}\] and similarly for the flattening \(f=(f^{(0)},f^{(1)},f^{(2)})\) and \(f^{\prime\prime}=(f^{\prime\prime(0)},f^{\prime\prime(1)},f^{\prime\prime(2)})\). After the quad moves, the corresponding matrices and vectors are given by \[\begin{array}{lclcl}\tilde{A}&=&(A_{0}\,|-B_{1}\,|-A_{2}+B_{2})\,,&&\tilde{ B}&=&(B_{0}\,|\,A_{1}-B_{1}\,|-A_{2})\,,\\ \tilde{\nu}&=&\nu-B_{1}1-A_{2}1\,,&&\tilde{z}&=&(z^{(0)},z^{\prime(1)},z^{\prime \prime(2)})\,,\\ \tilde{f}&=&(f^{(0)},1-f^{(1)}-f^{\prime\prime(1)},f^{\prime\prime(2)})\,,&& \tilde{f}^{\prime\prime}&=&(f^{\prime\prime(0)},f^{(2)},1-f^{(2)}-f^{\prime \prime(2)})\,.\end{array} \tag{72}\] We also partition the vector of formal Gaussian integration variables \(x=(x^{(0)},x^{(1)},x^{(2)})\), as well as the symmetric matrix \(Q:=B^{-1}A\) \[Q\;=\;\begin{pmatrix}Q_{00}&Q_{01}&Q_{02}\\ Q_{01}^{t}&Q_{11}&Q_{12}\\ Q_{02}^{t}&Q_{12}^{t}&Q_{22}\end{pmatrix}\;. \tag{73}\] With the above notation, we have \[\Phi^{\Xi}(\hbar)\;=\;\langle I_{0}\rangle_{x,\Lambda_{0}} \tag{74}\] where \[I_{0}\ =\ \exp\Big{(}\frac{\hbar}{8}f^{t}B^{-1}Af-\frac{\hbar^{\frac{1}{2}}}{2}x^{ (0)}{}^{t}(B^{-1}\nu-1)^{(0)}-\frac{N_{1}\hbar}{24}-\frac{\hbar^{\frac{1}{2}}}{2 }x^{(1)}{}^{t}(B^{-1}\nu-1)^{(1)}\] and \[\Lambda_{0}\ =\ \text{diag}(z^{\prime})-Q\,.\] Applying the first quad move as in Theorem 3.4 to the \(\psi_{h}\) with arguments of \(z^{(1)}\) and the second quad move as in its Corollary 3.5 to the \(\psi_{h}\) with arguments \(z^{(2)}\), we obtain that \[\Phi^{\Xi}(\hbar)\ =\ \langle I_{1}\rangle_{(x,y,w),\Lambda_{1}} \tag{75}\] where \[I_{1}\ =\ \exp\Big{(}\frac{\hbar}{8}f^{t}B^{-1}Af-\frac{\hbar^{ \frac{1}{2}}}{2}x^{(0)}{}^{t}(B^{-1}\nu-1)^{(0)}-\frac{N_{1}\hbar}{24}-\frac{ \hbar^{\frac{1}{2}}}{2}x^{(1)}{}^{t}(B^{-1}\nu-1)^{(1)}\] \[\qquad\qquad\qquad+\sum_{j=1}^{N_{1}}\Big{(}y_{j}+\frac{x_{j}^{(1 )}z_{j}^{(1)}}{1-z_{j}^{(1)}}\Big{)}\frac{\hbar^{\frac{1}{2}}}{2}+\frac{N_{2} \hbar}{24}-\frac{\hbar^{\frac{1}{2}}}{2}x^{(2)}{}^{t}(B^{-1}\nu-1)^{(2)}-\sum _{j=1}^{N_{2}}x_{j}^{(2)}\frac{\hbar^{\frac{1}{2}}}{2}\Big{)}\] \[\times\prod_{j=1}^{N_{0}}\psi_{h}(x_{j}^{(0)},z_{j}^{(0)})\prod_{ j=1}^{N_{1}}\psi_{h}\Big{(}y_{j}+\frac{x_{j}^{(1)}z_{j}^{(1)}}{1-z_{j}^{(1)}}, \frac{1}{1-z_{j}^{(1)}}\Big{)}\prod_{j=1}^{N_{2}}\psi_{h}\Big{(}w_{j}-\frac{x_ {j}^{(2)}}{1-z_{j}^{(2)}},1-z_{j}^{(2)}{}^{-1}\Big{)},\] \(y\) and \(w\) are vectors of size \(N_{1}\) and \(N_{2}\), respectively, and \[\Lambda_{1}\ =\ \begin{pmatrix}\Lambda&0&0\\ 0&\text{diag}(z^{\prime\prime(1)})&0\\ 0&0&\text{diag}(z^{(2)})-I\end{pmatrix}\,.\] Making the change of variables \(y_{j}\mapsto y_{j}-\frac{x_{j}^{(1)}z_{j}^{(1)}}{1-z_{j}^{(1)}}\) and \(w_{j}\mapsto w_{j}+\frac{x_{j}^{(2)}}{1-z_{j}^{(2)}}\) and using Lemma 3.1 we obtain that 
\[\Phi^{\Xi}(\hbar)\ =\ \langle I_{2}\rangle_{(x,y,w),\Lambda_{2}} \tag{76}\] where \[I_{2}\ =\ \exp\Big{(}\frac{\hbar}{8}f^{t}B^{-1}Af-\frac{\hbar^{ \frac{1}{2}}}{2}x^{(0)}{}^{t}(B^{-1}\nu-1)^{(0)}-\frac{N_{1}\hbar}{24}-\frac{ \hbar^{\frac{1}{2}}}{2}x^{(1)}{}^{t}(B^{-1}\nu-1)^{(1)}\] \[\qquad\qquad+y^{t}1\frac{\hbar^{\frac{1}{2}}}{2}+\frac{N_{2} \hbar}{24}-\frac{\hbar^{\frac{1}{2}}}{2}x^{(2)}{}^{t}(B^{-1}\nu)^{(2)}\Big{)}\] \[\qquad\qquad\times\prod_{j=1}^{N_{0}}\psi_{h}(x_{j}^{(0)},z_{j}^{ (0)})\prod_{j=1}^{N_{1}}\psi_{h}\Big{(}y_{j},\frac{1}{1-z_{j}^{(1)}}\Big{)} \prod_{j=1}^{N_{2}}\psi_{h}\big{(}w_{j},1-z_{j}^{(2)}{}^{-1}\big{)}\] and \[\Lambda_{2}\ =\ \begin{pmatrix}\text{diag}(z^{\prime(0)})-Q_{00}&-Q_{01}&-Q_{02}& 0&0\\ -Q_{01}^{t}&I-Q_{11}&-Q_{12}&I&0\\ -Q_{02}^{t}&-Q_{12}^{t}&-Q_{22}&0&-I\\ \hline 0&I&0&\text{diag}(z^{\prime\prime(1)})&0\\ 0&0&-I&0&\text{diag}(z^{(2)})-I\end{pmatrix}\,.\] Note that \(x^{(1)}\) and \(x^{(2)}\) do not appear in the arguments of \(\psi_{\hbar}\). Moreover, since \(BQ=A\), we see that \[B\begin{pmatrix}I&Q_{01}&Q_{02}\\ 0&Q_{11}-I&Q_{12}\\ 0&Q_{12}^{t}&Q_{22}\end{pmatrix}\begin{pmatrix}I&0&0\\ 0&I&0\\ 0&0&-I\end{pmatrix}\;=\;(B_{0}\,|\,A_{1}-B_{1}\,|-A_{2})\;=\;\widetilde{B}\,, \tag{77}\] which implies that \[\det\begin{pmatrix}Q_{11}-I&Q_{12}\\ Q_{12}^{t}&Q_{22}\end{pmatrix}\neq 0\,,\quad\text{and}\quad\begin{pmatrix}Q_{11}- I&Q_{12}\\ Q_{12}^{t}&Q_{22}\end{pmatrix}^{-1}\;=\;\begin{pmatrix}0&I&0\\ 0&0&-I\end{pmatrix}\widetilde{B}^{-1}B\begin{pmatrix}0&0\\ I&0\\ 0&I\end{pmatrix}\,. \tag{78}\] Therefore, we can apply Fubini's Theorem (Lemma 3.1) with the integration variables \(x^{(1)},x^{(2)}\), and use Lemma 5.2 and the equality \(Qf+f^{\prime\prime}=B^{-1}\nu\) to obtain that \[\Phi^{\Xi}(\hbar)\;=\;e^{c\hbar}\langle I_{3}\rangle_{\widetilde{x},\Lambda_ {3}}\,. \tag{79}\] where \(\widetilde{x}=(x^{(0)},y,w)\), \(c\in\frac{1}{24}\mathbb{Z}\) \[I_{3}\;=\;\exp\Big{(}\frac{\hbar}{8}\widetilde{f}^{t}\widetilde{B}^{-1} \widetilde{A}\widetilde{f}-\frac{\hbar^{\frac{1}{2}}}{2}\widetilde{x}^{t}( \widetilde{B}^{-1}\widetilde{\nu}-1)\Big{)}\prod_{j=1}^{N}\psi_{\hbar}( \widetilde{x}_{j},\widetilde{z}_{j})\] and \[\Lambda_{3}\;=\;\text{diag}(\widetilde{z}^{\prime})-\widetilde{B}^{-1} \widetilde{A}\,.\] The right hand side of Equation (79) is exactly equal to \(e^{c\hbar}\Phi^{\widetilde{\Xi}}(\hbar)\), completing the proof of Theorem 5.1. 
**Lemma 5.2**.: With the notation as in the proof of Theorem 5.1, we have the following identities \[\widetilde{B}^{-1}\widetilde{A} =\;\begin{pmatrix}Q_{00}&0&0\\ 0&0&0\\ 0&0&-I\end{pmatrix}+\begin{pmatrix}Q_{01}&Q_{02}\\ -I&0\\ 0&I\end{pmatrix}\begin{pmatrix}Q_{11}-I&Q_{12}\\ Q_{12}^{t}&Q_{22}\end{pmatrix}^{-1}\begin{pmatrix}Q_{01}^{t}&-I&0\\ Q_{02}^{t}&0&I\end{pmatrix}, \tag{80}\] \[\widetilde{B}^{-1}\widetilde{\nu}-1 =\;\begin{pmatrix}I&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}(B^{-1}\nu-1)-\begin{pmatrix}0\\ 1\\ 0\end{pmatrix}\] \[\quad+\begin{pmatrix}-Q_{01}&-Q_{02}\\ I&0\\ 0&-I\end{pmatrix}\begin{pmatrix}Q_{11}-I&Q_{12}\\ Q_{12}^{t}&Q_{22}\end{pmatrix}^{-1}\left(\begin{pmatrix}0&I&0\\ 0&0&I\end{pmatrix}B^{-1}\nu-\begin{pmatrix}1\\ 0\end{pmatrix}\right) \tag{81}\] and for some \(d\in\mathbb{Z}\) \[\begin{split}&\widetilde{f}\widetilde{B}^{-1}\widetilde{\nu}\;= \;f^{t}B^{-1}\nu+d\\ &-\left((B^{-1}\nu)^{t}\begin{pmatrix}0&0\\ I&0\\ 0&I\end{pmatrix}-\begin{pmatrix}1&0\\ 1&0\end{pmatrix}\right)\begin{pmatrix}Q_{11}-I&Q_{12}\\ Q_{12}^{t}&Q_{22}\end{pmatrix}^{-1}\left(\begin{pmatrix}0&I&0\\ 0&0&I\end{pmatrix}B^{-1}\nu-\begin{pmatrix}1\\ 0\end{pmatrix}\right).\end{split} \tag{82}\] Proof.: We will show that boths sides of Equation (80) and Equation (81) are equal after multiplying by the invertible matrix \(\widetilde{B}\). Denote \[\begin{pmatrix}Q_{11}-I&Q_{12}\\ Q_{12}^{t}&Q_{22}\end{pmatrix}^{-1}\;=\;\begin{pmatrix}\Gamma_{11}&\Gamma_{12}\\ \Gamma_{12}^{t}&\Gamma_{22}\end{pmatrix}\,, \tag{83}\] and note that \[\begin{array}{l}\widetilde{B}\begin{pmatrix}-Q_{01}&-Q_{02}\\ I&0\\ 0&-I\end{pmatrix}\begin{pmatrix}Q_{11}-I&Q_{12}\\ Q_{12}^{t}&Q_{22}\end{pmatrix}^{-1}\;=\;\widetilde{B}\begin{pmatrix}-Q_{01} \Gamma_{11}-Q_{02}\Gamma_{12}^{t}&-Q_{01}\Gamma_{12}-Q_{02}\Gamma_{22}\\ \Gamma_{11}&\Gamma_{12}\\ -\Gamma_{12}^{t}&-\Gamma_{22}\end{pmatrix}\\ =\;(-B_{0}Q_{01}\Gamma_{11}-B_{0}Q_{02}\Gamma_{12}^{t}+(A_{1}-B_{1})\Gamma_{11 }+A_{2}\Gamma_{12}^{t}\;|\;-B_{0}Q_{01}\Gamma_{12}-B_{0}Q_{02}\Gamma_{22}+(A_{ 1}-B_{1})\Gamma_{12}+A_{2}\Gamma_{22})\\ =\;((B_{1}(Q_{11}-I)+B_{2}Q_{12}^{t})\Gamma_{11}+(B_{1}Q_{12}+B_{2}Q_{22}) \Gamma_{12}^{t}\;|\;(B_{1}(Q_{11}-I)+B_{2}Q_{12}^{t})\Gamma_{12}+(B_{1}Q_{12}+ B_{2}Q_{22})\Gamma_{22})\\ =\;(B_{1}\;|\;B_{2})\,.\end{array} \tag{84}\] Therefore, since \[\widetilde{B}\begin{pmatrix}Q_{00}&0&0\\ 0&0&0\\ 0&0&-I\end{pmatrix}\;=\;(B_{0}Q_{00}\;|\;0\;|-A_{2})\,, \tag{85}\] we see that multiplying the right hand side of Equations (80) on the left by \(\widetilde{B}\) we have \[(B_{0}Q_{00}+B_{1}Q_{01}^{t}+B_{2}Q_{02}^{t}\;|-B_{1}\;|-A_{2}+B_{2})\;=\; \widetilde{A}\,, \tag{86}\] which completes the proof of Equation (80). Similarly, since \[\widetilde{B}\begin{pmatrix}I&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}(B^{-1}\nu-1)-\widetilde{B}\begin{pmatrix}0\\ 1\\ 0\end{pmatrix}\;=\;B_{0}(B^{-1}\nu)^{(0)}-B_{0}1-A_{1}1+B_{1}1 \tag{87}\] we see that multiplying the right hand side of Equations (81) on the left by \(\widetilde{B}\) we have \[B_{0}(B^{-1}\nu)^{(0)}-B_{0}1-A_{1}1+B_{1}1+B_{1}(B^{-1}\nu)^{(1)}-B_{1}1+B_{2 }(B^{-1}\nu)^{(2)}\;=\;\widetilde{\nu}-\widetilde{B}1\,, \tag{88}\] which completes the proof of Equation (81). For Equation (82), we will compute the three terms that appear there. 
Firstly, use Equation (81) to obtain that \[\begin{pmatrix}0&I&0\\ 0&0&-I\end{pmatrix}(\widetilde{B}^{-1}\widetilde{\nu}-1)\;=\;-\begin{pmatrix} 1\\ 0\end{pmatrix}+\begin{pmatrix}Q_{11}-I&Q_{12}\\ Q_{12}^{t}&Q_{22}\end{pmatrix}^{-1}\left(\begin{pmatrix}0&I&0\\ 0&0&I\end{pmatrix}B^{-1}\nu-\begin{pmatrix}1\\ 0\end{pmatrix}\right), \tag{89}\] to obtain that \[\begin{array}{l}\left((B^{-1}\nu)^{t}\begin{pmatrix}0&0\\ I&0\\ 0&I\end{pmatrix}-\begin{pmatrix}1&0\\ 0\end{pmatrix}\right)\left(\begin{pmatrix}0&I&0\\ 0&0&-I\end{pmatrix}(\widetilde{B}^{-1}\widetilde{\nu}-1)+\begin{pmatrix}1\\ 0\end{pmatrix}\right)\\ =\;((B^{-1}\nu)^{(1)}{}^{t}-1)(\widetilde{B}^{-1}\widetilde{\nu})^{(1)}-(B^{- 1}\nu)^{(2)}{}^{t}(\widetilde{B}^{-1}\widetilde{\nu})^{(2)}+(B^{-1}\nu)^{(2)}{} ^{t}1\,.\end{array} \tag{90}\] Secondly, expanding out \[\begin{split}\widetilde{f}^{t}\widetilde{B}^{-1}\widetilde{\nu}& =\ f^{(0)}(\widetilde{B}^{-1}\widetilde{\nu})^{(0)}+(1-f^{(1)}-f^{\prime\prime (1)})(\widetilde{B}^{-1}\widetilde{\nu})^{(1)}+f^{\prime\prime(2)}( \widetilde{B}^{-1}\widetilde{\nu})^{(2)}\\ &=\ f^{(0)}(\widetilde{B}^{-1}\widetilde{\nu})^{(0)}\\ &\ \ \ \ +(1-f^{(1)}-{(B^{-1}\nu)^{(1)}}^{t}+f^{(0)}Q_{01}+f^{(1)}Q _{11}+f^{(2)}Q_{12}^{t})(\widetilde{B}^{-1}\widetilde{\nu})^{(1)}\\ &\ \ \ \ +({(B^{-1}\nu)^{(2)}}^{t}-f^{(0)}Q_{02}-f^{(1)}Q_{12}-f^{(2)} Q_{22})(\widetilde{B}^{-1}\widetilde{\nu})^{(2)}\,.\end{split} \tag{91}\] Finally, using Equation (77) we obtain that \[\begin{split} f^{t}B^{-1}\nu-f^{(1)}1+f^{\prime\prime(2)}1& =\ f^{t}B^{-1}\nu-f^{(1)}1-f^{t}Q(0,0,1)^{t}+{(B^{-1}\nu)^{(2)}}^{t}1\\ &=\ f^{t}B^{-1}\widetilde{\nu}+{(B^{-1}\nu)^{(2)}}^{t}1\\ &=\ f^{(0)}(\widetilde{B}^{-1}\widetilde{\nu})^{(0)}\\ &\ \ \ \ +(f^{(0)}Q_{01}+f^{(1)}(Q_{11}-I)+f^{(2)}Q_{12}^{t})( \widetilde{B}^{-1}\widetilde{\nu})^{(1)}\\ &\ \ \ \ -(f^{(0)}Q_{02}+f^{(1)}Q_{12}+f^{(2)}Q_{22})(\widetilde{B}^{-1 }\widetilde{\nu})^{(2)}+{(B^{-1}\nu)^{(2)}}^{t}1\,.\end{split} \tag{92}\] ## 6. Invariance under Pachner moves In this section, we will prove that \(\Phi^{\Xi}\) is invariant under 2-3 Pachner moves. There are several versions of the 2-3 move (and of the corresponding pentagon identity in Teichmuller TQFT [27]) and the one we choose in the next theorem is slightly different from the one in [11, Sec.3.6] and can be related by composing with quad moves. The 2-3 move involves two triangulations \(\mathcal{T}\) and \(\widetilde{\mathcal{T}}\) with \(N+2\) and \(N+3\) tetrahedra, respectively, shown in Figure 2. Using \(z_{0}=z_{1}+z_{2}-z_{1}z_{2}\) from Equation (46) used in Theorem 3.6, it follows that the shapes \(z\) of \(\mathcal{T}\) and \(\widetilde{z}\) of \(\widetilde{\mathcal{T}}\) are related by \[\begin{split} z\ =\ (z_{1},z_{2},z^{*})\mapsto\widetilde{z}& =\left(\widetilde{z}_{0},\ \widetilde{z}_{1},\ \widetilde{z}_{2},\ \widetilde{z}^{*}\right)\\ &=\left(z_{0},\ z_{1}z_{0}^{-1},\ z_{2}z_{0}^{-1},\ z^{*}\right). \end{split} \tag{93}\] Figure 2. The 2–3 Pachner move. Similarly to [11, Eq. 
(3.20) and (3.21)], these shapes satisfy the relations \[\begin{split}\widetilde{z}_{0}^{\prime}&=\ z_{1}^{ \prime}z_{2}^{\prime}\,,\qquad\widetilde{z}_{1}^{\prime\prime}\;=\;z_{1}^{ \prime\prime}z_{2}\,,\qquad\widetilde{z}_{2}^{\prime\prime}\;=\;z_{1}z_{2}^{ \prime\prime}\,,\\ z_{1}&=\ \widetilde{z}_{0}\widetilde{z}_{1}\,,\qquad z_{1}^{ \prime}\;=\;\widetilde{z}_{1}^{\prime}\widetilde{z}_{2}\,,\qquad z_{1}^{ \prime\prime}\;=\;\widetilde{z}_{0}^{\prime\prime}\widetilde{z}_{2}^{\prime}\,, \\ z_{2}&=\ \widetilde{z}_{0}\widetilde{z}_{2}\,,\qquad z_{2}^{ \prime}\;=\;\widetilde{z}_{1}\widetilde{z}_{2}^{\prime\prime}\,,\qquad z_{2}^ {\prime\prime}\;=\;\widetilde{z}_{0}^{\prime\prime}\widetilde{z}_{1}^{\prime} \,.\end{split} \tag{94}\] If we write the Neumann-Zagier matrices of \(\mathcal{T}\) in the form \[A\;=\;(a_{1}\,|\,a_{2}\,|\,a_{*})\,,\qquad B=\;(b_{1}\,|\,b_{2}\,|\,b_{*})\,, \tag{95}\] where \(a_{1}\), \(a_{2}\) are the first two columns of \(A\) and \(a_{*}\) is the block \((N+2)\times N\) matrix of the remaining \(N\) columns of \(A\), and likewise for \(B\), then using Equations (94) the corresponding Neumann-Zagier matrices of \(\widetilde{\mathcal{T}}\) are given by \[\widetilde{A}\;=\;\begin{pmatrix}-1&0&0&0\\ -b_{1}-b_{2}+a_{1}+a_{2}&a_{1}-b_{2}&a_{2}-b_{1}&a_{*}\end{pmatrix},\qquad \widetilde{B}\;=\;\begin{pmatrix}-1&1&1&0\\ 0&b_{1}&b_{2}&b_{*}\end{pmatrix} \tag{96}\] and the corresponding vector \(\widetilde{\nu}=(1,\nu)\). Analogously to the shapes, we will fix flattenings \((\widetilde{f}_{0},\widetilde{f}_{1},\widetilde{f}_{2},\widetilde{f}^{*})\) and \((\widetilde{f}_{0}^{\prime\prime},\widetilde{f}_{1}^{\prime\prime},\widetilde {f}_{2}^{\prime\prime},\widetilde{f}^{\prime\prime*})\) for \((\widetilde{A}\,|\,\widetilde{B})\) and \((f_{1},f_{2},f^{*})=(\widetilde{f}_{0}+\widetilde{f}_{1},\widetilde{f}_{0}+ \widetilde{f}_{2},f^{*})\) and \((f_{1}^{\prime\prime},f_{2}^{\prime\prime},f^{\prime\prime*})=(\widetilde{f} _{0}^{\prime\prime}+f_{2}^{\prime},f_{0}^{\prime\prime}+\widetilde{f}_{1}^{ \prime},f^{\prime\prime*})\) for \((A\,|\,B)\). The data of the flattenings then satisfy the additive versions of Equations (94). **Theorem 6.1**.: _The series \(\Phi^{\Xi}(\hbar)\) is invariant under 2-3 Pachner moves._ The proof involves an application of the pentagon identity for \(\psi_{\hbar}\) of Theorem 3.6. For two triangulations \(\mathcal{T}\) and \(\widetilde{\mathcal{T}}\) with \(N+2\) and \(N+3\) tetrahedra and NZ matrices \((\mathcal{A}\,|\,\mathcal{B})\), respectively, related by a Pachner 2-3 move. To define the corresponding series \(\Phi^{\Xi}(\hbar)\) and \(\Phi^{\widetilde{\Xi}}(\hbar)\), we need to possibly change quads on \(\mathcal{T}\) and \(\widetilde{\mathcal{T}}\) so that both \(\mathcal{B}\) and \(\widetilde{\mathcal{B}}\) are invertible. By a quad move \(q\), we can replace \((\mathcal{A}\,|\,\mathcal{B})\) by \((\mathbf{A}\,|\,\mathbf{B})\) where \(\det(\mathbf{B})\neq 0\). Recalling that quad moves act on tetrahedra and actions on different tetrahedra commute, we can write the move \(q=q_{2}\times q_{N}\) as a product of quad moves \(q_{2}\) on the first 2 tetrahedra times moves \(q_{N}\) on the remaining \(N\) tetrahedra of \(\mathcal{T}\). Let \((A\,|\,B)\) denote the result of applying the move \(1\times q_{N}\) on the \(N+2\) tetrahedra of \(\mathcal{T}\). Since the \((N+2)\times(N+2)\) matrix \(\mathbf{B}\) has full rank and \(B\) and \(\mathbf{B}\) have the same last \(N\) columns, it follows that \(B\) has nullity 0, 1 or 2. 
Now on \(\widetilde{\mathcal{T}}\), we can apply the identity move on the first three tetrahedra and the \(q_{N}\) moves on the remaining \(N\) tetrahedra, which transforms the NZ matrices \((\widetilde{\mathcal{A}}\,|\,\widetilde{\mathcal{B}})\) to \((\widetilde{A}\,|\,\widetilde{B})\), and further apply the \(q_{2}\) moves on the second and third tetrahedra to obtain the NZ matrices \((\widetilde{\mathbf{A}}\,|\,\widetilde{\mathbf{B}})\). By looking at how the matrix \(B\) transforms under a 2-3 move (see Equation (96)), it follows that \(\widetilde{\mathbf{B}}\) has the same rank as \(\mathbf{B}\), and hence \(\det(\widetilde{\mathbf{B}})\neq 0\). The above discussion can be summarized in the following commutative diagram. (97) where \(\det(\mathbf{B})\neq 0\) and \(\det(\widetilde{\mathbf{B}})\neq 0\). Now \(\Phi^{\Xi}(\hbar)\) and \(\Phi^{\widetilde{\Xi}}(\hbar)\) can be defined using the non-degenerate NZ data with matrices \((\mathbf{A}\,|\,\mathbf{B})\) and \((\widetilde{\mathbf{A}}\,|\,\widetilde{\mathbf{B}})\), respectively. We will show that the two \(\hbar\)-series are equal using a diagram (98) where \(i=0,1,2\) denotes the nullity of \(B\). We will treat each case in a separate section. With this discussion the Theorem 6.1 follows from Propositions 6.2, 6.4, and 6.6. Before proceeding with the proof, we will set some notation. Similarly to the proof of the quad invariance, denote \(\mathbf{Q}=\mathbf{B}^{-1}\mathbf{A}\) \[\mathbf{Q}\;=\;\begin{pmatrix}\mathbf{Q}_{11}&\mathbf{Q}_{12}&\mathbf{Q}_{1}^ {*}\\ \mathbf{Q}_{12}&\mathbf{Q}_{22}&\mathbf{Q}_{2}^{*}\\ \hline\mathbf{Q}_{1}^{*\,t}&\mathbf{Q}_{2}^{*\,t}&\mathbf{Q}^{*}\end{pmatrix}, \tag{99}\] where \(\mathbf{Q}_{ij}\) are matrices of size \(1\times 1\), \(\mathbf{Q}_{j}^{*}\) are matrices of size \(1\times N\) and \(\mathbf{Q}^{*}\) is a matrix of size \(N\times N\). ### The case of \(B\) with full rank In this section, we prove Theorem 3.6 under the assumption that the matrix \(B\) has full rank. In this case, Equation (98) simplifies to the following one (100) since we do not need to apply Fourier transform, nor Fubini's theorem. **Proposition 6.2**.: If \(B\) has full rank, then \(\Phi^{\Xi}(\hbar)\) is invariant under the 2-3 Pachner move given in Equation (96). Proof.: To begin with, noting that in this case \(A=\mathbf{A}\) and \(B=\mathbf{B}\), we have: \[\Phi^{\Xi}(\hbar)\;=\;\langle I_{0}\rangle_{x,\Lambda} \tag{101}\] where \[I_{0}\;=\;\exp\Big{(}\frac{\hbar}{8}f^{t}B^{-1}Af-\frac{\hbar^{\frac{1}{2}}}{ 2}x^{t}(B^{-1}\nu-1)\Big{)}\prod_{j=1}^{N+2}\psi_{\hbar}(x_{j},z_{j})\] and \[\Lambda_{0}\;=\;\text{diag}(z^{\prime})-\mathbf{Q}\] and \(x=(x_{1},x_{2},x^{*})\) are the integration variables. 
Applying the pentagon identity of Theorem 3.6, to the \(\psi_{h}\) with arguments \(x_{1},x_{2}\) and introducing a new integration variable \(x_{0}\), we obtain that \[\Phi^{\Xi}(\hbar)\;=\;\langle I_{1}\rangle_{(x_{0},x),\Lambda_{1}} \tag{102}\] where \[I_{1} =\;\exp\Big{(}-\frac{\hbar}{24}+\frac{\hbar}{8}f^{t}B^{-1}Af- \frac{\hbar^{\frac{1}{2}}}{2}x^{t}(B^{-1}\nu-1)\Big{)}\,\psi_{h}\Big{(}-x_{0}- x_{2}+\frac{x_{1}z_{2}+x_{2}z_{1}}{z_{0}},z_{1}z_{0}^{-1}\Big{)}\] \[\times\psi_{h}\Big{(}x_{0}+x_{1}+x_{2}-\frac{x_{1}z_{2}+x_{2}z_{1 }}{z_{0}},z_{0}\Big{)}\,\psi_{h}\Big{(}-x_{0}-x_{1}+\frac{x_{1}z_{2}+x_{2}z_{1 }}{z_{0}},z_{2}z_{0}^{-1}\Big{)}\prod_{j=1}^{N}\psi_{h}(x_{j}^{*},z_{j}^{*})\] and \[\Lambda_{1}\;=\;\begin{pmatrix}\frac{(z_{1}+z_{2}-z_{1}z_{2})^{2}}{(z_{1}-1)z _{1}(z_{2}-1)z_{2}}&0\\ 0&\Lambda\end{pmatrix}\,.\] Making a change of variables \(x_{0}\mapsto x_{0}+x_{1}(-1+z_{2}z_{0}^{-1})+x_{2}(-1+z_{1}z_{0}^{-1})\) using Lemma 3.1, we obtain that \[\Phi^{\Xi}(\hbar)\;=\;\langle I_{2}\rangle_{(x_{0},x),\Lambda_{2}}, \tag{103}\] where \[I_{2} =\;\exp\Big{(}-\frac{\hbar}{24}+\frac{\hbar}{8}f^{t}B^{-1}Af- \frac{\hbar^{\frac{1}{2}}}{2}x^{t}(B^{-1}\nu-1)\Big{)}\,\psi_{h}(-x_{0}+x_{1}, z_{1}z_{0}^{-1})\] \[\times\psi_{h}(x_{0},z_{0})\,\psi_{h}(-x_{0}+x_{2},z_{2}z_{0}^{-1} )\,\prod_{j=1}^{N}\psi_{h}(x_{j}^{*},z_{j}^{*})\] and \[\Lambda_{2}\;=\;\begin{pmatrix}\frac{(z_{1}+z_{2}-z_{1}z_{2})^{2}}{(z_{1}-1)z _{1}(z_{2}-1)z_{2}}&\frac{z_{1}+z_{2}-z_{1}z_{2}}{z_{2}(z_{1}-1)}&\frac{z_{1}+ z_{2}-z_{1}z_{2}}{z_{1}(z_{2}-1)}&0\\ \frac{z_{1}+z_{2}-z_{1}z_{2}}{z_{2}(z_{1}-1)}&-\mathbf{Q}_{11}-\frac{z_{1}+z_{2 }-z_{1}z_{2}}{z_{2}(z_{1}-1)}&1-\mathbf{Q}_{12}&-\mathbf{Q}_{1}^{*}\\ \frac{z_{1}+z_{2}-z_{1}z_{2}}{z_{1}(z_{2}-1)}&1-\mathbf{Q}_{12}&-\mathbf{Q}_{2 2}-\frac{z_{1}+z_{2}-z_{1}z_{2}}{z_{1}(z_{2}-1)}&-\mathbf{Q}_{2}^{*}\\ \hline 0&-\mathbf{Q}_{1}^{*\,t}&-\mathbf{Q}_{2}^{*\,t}&\mathrm{diag}({z^{*}}^{ \prime})-\mathbf{Q}^{*}\end{pmatrix}.\] Making a change of variables \(x_{1}\mapsto x_{1}+x_{0}\) and \(x_{2}\mapsto x_{2}+x_{0}\) using Lemma 3.1, and denoting \(\widetilde{x}=(x_{0},x)\), \(\widetilde{z}=(z_{0},z_{1}z_{0}^{-1},z_{2}z_{0}^{-1},z^{*})\) we obtain that \[\Phi^{\Xi}(\hbar)\;=\;e^{ch}\langle I_{3}\rangle_{\widetilde{x},\Lambda_{3}}, \tag{104}\] where \[I_{3}\;=\;\exp\Big{(}\frac{\hbar}{8}\widetilde{f}^{t}\widetilde{B}^{-1} \widetilde{A}\widetilde{f}-\frac{\hbar^{\frac{1}{2}}}{2}\widetilde{x}^{t}( \widetilde{B}^{-1}\widetilde{\nu}-1)\Big{)}\prod_{j=1}^{N+3}\psi_{h}( \widetilde{x}_{j},\widetilde{z}_{j})\] and \[\Lambda_{3}\;=\;\mathrm{diag}(\tilde{z}^{\prime})-\tilde{B}^{-1}\tilde{A}\,.\] The next lemma identifies vectors and matrices of the two triangulations \(\mathcal{T}\) and \(\widetilde{\mathcal{T}}\). **Lemma 6.3**.: With the notation used in the proof of the previous Proposition 6.2, \[\widetilde{B}^{-1}\widetilde{A} = \tag{105}\] \[\widetilde{B}^{-1}\widetilde{\nu} = \begin{pmatrix}(B^{-1}\nu)_{1}+(B^{-1}\nu)_{2}-1\\ B^{-1}\nu\end{pmatrix}\] (106) \[\widetilde{f}^{t}\widetilde{B}^{-1}\widetilde{\nu} = \begin{pmatrix}\phantom{-}t^{t}B^{-1}\nu-\widetilde{f}_{0}\end{pmatrix} \tag{107}\] Proof.: We will show that both sides of Equations (105) and Equations (106) are equal after multiplying by the invertible matrix \(\widetilde{B}\). 
For Equation (105), we have \[\begin{pmatrix}-1&1&1&0\\ 0&b_{1}&b_{2}&b_{*}\end{pmatrix}\begin{pmatrix}\mathbf{Q}_{11}+2\mathbf{Q}_{ 12}+\mathbf{Q}_{22}-1&\mathbf{Q}_{11}+\mathbf{Q}_{12}-1&\mathbf{Q}_{12}+ \mathbf{Q}_{22}-1&\mathbf{Q}_{1}^{*}+\mathbf{Q}_{2}^{*}\\ \mathbf{Q}_{11}+\mathbf{Q}_{12}-1&\mathbf{Q}_{11}&\mathbf{Q}_{12}-1&\mathbf{Q }_{1}^{*}\\ \mathbf{Q}_{12}+\mathbf{Q}_{22}-1&\mathbf{Q}_{12}-1&\mathbf{Q}_{22}&\mathbf{Q }_{2}^{*}\\ \hline\mathbf{Q}_{1}^{*t}+\mathbf{Q}_{2}^{*t}&\mathbf{Q}_{1}^{*t}&\mathbf{Q}_{ 2}^{*t}&\mathbf{Q}_{2}^{*}\end{pmatrix} \tag{108}\] \[= \begin{pmatrix}-1&0&0&0\\ b_{1}(-1+\mathbf{Q}_{11}+\mathbf{Q}_{12})+b_{*}\mathbf{Q}_{1}^{*t}&b_{1} \mathbf{Q}_{11}+b_{*}\mathbf{Q}_{1}^{*t}&b_{2}\mathbf{Q}_{22}+b_{*}\mathbf{Q }_{2}^{*t}\\ +b_{2}(-1+\mathbf{Q}_{12}+\mathbf{Q}_{22})+b_{*}\mathbf{Q}_{2}^{*t}&+b_{2}(-1+ \mathbf{Q}_{12})&+b_{1}(-1+\mathbf{Q}_{12})\end{pmatrix}b_{1}\mathbf{Q}_{1}^{*} +b_{2}\mathbf{Q}_{2}^{*}+b_{*}\mathbf{Q}_{*}\end{pmatrix}\] \[= \widetilde{A}.\] For Equation (106), we have \[\begin{pmatrix}-1&1&1&0\\ 0&b_{1}&b_{2}&b_{*}\end{pmatrix}\begin{pmatrix}(B^{-1}\nu)_{1}+(B^{-1}\nu)_{2 }-1\\ B^{-1}\nu\end{pmatrix}\;=\;\begin{pmatrix}1\\ \nu\end{pmatrix}. \tag{109}\] Finally, for Equation (107), \[\begin{split}\widetilde{f}^{t}\widetilde{B}^{-1}\nu&=& \widetilde{f}^{t}\begin{pmatrix}(B^{-1}\nu)_{1}+(B^{-1}\nu)_{2}-1\\ B^{-1}\nu\end{pmatrix}\\ &=&(\widetilde{f}_{0}+\widetilde{f}_{1})(B^{-1}\nu)_{1}+( \widetilde{f}_{0}+\widetilde{f}_{2})(B^{-1}\nu)_{2}+f^{*}(B^{-1}\nu)_{1}- \widetilde{f}_{0}\,,\end{split} \tag{110}\] and we recall we choose \(f_{i}=\widetilde{f}_{0}+\widetilde{f}_{i}\) for \(i=1,2\). ### The case of \(B\) with nullity one In this section, we prove Theorem 3.6 under the assumption that the matrix \(B\) has nullity \(1\). In this case, starting from the series \(\Phi^{\Xi}(\hbar)\), there are three intermediate formulas (shown as bullets) in Equation (98) that eventually identify the result with \(\Phi^{\widetilde{\Xi}}(\hbar)\). The intermediate formulas involve a Fourier transform (adding one integration variable), a pentagon (adding a second variable), an inverse Fourier transform (adding a third), and an application of Fubini's theorem that removes two integration variables. The detailed computation is given in the next proposition. **Proposition 6.4**.: If \(B\) has rank \(N+1\) then \(\Phi^{\Xi}(\hbar)\) is invariant under the 2-3 Pachner move given in Equation (98). Proof.: Following the discussion above we can assume that \(\operatorname{rank}(b_{1}\,|\,b_{2}\,|\,b_{*})=\operatorname{rank}(b_{2}\,|\,b_{*} )=N+1\). Then by [11, Lem.A.3], the matrix \((a_{1}\,|\,b_{2}\,|\,b_{*})\) has full rank and we can apply a quad move to the first columns of \((A\,|\,B)\) to obtain \[\mathbf{A}\;=\;(-b_{1}\,|\,a_{2}\,|\,a_{*})\,,\qquad\mathbf{B}\;=\;(a_{1}-b_{1 }\,|\,b_{2}\,|\,b_{*})\,,\qquad\boldsymbol{\nu}\;=\;\nu-b_{1}\,, \tag{111}\] where \(\mathbf{B}\) has full rank. 
The proof will use the following sequence of intermediate matrices \[\begin{split}&(\mathbf{A}\,|\,\mathbf{B}\,|\,\boldsymbol{\nu}) \,=\big{(}-b_{1}\,\big{|}\,a_{2}\,\big{|}\,a_{*}\,\big{|}\,a_{1}-b_{1}\,\big{|} \,b_{2}\,\big{|}\,b_{*}\,\big{|}\,\nu-b_{1}\,\big{)}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad q_{1}^{-1}\times 1 \\ &(A\,|\,B\,|\,\nu)\,=\big{(}\,a_{1}\,\big{|}\,a_{2}\,\big{|}\,a_{* }\,\big{|}\,b_{1}\,\big{|}\,b_{2}\,\big{|}\,b_{*}\,\big{|}\,\nu\,\big{)}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\ and \[\boldsymbol{\Lambda}_{1}\;=\;\begin{pmatrix}\frac{z_{1}}{1-z_{1}}&0\\ 0&\boldsymbol{\Lambda}\end{pmatrix}.\] We substitute \(w_{1}\mapsto w_{1}+x_{1}(1-z_{1}^{-1})\) and obtain, from Lemma 3.1, that \[\Phi^{\Xi}(\hbar)\;=\;\langle I_{2}\rangle_{(w_{1},x_{1},x_{2},x^{*}), \boldsymbol{\Lambda}_{2}} \tag{115}\] where \[I_{2}\;=\;\exp\Big{(}\frac{\hbar}{24}+\frac{\hbar}{8}\mathbf{f}^{t}\mathbf{B} ^{-1}\mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x(\mathbf{B}^{-1} \boldsymbol{\nu}-1)-x_{1}\frac{\hbar^{\frac{1}{2}}}{2}\Big{)}\,\psi_{\hbar}(w _{1},z_{1})\,\psi_{\hbar}(x_{2},z_{2})\prod_{j=1}^{N}\psi_{\hbar}(x_{j}^{*},z_ {j}^{*})\] and with \(\mathbf{Q}_{11}=0\) from Lemma 6.5 \[\boldsymbol{\Lambda}_{2}\;=\;\begin{pmatrix}\frac{z_{1}}{1-z_{1}}&-1&0&0\\ -1&0&-\mathbf{Q}_{12}&-\mathbf{Q}_{1}^{*}\\ 0&-\mathbf{Q}_{12}&\frac{1}{1-z_{2}}-\mathbf{Q}_{22}&-\mathbf{Q}_{2}^{*}\\ \hline 0&-\mathbf{Q}_{1}^{*t}&-\mathbf{Q}_{2}^{*t}&z^{*\prime}-\mathbf{Q}^{*} \end{pmatrix}.\] We apply the pentagon identity of Theorem 3.6 to the \(\psi_{h}\) with arguments \(w_{1}\) and \(x_{1}\) and obtain, with the new integration variable \(x_{0}\), that \[\Phi^{\Xi}(\hbar)\;=\;\langle I_{3}\rangle_{(x_{0},w_{1},x_{1},x_{2},x^{*}), \boldsymbol{\Lambda}_{3}} \tag{116}\] where \[I_{3}\;=\;\exp\Big{(}\frac{\hbar}{8}\mathbf{f}^{t}\mathbf{B}^{-1} \mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x(\mathbf{B}^{-1} \boldsymbol{\nu}-1)-x_{1}\frac{\hbar^{\frac{1}{2}}}{2}\Big{)}\,\psi_{h}\Big{(} -x_{0}-x_{2}+\frac{w_{1}z_{2}+x_{2}z_{1}}{z_{0}},z_{1}z_{0}^{-1}\Big{)}\] and \[\boldsymbol{\Lambda}_{3}\;=\;\begin{pmatrix}\frac{(z_{1}+z_{2}-z_{1}z_{2})^{2} }{(z_{1}-1)z_{1}(z_{2}-1)z_{2}}&0&0&0&0\\ 0&\frac{z_{1}}{1-z_{1}}&-1&0&0\\ 0&-1&0&-\mathbf{Q}_{12}&-\mathbf{Q}_{1}^{*}\\ 0&0&-\mathbf{Q}_{12}&\frac{1}{1-z_{2}}-\mathbf{Q}_{22}&-\mathbf{Q}_{2}^{*}\\ \hline 0&0&-\mathbf{Q}_{1}^{*t}&-\mathbf{Q}_{2}^{*t}&z^{*\prime}-\mathbf{Q}^{*} \end{pmatrix}.\] We use Lemma 3.1 to change the variables \(x_{0}\mapsto x_{0}-w_{1}-x_{2}+\frac{w_{1}z_{2}+x_{2}z_{1}}{z_{0}}\) and obtain that \[\Phi^{\Xi}(\hbar)\;=\;\langle I_{4}\rangle_{(x_{0},w_{1},x_{1},x_{2},x^{*}), \boldsymbol{\Lambda}_{4}} \tag{117}\] where \[I_{4} \;=\;\exp\Big{(}\frac{\hbar}{8}\mathbf{f}^{t}\mathbf{B}^{-1} \mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x(\mathbf{B}^{-1} \boldsymbol{\nu}-1)-x_{1}\frac{\hbar^{\frac{1}{2}}}{2}\Big{)}\] \[\quad\times\psi_{h}(-x_{0}+w_{1},z_{1}z_{0}^{-1})\,\psi_{h}(x_{0},z_{0})\,\psi_{h}(-x_{0}+x_{2},z_{2}z_{0}^{-1})\prod_{j=1}^{N}\psi_{h}(x_{j}^{* },z_{j}^{*})\] and \[\boldsymbol{\Lambda}_{4}\ =\ \begin{pmatrix}\frac{(z_{1}+z_{2}-z_{1}z_{2})^{2}}{(z_{1} -1)z_{1}(z_{2}-1)z_{2}}&\frac{z_{1}+z_{2}-z_{1}z_{2}}{(z_{1}-1)z_{2}}&0&\frac{z _{1}+z_{2}-z_{1}z_{2}}{z_{1}(z_{2}-1)}&0\\ \frac{z_{1}+2z-z_{1}z_{2}}{(z_{1}-1)z_{2}}&\frac{z_{1}}{z_{2}(1-z_{1})}&-1&1&0 \\ 0&-1&0&-\mathbf{Q}_{12}&-\mathbf{Q}_{1}^{*}\\ \frac{z_{1}+z_{2}-z_{1}z_{2}}{z_{1}(z_{2}-1)}&1&-\mathbf{Q}_{12}&-\mathbf{Q}_{ 
22}-\frac{z_{1}+z_{2}-z_{1}z_{2}}{z_{1}(z_{2}-1)}&-\mathbf{Q}_{2}^{*}\\ \hline 0&0&-{Q_{1}^{*}}^{t}&-\mathbf{Q}_{2}^{*}&{z^{*}}^{\prime}-\mathbf{Q}^{*} \end{pmatrix}.\] We substitute \(w_{1}\mapsto w_{1}+x_{0}\) and \(x_{2}\mapsto x_{2}+x_{0}\) to obtain, with Lemma 3.1, that \[\Phi^{\Xi}(\hbar)\ =\ \langle I_{5}\rangle_{(x_{0},w_{1},x_{1},x_{2},x^{*}), \boldsymbol{\Lambda}_{5}} \tag{118}\] where \[I_{5} =\ \exp\left(\frac{\hbar}{8}\mathbf{f}^{t}\mathbf{B}^{-1} \mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x(\mathbf{B}^{-1}\boldsymbol {\nu}-1)-\frac{\hbar^{\frac{1}{2}}}{2}x_{0}(\mathbf{B}^{-1}\boldsymbol{\nu}_{ 2}-1)-x_{1}\frac{\hbar^{\frac{1}{2}}}{2}\right)\] \[\ \ \ \ \times\psi_{h}(w_{1},z_{1}z_{0}^{-1})\,\psi_{h}(x_{0},z_{0}) \,\psi_{h}(x_{2},z_{2}z_{0}^{-1})\prod_{j=1}^{N}\psi_{h}(x_{j}^{*},z_{j}^{*})\] and \[\boldsymbol{\Lambda}_{5}\ =\ \begin{pmatrix}-\mathbf{Q}_{22}+\frac{1}{(z_{1}-1)(z_{2} -1)}&0&-1-\mathbf{Q}_{12}&1-\mathbf{Q}_{22}&-\mathbf{Q}_{2}^{*}\\ 0&\frac{z_{1}}{z_{2}(1-z_{1})}&-1&1&0\\ -1-\mathbf{Q}_{12}&-1&0&-\mathbf{Q}_{12}&-\mathbf{Q}_{1}^{*}\\ 1-\mathbf{Q}_{22}&1&-\mathbf{Q}_{12}&-\mathbf{Q}_{22}+\frac{z_{1}+z_{2}-z_{1} z_{2}}{z_{1}(z_{2}-1)}&-\mathbf{Q}_{2}^{*}\\ \hline-\mathbf{Q}_{2}^{*}{}^{t}&0&-\mathbf{Q}_{1}^{*}{}^{t}&-\mathbf{Q}_{2}^{*} &{z^{*}}^{\prime}-\mathbf{Q}^{*}\end{pmatrix}.\] By applying the first quad move from Theorem 3.4 to \(\psi_{h}(w_{1},z_{1}z_{0}^{-1})\) we obtain, with a new integration variable \(y_{1}\), that \[\Phi^{\Xi}(\hbar)\ =\ e^{-\frac{\hbar}{24}}\langle I_{6}\rangle_{(y_{1},x_{0},w_{1},x_{1},x_{2},x^{*}),\boldsymbol{\Lambda}_{6}} \tag{119}\] where \[I_{6} =\ \exp\Big{(}-\frac{\hbar}{24}+\frac{\hbar}{8}\mathbf{f}^{t} \mathbf{B}^{-1}\mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x(\mathbf{B} ^{-1}\boldsymbol{\nu}-1)\] \[\ \ \ -\frac{\hbar^{\frac{1}{2}}}{2}x_{0}((\mathbf{B}^{-1} \boldsymbol{\nu})_{2}-1)-x_{1}\frac{\hbar^{\frac{1}{2}}}{2}+\Big{(}y_{1}+\frac {w_{1}z_{1}z_{0}^{-1}}{1-z_{1}z_{0}^{-1}}\Big{)}\frac{\hbar^{\frac{1}{2}}}{2} \Big{)}\] \[\ \ \ \ \times\psi_{h}\Big{(}y_{1}+\frac{w_{1}z_{1}z_{0}^{-1}}{1-z_{1 }z_{0}^{-1}},\frac{1}{1-z_{1}z_{0}^{-1}}\Big{)}\,\psi_{h}(x_{0},z_{0})\,\psi_ {h}(x_{2},z_{2}z_{0}^{-1})\prod_{j=1}^{N}\psi_{h}(x_{j}^{*},z_{j}^{*})\] and \[\boldsymbol{\Lambda}_{6}\ =\ \begin{pmatrix}\frac{(z_{1}-1)z_{2}}{z_{1}}&0&0&0&0&0\\ 0&-\mathbf{Q}_{22}+\frac{1}{(z_{1}-1)(z_{2}-1)}&0&-1-\mathbf{Q}_{12}&1-\mathbf{ Q}_{22}&-\mathbf{Q}_{2}^{*}\\ 0&0&\frac{z_{1}}{z_{2}(1-z_{1})}&-1&1&0\\ 0&-1-\mathbf{Q}_{12}&-1&0&-\mathbf{Q}_{12}&-\mathbf{Q}_{1}^{*}\\ 0&1-\mathbf{Q}_{22}&1&-\mathbf{Q}_{12}&-\mathbf{Q}_{22}+\frac{z_{1}+z_{2}-z_{ 1}z_{2}}{z_{1}(z_{2}-1)}&-\mathbf{Q}_{2}^{*}\\ \hline 0&-\mathbf{Q}_{2}^{*t}&0&-\mathbf{Q}_{1}^{*t}&-\mathbf{Q}_{2}^{*t}&z^{* \prime}-\mathbf{Q}^{*}\end{pmatrix}.\] We change variables \(y_{1}\mapsto y_{1}-w_{1}\frac{z_{1}z_{0}^{-1}}{1-z_{1}z_{0}^{-1}}\) using Lemma 3.1 and obtain that \[\Phi^{\Xi}(\hbar)\ =\ \langle I_{7}\rangle_{(y_{1},x_{0},w_{1},x_{1},x_{2},x^ {*}),\boldsymbol{\Lambda}_{7}} \tag{120}\] where \[I_{7}\ =\ \exp\Big{(}-\frac{\hbar}{24}+\frac{\hbar}{8}\mathbf{f}^{t }\mathbf{B}^{-1}\mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x(\mathbf{B} ^{-1}\boldsymbol{\nu}-1)-\frac{\hbar^{\frac{1}{2}}}{2}x_{0}((\mathbf{B}^{-1} \boldsymbol{\nu})_{2}-1)-x_{1}\frac{\hbar^{\frac{1}{2}}}{2}+y_{1}\frac{\hbar ^{\frac{1}{2}}}{2}\Big{)}\] \[\ \ Using Lemma 6.5 we obtain, with respect to the integration variables \(\tilde{x}=(x_{0},y_{1},x_{2},x^{*})\) and some \(c\in\frac{1}{24}\mathbb{Z}\), that 
\[\Phi^{\Xi}(\hbar)\ =\ e^{c\hbar}\langle I_{9}\rangle_{\tilde{x},\boldsymbol{\Lambda}_{9}} \tag{122}\] where \[I_{9}\ =\ \exp\left(\frac{\hbar}{8}\widetilde{\mathbf{f}}^{t}\widetilde{\mathbf{B}}^{-1}\widetilde{\mathbf{A}}\widetilde{\mathbf{f}}-\frac{\hbar^{\frac{1}{2}}}{2}\tilde{x}^{t}(\widetilde{\mathbf{B}}^{-1}\widetilde{\boldsymbol{\nu}}-1)\right)\prod_{j=0}^{N+2}\psi_{\hbar}(\widetilde{x}_{j}^{*},\widetilde{z}_{j}^{*})\] and with \(\widetilde{\mathbf{A}}\), \(\widetilde{\mathbf{B}}\) as in (112) \[\boldsymbol{\Lambda}_{9}\ =\ \mathrm{diag}(\widetilde{z}_{0}^{\prime},\widetilde{z}_{1}^{\prime\prime},\widetilde{z}_{2}^{\prime},\widetilde{z}^{*\prime})-\widetilde{\mathbf{B}}^{-1}\widetilde{\mathbf{A}}.\] **Lemma 6.5**.: With the notation used in the previous proof of Proposition 6.4 we have \(\mathbf{Q}_{11}=0\) and the following equalities: \[\widetilde{\mathbf{B}}^{-1}\widetilde{\mathbf{A}} =\ \begin{pmatrix}\mathbf{Q}_{22}&\mathbf{Q}_{12}+1&\mathbf{Q}_{12}+\mathbf{Q}_{22}&\mathbf{Q}_{2}^{*}\\ \mathbf{Q}_{12}+1&0&\mathbf{Q}_{12}&\mathbf{Q}_{1}^{*}\\ \mathbf{Q}_{12}+\mathbf{Q}_{22}&\mathbf{Q}_{12}&2\mathbf{Q}_{12}+\mathbf{Q}_{22}&\mathbf{Q}_{1}^{*}+\mathbf{Q}_{2}^{*}\\ \hline\mathbf{Q}_{2}^{*t}&\mathbf{Q}_{1}^{*t}&\mathbf{Q}_{1}^{*t}+\mathbf{Q}_{2}^{*t}&\mathbf{Q}^{*}\end{pmatrix}, \tag{123}\] \[\widetilde{\mathbf{B}}^{-1}\widetilde{\boldsymbol{\nu}} =\ \begin{pmatrix}(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{1}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{1}+(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})^{*}\end{pmatrix}, \tag{124}\] \[\widetilde{\mathbf{f}}^{t}\widetilde{\mathbf{B}}^{-1}\widetilde{\boldsymbol{\nu}} =\ \mathbf{f}^{t}\mathbf{B}^{-1}\boldsymbol{\nu}\,. \tag{125}\] Proof.: Similarly to the proof of Lemma 6.3 we will prove Equations (123) and (124) by showing that both sides are equal after multiplying by the invertible matrix \(\widetilde{\mathbf{B}}\). Using \(\mathbf{Q}=\mathbf{B}^{-1}\mathbf{A}\) and the fact that the columns of \((a_{1}\,|\,b_{2}\,|\,b_{*})\) are linearly independent we conclude \(\mathbf{Q}_{11}=0\). 
For (123) we compute \[\begin{pmatrix}-1&-1&1&0\\ 0&a_{1}-b_{1}-b_{2}&b_{2}&b_{*}\end{pmatrix}\begin{pmatrix}\mathbf{Q}_{22}&\mathbf{Q}_{12}+1&\mathbf{Q}_{12}+\mathbf{Q}_{22}&\mathbf{Q}_{2}^{*}\\ \mathbf{Q}_{12}+1&0&\mathbf{Q}_{12}&\mathbf{Q}_{1}^{*}\\ \mathbf{Q}_{12}+\mathbf{Q}_{22}&\mathbf{Q}_{12}&2\mathbf{Q}_{12}+\mathbf{Q}_{22}&\mathbf{Q}_{1}^{*}+\mathbf{Q}_{2}^{*}\\ \hline\mathbf{Q}_{2}^{*t}&\mathbf{Q}_{1}^{*t}&\mathbf{Q}_{1}^{*t}+\mathbf{Q}_{2}^{*t}&\mathbf{Q}^{*}\end{pmatrix} \tag{126}\] \[=\ \begin{pmatrix}-1&-1&0&0\\ \begin{matrix}a_{1}-b_{1}-b_{2}+(a_{1}-b_{1})\mathbf{Q}_{12}\\ +b_{2}\mathbf{Q}_{22}+b_{*}\mathbf{Q}_{2}^{*t}\end{matrix}&b_{2}\mathbf{Q}_{12}+b_{*}\mathbf{Q}_{1}^{*t}&\begin{matrix}b_{2}\mathbf{Q}_{12}+b_{*}\mathbf{Q}_{1}^{*t}\\ +(a_{1}-b_{1})\mathbf{Q}_{12}+b_{2}\mathbf{Q}_{22}+b_{*}\mathbf{Q}_{2}^{*t}\end{matrix}&\begin{matrix}(a_{1}-b_{1}-b_{2})\mathbf{Q}_{1}^{*}\\ +b_{2}(\mathbf{Q}_{1}^{*}+\mathbf{Q}_{2}^{*})+b_{*}\mathbf{Q}^{*}\end{matrix}\end{pmatrix}\] \[=\widetilde{\mathbf{A}}\] For (124) we have \[\begin{pmatrix}-1&-1&1&0\\ 0&a_{1}-b_{1}-b_{2}&b_{2}&b_{*}\end{pmatrix}\left(\begin{array}{c}(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{1}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{1}+(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})^{*}\end{array}\right)\ =\ \begin{pmatrix}0\\ \boldsymbol{\nu}\end{pmatrix} \tag{127}\] For Equation (125), we have \[\widetilde{\mathbf{f}}^{t}\left(\begin{array}{c}(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{1}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{1}+(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})^{*}\end{array}\right)\ =\ (\widetilde{f}_{1}^{\prime}+\widetilde{f}_{2})(\mathbf{B}^{-1}\boldsymbol{\nu})_{1}+(\widetilde{f}_{0}-\widetilde{f}_{2})(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}+\widetilde{f}^{*}(\mathbf{B}^{-1}\boldsymbol{\nu})^{*}\,. \tag{128}\] Then, using the analogous relations between the flattening as given in Equation (94) for the shapes, we have \(\widetilde{f}_{1}^{\prime}+\widetilde{f}_{2}=f_{1}^{\prime}=\mathbf{f}_{1}\) and \(\widetilde{f}_{0}-\widetilde{f}_{2}=f_{2}=\mathbf{f}_{2}\). ### The case of \(B\) with nullity two In this section, we prove the invariance of \(\Phi^{\Xi}(\hbar)\) under the 2-3 Pachner move under the assumption that the matrix \(B\) has nullity 2. Similarly to the proof of Proposition 6.4, we use three intermediate formulas in Equation (98). They involve two Fourier transforms (adding two variables), a pentagon (adding a third variable), two inverse Fourier transforms (adding two variables) and an application of Fubini's Theorem (removing four integration variables). The details are given in the following proposition. **Proposition 6.6**.: If \(B\) has rank \(N\) then \(\Phi^{\Xi}(\hbar)\) is invariant under the 2-3 Pachner move given in Equation (98). Proof.: Following the discussion above, we can assume that \(\operatorname{rank}(b_{1}\,|\,b_{2}\,|\,b_{*})=\operatorname{rank}(\,b_{*})=N\). Then by [11, Lem.A.3], the matrix \((a_{1}-b_{1}\,|\,a_{2}-b_{2}\,|\,b_{*})\) has full rank and we can apply a quad move to the first two columns of \((A\,|\,B)\) to obtain \[\mathbf{A}\ =\ (-b_{1}\,|\,-b_{2}\,|\,a_{*}),\qquad\mathbf{B}\ =\ (a_{1}-b_{1}\,|\,a_{2}-b_{2}\,|\,b_{*}) \tag{129}\] where \({\bf B}\) has full rank. 
The proof will use the following sequence of intermediate matrices, starting from \[(\mathbf{A}\,|\,\mathbf{B}\,|\,\boldsymbol{\nu})\;=\;\big{(}-b_{1}\,\big{|}\,-b_{2}\,\big{|}\,a_{*}\,\big{|}\,a_{1}-b_{1}\,\big{|}\,a_{2}-b_{2}\,\big{|}\,b_{*}\,\big{|}\,\nu-b_{1}-b_{2}\,\big{)},\] which is related by the quad move \(q_{2}^{-1}\times 1\) to \[(A\,|\,B\,|\,\nu)\;=\;\big{(}\,a_{1}\,\big{|}\,a_{2}\,\big{|}\,a_{*}\,\big{|}\,b_{1}\,\big{|}\,b_{2}\,\big{|}\,b_{*}\,\big{|}\,\nu\,\big{)},\] by the \(2\to 3\) move to \[(\widetilde{A}\,|\,\widetilde{B}\,|\,\widetilde{\nu})\;=\;\begin{pmatrix}-1&0&0&0&-1&1&1&0&1\\ a_{1}+a_{2}-b_{1}-b_{2}&a_{1}-b_{2}&a_{2}-b_{1}&a_{*}&0&b_{1}&b_{2}&b_{*}&\nu\end{pmatrix}, \tag{130}\] and by the quad move \(1\times q_{2}\times 1\) to \((\widetilde{\mathbf{A}}\,|\,\widetilde{\mathbf{B}}\,|\,\widetilde{\boldsymbol{\nu}})\). Applying the Fourier transform of Theorem 3.4 to the factors \(\psi_{\hbar}(x_{1},z_{1})\) and \(\psi_{\hbar}(x_{2},z_{2})\) introduces two new integration variables \(w_{1}\) and \(w_{2}\) 
\mathbf{B}^{-1}\boldsymbol{\nu}-1)-(x_{1}+x_{2})\frac{\hbar^{\frac{1}{2}}}{2} \Big{)}\] \[\times\psi_{h}\Big{(}-x_{0}-w_{2}+\frac{w_{1}z_{2}+w_{2}z_{1}}{z_ {0}},z_{1}z_{0}^{-1}\Big{)}\,\psi_{h}\Big{(}x_{0}+w_{1}+w_{2}-\frac{w_{1}z_{2}+ w_{2}z_{1}}{z_{0}},z_{0}\Big{)}\] \[\times\psi_{h}\Big{(}-x_{0}-w_{1}+\frac{w_{1}z_{2}+w_{2}z_{1}}{z_ {0}},z_{2}z_{0}^{-1}\Big{)}\,\prod_{j=1}^{N}\psi_{h}(x_{j}^{*},z_{j}^{*})\] and \[\boldsymbol{\Lambda}_{3}\;=\;\begin{pmatrix}\frac{(z_{1}+z_{2}-z_{1}z_{2})^{2 }}{(z_{1}-1)z_{1}(z_{2}-1)z_{2}}&0&0&0&0&0\\ 0&\frac{z_{1}}{1-z_{1}}&0&-1&0&0\\ 0&0&\frac{z_{2}}{1-z_{2}}&0&-1&0\\ 0&-1&0&0&0&-\mathbf{Q}_{1}^{*}\\ 0&0&-1&0&0&-\mathbf{Q}_{2}^{*}\\ \hline 0&0&0&-\mathbf{Q}_{1}^{*t}&-\mathbf{Q}_{2}^{*t}&z^{*\prime}-\mathbf{Q}^{*} \end{pmatrix}.\] With change of variables \(x_{0}\mapsto x_{0}-w_{1}-w_{2}+\frac{w_{1}z_{2}+w_{2}z_{1}}{z_{0}}\) Lemma 3.1 gives that \[\Phi^{\Xi}(\hbar)\;=\;\langle I_{4}\rangle_{(x_{0},w_{1},w_{2},x_{1},x_{2},x^{ *}),\boldsymbol{\Lambda}_{4}} \tag{135}\] where \[I_{4} = \exp\Big{(}\frac{\hbar}{24}+\frac{\hbar}{8}\mathbf{f}^{t}\mathbf{B} ^{-1}\mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x^{t}(\mathbf{B}^{-1} \boldsymbol{\nu}-1)-(x_{1}+x_{2})\frac{\hbar^{\frac{1}{2}}}{2}\Big{)}\] \[\times\psi_{h}(-x_{0}+w_{1},z_{1}z_{0}^{-1})\,\psi_{h}(x_{0},z_{0} )\,\psi_{h}(-x_{0}+w_{2},z_{2}z_{0}^{-1})\,\prod_{j=1}^{N}\psi_{h}(x_{j}^{*},z_ {j}^{*})\] and \[\boldsymbol{\Lambda}_{4}\ =\ \begin{pmatrix}\frac{(z_{1}+z_{2}-z_{1}z_{2})^{2}}{(z_{ 1}-1)z_{1}(z_{2}-1)z_{2}}&\frac{z_{1}+z_{2}-z_{1}z_{2}}{(z_{1}-1)z_{2}}&\frac{ z_{1}+z_{2}-z_{1}z_{2}}{(z_{2}-1)z_{1}}&0&0\\ \frac{z_{1}+z_{2}-z_{1}z_{2}}{(z_{1}-1)z_{2}}&\frac{z_{1}}{1-z_{1}}&1&-1&0\\ \frac{z_{1}+z_{2}-z_{1}z_{2}}{(z_{2}-1)z_{1}}&1&\frac{z_{2}}{1-z_{2}}&0&-1&0\\ 0&-1&0&0&0&-\mathbf{Q}_{1}^{*}\\ 0&0&-1&0&0&-\mathbf{Q}_{2}^{*}\\ \hline 0&0&0&-\mathbf{Q}_{1}^{*\,t}&-\mathbf{Q}_{2}^{*\,t}&{z^{*}}^{\prime}- \mathbf{Q}^{*}\end{pmatrix}.\] We substitute \(w_{1}\mapsto w_{1}+x_{0}\) and \(w_{2}\mapsto w_{2}+x_{0}\) to obtain with Lemma 3.1 that \[\Phi^{\Xi}(\hbar)\ =\ \langle I_{5}\rangle_{(x_{0},w_{1},w_{2},x_{1},x_{2},x^{ *}),\boldsymbol{\Lambda}_{5}} \tag{136}\] where \[I_{5} = \exp\Big{(}\frac{\hbar}{24}+\frac{\hbar}{8}\mathbf{f}^{t}\mathbf{ B}^{-1}\mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x^{t}(\mathbf{B}^{-1} \boldsymbol{\nu}-1)-(x_{1}+x_{2})\frac{\hbar^{\frac{1}{2}}}{2}\Big{)}\] \[\times\psi_{h}(w_{1},z_{1}z_{0}^{-1})\,\psi_{h}(x_{0},z_{0})\, \psi_{h}(w_{2},z_{2}z_{0}^{-1})\,\prod_{j=1}^{N}\psi_{h}(x_{j}^{*},z_{j}^{*})\] and \[\boldsymbol{\Lambda}_{5}\ =\ \begin{pmatrix}\frac{z_{1}+z_{2}-z_{1}z_{2}}{(z_{1}-1)(z_ {2}-1)}&0&0&-1&-1&0\\ 0&\frac{z_{1}}{(1-z_{1})z_{2}}&1&-1&0&0\\ 0&1&\frac{z_{2}}{(1-z_{2})z_{1}}&0&-1&0\\ -1&-1&0&0&0&-\mathbf{Q}_{1}^{*}\\ -1&0&-1&0&0&-\mathbf{Q}_{2}^{*\,t}\\ \hline 0&0&0&-\mathbf{Q}_{1}^{*\,t}&-\mathbf{Q}_{2}^{*\,t}&{z^{*}}^{\prime}- \mathbf{Q}^{*}\end{pmatrix}.\] By applying the first quad move from Theorem 3.4 to both \(\psi_{h}(w_{1},z_{1}z_{0}^{-1})\) and \(\psi_{h}(w_{2},z_{2}z_{0}^{-1})\) we obtain with new integration variables \(y_{1}\) and \(y_{2}\) that \[\Phi^{\Xi}(\hbar)\ =\ \langle I_{6}\rangle_{(y_{1},y_{2},x_{0},w_{1},w_{2},x_{1}, x_{2},x^{*}),\boldsymbol{\Lambda}_{6}} \tag{137}\] where \[I_{6} = \exp\Big{(}-\frac{\hbar}{24}+\frac{\hbar}{8}\mathbf{f}^{t} \mathbf{B}^{-1}\mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x^{t}( \mathbf{B}^{-1}\boldsymbol{\nu}-1)-(x_{1}+x_{2})\frac{\hbar^{\frac{1}{2}}}{2}\] 
\[\times\psi_{h}\Big{(}y_{1}+\frac{w_{1}z_{1}z_{0}^{-1}}{1-z_{1}z_{0 }^{-1}},\frac{1}{1-z_{1}z_{0}^{-1}}\Big{)}\,\psi_{h}\Big{(}y_{2}+\frac{w_{2}z_{ 2}z_{0}^{-1}}{1-z_{2}z_{0}^{-1}},\frac{1}{1-z_{2}z_{0}^{-1}}\Big{)}\,\psi_{h}(x _{0},z_{0})\] and \[\boldsymbol{\Lambda}_{6}\ =\ \begin{pmatrix}\frac{(z_{1}-1)z_{2}}{z_{1}}&0&0&0&0&0&0 &0\\ 0&\frac{(z_{2}-1)z_{1}}{z_{2}}&0&0&0&0&0&0\\ 0&0&\frac{z_{1}+z_{2}-z_{1}z_{2}}{(z_{1}-1)(z_{2}-1)}&0&0&-1&-1&0\\ 0&0&0&\frac{z_{1}}{(1-z_{1})z_{2}}&1&-1&0&0\\ 0&0&0&1&\frac{z_{2}}{(1-z_{2})z_{1}}&0&-1&0\\ 0&0&-1&-1&0&0&0&-\mathbf{Q}_{1}^{*}\\ 0&0&-1&0&-1&0&0&-\mathbf{Q}_{2}^{*}\\ \hline 0&0&0&0&0&-\mathbf{Q}_{1}^{*t}&-\mathbf{Q}_{2}^{*t}&z^{*^{\prime}}- \mathbf{Q}^{*}\end{pmatrix}.\] We change variables \(y_{1}\mapsto y_{1}-w_{1}\frac{z_{1}z_{0}^{-1}}{1-z_{1}z_{0}^{-1}}\) and \(y_{2}\mapsto y_{2}-w_{2}\frac{z_{2}z_{0}^{-1}}{1-z_{2}z_{0}^{-1}}\) using Lemma 3.1 to obtain that \[\Phi^{\Xi}(\hbar)\ =\ \langle I_{7}\rangle_{(y_{1},y_{2},x_{0},w_{1},w_{2},x_ {1},x_{2},x^{*}),\boldsymbol{\Lambda}_{7}} \tag{138}\] where \[I_{7} =\ \exp\Big{(}-\frac{\hbar}{24}+\frac{\hbar}{8}\mathbf{f}^{t} \mathbf{B}^{-1}\mathbf{A}\mathbf{f}-\frac{\hbar^{\frac{1}{2}}}{2}x^{t}( \mathbf{B}^{-1}\boldsymbol{\nu}-1)-(x_{1}+x_{2})\frac{\hbar^{\frac{1}{2}}}{2} +(y_{1}+y_{2})\frac{\hbar^{\frac{1}{2}}}{2}\Big{)}\] \[\ \ \ \ \times\psi_{\hbar}\Big{(}y_{1},\frac{1}{1-z_{1}z_{0}^{-1}} \Big{)}\,\psi_{\hbar}\Big{(}y_{2},\frac{1}{1-z_{2}z_{0}^{-1}}\Big{)}\,\psi_{ \hbar}(x_{0},z_{0})\,\prod_{j=1}^{N}\psi_{\hbar}(x_{j}^{*},z_{j}^{*})\] and \[\boldsymbol{\Lambda}_{7}\ =\ \begin{pmatrix}\frac{(z_{1}-1)z_{2}}{z_{1}}&0&0&1&0&0&0 &0\\ 0&\frac{(z_{2}-1)z_{1}}{z_{2}}&0&0&1&0&0&0\\ 0&0&\frac{z_{1}+z_{2}-z_{1}z_{2}}{(z_{1}-1)(z_{2}-1)}&0&0&-1&-1&0\\ 1&0&0&0&1&-1&0&0\\ 0&1&0&1&0&0&-1&0\\ 0&0&-1&-1&0&0&0&-\mathbf{Q}_{1}^{*}\\ 0&0&-1&0&-1&0&0&-\mathbf{Q}_{2}^{*}\\ \hline 0&0&0&0&-\mathbf{Q}_{1}^{*t}&-\mathbf{Q}_{2}^{*t}&z^{*^{\prime}}-\mathbf{Q} ^{*}\end{pmatrix}.\] Therefore, we can apply Fubini's Theorem (Lemma 3.1) with the integration variables \(w_{1},w_{2},x_{1},x_{2}\), to obtain that \[\Phi^{\Xi}(\hbar)\ =\ \langle I_{8}\rangle_{(y_{1},y_{2},x_{0},x^{*}), \boldsymbol{\Lambda}_{8}} \tag{139}\] where \[I_{8}\ =\ \exp\Big{(}-\frac{\hbar}{4}(\mathbf{B}^{-1}\boldsymbol{ \nu})_{1}(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}-\frac{\hbar}{24}+\frac{\hbar}{8} \mathbf{f}^{t}\mathbf{B}^{-1}\mathbf{A}\mathbf{f}+\frac{\hbar^{\frac{1}{2}}}{2} \big{(}x_{0}\left((\mathbf{B}^{-1}\boldsymbol{\nu})_{1}+(\mathbf{B}^{-1} \boldsymbol{\nu})_{2}\right)\big{)}\] \[\qquad\qquad-\frac{\hbar^{\frac{1}{2}}}{2}y_{1}\left((\mathbf{B} ^{-1}\boldsymbol{\nu})_{1}-1\right)-\frac{\hbar^{\frac{1}{2}}}{2}y_{2}\left(( \mathbf{B}^{-1}\boldsymbol{\nu})_{2}-1\right)\] \[\qquad\qquad-\frac{\hbar^{\frac{1}{2}}}{2}x^{*t}\left((\mathbf{B }^{-1}\boldsymbol{\nu})^{*}-(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\mathbf{Q}_{1 }^{*}-(\mathbf{B}^{-1}\boldsymbol{\nu})_{1}\mathbf{Q}_{2}^{*}-1\right)\Big{)}\] \[\times\psi_{h}\Big{(}y_{1},\frac{1}{1-z_{1}z_{0}^{-1}}\Big{)}\, \psi_{h}\Big{(}y_{2},\frac{1}{1-z_{2}z_{0}^{-1}}\Big{)}\,\psi_{h}(x_{0},z_{0} )\,\prod_{j=1}^{N}\psi_{h}(x_{j}^{*},z_{j}^{*})\] and \[\boldsymbol{\Lambda}_{8}\ =\ \begin{pmatrix}\frac{(z_{1}-1)z_{2}}{z_{1}}&0&-1&- \mathbf{Q}_{1}^{*}\\ 0&\frac{(z_{2}-1)z_{1}}{z_{2}}&-1&-\mathbf{Q}_{2}^{*}\\ -1&-1&\frac{z_{1}+z_{2}-z_{2}z_{2}}{(z_{1}-1)(z_{2}-1)}+2&\mathbf{Q}_{1}^{*}+ \mathbf{Q}_{2}^{*}\\ \hline-\mathbf{Q}_{1}^{*t}&-\mathbf{Q}_{2}^{*t}&\mathbf{Q}_{1}^{*t}+\mathbf{Q} 
_{2}^{*t}&-\mathbf{Q}^{*}+\mathbf{Q}_{1}^{*t}\mathbf{Q}_{2}^{*}+\mathbf{Q}_{2}^{*t}\mathbf{Q}_{1}^{*}\end{pmatrix}.\] Using Lemma 6.7, we obtain with respect to the integration variables \(\tilde{x}=(x_{0},y_{1},y_{2},x^{*})\) and some \(c\in\frac{1}{24}\mathbb{Z}\) that \[\Phi^{\Xi}(\hbar)\ =\ e^{c\hbar}\langle I_{9}\rangle_{\tilde{x},\boldsymbol{\Lambda}_{9}} \tag{140}\] where, with \(\widetilde{\mathbf{A}},\widetilde{\mathbf{B}}\) and \(\widetilde{\boldsymbol{\nu}}\) as in Equation (130), \[I_{9}\ =\ \exp\Big{(}\frac{\hbar}{8}\widetilde{\mathbf{f}}^{t}\widetilde{\mathbf{B}}^{-1}\widetilde{\mathbf{A}}\widetilde{\mathbf{f}}-\frac{\hbar^{\frac{1}{2}}}{2}\tilde{x}^{t}(\widetilde{\mathbf{B}}^{-1}\widetilde{\boldsymbol{\nu}}-1)\Big{)}\,\prod_{j=0}^{N+2}\psi_{h}(\widetilde{x}_{j}^{*},\widetilde{z}_{j}^{*}) \tag{141}\] and \[\boldsymbol{\Lambda}_{9}\ =\ \mathrm{diag}(\widetilde{z}_{0}^{\prime},\,\widetilde{z}_{1}^{\prime\prime},\,\widetilde{z}_{2}^{\prime\prime},\,\widetilde{z}^{*\prime})-\widetilde{\mathbf{B}}^{-1}\widetilde{\mathbf{A}}. \tag{142}\] **Lemma 6.7**.: With the notation used in the proof of the previous Proposition 6.6 we have \(\mathbf{Q}_{11}=\mathbf{Q}_{12}=\mathbf{Q}_{22}=0\) and the following equalities: \[\widetilde{\mathbf{B}}^{-1}\widetilde{\mathbf{A}}\ =\ \begin{pmatrix}-1&1&1&-\mathbf{Q}_{1}^{*}-\mathbf{Q}_{2}^{*}\\ 1&0&0&\mathbf{Q}_{1}^{*}\\ 1&0&0&\mathbf{Q}_{2}^{*}\\ \hline-\mathbf{Q}_{1}^{*t}-\mathbf{Q}_{2}^{*t}&\mathbf{Q}_{1}^{*t}&\mathbf{Q}_{2}^{*t}&\mathbf{Q}^{*}-\mathbf{Q}_{1}^{*t}\mathbf{Q}_{2}^{*}-\mathbf{Q}_{2}^{*t}\mathbf{Q}_{1}^{*}\end{pmatrix}, \tag{143}\] \[\widetilde{\mathbf{B}}^{-1}\widetilde{\boldsymbol{\nu}}\ =\ \begin{pmatrix}-(\mathbf{B}^{-1}\boldsymbol{\nu})_{1}-(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}+1\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{1}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})^{*}-(\mathbf{B}^{-1}\boldsymbol{\nu})_{1}\mathbf{Q}_{2}^{*}-(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\mathbf{Q}_{1}^{*}\end{pmatrix},\] (144) \[\widetilde{\mathbf{f}}^{t}\widetilde{\mathbf{B}}^{-1}\widetilde{\boldsymbol{\nu}}\ =\ \mathbf{f}^{t}\mathbf{B}^{-1}\boldsymbol{\nu}+\widetilde{f}_{0}-2(\mathbf{B}^{-1}\boldsymbol{\nu})_{1}(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\,. \tag{145}\] Proof.: The relation \(\mathbf{B}^{-1}\mathbf{A}=\mathbf{Q}\) and the fact that the columns of \((a_{1}\,|\,a_{2}\,|\,b_{*})\) are linearly independent imply that \(\mathbf{Q}_{11}=\mathbf{Q}_{12}=\mathbf{Q}_{22}=0\). We will prove Equations (143) and (144) by showing the identity after multiplying by the invertible matrix \(\widetilde{\mathbf{B}}\). For (143) we compute \[\begin{pmatrix}-1&-1&-1&0\\ 0&a_{1}-b_{1}-b_{2}&a_{2}-b_{1}-b_{2}&b_{*}\end{pmatrix}\begin{pmatrix}-1&1&1&-\mathbf{Q}_{1}^{*}-\mathbf{Q}_{2}^{*}\\ 1&0&0&\mathbf{Q}_{1}^{*}\\ 1&0&0&\mathbf{Q}_{2}^{*}\\ \hline-\mathbf{Q}_{1}^{*t}-\mathbf{Q}_{2}^{*t}&\mathbf{Q}_{1}^{*t}&\mathbf{Q}_{2}^{*t}&\mathbf{Q}^{*}-\mathbf{Q}_{1}^{*t}\mathbf{Q}_{2}^{*}-\mathbf{Q}_{2}^{*t}\mathbf{Q}_{1}^{*}\end{pmatrix}\\ =\ \begin{pmatrix}-1&-1&-1&0\\ \hline a_{1}+a_{2}-2b_{1}-2b_{2}&-b_{*}\mathbf{Q}_{1}^{*t}&b_{*}\mathbf{Q}_{2}^{*t}&-b_{2}\mathbf{Q}_{1}^{*t}-b_{1}\mathbf{Q}_{2}^{*t}-b_{*}\mathbf{Q}_{1}^{*t}\mathbf{Q}_{2}^{*t}\mathbf{Q}_{1}^{*t}\end{pmatrix}\\ =\ \widetilde{\mathbf{A}}. 
\tag{146}\] For Equation (144), we compute \[\begin{pmatrix}-1&-1&-1&0\\ 0&a_{1}-b_{1}-b_{2}&a_{2}-b_{1}-b_{2}&b_{*}\end{pmatrix}\begin{pmatrix}-( \mathbf{B}^{-1}\boldsymbol{\nu})_{1}-(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}+1\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{1}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\\ (\mathbf{B}^{-1}\boldsymbol{\nu})^{*}-(\mathbf{B}^{-1}\boldsymbol{\nu})_{1} \mathbf{Q}_{2}^{*}-(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}\mathbf{Q}_{1}^{*} \end{pmatrix}\\ =\ \begin{pmatrix}-1\\ \boldsymbol{\nu}\end{pmatrix}. \tag{147}\] For Equation (145), we note that for \(i=1,2\) we have \[\widetilde{\mathbf{f}}^{*t}\mathbf{Q}_{i}^{*}\ =\ (\widetilde{\mathbf{B}}^{-1} \widetilde{\boldsymbol{\nu}})_{i}-\widetilde{\mathbf{f}}_{0}-\widetilde{ \mathbf{f}}_{i}^{\prime\prime} \tag{148}\] and so \[\widetilde{\mathbf{f}}^{t}\widetilde{\mathbf{B}}^{-1}\widetilde{\boldsymbol{ \nu}}\ =\ \widetilde{f}_{0}+(\widetilde{f}_{1}^{\prime}+\widetilde{f}_{2})( \mathbf{B}^{-1}\boldsymbol{\nu})_{1}+(\widetilde{f}_{1}+\widetilde{f}_{2}^{ \prime})(\mathbf{B}^{-1}\boldsymbol{\nu})_{2}+\widetilde{f}^{*t}(\mathbf{B}^{ -1}\boldsymbol{\nu})^{*}-2(\mathbf{B}^{-1}\boldsymbol{\nu})_{1}(\mathbf{B}^{-1 }\boldsymbol{\nu})_{2} \tag{149}\] Then, using the analogous relations between the flattening as given in Equation (94) for the shapes, we have \(\widetilde{f}_{1}^{\prime}+\widetilde{f}_{2}=f_{1}^{\prime}=\mathbf{f}_{1}\) and \(\widetilde{f}_{1}+\widetilde{f}_{2}^{\prime}=f_{2}^{\prime}=\mathbf{f}_{2}\). ## 7. The series of the simplest hyperbolic \(4_{1}\) knot In this section, we discuss an effective computation of the power series \(\Phi^{\Xi}(\hbar)\) for the simplest hyperbolic knot, namely the \(4_{1}\) knot. This example was studied extensively in [11]. From [11, Ex. 2.6], we obtain that the NZ datum \(\Xi_{4_{1}}\) is given by \[A\ =\ \begin{pmatrix}2&2\\ 1&1\end{pmatrix},\quad B\ =\ \begin{pmatrix}1&1\\ 1&0\end{pmatrix},\quad\nu\ =\ \begin{pmatrix}2\\ 1\end{pmatrix},\quad f\ =\ \begin{pmatrix}0\\ 1\end{pmatrix},\quad f^{\prime\prime}\ =\ \begin{pmatrix}0\\ 0\end{pmatrix}, \tag{150}\] and \(z_{1}=z_{2}=\zeta_{6}=e^{2\pi i/6}\). Therefore, we have \[\Phi^{\Xi_{4_{1}}}(\hbar)\ =\ e^{\,\frac{\hbar}{8}}\langle\psi_{\hbar}(x_{1}, \zeta_{6})\psi_{\hbar}(x_{2},\zeta_{6})\rangle_{(x_{1},x_{2}),\Lambda_{0}}\,, \tag{151}\] where \[\Lambda_{0}\ =\ \begin{pmatrix}\zeta_{6}-1&-1\\ -1&\zeta_{6}-1\end{pmatrix}\,. \tag{152}\] This is a two dimensional formal Gaussian integral which can be simplified. Using \(\zeta_{6}=1/(1-\zeta_{6})=1-\zeta_{6}^{-1}\) and applying a change of coordinates \(x_{1}\mapsto x_{1}-\zeta_{6}x_{2}\), we find that \[\Phi^{\Xi_{4_{1}}}(\hbar)\ =\ e^{\frac{\hbar}{8}}\langle\psi_{\hbar}(x_{1}-\zeta_{6 }x_{2},\zeta_{6})\psi_{\hbar}(x_{2},\zeta_{6})\rangle_{(x_{1},x_{2}),\Lambda_{ 1}}\,, \tag{153}\] where \[\Lambda_{1}\ =\ \begin{pmatrix}\zeta_{6}-1&0\\ 0&2\zeta_{6}-1\end{pmatrix}\,. \tag{154}\] We can apply Fubini's theorem [4, Prop.2.13] and Corollary 3.5 to perform the integral over \(x_{1}\). After renaming the variable \(x_{2}\) by \(x\), we express \(\Phi^{\Xi_{4_{1}}}(\hbar)\) by a one-dimensional formal Gaussian integral \[\Phi^{\Xi_{4_{1}}}(\hbar)\ =\ e^{\frac{\hbar}{6}}\Big{\langle}\exp\Big{(} \frac{x}{2}\hbar^{\frac{1}{2}}\Big{)}\psi_{\hbar}(x,\zeta_{6})^{2}\Big{\rangle} _{x,2\zeta_{6}-1}\,. 
\tag{155}\] Using the definition of \(\psi_{\hbar}\) from Equation (1) and expanding to \(O(\hbar^{5/2})\), we obtain that \[\exp\Big{(}\frac{x}{2}\hbar^{\frac{1}{2}}\Big{)}\psi_{\hbar}(x,\zeta_{6})^{2} \tag{156}\] \[=\ 1+\bigg{(}\frac{1}{3}x^{3}+\Big{(}\zeta_{6}-\frac{1}{2}\Big{)}x\bigg{)}\hbar^{1/2}+\bigg{(}\frac{1}{18}x^{6}+\Big{(}\frac{1}{2}\zeta_{6}-\frac{1}{4}\Big{)}x^{4}-\frac{7}{8}x^{2}+\Big{(}-\frac{1}{6}\zeta_{6}+\frac{1}{6}\Big{)}\bigg{)}\hbar\] \[\ \ \ \ +\bigg{(}\frac{1}{162}x^{9}+\Big{(}\frac{1}{9}\zeta_{6}-\frac{1}{18}\Big{)}x^{7}-\frac{1}{2}x^{5}+\Big{(}-\frac{73}{72}\zeta_{6}+\frac{77}{144}\Big{)}x^{3}+\Big{(}\frac{1}{12}\zeta_{6}+\frac{1}{4}\Big{)}x\bigg{)}\hbar^{3/2}\] \[\ \ \ \ +\bigg{(}\frac{1}{1944}x^{12}+\Big{(}\frac{5}{324}\zeta_{6}-\frac{5}{648}\Big{)}x^{10}-\frac{37}{288}x^{8}+\Big{(}-\frac{1337}{2160}\zeta_{6}+\frac{1357}{4320}\Big{)}x^{6}\] \[\ \ \ \ +\Big{(}\frac{1}{24}\zeta_{6}+\frac{1027}{1152}\Big{)}x^{4}+\Big{(}\frac{23}{48}\zeta_{6}-\frac{5}{16}\Big{)}x^{2}-\frac{1}{72}\zeta_{6}\Big{)}\hbar^{2}+O(\hbar^{5/2})\,.\] Noting that \(2\zeta_{6}-1=\sqrt{-3}\), we can then evaluate Equation (155) with Equation (20) to obtain that \[e^{-\frac{\hbar}{4}}\Phi^{\Xi_{4_{1}}}(\hbar)\ =\ 1+\frac{11}{72\sqrt{-3}}\hbar+\frac{697}{2(72\sqrt{-3})^{2}}\hbar^{2}+O(\hbar^{3})\,. \tag{157}\] This is in agreement with computations in [11; 12; 21]. The one-dimensional formal Gaussian integral (155) gives an effective computation of the series \(\Phi^{\Xi_{4_{1}}}(\hbar)\). Indeed, using a pari-gp program one can compute one hundred coefficients in a few seconds and two hundred coefficients in a few minutes; the first few of them are given by \[e^{-\frac{\hbar}{4}}\Phi^{\Xi_{4_{1}}}(\hbar) \tag{158}\] \[=\ 1-\frac{11}{216}\sqrt{-3}\,\hbar-\frac{697}{31104}\hbar^{2}+\frac{724351}{100776960}\sqrt{-3}\,\hbar^{3}+\frac{278392949}{29023764480}\hbar^{4}-\frac{244284791741}{43889331893760}\sqrt{-3}\,\hbar^{5}\] \[\ \ \ \ \ -\frac{1140363907117019}{94789292890521600}\hbar^{6}+\frac{212114205337147471}{20474487264352665600}\sqrt{-3}\,\hbar^{7}+\frac{367362844229968131557}{11793304664267135385600}\hbar^{8}\] \[\ \ \ \ -\frac{44921192873529779078383921}{1260940134703442115428352000}\sqrt{-3}\,\hbar^{9}-\frac{31743421305624955760214307}{23109593741473993679123251200}\hbar^{10}+O(\hbar^{11})\,.\] Similarly to the case of the \(4_{1}\), one can obtain one-dimensional formal Gaussian integrals for the next two simplest hyperbolic knots, the \(5_{2}\) and the \((-2,3,7)\)-pretzel knot, whose details we omit. ### Acknowledgements The authors wish to thank Don Zagier for enlightening conversations. The work of M.S. and C.W. has been supported by the Max-Planck-Gesellschaft. C.W. wishes to thank the Southern University of Science and Technology's International Center for Mathematics in Shenzhen for their hospitality while the paper was completed. ## Appendix A Complements on the Fourier transform In this appendix, we give the omitted details in the last step of the proof of Theorem 3.4. They are an affine change of coordinates, followed by the corresponding computation of the formal Gaussian integration. The proof requires a version of formal Gaussian integration where the symmetric matrix \(\Lambda_{\hbar}\in\operatorname{GL}_{N}(\mathbb{Q}(z)[\![{\hbar^{1/2}}]\!])\) depends on \(\hbar\), such that \(\Lambda_{0}\) is invertible. 
In this case, for an integrable function \(f_{\hbar}(x,z)\in\mathbb{Q}(z)[x][\![{\hbar^{1/2}}]\!]\), we define \[\begin{split}\left\langle\!\left\langle f_{\hbar}(x,z)\right\rangle\!\right\rangle_{x,\Lambda_{\hbar}}&\;:=\;\sqrt{\frac{\det(\Lambda_{\hbar})}{\det(\Lambda_{0})}}\big{\langle}\exp\big{(}-\tfrac{1}{2}x^{t}(\Lambda_{\hbar}-\Lambda_{0})x\big{)}f_{\hbar}(x,z)\big{\rangle}_{x,\Lambda_{0}}\\ &\;=\;\frac{\int e^{-\frac{1}{2}x^{t}\Lambda(\hbar)\,x}f_{\hbar}(x,z)\,dx}{\int e^{-\frac{1}{2}x^{t}\Lambda(\hbar)\,x}\,dx}\in\mathbb{Q}(z)[\![{\hbar}]\!]\,.\end{split} \tag{159}\] This version of formal Gaussian integration satisfies the properties of Lemmas 3.1 and 3.2. We use Equation (34) and Equation (37) to obtain Equation (160). Lemma 3.2 implies that \[\begin{split}\psi_{\hbar}(x,z)&\;=\;C_{\hbar}(x,z)\,\exp\Big{(}-\frac{\hbar}{24}-\frac{\hbar^{1/2}}{2}(a+x)\Big{)}\\ &\qquad\times\Big{\langle}\exp\Big{(}\frac{y}{2}\hbar^{\frac{1}{2}}+\Big{(}\frac{1}{ze^{x\hbar^{1/2}}}-\frac{1}{z}\Big{)}\frac{y^{2}}{2}\Big{)}\,\psi_{\hbar}\Big{(}y,\frac{1}{1-ze^{x\hbar^{1/2}}}\Big{)}\Big{\rangle}_{y,1-z^{-1}},\end{split} \tag{161}\] where \(a=a_{\hbar}(x,z)\in\mathbb{Q}(z)[x][\![{\hbar^{1/2}}]\!]\) is given by \[a\;:=\;\frac{1}{\hbar^{1/2}}\log\Big{(}\frac{1-z}{1-ze^{x\hbar^{1/2}}}\Big{)}\in\mathbb{Q}(z)[x][\![{\hbar^{1/2}}]\!]\,. \tag{162}\] Similarly to Equation (34) we write \[\psi_{\hbar}\Big{(}y,\frac{1}{1-ze^{x\hbar^{1/2}}}\Big{)}\;=\;\exp\Big{(}A_{0}-(a(1-z^{-1})+x)y-\Big{(}\frac{1}{ze^{x\hbar^{1/2}}}-\frac{1}{z}\Big{)}\frac{y^{2}}{2}\Big{)}\,\psi_{\hbar}\Big{(}y+a,\frac{1}{1-z}\Big{)} \tag{163}\] where \(A_{0}=A_{0,h}(x,z)\in\frac{1}{\hbar}\mathbb{Q}(z)[x][\![\hbar^{1/2}]\!]\) is given by \[\begin{split} A_{0}&\;=\;\frac{1}{2}\Big{(}\log\Big{(}\frac{-ze^{x\hbar^{1/2}}}{1-ze^{x\hbar^{1/2}}}\Big{)}-\log\Big{(}\frac{-z}{1-z}\Big{)}\Big{)}+\frac{1}{\hbar}\Big{(}\operatorname{Li}_{2}\Big{(}\frac{1}{1-ze^{x\hbar^{1/2}}}\Big{)}-\operatorname{Li}_{2}\Big{(}\frac{1}{1-z}\Big{)}\Big{)}\\ &\qquad+\frac{a^{2}}{2z}+\frac{a}{\hbar^{1/2}}\log\Big{(}\frac{-z}{1-z}\Big{)}\\ &\;=\;\frac{1}{2}a\hbar^{\frac{1}{2}}+\frac{1}{2}x\hbar^{\frac{1}{2}}+\frac{1}{\hbar}\Big{(}\operatorname{Li}_{2}\Big{(}\frac{1}{1-ze^{x\hbar^{1/2}}}\Big{)}-\operatorname{Li}_{2}\Big{(}\frac{1}{1-z}\Big{)}\Big{)}+\frac{a^{2}}{2z}+\frac{a}{\hbar^{1/2}}\log\Big{(}\frac{-z}{1-z}\Big{)}\,.\end{split} \tag{164}\] Then Equation (161) can be written as \[\begin{split}\psi_{h}(x,z)&\;=\;C_{h}(x,z)\,\exp\Big{(}-\frac{\hbar}{24}-\frac{\hbar^{1/2}}{2}(a+x)+A_{0}\Big{)}\\ &\qquad\times\Big{\langle}\exp\Big{(}\frac{y}{2}\hbar^{\frac{1}{2}}-(a(1-z^{-1})+x)y\Big{)}\,\psi_{h}\Big{(}y+a,\frac{1}{1-z}\Big{)}\Big{\rangle}_{y,1-z^{-1}}.\end{split} \tag{165}\] We make the change of variables \[y\mapsto y-a+\frac{xz}{1-z} \tag{166}\] and using Equation (23) of Lemma 3.1, we obtain that \[\begin{split}\psi_{h}(x,z)&\;=\;C_{h}(x,z)\,\exp\Big{(}-\frac{\hbar}{24}-\frac{\hbar^{1/2}}{2}(a+x)+A_{0}+\frac{a^{2}}{2}-\frac{a^{2}}{2z}+ax+\frac{x^{2}}{2(1-z^{-1})}-\frac{\hbar^{1/2}}{2}a\Big{)}\\ &\qquad\times\Big{\langle}\exp\Big{(}\frac{\hbar^{1/2}}{2}\Big{(}y+\frac{xz}{1-z}\Big{)}\Big{)}\,\psi_{h}\Big{(}y+\frac{xz}{1-z},\frac{1}{1-z}\Big{)}\Big{\rangle}_{y,1-z^{-1}}.\end{split} \tag{167}\] Hence, it remains to show that \[1\;=\;C_{h}(x,z)\,\exp\Big{(}-\frac{\hbar^{1/2}}{2}(a+x)+A_{0}+\frac{a^{2}}{2}-\frac{a^{2}}{2z}+ax+\frac{x^{2}}{2(1-z^{-1})}-\frac{\hbar^{1/2}}{2}a\Big{)}. 
\tag{168}\] In other words, using the definitions of \(C_{h}(x,z)\) from Equation (34) and \(A_{0}\) from Equation (164) it suffices to prove that \[\begin{split} 0&\;=\frac{1}{\hbar}\Big{(} \operatorname{Li}_{2}\Big{(}\frac{1}{1-ze^{x\hbar^{1/2}}}\Big{)}-\operatorname{ Li}_{2}\Big{(}\frac{1}{1-z}\Big{)}\Big{)}+\frac{1}{\hbar}\big{(} \operatorname{Li}_{2}(z)-\operatorname{Li}_{2}\big{(}ze^{x\hbar^{1/2}}\big{)} \big{)}\\ &\qquad+\frac{a}{\hbar^{1/2}}\log\Big{(}\frac{-z}{1-z}\Big{)}- \frac{x}{\hbar^{1/2}}\log(1-z)+\frac{a^{2}}{2}+ax.\end{split} \tag{169}\] With the transformation formula of the dilogarithm \[\operatorname{Li}_{2}\Big{(}\frac{1}{1-z}\Big{)}\;=\;\operatorname{Li}_{2}(z) -\frac{\pi^{2}}{3}+\log(z)\log(1-z)-\frac{1}{2}\log^{2}(z-1) \tag{170}\] the right hand side of the previous equation is given by \[\begin{split}&\frac{1}{\hbar}\Big{(}\log\left(ze^{x\hbar^{1/2}} \right)\log\left(1-ze^{x\hbar^{1/2}}\right)-\frac{1}{2}\log^{2}\left(ze^{x\hbar^ {1/2}}-1\right)\\ &\qquad-\log(z)\log(1-z)+\frac{1}{2}\log^{2}(z-1)\Big{)}\\ &\qquad+\frac{a}{\hbar^{1/2}}\log\left(\frac{-z}{1-z}\right)- \frac{x}{\hbar^{1/2}}\log(1-z)+\frac{a^{2}}{2}+ax.\end{split} \tag{171}\] With \(x\hbar^{1/2}=\log\left(e^{x\hbar^{1/2}}\right)\) we compute \[\begin{split}&\frac{1}{\hbar}\log\left(ze^{x\hbar^{1/2}}\right) \log\left(1-ze^{x\hbar^{1/2}}\right)-\frac{1}{\hbar}\log(z)\log(1-z)-\frac{x}{ \hbar^{1/2}}\log(1-z)+ax\\ &=\frac{1}{\hbar}\log\left(ze^{x\hbar^{1/2}}\right)\log\left(1- ze^{x\hbar^{1/2}}\right)-\frac{1}{\hbar}\log(z)\log(1-z)-\frac{1}{\hbar}\log(e^{x \hbar^{1/2}})\log(1-z)+\frac{a}{\hbar^{1/2}}\log\left(e^{x\hbar^{1/2}}\right) \\ &=\frac{1}{\hbar}\log\left(ze^{x\hbar^{1/2}}\right)\Big{(}\log \left(1-ze^{x\hbar^{1/2}}\right)-\log(1-z)\Big{)}+\frac{a}{\hbar^{1/2}}\log \left(e^{x\hbar^{1/2}}\right)\\ &=\ -\frac{a}{\hbar^{1/2}}\log\left(ze^{x\hbar^{1/2}}\right)+\frac{a}{ \hbar^{1/2}}\log\left(e^{x\hbar^{1/2}}\right)\\ &=\ -\frac{a}{\hbar^{1/2}}\log(z)\end{split} \tag{172}\] so that Equation (171) becomes \[\begin{split}&\quad-\frac{1}{2\hbar}\log^{2}\left(ze^{x\hbar^{1/2 }}-1\right)+\frac{1}{2\hbar}\log^{2}(z-1)+\frac{a}{\hbar^{1/2}}\log\left(\frac {-z}{1-z}\right)+\frac{a^{2}}{2}-\frac{a}{\hbar^{1/2}}\log(z)\\ &=\ -\frac{1}{2\hbar}\log^{2}\left(ze^{x\hbar^{1/2}}-1\right)+\frac{1}{2 \hbar}\log^{2}(z-1)-\frac{a}{\hbar^{1/2}}\log(z-1)+\frac{a^{2}}{2}\\ &=\ -\frac{1}{2\hbar}\log^{2}\left(ze^{x\hbar^{1/2}}-1\right)+\frac{1}{2 \hbar}\Big{(}\frac{1}{\hbar^{1/2}}\log(z-1)-a\Big{)}^{2}\\ &=\ -\frac{1}{2\hbar}\log^{2}\left(ze^{x\hbar^{1/2}}-1\right)+\frac{1}{2 \hbar}\log^{2}(ze^{x\hbar^{1/2}}-1)\\ &=0,\end{split} \tag{173}\] which completes the proof of the last step of Theorem 3.4. ## Appendix B Complements on the pentagon identity In this appendix, we give the omitted details in the last step of the proof of Theorem 3.6. They are an affine change of coordinates, followed by the corresponding computation of the formal Gaussian integration. 
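Both appendices lean on dilogarithm identities such as the transformation formula (170). As a quick numerical sanity check of (170) — a minimal editorial sketch, not part of the proofs, assuming Python with the mpmath library and evaluated at negative real \(z\), where all logarithms stay on their principal branches:

```python
# Numerical check of the dilogarithm transformation formula of Eq. (170):
#   Li2(1/(1-z)) = Li2(z) - pi^2/3 + log(z) log(1-z) - (1/2) log(z-1)^2 .
# Assumes mpmath; tested at negative real z (no branch cuts crossed there).
from mpmath import mp, mpf, polylog, log, pi

mp.dps = 30  # working precision in decimal digits
for z in [mpf(-2), mpf(-1), mpf("-0.5")]:
    lhs = polylog(2, 1 / (1 - z))
    rhs = polylog(2, z) - pi**2 / 3 + log(z) * log(1 - z) - log(z - 1)**2 / 2
    assert abs(lhs - rhs) < mpf(10)**-25, (z, lhs, rhs)
print("Eq. (170) holds at the sampled points")
```

For \(z\) elsewhere in the complex plane the identity may pick up branch corrections, which is why the sample points above are restricted to the negative real axis.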
Equations (49) and (34) give \[\psi_{\hbar}(x,z_{1})\psi_{\hbar}(y,z_{2})\ =\ e^{-\frac{\hbar}{24}}C_{\hbar}(x,z_{1})C_{\hbar}(y,\hat{z}_{1})\ \langle\!\langle\psi_{\hbar}(-w,\hat{z}_{1}\hat{z}_{0}^{-1})\psi_{\hbar}(w,\hat{z}_{0})\psi_{\hbar}(-w,\hat{z}_{2}\hat{z}_{0}^{-1})\rangle\!\rangle_{w,\hat{\delta}}\, \tag{174}\] where \[\begin{split}\hat{z}_{1}\ =\ z_{1}e^{x\hbar^{1/2}}\qquad\qquad\hat{z}_{2}\ =\ z_{2}e^{y\hbar^{1/2}}\\ \hat{z}_{0}\ =\ \hat{z}_{1}+\hat{z}_{2}-\hat{z}_{1}\hat{z}_{2},\quad\hat{\delta}\ =\ \frac{(\hat{z}_{1}+\hat{z}_{2}-\hat{z}_{1}\hat{z}_{2})^{2}}{\hat{z}_{1}\hat{z}_{2}(1-\hat{z}_{1})(1-\hat{z}_{2})}\,.\end{split} \tag{175}\] Note that \(\hat{z}_{1}\), \(\hat{z}_{2}\), \(\hat{z}_{0}\) and \(\hat{\delta}\) are power series in \(\hbar^{1/2}\) which, when evaluated at \(\hbar=0\), coincide with \(z_{1}\), \(z_{2}\), \(z_{0}\) and \(\delta\) given in Equations (46) and (47). We apply Lemma 3.2 to obtain that \[\psi_{\hbar}(x,z_{1})\psi_{\hbar}(y,z_{2}) =e^{-\frac{\hbar}{24}}C_{\hbar}(x,z_{1})C_{\hbar}(y,\hat{z}_{1})\exp\Big{(}\frac{1}{2}(\log\hat{\delta}-\log\delta)\Big{)} \tag{176}\] \[\quad\times\Big{\langle}\exp\Big{(}\frac{w^{2}}{2}(\delta-\hat{\delta})\Big{)}\psi_{\hbar}(-w,\hat{z}_{1}\hat{z}_{0}^{-1})\psi_{\hbar}(w,\hat{z}_{0})\psi_{\hbar}(-w,\hat{z}_{2}\hat{z}_{0}^{-1})\Big{\rangle}_{w,\delta}.\] where \(a=a_{\hbar}(x,z)\in\mathbb{Q}(z)[x]\llbracket\hbar^{1/2}\rrbracket\) is given by \[a\ :=\ \frac{1}{\hbar^{1/2}}\log(z_{0}\hat{z}_{0}^{-1}). \tag{177}\] We write similarly to Equation (34) \[\psi_{\hbar}(-w,\hat{z}_{1}\hat{z}_{0}^{-1})\psi_{\hbar}(w,\hat{z}_{0})\psi_{\hbar}(-w,\hat{z}_{2}\hat{z}_{0}^{-1}) \tag{178}\] \[= \exp(A_{0}+w((x+y)+2a+\mathrm{Li}_{0}(z_{1}z_{0}^{-1})(a+x)+\mathrm{Li}_{0}(z_{0})a+\mathrm{Li}_{0}(z_{2}z_{0}^{-1})(a+y))+\frac{w^{2}}{2}(\hat{\delta}-\delta))\] \[\times\psi_{\hbar}(-w+a+x,\hat{z}_{1}\hat{z}_{0}^{-1})\psi_{\hbar}(w-a,\hat{z}_{0})\psi_{\hbar}(-w+a+y,\hat{z}_{2}\hat{z}_{0}^{-1})\] where \(A_{0}=A_{0,\hbar}(x,z)\in\frac{1}{\hbar}\mathrm{Q}(z)[x]\llbracket\hbar^{1/2}\rrbracket\) is given by \[A_{0} :=\frac{1}{\hbar}(\mathrm{Li}_{2}(\hat{z}_{1}\hat{z}_{0}^{-1})+\mathrm{Li}_{2}(\hat{z}_{0})+\mathrm{Li}_{2}(\hat{z}_{2}\hat{z}_{0}^{-1})-\mathrm{Li}_{2}(z_{1}z_{0}^{-1})-\mathrm{Li}_{2}(z_{0})-\mathrm{Li}_{2}(z_{2}z_{0}^{-1})) \tag{179}\] \[\quad+\frac{1}{2}(\log(1-\hat{z}_{1}\hat{z}_{0}^{-1})+\log(1-\hat{z}_{0})+\log(1-\hat{z}_{2}\hat{z}_{0}^{-1})\] \[\qquad\quad-\log(1-z_{1}z_{0}^{-1})-\log(1-z_{0})-\log(1-z_{2}z_{0}^{-1}))\] \[\quad+\frac{1}{\hbar^{1/2}}(\log(1-z_{1}z_{0}^{-1})(a+x)-\log(1-z_{0})a+\log(1-z_{2}z_{0}^{-1})(a+y))\] \[\quad-\frac{1}{2}(\mathrm{Li}_{0}(z_{1}z_{0}^{-1})(a+x)^{2}+\mathrm{Li}_{0}(z_{0})a^{2}+\mathrm{Li}_{0}(z_{2}z_{0}^{-1})(a+y)^{2})\] Hence, \(\psi_{\hbar}(x,z_{1})\psi_{\hbar}(y,z_{2})\) can be written as \[e^{-\frac{\hbar}{24}}C_{\hbar}(x,z_{1})C_{\hbar}(y,\hat{z}_{1})\exp\Big{(}\frac{1}{2}(\log(\hat{\delta})-\log(\delta))+A_{0}\Big{)} \tag{180}\] \[\quad\times\psi_{\hbar}(-w+a+x,\hat{z}_{1}\hat{z}_{0}^{-1})\psi_{\hbar}(w-a,\hat{z}_{0})\psi_{\hbar}(-w,\hat{z}_{2}\hat{z}_{0}^{-1})\Big{\rangle}_{w+a+y,\hat{\delta}}.\] With the change of variables \[w\mapsto w+a+x+y-\frac{xz_{2}+yz_{1}}{z_{0}} \tag{181}\] combined with Equation (23) of Lemma 3.1, we obtain that \[\begin{split}& e^{-\frac{h}{24}}C_{h}(x,z_{1})C_{h}(y,\hat{z}_{1})\exp\Big{(}\frac{1}{2}(\log(\hat{\delta})-\log(\delta))+A_{0}\Big{)}\\ &\exp\Big{(}\frac{\delta}{2}\Big{(}a+x+y-\frac{xz_{2}+yz_{1}}{z_{
0}}\Big{)}^{2}+\Big{(}a+x+y-\frac{xz_{2}+yz_{1}}{z_{0}}\Big{)}(x+y+\delta a+ \operatorname{Li}_{0}(z_{1}z_{0}^{-1})x+\operatorname{Li}_{0}(z_{2}z_{0}^{-1} )y)\Big{)}\\ &\times\Big{\langle}\psi_{h}\big{(}-w-y+\frac{xz_{2}+yz_{1}}{z_{ 0}},\hat{z}_{1}\hat{z}_{0}^{-1}\Big{)}\psi_{h}\Big{(}w+x+y+\frac{xz_{2}+yz_{1}} {z_{0}},\hat{z}_{0}\Big{)}\psi_{h}\big{(}-w-x+\frac{xz_{2}+yz_{1}}{z_{0}},\hat {z}_{2}\hat{z}_{0}^{-1}\big{)}\Big{\rangle}_{w,\delta}.\end{split} \tag{182}\] Hence, in order to prove Equation (48) it remains to prove that the term in front of the bracket simplifies to \(e^{-\frac{h}{24}}\). For this, we use the definitions of \(C_{h}\) (35) and \(A_{0}\) (179) to obtain \[\begin{split}&\log(C_{h}(x,z_{1}))+\log(C_{h}(y,\hat{z}_{1}))+ \frac{1}{2}(\log(\hat{\delta})-\log(\delta))+A_{0}+\frac{\delta}{2}\Big{(}a+x+y -\frac{xz_{2}+yz_{1}}{z_{0}}\Big{)}^{2}\\ &+\Big{(}a+x+y-\frac{xz_{2}+yz_{1}}{z_{0}}\Big{)}(x+y+\delta a+ \operatorname{Li}_{0}(z_{1}z_{0}^{-1})x+\operatorname{Li}_{0}(z_{2}z_{0}^{-1 })y))\\ =&\frac{1}{\hbar}(-\operatorname{Li}_{2}(\hat{z}_{1} )+\operatorname{Li}_{2}(z_{1})-\operatorname{Li}_{2}(\hat{z}_{2})+ \operatorname{Li}_{2}(z_{2})+\operatorname{Li}_{2}(\hat{z}_{1}\hat{z}_{0}^{-1 })-\operatorname{Li}_{2}(z_{1}z_{0}^{-1})\\ &\qquad+\operatorname{Li}_{2}(\hat{z}_{0}^{-1})-\operatorname{Li} _{2}(z_{0})+\operatorname{Li}_{2}(\hat{z}_{2}\hat{z}_{0}^{-1})-\operatorname {Li}_{2}(z_{2}z_{0}^{-1}))\\ &+\frac{1}{2}(-\log(1-\hat{z}_{1})+\log(1-z_{1})-\log(1-\hat{z}_{ 2})+\log(1-z_{2})+\log\hat{\delta}-\log\delta\\ &\qquad+\log(1-\hat{z}_{1}\hat{z}_{0}^{-1})-\log(1-z_{1}z_{0}^{-1 })+\log(1-\hat{z}_{0})-\log(1-z_{0})\\ &\qquad+\log(1-\hat{z}_{2}\hat{z}_{0}^{-1})-\log(1-z_{2}z_{0}^{- 1}))\\ &+\frac{1}{\hbar^{1/2}}(-\log(1-z_{1})x-\log(1-z_{2})y+(x+y)(\log( z_{0})-\log(\hat{z}_{0}))\\ &\qquad+\log(1-z_{1}z_{0}^{-1})(a+x)-\log(1-z_{0})a+\log(1-z_{2} z_{0}^{-1})(a+y))\\ &+\frac{a^{2}}{2}(\delta-\operatorname{Li}_{0}(z_{1}z_{0}^{-1})- \operatorname{Li}_{0}(z_{0})-\operatorname{Li}_{0}(z_{2}z_{0}^{-1}))\\ &+\Big{(}x+y-\frac{xz_{2}+yz_{2}}{z_{0}}\Big{)}\Big{(}x+y+ \operatorname{Li}_{0}(z_{1}z_{0}^{-1})x+\operatorname{Li}_{0}(z_{2}z_{0}^{-1} )y-\frac{\delta}{2}\Big{(}x+y-\frac{xz_{2}+yz_{2}}{z_{0}}\Big{)}\Big{)}\\ &\qquad-\operatorname{Li}_{0}(z_{1}z_{0}^{-1})\frac{x^{2}}{2}- \operatorname{Li}_{0}(z_{2}z_{0}^{-1})\frac{y^{2}}{2}.\end{split} \tag{183}\] Inserting the definitions of \(\delta\) (47) and \(\hat{\delta}\) (175) and using the relations \[\begin{split} 1-z_{1}z_{0}^{-1}&\ =\ z_{2}(1-z_{1})z_{0}^{-1},\\ 1-z_{2}z_{0}^{-1}&\ =\ z_{1}(1-z_{2})z_{0}^{-1},\\ 1-z_{0}&\ =\ (1-z_{1})(1-z_{2}),\end{split} \tag{184}\] and similar ones for \(\hat{z}_{1},\hat{z}_{2}\) and \(\hat{z}_{0}\) we obtain that the terms \[\begin{split}\frac{1}{2}(-&\log(1-\hat{z}_{1})+ \log(1-z_{1})-\log(1-\hat{z}_{2})+\log(1-z_{2})+\log\hat{\delta}-\log\delta\\ &+\log(1-\hat{z}_{1}\hat{z}_{0}^{-1})-\log(1-z_{1}z_{0}^{-1})+ \log(1-\hat{z}_{0})-\log(1-z_{0})\\ &+\log(1-\hat{z}_{2}\hat{z}_{0}^{-1})-\log(1-z_{2}z_{0}^{-1})) \end{split} \tag{185}\] vanish. 
Furthermore, we have \[\delta-\operatorname{Li}_{0}(z_{1}z_{0}^{-1})-\operatorname{Li}_{0}(z_{0})- \operatorname{Li}_{0}(z_{2}z_{0}^{-1})\ =\ 2 \tag{186}\] as well as \[\begin{split}&\Big{(}x+y-\frac{xz_{2}+yz_{2}}{z_{0}}\Big{)} \Big{(}x+y+\operatorname{Li}_{0}(z_{1}z_{0}^{-1})x+\operatorname{Li}_{0}(z_{2} z_{0}^{-1})y-\frac{\delta}{2}\Big{(}x+y-\frac{xz_{2}+yz_{2}}{z_{0}}\Big{)}\Big{)}\\ &-\operatorname{Li}_{0}(z_{1}z_{0}^{-1})\frac{x^{2}}{2}- \operatorname{Li}_{0}(z_{2}z_{0}^{-1})\frac{y^{2}}{2}\ =\ xy.\end{split} \tag{187}\] Therefore, Equation (183) simplifies to \[\begin{split}&\frac{1}{\hbar}(-\operatorname{Li}_{2}(\hat{z}_{1})+ \operatorname{Li}_{2}(z_{1})-\operatorname{Li}_{2}(\hat{z}_{2})+\operatorname {Li}_{2}(z_{2})+\operatorname{Li}_{2}(\hat{z}_{1}\hat{z}_{0}^{-1})- \operatorname{Li}_{2}(z_{1}z_{0}^{-1})\\ &\qquad\quad+\operatorname{Li}_{2}(\hat{z}_{0}^{-1})- \operatorname{Li}_{2}(z_{0})+\operatorname{Li}_{2}(\hat{z}_{2}\hat{z}_{0}^{-1} )-\operatorname{Li}_{2}(z_{2}z_{0}^{-1}))\\ &+\frac{1}{\hbar^{1/2}}(-\log(1-z_{1})x-\log(1-z_{2})y+(x+y)( \log(z_{0})-\log(\hat{z}_{o}))\\ &\qquad\quad+\log(1-z_{1}z_{0}^{-1})(a+x)-\log(1-z_{0})a+\log(1- z_{2}z_{0}^{-1})(a+y))\\ &+a^{2}+xy.\end{split} \tag{188}\] Using the definition of \(a\) and the relations from Equation (184) we compute \[\begin{split}&\frac{1}{\hbar^{1/2}}(-\log(1-z_{1})x-\log(1-z_{2})y+(x+y)( \log(z_{0})-\log(\hat{z}_{o}))\\ &\qquad+\log(1-z_{1}z_{0}^{-1})(a+x)-\log(1-z_{0})a+\log(1-z_{2} z_{0}^{-1})(a+y))\\ &\quad+a^{2}+xy\\ &=\frac{1}{\hbar^{1/2}}(x(\log(z_{2})-\log(\hat{z}_{0}))+y(\log( z_{1})-\log(\hat{z}_{0}))\\ &\qquad+\frac{1}{\hbar}(\log(z_{0})(\log(z_{1})+\log(z_{2}))+ \log^{2}(z_{0})-\log(\hat{z}_{0})(\log(z_{1})+\log(z_{2})))\\ &=\frac{1}{\hbar}(\log(z_{2})\log(z_{0})+\log(z_{1})\log(z_{0})- \log(z_{1})\log(z_{2})\\ &\qquad\qquad-\log(\hat{z}_{2})\log(\hat{z}_{0})+\log(\hat{z}_{1 })\log(\hat{z}_{2})-\log(\hat{z}_{1})\log(\hat{z}_{0})).\end{split} \tag{189}\] Using the 5-term relation of the dilogarithm we obtain that the expression in Equation (188) \[\begin{split}&\operatorname{Li}_{2}(z_{1})+\operatorname{Li}_{2}(z_{ 2})-\operatorname{Li}_{2}(z_{1}z_{0}^{-1})-\operatorname{Li}_{2}(z_{0})- \operatorname{Li}_{2}(z_{2}z_{0}^{-1})+\log(z_{2})\log(z_{0})+\log(z_{1})\log( z_{0})-\log(z_{1})\log(z_{2})\\ &\quad-\operatorname{Li}_{2}(\hat{z}_{1})-\operatorname{Li}_{2}( \hat{z}_{2})+\operatorname{Li}_{2}(\hat{z}_{1}\hat{z}_{0}^{-1})+\operatorname {Li}_{2}(\hat{z}_{0}^{-1})+\operatorname{Li}_{2}(\hat{z}_{2}\hat{z}_{0}^{-1} )-\log(\hat{z}_{2})\log(\hat{z}_{0})-\log(\hat{z}_{1})\log(\hat{z}_{0})+\log( \hat{z}_{1})\log(\hat{z}_{2}).\end{split} \tag{190}\] vanishes. In particular, the terms in front of the bracket in Equation (182) simplify to \(e^{-\frac{\hbar}{24}}\) which concludes the proof of the last step of Theorem 3.6.
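As a concluding illustration of Section 7 (an editorial addition, not part of the original text), the one-dimensional bracket of Equation (155) can be evaluated mechanically by the moment substitution \(x^{2n}\mapsto(2n-1)!!\,\Lambda^{-n}\), \(x^{2n+1}\mapsto 0\). The following Python sketch hard-codes the expansion (156) and recovers the coefficients of (157)-(158); the moment rule is an assumption here, since the paper's Equation (20) is not restated above, and it fixes the branch of \(\sqrt{-3}\) implicitly.

```python
# Reproduces the coefficients of Eqs. (157)-(158) from the expansion (156),
# evaluating the bracket of Eq. (155) with the standard formal-Gaussian
# moment rule <x^(2n)>_{x,lam} = (2n-1)!! lam^(-n) (odd moments vanish).
# The moment rule is an assumption; Eq. (20) itself is not restated above.
import cmath
from math import prod

zeta6 = cmath.exp(2j * cmath.pi / 6)   # zeta_6 = e^{2 pi i/6}
lam = 2 * zeta6 - 1                    # quadratic form in Eq. (155)

def moment(n):
    # <x^n> for the Gaussian weight exp(-lam x^2 / 2)
    return 0 if n % 2 else prod(range(1, n, 2)) / lam ** (n // 2)

# integrand exp(x hbar^{1/2}/2) psi_hbar(x, zeta6)^2: one {x-power: coeff}
# dictionary per half-integer order in hbar, transcribed from Eq. (156)
orders = [
    {0: 1},
    {3: 1/3, 1: zeta6 - 1/2},
    {6: 1/18, 4: zeta6/2 - 1/4, 2: -7/8, 0: -zeta6/6 + 1/6},
    {9: 1/162, 7: zeta6/9 - 1/18, 5: -1/2, 3: -73*zeta6/72 + 77/144,
     1: zeta6/12 + 1/4},
    {12: 1/1944, 10: 5*zeta6/324 - 5/648, 8: -37/288,
     6: -1337*zeta6/2160 + 1357/4320, 4: zeta6/24 + 1027/1152,
     2: 23*zeta6/48 - 5/16, 0: -zeta6/72},
]
g = [sum(c * moment(n) for n, c in poly.items()) for poly in orders]

# multiply by e^{-hbar/12} = e^{-hbar/4} e^{hbar/6} to compare with Eq. (157)
c1 = g[2] - g[0] / 12                  # hbar coefficient
c2 = g[4] - g[2] / 12 + g[0] / 288     # hbar^2 coefficient
print(c1, abs(11 * cmath.sqrt(3) / 216))  # |c1| = 11*sqrt(3)/216
print(c2, -697 / 31104)                   # real part matches Eq. (158)
```

The half-integer orders average to zero, and the \(\hbar\) and \(\hbar^{2}\) coefficients come out as \(\pm\frac{11}{216}\sqrt{3}\,i\) and \(-\frac{697}{31104}\), in agreement with (157)-(158) up to the choice of branch of \(\sqrt{-3}\).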
2305.05751
What is mature and what is still emerging in the cryptocurrency market?
In relation to the traditional financial markets, the cryptocurrency market is a recent invention and the trading dynamics of all its components are readily recorded and stored. This fact opens up a unique opportunity to follow the multidimensional trajectory of its development since inception up to the present time. Several main characteristics commonly recognized as financial stylized facts of mature markets were quantitatively studied here. In particular, it is shown that the return distributions, volatility clustering effects, and even temporal multifractal correlations for a few highest-capitalization cryptocurrencies largely follow those of the well-established financial markets. The smaller cryptocurrencies are somewhat deficient in this regard, however. They are also not as highly cross-correlated among themselves and with other financial markets as the large cryptocurrencies. Quite generally, the volume V impact on price changes R appears to be much stronger on the cryptocurrency market than in the mature stock markets, and scales as $R(V) \sim V^{\alpha}$ with $\alpha \gtrsim 1$.
Stanisław Drożdż, Jarosław Kwapień, Marcin Wątorek
2023-05-09T20:12:52Z
http://arxiv.org/abs/2305.05751v1
# What is mature and what is still emerging in the cryptocurrency market? ###### Abstract In relation to the traditional financial markets, the cryptocurrency market is a recent invention, and the trading dynamics of all its components are readily recorded and stored. This fact opens up a unique opportunity to follow the multidimensional trajectory of its development since inception up to the present time. Several main characteristics commonly recognized as financial stylized facts of mature markets are quantitatively studied here. In particular, it is shown that the return distributions, volatility clustering effects, and even the temporal multifractal correlations for a few highest-capitalization cryptocurrencies largely follow those of the well-established financial markets. The smaller ones are somewhat deficient in this regard, however. They are also not as highly cross-correlated among themselves and with other financial markets as the large ones. Quite generally, the volume \(V\) impact on price changes \(R\) appears to be much stronger on the cryptocurrency market than in the mature stock markets, and scales as \(R(V)\sim V^{\alpha}\) with \(\alpha\gtrsim 1\). Blockchain; cryptocurrencies; time series; fluctuations; correlations; multifractality; market maturity; market impact ## 1 Introduction Studying the world cryptocurrency market is welcome for many reasons. Up to now, it constitutes the most spectacular and influential application of the distributed ledger technology called the blockchain, which in the underlying peer-to-peer network allows the same access to information for all participants [1; 2]. Research on blockchain technology is also unique because all related data is publicly available in the form of a history of every operation performed on the network. Furthermore, the tick-by-tick data for each transaction made on a cryptocurrency exchange are freely available through the application programming interface (API) of a given exchange. As far as the financial, economic, and, in general terms, social aspects of cryptocurrencies are concerned, a basic related question that arises is whether such digital products can be considered a commonly accepted means of exchange [3; 4; 5]. This is a complex issue involving many social, economic, and technological factors, such as trust, perceived risk, peer opinions, transaction security, the network-size effect, supply elasticity, and so on. From a dynamical perspective, too, a certain level of maturity, expressed in terms of market efficiency, liquidity, stability, size, and other characteristics, is required for this to apply [6; 7]. Moreover, developed markets show several statistical properties that newly established emerging markets often lack. Among such properties, one can list the so-called financial stylized facts: heavy tails of the probability distribution functions of fixed-time returns, long-term memory of volatility, a
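As an illustration of how the scaling \(R(V)\sim V^{\alpha}\) mentioned above can be estimated in practice — a minimal Python sketch on synthetic placeholder data, not code or data from the paper itself:

```python
# Sketch of estimating the impact exponent alpha in R(V) ~ V^alpha from
# per-interval absolute returns and traded volumes via log-binning.
# The arrays below are synthetic placeholders with a planted alpha = 1.1;
# with real tick data they would come from an exchange API.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
returns = volume ** 1.1 * rng.standard_normal(100_000)

bins = np.logspace(np.log10(volume.min()), np.log10(volume.max()), 25)
idx = np.digitize(volume, bins)
centers, impact = [], []
for k in range(1, len(bins)):
    mask = idx == k
    if mask.sum() > 50:                        # skip sparsely populated bins
        centers.append(volume[mask].mean())
        impact.append(np.abs(returns[mask]).mean())

# slope of the log-log impact curve gives the exponent alpha
alpha, _ = np.polyfit(np.log(centers), np.log(impact), 1)
print(f"estimated impact exponent alpha ~ {alpha:.2f}")   # ~1.1 here
```

On real cryptocurrency data the same recipe would be applied per asset and per time scale before comparing the fitted exponents with the \(\alpha\gtrsim 1\) reported in the abstract.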
2302.01590
Many-body enhancement in a spin-chain quantum heat engine
We show that ferromagnetic interactions can enhance the adiabatic performance of a quantum spin chain engine at low temperatures. The enhancement in work output is particularly pronounced, increasing exponentially with interaction strength. The performance enhancement occurs in the paramagnetic phase and is qualitatively explained by considering just the ground and first excited state, in which case the system exhibits bipartite entanglement. As the temperature is increased, thermal occupation of higher energy levels diminishes performance. We find that these thermal fluctuations are smallest for long-range interactions, resulting in the highest efficiency. Diabatic work extraction degrades performance due to quantum friction. We identify an approximate, experimentally realisable counterdiabatic drive that can mitigate friction for weak interactions.
L. A. Williamson, Matthew J. Davis
2023-02-03T08:05:50Z
http://arxiv.org/abs/2302.01590v1
# Many-body enhancement in a spin-chain quantum heat engine ###### Abstract We show that ferromagnetic interactions can enhance the adiabatic performance of a quantum spin chain engine at low temperatures. The enhancement in work output is particularly pronounced, increasing exponentially with interaction strength. The performance enhancement occurs in the paramagnetic phase and is qualitatively explained by considering just the ground and first excited state, in which case the system exhibits bipartite entanglement. As the temperature is increased, thermal occupation of higher energy levels diminishes performance. We find that these thermal fluctuations are smallest for long-range interactions, resulting in the highest efficiency. Diabatic work extraction degrades performance due to quantum friction. We identify an approximate, experimentally realisable counterdiabatic drive that can mitigate friction for weak interactions. Quantum heat engines convert heat into work utilising some distinctly quantum effect in the reservoir or working substance [1]. Reservoirs possessing coherence [2; 3; 4], squeezing [5; 6; 7; 8; 9; 10; 11] or entanglement [12; 13] have been shown to improve engine performance. Coherence in a working substance can be utilised as a resource [14; 15; 16; 17] and can improve power output for rapid engine cycles [18; 19]. In the many-body regime, interactions in a Bose gas can enhance engine performance compared to a non-interacting gas [20; 21; 22; 23]. Interactions in a many-body quantum system can also be tuned to change the energy of a working substance, hence providing a means to extract work [24; 25]. One of the simplest quantum working substances is an ensemble of two-level systems ("spins") [19; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. Work can be extracted by tuning the level spacing \(\hbar\omega(t)\) via control of an external field, see Fig. 1(a). Including interactions between spins opens up the possibility to explore many-body quantum effects. While considerable work has explored engines with two interacting spins [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47], much less is known about higher numbers of spins. For nearest-neighbour interactions, a spin chain can function as both a heat engine and a refrigerator [48] with critical scaling of performance close to the critical point [49]. However, the question of whether many-body effects can improve the performance of a spin-chain quantum heat engine has not been explored. In this work we characterise the performance of an Otto cycle with a ferromagnetic spin chain as the working substance. In addition to displaying rich many-body physics, this system may be realised in experiments with a remarkable degree of control [50; 51; 52; 53; 54]. We show that both short and long-range interactions improve the adiabatic work output and efficiency in the paramagnetic phase at low temperatures \(k_{B}T\lesssim\hbar\omega\). The performance enhancement is qualitatively explained by an analytic model considering just the ground and first excited state, in which case the thermal state exhibits bipartite entanglement. For temperatures \(k_{B}T>\hbar\omega\), higher energy eigenstates are occupied and the performance degrades, approaching the non-interacting performance. This effect is reduced as the range of interactions is increased, and hence high efficiency is most robust for long-range interactions. For diabatic work extraction, decreasing the engine cycle time decreases performance due to quantum friction [36; 37; 55]. 
We demonstrate an approximate, experimentally realisable counterdiabatic drive that can mitigate friction for weak interactions, and hence a performance enhancement is possible at finite power output. _System setup._ A chain of \(N\) ferromagnetic interacting two-level spins is described by the Hamiltonian (hereon \(\hbar\equiv 1\)), \[\hat{H}(\omega(t))=-\omega(t)\sum_{i=1}^{N}\hat{\sigma}_{z}^{(i)}-g\sum_{\begin{subarray}{c}i,j=1\\ (j\neq i)\end{subarray}}^{N}J_{ij}\hat{\sigma}_{x}^{(i)}\hat{\sigma}_{x}^{(j)}, \tag{1}\] with \(\hat{\sigma}_{\mu}^{(i)}\) (\(\mu=x,y,z\)) the Pauli spin-\(1/2\) matrices for spins \(i=1,...,N\). The interaction strength between spins \(i\) and \(j\) is \(gJ_{ij}\) with \(J_{ij}=1/|i-j|^{p}\), \(g\geq 0\) the nearest-neighbour interaction strength and \(p>0\) determining the range of interactions, with both \(g\) and \(p\) tuneable in experiments [51; 53]. For \(N\rightarrow\infty\), the system may be ferromagnetic (\(g\gtrsim\omega\)) or paramagnetic (\(g\lesssim\omega\)), with the precise cross-over \(g_{c}(p)\) dependent on \(p\) [56; 57; 58; 59; 60; 61; 62]. We denote nearest-neighbour interactions by \(p=\infty\). We consider an Otto engine cycle with the following steps, as shown in Fig. 1(a). (1) We begin with a hot thermal state \(\rho_{H}^{\rm th}=e^{-\beta_{H}\hat{H}(r\omega_{0})}/Z(\beta_{H},r\omega_{0})\) at level spacing \(\omega=r\omega_{0}\), with \(r>1\) the "compression ratio" [63], \(Z(\beta,\omega)=\operatorname{Tr}e^{-\beta\hat{H}(\omega)}\) the partition function, and \(\beta=(k_{B}T)^{-1}\) the inverse temperature. (\(1\to 2\)) The system is then thermally isolated and work is extracted by decreasing \(\omega\) from \(r\omega_{0}\) to \(\omega_{0}\), via the protocol \(\omega(t)/\omega_{0}=f(t)=r+(1-r)\sin^{2}(\pi t/2\tau)\) (\(0\leq t\leq\tau\)). (\(2\to 3\)) Next, we cool the system at fixed \(\omega=\omega_{0}\), leaving the system in a cold thermal state \(\rho_{C}^{\rm th}=e^{-\beta_{C}\hat{H}(\omega_{0})}/Z(\beta_{C},\omega_{0})\). (\(3\to 4\)) We thermally isolate the system again and increase \(\omega\) from \(\omega_{0}\) back to \(r\omega_{0}\), with the protocol \(\omega(t)/\omega_{0}=f(\tau-t)\) (\(0\leq t\leq\tau\)). (\(4\to 1\)) Finally we heat the system at fixed \(\omega=r\omega_{0}\) back to the initial state. The work output \(W\) and efficiency \(\eta\) of the engine cycle are \[W=Q_{H}-Q_{C},\ \ \ \ \ \eta=\frac{W}{Q_{H}}. \tag{2}\] Here \(Q_{H}=\text{Tr}\left[\hat{H}(r\omega_{0})(\rho_{H}^{\text{th}}-\rho_{4})\right]\) is the heat input from the hot reservoir and \(Q_{C}=-\text{Tr}\left[\hat{H}(\omega_{0})(\rho_{C}^{\text{th}}-\rho_{2})\right]\) is the heat output to the cold reservoir, with \(\rho_{4}\) the density matrix prior to coupling to the hot reservoir and \(\rho_{2}\) the density matrix prior to coupling to the cold reservoir. The density matrices at points 2 and 4 are obtained by time-evolving the von Neumann equation \(\dot{\rho}(t)=-i[H(t),\rho(t)]\) with initial conditions \(\rho_{H}^{\text{th}}\) and \(\rho_{C}^{\text{th}}\) respectively, using Runge-Kutta integration. _Adiabatic low-temperature performance._ We first examine the quantum adiabatic limit \(\tau\gg\omega_{0}^{-1},g^{-1}\) (we set \(\tau=100\omega_{0}^{-1}\)) such that transitions between eigenstates during the work steps are suppressed [64; 65; 66; 67]. 
For zero interactions and fixed \(\beta_{H}\omega_{0}\gg 1\), the maximum work output occurs at a compression ratio \(r_{\text{NI}}^{\text{max}}\approx 1+(\beta_{H}\omega_{0})^{-1}\), which gives a small efficiency \(\eta_{\text{NI}}\approx(\beta_{H}\omega_{0})^{-1}\) that decreases with decreasing temperature. We find that interactions drastically improve both work output and efficiency in the paramagnetic phase for temperatures \(\beta_{H}^{-1}\ll\omega_{0}\), see Fig. 1(b). The improvement in work output is particularly pronounced, with a maximum work output \(\sim 10^{2}\) times larger than the non-interacting ensemble. The behaviour is qualitatively similar in all cases \(p=1,2,3,\infty\) after rescaling interactions by \(g_{c}(p)\), which we define to be the point at which \(\partial^{2}\Delta/\partial g^{2}\big{|}_{\omega=\omega_{0}}\) has a maximum [68]. Here \(\Delta\) is the energy gap to the first excited state. The improvement increases monotonically up to \(g\approx g_{c}(p)\), before dropping abruptly. For \(p=2,3,\infty\) the performance scales approximately extensively with increasing \(N\)[68]. For \(p=1\) the performance increases non-extensively [68] due to the non-extensive thermodynamics of long-range interactions [69; 70]. This is removed after scaling \(g\) by \(g_{c}\)[68]. In the limit of large \(N\), a chain with nearest neighbour interactions has a solvable spectrum [71]. For \(\beta_{H}^{-1}\ll\omega_{0},g\), only the first \(N\) excited states are appreciably occupied. The dimensionless free energy is then [72] \[\ln Z\approx\frac{N}{\pi}\int_{0}^{\pi}e^{-\beta\sqrt{\omega^{2}+g^{2}-2 \omega g\cos\theta}}\,d\theta. \tag{3}\] The performance computed from Eq. (3) (see [68]) is plotted alongside the full numerical results in Fig. 1(b), and agrees well with the \(p=\infty\) results for \(g<g_{c}\). Transforming the spin operators using a Holstein-Primakoff transformation and expanding to quadratic order in the bosonic operators gives an analytically tractable theory even for long-range interactions [73]. To lowest order in \(g/\omega\) and for large \(N\) and \(\beta\), \(\ln Z\approx N\mathcal{G}_{p}(\beta g)e^{-\beta\Delta}\), where \(Ne^{-\beta\Delta}\) is the low-temperature free energy of \(N\) two-level systems with level splitting \(\Delta(\omega)=\omega(1-\omega_{0}g/\omega g_{c})\). The factor \(\mathcal{G}_{p}(\beta g)\) arises from thermal fluctuations and depends on \(p\)[68]. To lowest order in \(\beta^{-1}\), this gives, \[W= N(r-1)\omega_{0}\mathcal{G}_{p}(\beta_{H}g)\left(e^{-\beta_{H} \Delta(r\omega_{0})}-e^{-\beta_{C}\Delta(\omega_{0})}\right),\] \[\eta= 1-\frac{\Delta_{2}}{\Delta_{1}}. \tag{4}\] For \(g<g_{c}(p)\), increasing \(g\) decreases \(\Delta(\omega)\). From examination of Eq. (4), this increases low-temperature work output as \(W\sim W_{\text{NI}}e^{\beta_{H}\omega_{0}g/g_{c}}\), consistent with the exponential increase in Fig. 1(b,i), and efficiency as \(\eta\sim\eta_{\text{NI}}/(1-g/rg_{c})\). Above \(g_{c}\) the system transitions to the ferromagnetic state and \(\Delta\), and therefore \(\partial^{2}\ln Z/\partial\beta\partial\omega\), changes sign. The cycle therefore no longer functions as a heat engine [48], resulting in the abrupt drop in performance in Fig. 1(b) above \(g_{c}\). The quadratic approximation above permits a calculation of the bipartite entanglement of the spin chain, Figure 1: (a) An Otto cycle can be realised in an ensemble of two-level spins, as described in the main text. 
For low temperatures, a thermal state can be approximated by \(\rho_{T}^{01}\approx\left(\ket{0}\bra{0}+e^{-\beta\Delta}\ket{1}\bra{1}\right)/\left(1+e^{-\beta\Delta}\right)\), with \(\ket{0}\) the ground state, \(\ket{1}=\sum_{i=1}^{N}\hat{\sigma}_{+}^{(i)}\ket{0}/\sqrt{N}\) the approximate first-excited state (independent of \(p\)) and \(\hat{\sigma}_{+}^{(i)}=\hat{\sigma}_{x}^{(i)}+i\hat{\sigma}_{y}^{(i)}\) [68]. In [68], we show that \(\rho_{T}^{01}\) is entangled according to the Peres-Horodecki criterion [74; 75; 76]. The performance enhancement, Eq. (4), requires access to an entangled thermal state, and so is a many-body quantum effect. The entanglement of \(\ket{1}\) is also directly evident from the entanglement entropy, which is \(\mathcal{S}=\ln(N/2)\) for a partition dividing the spin chain in half [68]. In contrast, in a mean-field approximation, the interaction of spin \(i\) with the remaining spins is replaced by \(-g\Omega_{i}\hat{\sigma}_{x}^{(i)}\). Here \(\Omega_{i}=\sum_{j\neq i}J_{ij}s_{j}\) is an effective transverse drive and \(s_{j}=\langle\hat{\sigma}_{x}^{(j)}\rangle_{\rm mf}\) is a mean-field approximation for spin \(j\). The energy gap of spin \(i\) then increases with \(g\) as \(\sqrt{\omega^{2}+g^{2}\Omega_{i}^{2}}\), and interactions degrade performance.

Increasing the compression ratio increases performance until \(r=r^{\prime}\), with \(r^{\prime}\sim 1.1\) at \(g=g_{c}(p)\), at which point the performance abruptly drops, see Fig. 1(c). Unlike the non-interacting case, the peak work output and efficiency can both occur at a comparable compression ratio. Equation (4) describes this behaviour: within this approximation, performance increases until \(1-\Delta_{2}/\Delta_{1}\sim\eta_{C}\), with \(\eta_{C}=1-\beta_{H}/\beta_{C}\) the Carnot efficiency. Hence \(r^{\prime}\approx(1-g\eta_{C}/g_{c}(p))/(1-\eta_{C})+O(g^{2})\) diminishes with increasing \(g\). As a result, for \(g\sim g_{c}(p)\) we can have high efficiency at small compression ratios \(r\sim r_{\rm NI}^{\rm max}\). Without interactions, the maximum compression ratio satisfies \(1-1/r^{\prime}=\eta_{C}\) and therefore \(r^{\prime}=1/(1-\eta_{C})\gg r_{\rm NI}^{\rm max}\).
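The contrast between the exact many-body gap and the mean-field prediction can be seen directly in a small chain. The sketch below (illustrative values; the spin-\(1/2\) convention and the bulk value \(\Omega=1\) for nearest-neighbour couplings are assumptions) diagonalizes Eq. (1) and compares the first gap with \(\sqrt{\omega^{2}+g^{2}\Omega^{2}}\).

```python
import numpy as np

# Sketch: exact first gap of a nearest-neighbour chain vs. the mean-field
# gap sqrt(omega^2 + g^2 Omega^2). N, the g-grid, and Omega are illustrative.
N, omega, Omega = 8, 1.0, 1.0   # Omega ~ bulk value for J_{i,i+1} = 1 (assumption)
sx = np.array([[0, 1], [1, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def embed(m, site):
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, m if j == site else np.eye(2))
    return out

SX = [embed(sx, i) for i in range(N)]
SZ = [embed(sz, i) for i in range(N)]

for g in [0.0, 0.2, 0.4, 0.6]:
    H = -omega * sum(SZ) - g * sum(SX[i] @ SX[i + 1] for i in range(N - 1))
    E = np.linalg.eigvalsh(H)
    print(f"g={g:.1f}: exact gap = {E[1] - E[0]:.4f}, "
          f"mean-field gap = {np.sqrt(omega**2 + (g * Omega)**2):.4f}")
```

In the paramagnetic regime the exact gap decreases with \(g\) while the mean-field gap grows, consistent with the claim that the enhancement is a genuinely many-body effect.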
_Effect of increasing temperature._ As the temperature increases, thermal fluctuations render Eq. (4) invalid and we find that the performance enhancement relative to the non-interacting system is diminished, see Fig. 2(a). A performance enhancement is present as long as \(\beta_{H}\gtrsim 4\omega_{0}^{-1}\) (the precise cross-over point is dependent on \(g\)), coinciding with the regime where only the ground and first \(N\) excited levels are appreciably occupied, see Fig. 2(b). The transverse Ising model gives a qualitative understanding of this diminished performance enhancement. The first \(N\) excited energy levels of this model are \(\sqrt{\omega^{2}+g^{2}-2\omega g\cos\theta_{k}}\) with \(\theta_{k}=2\pi k/N\) (\(k=0,...,N-1\)) [71]. Interactions diminish the energy of only the lowest \(N/2\) excited levels, with the most pronounced reduction occurring for the first excited level (\(k=0\)). Hence, the enhancement is largest when only the first excited level is occupied, and diminishes as more excited levels are occupied. The efficiency enhancement is most robust to increasing temperature for long-range interactions, see Fig. 2(a,ii). At a given temperature, the occupation \(\sum_{i=2}^{N}n_{i}\) decreases as the range of interactions increases, see Fig. 2(b). Hence long-range interactions are most effective at suppressing fluctuations beyond the approximation (4).

For \(\beta_{H}<4\omega_{0}^{-1}\), interactions degrade performance, see Fig. 2(a). Expanding the dimensionless free energy in powers of \(\beta\), we obtain \[\ln Z=\ln Z_{\infty}+\frac{\beta^{2}\omega^{2}}{4}+\frac{\beta^{2}g^{2}\sum_{i}(\Omega_{i}^{\prime})^{2}}{4}+O(\beta^{4}), \tag{5}\] with \(Z_{\infty}=2^{N}\) the infinite temperature partition function and \(\Omega_{i}^{\prime}=(1/2)\sum_{j\neq i}J_{ij}\), i.e., \(\Omega_{i}\) evaluated at \(s_{j}=1/2\). At order \(\beta^{2}\), the free energy is indistinguishable from the mean-field free energy \(\sum_{i}\ln\operatorname{Tr}e^{-\beta(-\omega\hat{\sigma}_{z}^{(i)}-g\Omega_{i}^{\prime}\hat{\sigma}_{x}^{(i)})}\), in which case interactions degrade performance. The scaling \(\ln(Z/Z_{\infty})\propto\beta^{2}\) is clearly present for temperatures \(\beta_{H}\lesssim\omega_{0}^{-1}\), see Fig. 2(c).

Figure 2: (a) Adiabatic engine performance for varying temperature (\(r=1.1\), \(\beta_{C}=2\beta_{H}\)). (a,i) Work output and (a,ii) efficiency exceed the non-interacting values (dotted lines) for cold temperatures \(\beta_{H}\gtrsim 4\omega_{0}^{-1}\), whereas interactions degrade performance for \(\beta_{H}<4\omega_{0}^{-1}\). The gray line in (a,ii) is the Carnot efficiency. (b) Thermal energy-level occupations \(n_{0}\) (dotted lines), \(n_{1}\) (solid lines), \(\sum_{i=2}^{N}n_{i}\) (dashed lines) and \(\sum_{i=N+1}^{2^{N}-1}n_{i}\) (dash-dotted lines). Here \(n_{i}=e^{-\beta E_{i}}/Z(\beta,\omega)\) is the thermal occupation of energy level \(i\), indexed in order of increasing energy \(E_{i}\). The performance enhancement occurs when occupation is predominantly in the ground and first \(N\) excited states. Long-range interactions suppress occupation beyond the first excited state, resulting in the highest efficiency in (a,ii). (c) Dimensionless free energy \(\ln Z\) showing the \(\beta^{2}\) scaling for \(\beta\lesssim\omega^{-1}\), coinciding with the regime of low performance. All results are for a 10-spin chain at \(g=g_{c}(p)\).
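The occupation structure underlying Fig. 2(b) is easy to reproduce for a small chain. The sketch below (illustrative \(N\), \(g\), and temperatures; spin-\(1/2\) convention assumed) groups the Boltzmann weights into the ground state, the first \(N\) excited levels, and the remainder.

```python
import numpy as np

# Sketch: thermal level occupations n_i = exp(-beta E_i)/Z for a small
# nearest-neighbour chain, grouped as in Fig. 2(b). N, g, betas illustrative.
N, omega, g = 8, 1.0, 0.4
sx = np.array([[0, 1], [1, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def embed(m, site):
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, m if j == site else np.eye(2))
    return out

H = -omega * sum(embed(sz, i) for i in range(N)) \
    - g * sum(embed(sx, i) @ embed(sx, i + 1) for i in range(N - 1))
E = np.linalg.eigvalsh(H)

for beta in [1.0, 4.0, 10.0]:
    w = np.exp(-beta * (E - E[0]))   # shift by E_0 for numerical stability
    n = w / w.sum()
    print(f"beta={beta:4.1f}: n_0={n[0]:.3f}, "
          f"first N excited={n[1:N + 1].sum():.3f}, rest={n[N + 1:].sum():.3f}")
```

At the coldest temperature nearly all weight sits in the ground and first \(N\) excited levels, which is the regime where the enhancement of Eq. (4) applies.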
_Diabatic work extraction._ For diabatic (finite-time) work extraction, interactions generally degrade engine performance due to "quantum friction" [36; 37; 55]. This friction arises when the interaction component of the Hamiltonian does not commute with the driving component, and hence the density matrix develops off-diagonal elements in the energy eigenbasis. The diabatic performance of a \(p=\infty\) engine with weak interactions is shown in Fig. 3(a). The peak power output occurs for a stroke duration \(\tau\approx 4\omega_{0}^{-1}\) (the precise value is dependent on \(g\)), at which point the efficiency is close to the adiabatic efficiency. For faster cycles, the performance rapidly decreases. In principle, quantum friction can be mitigated completely using a counterdiabatic driving field \(\hat{H}_{\text{cd}}\) [77; 78]. In practice, exact counterdiabatic driving in a many-body system requires unrealistic interactions between all particles [79; 80; 81; 82; 83; 84; 85], and approximate protocols are required. A powerful approximation method is to find \(\hat{H}_{\text{cd}}\) variationally by minimising the action \(S=\text{Tr}[G(\hat{H}_{\text{cd}})^{2}]\), with \(G(\hat{H}_{\text{cd}})=\partial\hat{H}/\partial t+i[\hat{H}_{\text{cd}},\hat{H}]\) and \(\hat{H}_{\text{cd}}\) expanded in some truncated set of operators [86; 87]. We use \(\hat{H}_{\text{cd}}=\sum_{i,j\,(j\neq i)}C_{ij}\hat{\sigma}_{x}^{(i)}\hat{\sigma}_{y}^{(j)}\), which is the optimal counterdiabatic drive over all one-body and two-body operators [68] (cf. [88]). For large \(N\) in the paramagnetic phase, we obtain [68], \[\hat{H}_{\text{cd}}=-\sum_{\begin{subarray}{c}i,j=1\\ (j\neq i)\end{subarray}}^{N}\frac{g\omega^{\prime}(t)J_{ij}}{2\omega(t)^{2}}\chi_{ij}(t)\hat{\sigma}_{x}^{(i)}\hat{\sigma}_{y}^{(j)}, \tag{6}\] with \(\chi_{ij}(t)=1+O(g^{2}/\omega^{2})\) given in [68]. The work protocols \(f(t)\) and \(f(\tau-t)\) satisfy \(f^{\prime}(0)=f^{\prime}(\tau)=0\), and hence the net power transferred to the counterdiabatic drive field is zero [89]. For nearest-neighbour interactions, \(\chi_{ij}=1/(1+g^{2}/\omega(t)^{2})\) and Eq. (6) drastically improves the diabatic engine operation for \(g\lesssim 0.3g_{c}\), see Fig. 3. For rapid cycles, the work output approaches a constant with little cost in efficiency, and hence the power \(P\) increases as \(\tau^{-1}\). In practice, the time scale of the thermalization steps will limit the engine to finite power [26; 90]. Note that \(\eta\propto W\) irrespective of counterdiabatic driving (Fig. 3(a)); hence \(Q_{H}\) depends only weakly on \(\tau\). For increasing \(g/g_{c}\) there is a trade-off between the performance gained from interactions and the performance lost to quantum friction, with peak performance occurring at \(g/g_{c}\approx 0.3\) for \(\tau=\omega_{0}^{-1}\). Here, the work output from a chain with nearest-neighbour interactions is about 50% larger than that of the non-interacting chain, and both show comparable efficiency, see Fig. 3(b). For \(p=1,2,3\), \(\chi_{ij}(t)\) is difficult to engineer since the interactions must be reconfigured at different times. To simplify, we expand to lowest order in \(g/\omega\) and set \(\chi_{ij}(t)=1\). While this is somewhat effective at mitigating diabatic degradation for weak interactions, the performance enhancement diminishes as the range of interactions increases. Hence a chain with \(p=1\), \(g/g_{c}\lesssim 0.3\) and \(\tau=\omega_{0}^{-1}\) has approximately the same performance as a non-interacting chain. Interestingly, we find that Eq. (6) is most effective for \(\beta_{H}\lesssim 10\omega_{0}^{-1}\), with reduced performance for colder temperatures. This may be due to thermal fluctuations countering quantum friction [91].
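A direct transcription of Eq. (6) for nearest-neighbour couplings can be tested in a fast work stroke. The sketch below (illustrative \(N\), \(g\), \(\tau\), \(\beta_{H}\); spin-\(1/2\) convention assumed) compares the frictional excess energy after stroke \(1\to 2\) with and without the approximate counterdiabatic term, relative to the adiabatic reference obtained by carrying the initial eigenpopulations onto the final eigenbasis.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: approximate counterdiabatic drive, Eq. (6), with
# chi_ij = 1/(1 + g^2/omega(t)^2), applied to a fast stroke. Values illustrative.
N, g, r, omega0, tau = 3, 0.1, 1.2, 1.0, 1.0
beta_H = 10.0

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def embed(m, site):
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, m if j == site else np.eye(2))
    return out

SX, SY, SZ = ([embed(m, i) for i in range(N)] for m in (sx, sy, sz))
f = lambda t: omega0 * (r + (1 - r) * np.sin(np.pi * t / (2 * tau)) ** 2)
fdot = lambda t: omega0 * (1 - r) * np.pi / (2 * tau) * np.sin(np.pi * t / tau)

def H0(w):
    return -w * sum(SZ) - g * sum(SX[i] @ SX[i + 1] for i in range(N - 1))

def Hcd(t):
    chi = 1.0 / (1.0 + g**2 / f(t) ** 2)
    pairs = sum(SX[i] @ SY[i + 1] + SX[i + 1] @ SY[i] for i in range(N - 1))
    return -g * fdot(t) / (2 * f(t) ** 2) * chi * pairs

def stroke(rho, ht, steps=2000):
    dt = tau / steps
    for k in range(steps):
        U = expm(-1j * dt * ht(k * dt + dt / 2))   # midpoint propagator
        rho = U @ rho @ U.conj().T
    return rho

rho1 = expm(-beta_H * H0(r * omega0))
rho1 /= np.trace(rho1)

# adiabatic reference: initial eigenpopulations carried onto the final eigenbasis
Ei, Vi = np.linalg.eigh(H0(r * omega0))
Ef, Vf = np.linalg.eigh(H0(omega0))
E_ad = np.real(np.diag(Vi.conj().T @ rho1 @ Vi)) @ Ef

for label, ht in [("diabatic", lambda t: H0(f(t))),
                  ("with H_cd", lambda t: H0(f(t)) + Hcd(t))]:
    rho2 = stroke(rho1, ht)
    excess = np.real(np.trace(H0(omega0) @ rho2)) - E_ad
    print(f"{label}: excess (frictional) energy = {excess:.6f}")
```

Since \(f^{\prime}(0)=f^{\prime}(\tau)=0\), the counterdiabatic term in this sketch vanishes at both endpoints of the stroke, as required for zero net power transfer to the drive field.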
_Conclusion._ We have shown that an engine of interacting spins outperforms a non-interacting engine in the paramagnetic phase for low temperatures and adiabatic operation, due to a lowering of the first excited state energy gap. The enhancement in work output is particularly pronounced, with \(W/W_{\text{NI}}\) increasing exponentially with increasing interactions. The efficiency enhancement is largest for long-range interactions, which suppress occupation of energy levels beyond the first excited state. A performance enhancement due to long-range interactions has also been identified in Kitaev chains [92; 93]. For diabatic engine operation, quantum friction degrades performance. We have presented one counterdiabatic method that mitigates friction for weak interactions; other methods could also be explored [94; 95; 96; 97; 98; 99]. Modulating the phase and detuning of the drive profile may better isolate the two lowest energy eigenstates [100; 101; 102; 103; 104], limiting degradation due to thermal fluctuations and quantum friction. The low-temperature performance enhancement is a many-body quantum effect due to bipartite entanglement arising from the first excited state. A more thorough investigation of the entanglement properties of the thermal spin chain could reveal how entanglement changes at higher temperatures [105; 106; 107; 108] or under diabatic operation.

Acknowledgements. We thank C. Woffinden and M. Edmonds for useful comments on the manuscript. This research was supported by the Australian Research Council Centre of Excellence for Engineered Quantum Systems (EQUS, CE170100009).

Figure 3: (a) Diabatic work output \(W_{\text{D}}\) and efficiency \(\eta_{\text{D}}\) for \(p=\infty\) with weak interactions \(g/g_{c}=0.2\). The maximum power output \(P_{\text{D}}\) (see inset) occurs at \(\tau\approx 4\omega_{0}^{-1}\) (star), with performance rapidly declining for smaller \(\tau\). The approximate counterdiabatic driving (Eq. (6)) results in work output (\(W_{\text{CD}}\)) and efficiency (\(\eta_{\text{CD}}\)) close to the adiabatic performance even for rapid engine cycles. The counterdiabatic power output (\(P_{\text{CD}}\)) grows as \(\tau^{-1}\) (inset). (b) The effectiveness of Eq. (6) diminishes for larger \(g/g_{c}(p)\) or smaller \(p\) (results for \(\tau=\omega_{0}^{-1}\); for \(p=\infty\), we use the exact \(\chi_{ij}\), whereas for \(p=1,2,3\), we set \(\chi_{ij}=1\)). All results are for \(N=10\) and \(r=r_{\text{NI}}^{\text{max}}\).
2303.04919
Dynamical Analysis of a Lotka-Volterra Competition Model with both Allee and Fear Effect
Population ecology theory is replete with density dependent processes. However, trait-mediated or behavioral indirect interactions can either reinforce or oppose density-dependent effects. This paper presents the first two species competitive ODE and PDE systems where an Allee effect, which is a density dependent process, and the fear effect, which is non-consumptive and behavioral, are both present. The stability of the equilibria is discussed analytically using the qualitative theory of ordinary differential equations. It is found that the Allee effect and the fear effect change the extinction dynamics of the system and the number of positive equilibrium points, but they do not affect the stability of the positive equilibria. We also observe some special dynamics that induce bifurcations in the system by varying the Allee or fear parameter. Interestingly, we find that the Allee effect working in conjunction with the fear effect can bring about several qualitative changes to the dynamical behavior of the system with only the fear effect in place, in regimes of small fear. That is, for small amounts of the fear parameter, it can change a competitive exclusion type situation to a strong competition type situation. It can also change a weak competition type situation to a bi-stability type situation. However, for large fear regimes the Allee effect reinforces the dynamics driven by the fear effect. The analysis of the corresponding spatially explicit model is also presented. To this end the comparison principle for parabolic PDE is used. The conclusions of this paper have strong implications for conservation biology, biological control, as well as the preservation of biodiversity.
Shangming Chen, Fengde Chen, Vaibhava Srivastava, Rana D. Parshad
2023-03-08T22:27:45Z
http://arxiv.org/abs/2303.04919v1
# Dynamical Analysis of a Lotka-Volterra Competition Model with both Allee and Fear Effect

###### Abstract

Population ecology theory is replete with density dependent processes. However, trait-mediated or behavioral indirect interactions can either reinforce or oppose density-dependent effects. This paper presents the first two species competitive ODE and PDE systems where an Allee effect, which is a density dependent process, and the fear effect, which is non-consumptive and behavioral, are _both_ present. The stability of the equilibria is discussed analytically using the qualitative theory of ordinary differential equations. It is found that the Allee effect and the fear effect change the extinction dynamics of the system and the number of positive equilibrium points, but they do not affect the stability of the positive equilibria. We also observe some special dynamics that induce bifurcations in the system by varying the Allee or fear parameter. Interestingly, we find that the Allee effect working in conjunction with the fear effect can bring about several qualitative changes to the dynamical behavior of the system with only the fear effect in place, in regimes of small fear. That is, for small amounts of the fear parameter, it can change a competitive exclusion type situation to a strong competition type situation. It can also change a weak competition type situation to a bi-stability type situation. However, for large fear regimes the Allee effect reinforces the dynamics driven by the fear effect. The analysis of the corresponding spatially explicit model is also presented. To this end the comparison principle for parabolic PDE is used. The conclusions of this paper have strong implications for conservation biology, biological control, as well as the preservation of biodiversity.

keywords: Competition Model, Allee Effect, Fear Effect, Stability, Bifurcation, Reaction-Diffusion System

## 1 Introduction

The mechanisms of competition, non-consumptive effects or trait-mediated indirect interactions, and density dependent effects such as Allee effects are central tenets of population ecology theory. They shape the dynamics of many ecosystems [1, 2, 3, 4, 5]. It is typically assumed that species interactions are governed by their respective densities. However, a species could react to the presence of a second species by altering its phenotype or behavior, consequently affecting the population density or fitness of the other species. Thus such trait-mediated or behavioral indirect interactions can either reinforce or oppose density-dependent effects [6, 7], which has strong consequences for community ecology and food chain dynamics. We present and analyze in this work a first model that _connects_ trait-mediated indirect interactions, focusing on fear effects, with density dependent effects, herein Allee effects, in a two-species competition setting. Our analysis reveals that (1) the Allee effect opposes the fear effect in regimes of small fear, and (2) the Allee effect reinforces the fear effect in regimes of large fear. The details of how these two mechanisms work in conjunction to create such novel dynamics are investigated and presented in the current manuscript.

We begin by fixing ideas about the Allee effect, which is crucial to the ensuing analysis. The ecologist Allee [8] discovered in 1931 that a population's growth rate was correlated with its density - that is, being rare may introduce a fitness cost, leading to population decline.
He also illustrated that "clustering" benefits population growth and survival, whereas extreme sparseness and overcrowding can prevent growth and negatively affect reproduction, so that each species has its own optimal density. In 1953, Odum [9], another ecologist, introduced the term "Allee effect" for this phenomenon. The Allee effect can be divided into three types: species size, structure, and behavioral effects. The size effect means that when a population falls below a certain threshold, the reproduction and maintenance of the population are impaired, leading to a significant increase in the likelihood of population extinction. A single species subject to an Allee effect can be modeled by the following ordinary differential equation, \[\frac{\mathrm{d}x}{\mathrm{d}t}=rx\left(1-\frac{x}{K}\right)\left(x-m\right), \tag{1.1}\] which is called the single-species model with a multiplicative Allee effect. Here \(r\) is the intrinsic growth rate of the species, \(K\) is the environmental carrying capacity, and the factor \(\left(x-m\right)\) models the Allee effect. If \(0<m<K\), we say that the species is subject to a strong Allee effect: when the population size is below the threshold \(m\), the growth rate of the population is negative, and there is a risk of extinction. If \(-K<m\leq 0\), the species suffers from a weak Allee effect: growth slows down at low densities, but there is no risk of extinction. Research on the Allee effect has important implications regarding the conservation of species diversity. Based on (1.1), Zhu et al. [10] proposed a single-species logistic model with a strong Allee effect and feedback control: \[\left\{\begin{array}{l}\frac{dx}{dt}=rx\left(1-\frac{x}{k}\right)\left(x-m\right)-axu\\ \\ \frac{du}{dt}=-bu+cx\end{array}\right. \tag{1.2}\] The authors' results suggest that the species will become extinct if the feedback control variables and the Allee effect are large enough. In addition, the authors also studied the saddle-node bifurcation and the supercritical and subcritical Hopf bifurcations that occur with parameter variation. By calculating the universal unfolding near a cusp, it is concluded that system (1.2) has a Bogdanov-Takens bifurcation of codimension 2. For more related studies on the multiplicative Allee effect, see [11, 12]. The Allee effect could also be _weak_, where the growth rate is always positive, but less pronounced at lower densities [13, 14]. It has been observed that an Allee effect is significant at very low population sizes and under biased sex ratios [15, 16]. Researchers [17, 15] observed that the decline of Atlantic cod (_Gadus morhua_) in the southern Gulf of St. Lawrence and the depletion of the Atlantic herring (_Clupea harengus_) population in the North Sea are due to a predation-driven Allee effect, thereby increasing the risk of population extinction. The study of predator-prey models is also a central topic in ecology and evolutionary biology [18]. It was once commonly believed that predators could only affect prey populations by direct consumption of prey. However, the mere presence of a predator may alter the behavior and physiology of prey. Prey perceive the risk of predation and respond with a range of anti-predatory responses, such as changes in habitat selection and foraging behavior [2, 19]. These changes, in various forms, may ultimately affect the overall reproductive rate of the prey population. We refer to this particular biological phenomenon as the "fear" effect.
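A minimal numerical illustration of the strong Allee threshold in Eq. (1.1) is sketched below; the parameter values and initial sizes are illustrative choices, not taken from any of the cited studies.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: strong Allee effect in Eq. (1.1). Initial sizes below the
# threshold m decay to 0; sizes above m approach the carrying capacity K.
r, K, m = 1.0, 10.0, 2.0

def rhs(t, x):
    return [r * x[0] * (1 - x[0] / K) * (x[0] - m)]

for x0 in [1.5, 2.5]:
    sol = solve_ivp(rhs, (0.0, 20.0), [x0], rtol=1e-8)
    print(f"x(0) = {x0}: x(20) = {sol.y[0, -1]:.4f}")
```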
The first experiment on the fear effect in a wild species was carried out by Zanette et al. [20]. They isolated the effects of perceived predation risk in a free-living population of song sparrows by actively eliminating direct predation and used playbacks of predator calls and sounds to manipulate perceived risk. The research showed that under the influence of the fear effect alone, the number of offspring produced by the species was reduced by 40% annually. In 2016, Wang et al. [21] considered the fear effect for the first time based on the classical two-species Lotka-Volterra predator-prey model: \[\left\{\begin{array}{l}\frac{dx}{dt}=rxf(k,y)-dx-ax^{2}-g(x)y,\\ \frac{dy}{dt}=-my+cg(x)y,\end{array}\right. \tag{1.3}\] where \(a\) represents the mortality rate due to intraspecific competition of the prey, \(g(x)\) is the functional predation rate of the predator, and \(f(k,y)=\dfrac{1}{1+ky}\) represents the anti-predation response of the prey due to the fear of the predator, i.e., the fear effect function. The researchers found that, under conditions for a Hopf bifurcation, an increase in fear level may shift the direction of the Hopf bifurcation from supercritical to subcritical when the birth rate of the prey increases accordingly. Numerical simulations also suggest that the anti-predator defenses of animals increase as the rate of predator attack increases. Based on (1.3), Sasmal et al. [22] proposed for the first time a predator-prey system with a multiplicative Allee effect and a fear effect for the prey species. The model is specified as follows: \[\left\{\begin{array}{l}\frac{dx}{dt}=rx\left(1-\frac{x}{k}\right)(x-\theta)\frac{1}{1+fy}-axy,\\ \frac{dy}{dt}=a\alpha xy-my.\end{array}\right. \tag{1.4}\] The authors' study showed that the fear effect did not change the stability of the equilibrium point. However, with a more substantial fear effect, the final population density of the predator decreases. The multiplicative Allee effect can cause a subcritical Hopf bifurcation in system (1.4), thus producing a stable limit cycle. More related studies can be found in [23, 24, 25, 26, 27, 28, 29]. We note that the Allee effect in competitive systems has been considered as well. This starts with the work of Wang [30], wherein a weak Allee effect is modeled as affecting both competitors. Jang modeled the strong Allee effect in both competitors [31]. Also, Desilva and Jang model a two-species competitive system with a strong Allee effect in one competitor and a stocking effect. These works primarily focus on equilibrium analysis and not bifurcation analysis. The effect of fear on predator-prey systems has been extensively studied, but in competitive systems fear has rarely been considered. However, there is strong evidence that fear exists in purely competitive systems _without_ predation effects, or where predation effects are negligible [32, 33, 1]. The Barred Owl (_Strix varia_) is a species of owl native to eastern North America. During the last century, they have expanded their range westward and have been recognized as an invasive species in western North American ecosystems. Currently, their range overlaps with that of the Spotted Owl (_Strix occidentalis_), which is native to northwestern and western North America, and this has led to intense competition between the two species [34]. The Barred Owl has a strong negative impact on the Spotted Owl, and field observations have reported that barred owls frequently attack spotted owls [35].
There is also evidence that barred owls actively and unilaterally drive spotted owls out of shared habitat [33]. Other very recent empirical evidence also supports such investigations. In [32], a series of 6-year-long experiments conducted on various Caribbean islands aims to refute the theory of adaptive predation - which suggests that predators reduce dominant competitors, thus preventing competitive exclusion and enhancing coexistence in food webs. However, non-consumptive effects such as fear of depredation can have a strong influence [32, 36, 2, 37]. [32] considers a series of experiments with two competing species of lizards: the brown anolis (_Anolis sagrei_), which dwells on tree trunks, and the green anolis (_Anolis smaragdinus_), which dwells in tree canopies. The experiments show that typically these species co-exist - due to a clear niche separation. However, the introduction of an intraguild predator, the curly tailed lizard (_Leiocephalus carinatus_), which dwells on the ground, causes (non-consumptive) fear-driven effects. The brown anolis, being fearful of possible depredation (as the lower half of a tree trunk is within striking distance of the curly tailed lizard), moves upwards into the canopy, which is occupied by the green anolis. Herein, interspecific competition intensifies, leading to a loss of co-existence. However, what is most crucial in this study is that fecal analysis of the curly tailed lizard shows that its diet included the brown anolis in only 2 out of 51 samples examined. Thus the new dispersal pattern of the brown anolis and the "refuge competition" is driven strongly by a non-consumptive fear effect and not a consumptive one. In fact, the brown anolis and the curly tailed lizard are really competitors, as they have a strong overlap in dietary niche for several insects; however, the brown anolis is clearly fearful of the curly tailed lizard [32]. There is further evidence of non-consumptive effects such as fear among competing aphid species, as well as among competitors that feed on aphids [38, 39]. Such interplay between competition and predation has been investigated [1], where it is proposed that in many ecological processes competition and predation are interlinked, and depending on niche overlap, one of them will dominate and drive the underlying dynamics. Such evidence motivates us to consider the fear effect in a purely competitive two-species model, in which one competitor causes fear to the other. Thus, Srivastava et al. [40] considered the classical two-species Lotka-Volterra competition model with only one competitor causing fear to the other competitor: \[\left\{\begin{array}{l}\frac{du}{dt}=a_{1}u-b_{1}u^{2}-c_{1}uv,\\ \frac{dv}{dt}=\frac{a_{2}v}{1+ku}-b_{2}v^{2}-c_{2}uv.\end{array}\right. \tag{1.5}\] They find that the presence of fear can have several interesting dynamical effects on the classical competitive scenarios. That is, (1.5) can produce dynamical phenomena such as saddle-node and transcritical bifurcations, which change the dynamics seen in classical competitive systems. Notably, for fear levels in certain regimes, novel bi-stability dynamics are established. Such dynamics have also been recently observed in [41]. Furthermore, in the spatially explicit setting, the effects of several spatially heterogeneous fear functions are investigated. In particular, under certain integral restrictions on the fear function, a weak competition type situation can change to competitive exclusion.
Inspired by the ideas in the above works, we propose a novel Lotka-Volterra competition model, where the first species is affected by a multiplicative strong Allee effect, while the second species produces a fear effect on the first species: \[\left\{\begin{array}{l}\frac{\mathrm{d}x_{1}}{\mathrm{d}\tau}=x_{1}(r_{1}-\alpha_{1}x_{1})(x_{1}-m)\frac{1}{1+kx_{2}}-\beta_{1}x_{1}x_{2},\\ \frac{\mathrm{d}x_{2}}{\mathrm{d}\tau}=x_{2}(r_{2}-\alpha_{2}x_{2}-\beta_{2}x_{1}),\end{array}\right. \tag{1.6}\] where \(0<m<\dfrac{r_{1}}{\alpha_{1}}\). This paper has the following innovations:

1. This is the first model to consider _both_ the strong Allee effect and the fear effect on species density in the two-species Lotka-Volterra competition model. Also note that by setting \(m=0\), we are in a weak Allee type setting.
2. The Allee effect parameter and the fear effect parameter affect the existence and number of positive equilibrium points in the ODE model.
3. The Allee effect parameter and the fear effect parameter affect the extinction state of system (1.6), but do not change the stability of the positive equilibria.
4. Changing the fear parameter leads to several bifurcations in system (1.6).
5. In the PDE case, restrictions are derived between the Allee threshold and the fear parameter that yield competitive exclusion of \(u\), the competitor subject to the Allee and fear effects.
6. In the PDE case, restrictions are also derived to provide conditions under which one has initial condition dependent attraction to \((u^{*},0)\) or \((0,v^{*})\).

The rest of this paper is organized as follows: The positivity and boundedness of the solution of system (1.6) are proved in Section 2. We examine the existence and stability of all equilibria in Sections 3 and 4. In Section 5, we analyze the bifurcations of the system around the positive equilibria. In Section 6 we present the analysis of the spatially explicit model, i.e., the PDE case. We end this paper with a discussion and conclusion.

## 2 Preliminaries

The intrinsic growth rates of the two species are \(r_{1}\) and \(r_{2}\), and the environmental carrying capacities are \(r_{1}/\alpha_{1}\) and \(r_{2}/\alpha_{2}\), respectively, according to the logistic rule of growth. We rewrite system (1.6) in the following form: \[\left\{\begin{array}{l}\frac{\mathrm{d}x_{1}}{\mathrm{d}\tau}=r_{1}x_{1}(1-\frac{x_{1}}{k_{1}})(x_{1}-m)\frac{1}{1+kx_{2}}-\beta_{1}x_{1}x_{2},\\ \frac{\mathrm{d}x_{2}}{\mathrm{d}\tau}=r_{2}x_{2}(1-\frac{x_{2}}{k_{2}})-\beta_{2}x_{1}x_{2}.\end{array}\right. \tag{2.1}\] In order to reduce the number of parameters of system (2.1), the following dimensionless quantities are used to non-dimensionalize system (2.1): \[t=r_{1}k_{1}\tau,\quad\frac{x_{1}}{k_{1}}=x,\quad\frac{x_{2}}{k_{2}}=y,\quad\frac{m}{k_{1}}=p,\quad kk_{2}=q,\quad\frac{\beta_{1}k_{2}}{r_{1}k_{1}}=a,\quad\frac{r_{2}}{r_{1}k_{1}}=b,\quad\frac{\beta_{2}k_{1}}{r_{2}}=c,\] then system (2.1) becomes the following system: \[\left\{\begin{array}{l}\frac{\mathrm{d}x}{\mathrm{d}t}=x\left[(1-x)(x-p)\frac{1}{1+qy}-ay\right]=xf(x,y)\equiv F(x,y),\\ \frac{\mathrm{d}y}{\mathrm{d}t}=by\left(1-y-cx\right)=yg(x,y)\equiv G(x,y).\end{array}\right. \tag{2.2}\] All parameters in system (2.2) are positive and \(0<p<1\). Based on biological considerations, the initial condition of system (2.2) satisfies \[x(0)>0,\quad y(0)>0. \tag{2.3}\]
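As a quick illustration of (2.2)-(2.3), the sketch below integrates the system for an illustrative parameter set (not taken from the paper) and a few initial conditions; note in particular the extinction of \(x\) when \(x(0)\) starts below the Allee threshold \(p\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: sample trajectories of system (2.2). The parameter values
# (p, q, a, b, c) are illustrative choices only.
p, q, a, b, c = 0.2, 0.5, 0.1, 1.0, 0.8

def rhs(t, z):
    x, y = z
    dx = x * ((1 - x) * (x - p) / (1 + q * y) - a * y)
    dy = b * y * (1 - y - c * x)
    return [dx, dy]

for x0, y0 in [(0.15, 0.5), (0.8, 0.1), (0.6, 0.6)]:
    sol = solve_ivp(rhs, (0.0, 500.0), [x0, y0], rtol=1e-9, atol=1e-12)
    print(f"(x0, y0) = ({x0}, {y0}) -> "
          f"({sol.y[0, -1]:.4f}, {sol.y[1, -1]:.4f})")
```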
**Proposition 2.1**.: _All solutions of system (2.2) are positive._

Proof.: Since \[x(t)=x(0)\exp\left[\int_{0}^{t}f(x(s),y(s))\,{\rm d}s\right]>0\] and \[y(t)=y(0)\exp\left[\int_{0}^{t}g(x(s),y(s))\,{\rm d}s\right]>0,\] all solutions of system (2.2) with initial condition (2.3) are positive.

**Lemma 2.2**.: _[42] If \(a,b>0\) and \(x^{\prime}(t)\leq(\geq)\,x(t)(a-bx(t))\) with \(x(0)>0\), then_ \[\limsup_{t\to+\infty}x(t)\leq\frac{a}{b}\quad\left(\liminf_{t\to+\infty}x(t)\geq\frac{a}{b}\right).\]

**Proposition 2.3**.: _The solutions of system (2.2) are bounded._

Proof.: For the boundedness of \(y(t)\), according to the second equation of system (2.2), \[\frac{{\rm d}y}{{\rm d}t}=by\,(1-y-cx)\leq y(b-by);\] applying Lemma 2.2 to this inequality, we have \[\limsup_{t\to+\infty}y(t)\leq\frac{b}{b}=1.\] Next, we discuss the boundedness of \(x(t)\). For any \(x(0)>1\), \[\frac{{\rm d}x}{{\rm d}t}=x\left[(1-x)(x-p)\frac{1}{1+qy}-ay\right]<0\] as long as \(x>1\). Moreover, along \(x=1\), we have \[\frac{{\rm d}x}{{\rm d}t}=-ay<0.\] Obviously, there is no equilibrium point in the region \(\{(x,y)\mid x>1,y\geq 0\}\). Thus, any positive solution satisfies \(x(t)\leq\max\left\{x(0),1\right\}=\overline{x}\) (say) for all \(t\geq 0\). Therefore, combining the above analysis, the solutions of system (2.2) are bounded.

**Proposition 2.4**.: _If \(0<x(0)\leq p\), \(y(0)\geq 0\) and \((x(0),y(0))\neq(p,0)\), then \(\lim_{t\to\infty}x(t)=0\)._

Proof.: The proof is similar to that of Theorem 2.1 in [43], and we omit the detailed procedure here.

## 3 Existence of Equilibria

Obviously, system (2.2) has the trivial equilibrium point \(E_{0}(0,0)\) and three boundary equilibria \(E_{1}(1,0)\), \(E_{2}(p,0)\), \(E_{3}(0,1)\). In the following, we discuss the existence of positive equilibria. The intersections of the two isoclines \(f(x,y)=0\) and \(g(x,y)=0\) in the first quadrant are the positive equilibria. Denote the positive equilibria of system (2.2) by \(E_{i*}(x_{i},y_{i})\) (\(i=1,2\)). Substituting \(y=1-cx\) (from \(g(x,y)=0\)) into \(f(x,y)=0\), we obtain \[A_{1}x^{2}+A_{2}x+A_{3}=0, \tag{3.1}\] where \[A_{1}=ac^{2}q+1>0,\qquad A_{2}=-(2acq+ac+p+1)<0,\qquad A_{3}=a+aq+p>0.\] Denote the discriminant of (3.1) by \(\Delta(q)=A_{2}^{2}-4A_{1}A_{3}\). When \(\Delta>0\), (3.1) has two real roots, which can be expressed as \[x_{1}=\frac{-A_{2}-\sqrt{\Delta}}{2A_{1}},\quad x_{2}=\frac{-A_{2}+\sqrt{\Delta}}{2A_{1}},\] with \(y_{i}=1-cx_{i}\). From Propositions 2.1 and 2.3, we know that \(0<x(t)<1\) and \(0<y(t)<1\), from which we give the following theorem.

**Theorem 3.1**.: _The positive equilibria of system (2.2) are as follows._

_CASE I: \(\Delta>0\)._

1. _If \(2A_{1}+A_{2}>0\):_
   1. _for \(0<c<1\), system (2.2) has two positive equilibria \(E_{1*}\) and \(E_{2*}\) (Fig. 3(a));_
   2. _for \(c=1\), system (2.2) has only one positive equilibrium point \(E_{1*}\) (Fig. 3(b));_
   3. _for \(1<c\leq\frac{1}{q}+1\), system (2.2) has only one positive equilibrium point \(E_{1*}\) if \(0<p<\frac{1}{c}\) (Fig. 3(c))._
2. _If \(2A_{1}+A_{2}<0\): for \(1<c<\frac{1}{q}+1\), system (2.2) has only one positive equilibrium point \(E_{1*}\) if \(0<p<\frac{1}{c}\) (Fig. 3(d))._
3. _If \(2A_{1}+A_{2}=0\): for \(c>1\), system (2.2) has only one positive equilibrium point \(E_{1*}\) if \(0<p<\frac{1}{c}\) (Fig. 3(e))._

_CASE II: \(\Delta=0\). If \(2A_{1}+A_{2}>0\) and \(0<c<1\), system (2.2) has exactly one positive equilibrium point \(E_{3*}\) (Fig. 3(f))._
Proof.: _For CASE I, we know that \(0<x_{1}<x_{2}\). When the condition \(2A_{1}+A_{2}>0\) is satisfied, \(0<x_{1}<1\) is always true. Next, we discuss \(x_{2}\). If \(0<x_{2}<1\), a simple calculation shows that the parameter \(c\) must satisfy \(0<c<1\) or \(c>\dfrac{1}{q}+1\). For \(0<c<1\), \(y_{2}=1-cx_{2}\) obviously satisfies \(0<y_{2}<1\). For \(c>\dfrac{1}{q}+1\), if \(0<y_{2}<1\) is to hold, then the parameters of system (2.2) must also satisfy \(2+c(-ac-p-1)>0\) and \(cp>1\). However, these two conditions cannot hold simultaneously, so \(c>\dfrac{1}{q}+1\) is not valid. On the contrary, if \(x_{2}\geq 1\), then we get \(1\leq c\leq\dfrac{1}{q}+1\). When \(c=1\), we can guarantee that the equilibrium point \(E_{1*}\) must be a positive equilibrium point. If \(1<c\leq\dfrac{1}{q}+1\), it can be seen by requiring \(0<y_{1}<1\) that the equilibrium point \(E_{1*}\) can only be in the first quadrant if \(0<p<\dfrac{1}{c}\)._

_When the condition \(2A_{1}+A_{2}<0\) is satisfied, \(x_{2}>1\) always holds. Let us discuss the conditions under which \(0<x_{1}<1\) can hold. The inequality shows that the parameter \(c\) needs to satisfy \(1<c<\dfrac{1}{q}+1\), but \(E_{1*}\) may be located in the first or fourth quadrant. Thus we require \(0<p<\dfrac{1}{c}\) so that the equilibrium point \(E_{1*}\) is a positive equilibrium point._

_When the condition \(2A_{1}+A_{2}=0\) is satisfied, \(0<x_{1}<1<x_{2}\) always holds. If \(0<c<1\), then from \(2A_{1}+A_{2}=0\) we find \(a=\dfrac{p-1}{c\left(2cq-2q-1\right)}\triangleq a_{*}\). Substituting \(a=a_{*}\) into \(\Delta\) gives_ \[\Delta=-\dfrac{4\left(-1+\left(c-1\right)q\right)\left(-1+\left(-2+\left(p+1\right)c\right)q\right)\left(c-1\right)\left(p-1\right)}{\left(2cq-2q-1\right)^{2}c}<0.\] _Therefore \(2A_{1}+A_{2}=0\) and \(\Delta>0\) cannot hold simultaneously, i.e., \(c\notin\left(0,1\right)\). If instead \(c>1\), the condition \(0<p<\dfrac{1}{c}\) is also needed. For \(c=1\), a direct calculation shows that \(2A_{1}+A_{2}=0\) forces \(\Delta=0\), contradicting \(\Delta>0\); hence \(c\neq 1\)._

_For CASE II, in order for the double root \(x_{3}=-A_{2}/(2A_{1})\) to satisfy \(0<x_{3}<1\), the condition \(2A_{1}+A_{2}>0\) must be satisfied. \(E_{3*}\) is then a positive equilibrium point of system (2.2) when \(0<c<1\). Conversely, when \(c>1\), in order to satisfy \(y_{3}>0\), the parameters of system (2.2) also need to satisfy \(ac^{2}+cp+c-2<0\). From \(\Delta=0\) we get_ \[q=\dfrac{a^{2}c^{2}+\left(-4+\left(2p+2\right)c\right)a+\left(p-1\right)^{2}}{4a\left(c-1\right)\left(cp-1\right)}\triangleq q_{*}.\] _Substituting \(q=q_{*}\) into \(2A_{1}+A_{2}\) gives_ \[\frac{\left(ac-p+1\right)\left(a\,c^{2}+cp+c-2\right)}{2(cp-1)}.\] _The condition \(0<cp<1\) must be satisfied if the above expression is to be positive. But under \(0<cp<1\), we get \(q=q_{*}<0\). In summary, \(c\notin(1,+\infty)\). For \(c=1\), a direct calculation shows that \(\Delta=0\) forces \(2A_{1}+A_{2}=0\), contradicting \(2A_{1}+A_{2}>0\); hence \(c\neq 1\)._
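Theorem 3.1 is easy to check numerically for a given parameter set by solving the quadratic (3.1) directly. The sketch below uses an illustrative parameter set in CASE I with \(2A_{1}+A_{2}>0\) and \(0<c<1\), for which two positive equilibria are expected.

```python
import numpy as np

# Sketch: interior equilibria of (2.2) from the quadratic (3.1), with
# y = 1 - c x. The parameter values are illustrative choices.
p, q, a, c = 0.2, 0.5, 0.1, 0.8

A1 = a * c**2 * q + 1
A2 = -(2 * a * c * q + a * c + p + 1)
A3 = a + a * q + p
disc = A2**2 - 4 * A1 * A3
print(f"Delta = {disc:.4f}, 2*A1 + A2 = {2 * A1 + A2:.4f}")

if disc > 0:
    for x in sorted(np.roots([A1, A2, A3]).real):
        y = 1 - c * x
        if 0 < x < 1 and 0 < y < 1:
            print(f"positive equilibrium: ({x:.4f}, {y:.4f})")
```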
## 4 Stability of Equilibria

### Stability of boundary equilibria

The Jacobian matrix of system (2.2) is \[J(E)=\begin{bmatrix}\dfrac{\left(1-x\right)\left(x-p\right)}{qy+1}-ay+x\left(-\dfrac{x-p}{qy+1}+\dfrac{1-x}{qy+1}\right)&x\left(-\dfrac{\left(1-x\right)\left(x-p\right)q}{\left(qy+1\right)^{2}}-a\right)\\ -bcy&b\left(-cx-y+1\right)-by\end{bmatrix}. \tag{4.1}\] The Jacobian matrix at \(E_{0}(0,0)\) is given by \[J(E_{0})=\begin{bmatrix}-p&0\\ 0&b\end{bmatrix}.\] The two eigenvalues of \(J(E_{0})\) are \(\lambda_{10}=-p<0\) and \(\lambda_{20}=b>0\), so \(E_{0}\) is a saddle. The Jacobian matrix at \(E_{3}(0,1)\) is given by \[J(E_{3})=\begin{bmatrix}-\dfrac{p}{q+1}-a&0\\ -bc&-b\end{bmatrix}.\] The two eigenvalues of \(J(E_{3})\) are \(\lambda_{13}=-\dfrac{p}{q+1}-a<0\) and \(\lambda_{23}=-b<0\), so \(E_{3}\) is a stable node. Next we discuss the stability of the boundary equilibria \(E_{1}(1,0)\) and \(E_{2}(p,0)\).

**Theorem 4.1**.: _The stability of the boundary equilibrium point \(E_{1}\) is as follows:_

1. _If \(c>1\), \(E_{1}\) is a stable node (Fig. 1(a))._
2. _If \(0<c<1\), \(E_{1}\) is a saddle (Fig. 1(b))._
3. _If \(c=1\):_
   1. \(E_{1}\) _is an attracting saddle-node, and the parabolic sector is on the lower half-plane when \(p<1-a\) (Fig. 1(c))._
   2. \(E_{1}\) _is an attracting saddle-node, and the parabolic sector is on the upper half-plane when \(p>1-a\) (Fig. 1(d))._
   3. \(E_{1}\) _is a nonhyperbolic saddle when \(p=1-a\) (Fig. 1(e))._

Proof.: The Jacobian matrix at \(E_{1}(1,0)\) is given by \[J(E_{1})=\begin{bmatrix}-1+p&-a\\ 0&b(-c+1)\end{bmatrix}.\] The two eigenvalues of \(J(E_{1})\) are \(\lambda_{11}=-1+p<0\) and \(\lambda_{21}=b(-c+1)\). \(E_{1}\) is a stable node if \(\lambda_{21}=b(-c+1)<0\), i.e., \(c>1\). \(E_{1}\) is a saddle if \(\lambda_{21}=b(-c+1)>0\), i.e., \(0<c<1\). For \(\lambda_{21}=b(-c+1)=0\), i.e., \(c=1\), we conduct the following discussion. We move the equilibrium \(E_{1}\) to the origin by the transformation \((X,Y)=(x-1,y)\) and make a Taylor expansion around the origin, so that system (2.2) becomes \[\left\{\begin{array}{l}\frac{\mathrm{d}X}{\mathrm{d}t}=\left(p-1\right)X-aY+\left(-2+p\right)X^{2}-\left(qp+a-q\right)YX-X^{3}-q\left(-2+p\right)Y\,X^{2}+\left(p-1\right)q^{2}X\,Y^{2}+P_{0}(X,Y),\\ \frac{\mathrm{d}Y}{\mathrm{d}t}=-bXY-bY^{2},\end{array}\right.\] where the \(P_{i}(X,Y)\) are power series in \((X,Y)\) with terms \(X^{I}Y^{J}\) satisfying \(I+J\geq 4\) (the same below). In the next step, we apply the following transformation to the above system, \[\begin{bmatrix}X\\ Y\end{bmatrix}=\begin{bmatrix}1&\dfrac{a}{p-1}\\ 0&1\end{bmatrix}\begin{bmatrix}X_{1}\\ Y_{1}\end{bmatrix},\] and letting \(\tau=(p-1)t\), for which we will retain \(t\) to denote \(\tau\) for notational simplicity, we get \[\left\{\begin{array}{l}\frac{\mathrm{d}X_{1}}{\mathrm{d}t}=X_{1}+a_{11}X_{1}Y_{1}+a_{20}X_{1}^{2}+a_{02}Y_{1}^{2}+a_{30}X_{1}^{3}+a_{03}Y_{1}^{3}+a_{21}X_{1}^{2}Y_{1}+a_{12}X_{1}Y_{1}^{2}+P_{1}(X_{1},Y_{1}),\\ \frac{\mathrm{d}Y_{1}}{\mathrm{d}t}=b_{02}Y_{1}^{2}+b_{11}X_{1}Y_{1},\end{array}\right. \tag{4.2}\] where \[a_{11}=\frac{-p^{2}q+ab+ap+2pq-3a-q}{\left(p-1\right)^{2}},\quad a_{20}=\frac{-2+p}{p-1},\quad a_{02}=\frac{a\left(-p^{2}q+ab+bp+2pq-a-b-q\right)}{\left(p-1\right)^{3}},\] \[a_{30}=-\frac{1}{p-1},\quad a_{21}=-\frac{p^{2}q-3pq+3a+2q}{\left(p-1\right)^{2}},\quad a_{12}=-\frac{-p^{3}q^{2}+2a\,p^{2}q+3p^{2}q^{2}-6apq-3pq^{2}+3a^{2}+4aq+q^{2}}{\left(p-1\right)^{3}},\] \[a_{03}=-\frac{a\left(-pq+a+q\right)\left(p^{2}q-2pq+a+q\right)}{\left(p-1\right)^{4}},\quad b_{11}=-\frac{b}{p-1},\quad b_{02}=-\frac{b\left(a+p-1\right)}{\left(p-1\right)^{2}}.\]

Figure 1: Red, green, pink, and orange points indicate stable node, unstable node (source), saddle, and saddle-node, respectively. (a) \(c>1\). (b) \(0<c<1\). (c) \(c=1\) and \(p<1-a\). (d) \(c=1\) and \(p>1-a\). (e) \(c=1\) and \(p=1-a\).
Hence, by Theorem 7.1 in Chapter 2 of [44], if \(b_{02}>0\), i.e., \(p<1-a\), \(E_{1}\) is an attracting saddle-node, and the parabolic sector is on the lower half-plane. If \(b_{02}<0\), i.e., \(p>1-a\), \(E_{1}\) is an attracting saddle-node, and the parabolic sector is on the upper half-plane. If \(b_{02}=0\), i.e., \(p=1-a\), system (4.2) becomes \[\left\{\begin{array}{l}\frac{\mathrm{d}X_{1}}{\mathrm{d}t}=X_{1}+a_{11}X_{1}Y_{1}+a_{20}X_{1}^{2}+a_{02}Y_{1}^{2}+a_{30}X_{1}^{3}+a_{03}Y_{1}^{3}+a_{21}X_{1}^{2}Y_{1}+a_{12}X_{1}Y_{1}^{2}+P_{1}(X_{1},Y_{1}),\\ \frac{\mathrm{d}Y_{1}}{\mathrm{d}t}=b_{11}X_{1}Y_{1}.\end{array}\right. \tag{4.3}\] By using the first equation of system (4.3), we obtain the implicit function \[X_{1}=-a_{02}Y_{1}^{2}+(a_{11}a_{02}-a_{03})Y_{1}^{3}+\cdots,\] and \[\frac{\mathrm{d}Y_{1}}{\mathrm{d}t}=-a_{02}b_{11}Y_{1}^{3}+\cdots,\] where \[-a_{02}b_{11}=\frac{ab\left[-q(p-1)^{2}-a\right]}{(p-1)^{4}}<0.\] According to Theorem 7.1 again, \(E_{1}\) is a nonhyperbolic saddle.

**Theorem 4.2**.: _The stability of the boundary equilibrium point \(E_{2}\) is as follows:_

1. _If \(0<p<\frac{1}{c}\), \(E_{2}\) is an unstable node (Fig. 2(a))._
2. _If \(p>\frac{1}{c}\), \(E_{2}\) is a saddle (Fig. 2(b))._
3. _If \(p=\frac{1}{c}\), \(E_{2}\) is a repelling saddle-node (Fig. 2(c))._

Figure 2: Red, green, pink, and orange points indicate stable node, unstable node (source), saddle, and saddle-node, respectively. (a) \(0<p<\frac{1}{c}\). (b) \(p>\frac{1}{c}\). (c) \(p=\frac{1}{c}\).

Proof.: The Jacobian matrix at \(E_{2}(p,0)\) is given by \[J(E_{2})=\begin{bmatrix}p(1-p)&-pa\\ 0&b(-cp+1)\end{bmatrix}.\] The two eigenvalues of \(J(E_{2})\) are \(\lambda_{12}=p(1-p)>0\) and \(\lambda_{22}=b(-cp+1)\). \(E_{2}\) is an unstable node if \(\lambda_{22}=b(-cp+1)>0\), i.e., \(0<p<\frac{1}{c}\). \(E_{2}\) is a saddle if \(\lambda_{22}=b(-cp+1)<0\), i.e., \(p>\frac{1}{c}\). For \(\lambda_{22}=b(-cp+1)=0\), i.e., \(p=\frac{1}{c}\), we conduct the following discussion. We move the equilibrium \(E_{2}\) to the origin by the transformation \((X,Y)=(x-p,y)\) and make a Taylor expansion around the origin, so that system (2.2) becomes \[\left\{\begin{aligned} \frac{\mathrm{d}X}{\mathrm{d}t}=&-p\left(p-1\right)X-paY-\left(2p-1\right)X^{2}-\left(-p^{2}q+pq+a\right)YX-X^{3}\\ &+q\left(2p-1\right)YX^{2}-p\left(p-1\right)q^{2}XY^{2}+P_{2}(X,Y),\\ \frac{\mathrm{d}Y}{\mathrm{d}t}=&-\frac{bYX}{p}-bY^{2}.\end{aligned}\right.\] In the next step, we apply the following transformation to the above system, \[\begin{bmatrix}X\\ Y\end{bmatrix}=\begin{bmatrix}1&\frac{a}{1-p}\\ 0&1\end{bmatrix}\begin{bmatrix}X_{1}\\ Y_{1}\end{bmatrix},\] and letting \(\tau=-p(p-1)t\), for which we will retain \(t\) to denote \(\tau\) for notational simplicity, we get \[\left\{\begin{array}{l}\frac{\mathrm{d}X_{1}}{\mathrm{d}t}=X_{1}+c_{11}X_{1}Y_{1}+c_{20}X_{1}^{2}+c_{02}Y_{1}^{2}+c_{30}X_{1}^{3}+c_{03}Y_{1}^{3}+c_{21}X_{1}^{2}Y_{1}+c_{12}X_{1}Y_{1}^{2}+P_{3}(X_{1},Y_{1}),\\ \frac{\mathrm{d}Y_{1}}{\mathrm{d}t}=d_{02}Y_{1}^{2}+d_{11}X_{1}Y_{1},\end{array}\right.
\tag{4.4}\] where \[c_{11}=\frac{-p^{4}q+2p^{3}q-3ap^{2}-p^{2}q+ab+pa}{p^{2}\left(p-1\right)^{2}},\quad c_{20}=\frac{2p-1}{p\left(p-1\right)},\quad c_{02}=-\frac{a\left(-p^{4}q+2p^{3}q-a\,p^{2}-b\,p^{2}-p^{2}q+ab+bp\right)}{\left(p-1\right)^{3}p^{2}},\] \[c_{30}=\frac{1}{p\left(p-1\right)},\quad c_{21}=-\frac{2p^{2}q-3pq+3a+q}{p\left(p-1\right)^{2}},\quad c_{12}=\frac{p^{4}q^{2}-3p^{3}q^{2}+4a\,p^{2}q+3p^{2}q^{2}-6apq-p\,q^{2}+3a^{2}+2aq}{p\left(p-1\right)^{3}},\] \[c_{03}=-\frac{a\left(p^{2}q-pq+a\right)\left(p^{2}q-2pq+a+q\right)}{\left(p-1\right)^{4}p},\quad d_{11}=\frac{b}{p^{2}\left(p-1\right)},\quad d_{02}=-\frac{b\left(-p^{2}+a+p\right)}{p^{2}\left(p-1\right)^{2}}.\] Since \(-p(p-1)>0\) and \(d_{02}<0\), \(E_{2}\) is a repelling saddle-node.

**Remark 1**.: _By analyzing the stability of the boundary equilibria of system (2.2), we found that the Allee effect parameter \(p\) significantly affects the extinction of the species. This conclusion is consistent with the ecological implications of the strong Allee effect, and the above theoretical analysis provides guidance for the conservation of biodiversity._

### Stability of positive equilibria

The Jacobian matrix at a positive equilibrium \(E_{i\ast}\) of system (2.2) can be written as \[\begin{bmatrix}x_{\ast}\frac{\partial f}{\partial x_{\ast}}&x_{\ast}\frac{\partial f}{\partial y_{\ast}}\\ y_{\ast}\frac{\partial g}{\partial x_{\ast}}&y_{\ast}\frac{\partial g}{\partial y_{\ast}}\end{bmatrix}\triangleq\begin{bmatrix}B_{1}&B_{2}\\ B_{3}&B_{4}\end{bmatrix}, \tag{4.5}\] where \[B_{1}=\frac{x_{\ast}\left(-2x_{\ast}+p+1\right)}{qy_{\ast}+1},\qquad B_{2}=x_{\ast}\left(-\frac{\left(1-x_{\ast}\right)\left(x_{\ast}-p\right)q}{\left(qy_{\ast}+1\right)^{2}}-a\right)<0,\qquad B_{3}=-bcy_{\ast}<0,\qquad B_{4}=-by_{\ast}<0.\] First we consider \(\det(J(E_{\ast}))\). From system (2.2), we have \[y_{\ast}^{(f)}=\frac{\sqrt{4\left(1-x_{\ast}\right)\left(x_{\ast}-p\right)q+a}-\sqrt{a}}{2\sqrt{a}\,q},\]
For \(E_{2*}\), we have \[\det(J(E_{2*})) = \left[x_{*}y_{*}\frac{\partial f}{\partial y_{*}}\frac{\partial g }{\partial y_{*}}\left(\frac{\mathrm{d}y_{*}^{(g)}}{\mathrm{d}x_{*}}-\frac{ \mathrm{d}y_{*}^{(f)}}{\mathrm{d}x_{*}}\right)\right]\Bigg{|}_{(x_{*},y_{*})= (x_{2},y_{2})}\] \[= B_{1}B_{4}-B_{2}B_{3}|_{(x_{*},y_{*})=(x_{2},y_{2})}\] \[> 0.\] Then we can conclude that \(B_{1}B_{4}>B_{2}B_{3}\). Since the signs of \(B_{2}\), \(B_{3}\), and \(B_{4}\) have been determined, we can thus know that \(B_{1}<0\). Finally, we can determine that \(\mathrm{tr}(J(E_{2*}))=B_{1}+B_{4}<0\) by the above analysis. From \(\det(J(E_{2*}))>0\), \(\mathrm{tr}(J(E_{2*}))<0\), we know that \(E_{2*}\) is a stable node. According to Theorem 3.1, when the discriminant \(\Delta=0\) of (3.1), the positive equilibria \(E_{1*}\) and \(E_{2*}\) will merge into a new point \(E_{3*}\). In the following, we discuss the stability of \(E_{3*}\). If \(\Delta=0\), we get \[q=\frac{a^{2}c^{2}+2acp+2ac+p^{2}-4a-2p+1}{4a\left(c^{2}p-cp-c+1\right)}\triangleq q _{*}.\] Substituting \(q=q_{*}\), \(x=x_{3}\), \(y=1-cx_{3}\) into (4.1), we find that \(\det(J(E_{3*}))=0\). Thus the positive equilibrium point \(E_{3*}\) is a degenerate equilibrium point, which we will analyze in more detail in the next step. We move equilibrium \(E_{3*}\) to the origin by transforming \((X,Y)=(x-x_{3},y-y_{3})\) and make Taylor's expansion around the origin, then system (2.1) becomes \[\left\{\begin{array}{l}\frac{\mathrm{d}X}{\mathrm{d}t}=e_{10}X+e_{01}Y+e_{11} XY+e_{20}X^{2}+e_{02}Y^{2}+e_{30}X^{3}+e_{03}Y^{3}+e_{21}X^{2}Y+e_{12}XY^{2}+P_{4} (X,Y),\\ \frac{\mathrm{d}Y}{\mathrm{d}t}=f_{10}X+f_{01}Y+f_{11}XY+f_{02}Y^{2},\end{array}\right. \tag{4.7}\] where \(f_{11}=-bc\), \(f_{02}=-b\), please see Appendix A for the rest of the parameters. We make the following transformations to system (4.7) \[\begin{bmatrix}X\\ Y\end{bmatrix}=\begin{bmatrix}\dfrac{\left(ca+2cp-p-1\right)a\left(cap+ca+p^{2} -2a-2p+1\right)}{\left(a^{2}c^{2}-p^{2}+2p-1\right)b\left(c^{2}p-cp-c+1\right) }&-\dfrac{1}{c}\\ 1&1\end{bmatrix}\begin{bmatrix}X_{1}\\ Y_{1}\end{bmatrix},\] and letting \(\tau=m_{1}t\), where \[\begin{split} m_{1}=&\dfrac{2a^{2}b\,c^{4}p+2a^{2}\left(p+1 \right)\left(2p+a-b\right)c^{3}+\left(\left(4a-2b\right)p^{3}+\left(-8a+4b \right)p^{2}+\left(-16a^{2}+4a-2b\right)p-4a^{3}+2b\,a^{2}\right)c^{2}}{\left(- 2+ac^{2}+\left(p+1\right)c\right)\left(ca+p-1\right)\left(ca-p+1\right)}\\ &+\dfrac{4\left(p+1\right)\left(\left(-\frac{a}{2}+\frac{b}{2} \right)p^{2}+\left(a-b\right)p+a^{2}-\frac{a}{2}+\frac{b}{2}\right)c-2b\left(p -1\right)^{2}}{\left(-2+ac^{2}+\left(p+1\right)c\right)\left(ca+p-1\right) \left(ca-p+1\right)},\end{split}\] for which we will retain \(t\) to denote \(\tau\) for notational simplicity. We get \[\left\{\begin{array}{l}\frac{\mathrm{d}X_{1}}{\mathrm{d}t}=X_{1}+0\cdot Y_{ 1}+\cdots\cdots\,\\ \frac{\mathrm{d}Y_{1}}{\mathrm{d}t}=0\cdot X_{1}+0\cdot Y_{1}+g_{02}Y_{1} ^{2}+\cdots\cdots\,\end{array}\right. 
\tag{4.8}\] where \[g_{02}=\dfrac{\left(ca+2cp-p-1\right)\left(a\,c^{2}+cp+c-2\right)^{3}ab\left(a^{2}c^{2}-p^{2}+2p-1\right)}{4\left(a^{2}b\,c^{4}p+a^{2}\left(p+1\right)\left(2p+a-b\right)c^{3}+\left(\left(2a-b\right)p^{3}+\left(-4a+2b\right)p^{2}+\left(-8a^{2}+2a-b\right)p-2a^{3}+b\,a^{2}\right)c^{2}+N\right)^{2}c},\] \[N=2\left(p+1\right)\left(\left(-\frac{a}{2}+\frac{b}{2}\right)p^{2}+\left(a-b\right)p+a^{2}-\frac{a}{2}+\frac{b}{2}\right)c-b\left(p-1\right)^{2}.\] According to Theorem 3.1, system (2.2) satisfies the conditions \(\Delta=0\) and \(2A_{1}+A_{2}>0\) when the positive equilibrium point \(E_{3*}\) exists. We substitute \(q=q_{*}\) into \(2A_{1}+A_{2}>0\) and simplify to get \(\dfrac{\left(ac-p+1\right)\left(ac^{2}+cp+c-2\right)}{2cp-2}>0\). With \(0<c<1\), we can determine that \(ac^{2}+cp+c-2<0\). If \(a^{2}c^{2}-p^{2}+2p-1=0\), solving gives \(a=\dfrac{1-p}{c}\triangleq a_{*}\). Substituting \(a=a_{*}\) into \(2A_{1}+A_{2}\) and simplifying gives \(2A_{1}+A_{2}=-2(p-1)q(c-1)\leq 0\), a contradiction; hence \(a^{2}c^{2}-p^{2}+2p-1\neq 0\). For \(ca+2cp-p-1\), if \(a=\dfrac{-2cp+p+1}{c}\triangleq a_{**}\) holds, then substituting \(q=q_{*}\) and \(a=a_{**}\) into \(A_{1}\), \(A_{2}\), and \(A_{3}\), one can check that \(A_{1}=(c-1)(cp-1)\), \(A_{2}=0\), and \(A_{3}=0\), which does not comply with the parameter assumptions of the previous section. From this, we know that \(ca+2cp-p-1\neq 0\). According to the above analysis, we can conclude that \(g_{02}\neq 0\). Hence, by Theorem 7.1 in Chapter 2 of [44], \(E_{3*}\) is a saddle-node. In summary, we derive the following theorem.

**Theorem 4.3**.: _The stability of the positive equilibria is as follows:_

1. \(E_{1*}\) _is a saddle._
2. \(E_{2*}\) _is a stable node._
3. \(E_{3*}\) _is a saddle-node._
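Theorem 4.3 can be spot-checked numerically: the sketch below evaluates the Jacobian (4.5) by central finite differences at the interior equilibria found from (3.1), using the same illustrative parameter set as before, and prints the eigenvalues (one positive and one negative for the saddle \(E_{1*}\); both negative for the stable node \(E_{2*}\)).

```python
import numpy as np

# Sketch: eigenvalues of the Jacobian (4.5) at the interior equilibria of
# (2.2), via central finite differences. Parameter values illustrative.
p, q, a, b, c = 0.2, 0.5, 0.1, 1.0, 0.8

F = lambda x, y: x * ((1 - x) * (x - p) / (1 + q * y) - a * y)
G = lambda x, y: b * y * (1 - y - c * x)

def jacobian(x, y, h=1e-6):
    return np.array([
        [(F(x + h, y) - F(x - h, y)) / (2 * h), (F(x, y + h) - F(x, y - h)) / (2 * h)],
        [(G(x + h, y) - G(x - h, y)) / (2 * h), (G(x, y + h) - G(x, y - h)) / (2 * h)],
    ])

A1 = a * c**2 * q + 1
A2 = -(2 * a * c * q + a * c + p + 1)
A3 = a + a * q + p
for x in sorted(np.roots([A1, A2, A3]).real):
    y = 1 - c * x
    if 0 < x < 1 and 0 < y < 1:
        ev = np.linalg.eigvals(jacobian(x, y))
        print(f"E* = ({x:.4f}, {y:.4f}): eigenvalues {np.round(ev, 4)}")
```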
## 5 Bifurcation Analysis

### Transcritical bifurcation

In proving Theorems 3.1 and 4.1, we found an interesting phenomenon: when \(c=1\), the positive equilibrium point \(E_{2*}\) merges with the boundary equilibrium point \(E_{1}\). Also, the stability of the boundary equilibrium point \(E_{1}\) changes as the parameter \(c\) moves between the intervals \((0,1)\) and \((1,+\infty)\). From this, we conjecture that system (2.2) experiences a transcritical bifurcation around \(E_{1}\), taking \(c\) as the bifurcation parameter (Fig. 4). We proceed to a rigorous proof below.

**Theorem 5.1**.: _System (2.2) undergoes a transcritical bifurcation around \(E_{1}\) at the bifurcation parameter threshold \(c_{TR}=1\) when \(p\neq 1-a\)._

Proof.: From Theorem 4.1, we know that the eigenvalues of \(J(E_{1})\) are \(\lambda_{11}=-1+p\), \(\lambda_{21}=0\) if \(c=c_{TR}=1\). Now, let \(\mathbf{V_{1}}=(v_{1},v_{2})^{T}\) and \(\mathbf{W_{1}}=(w_{1},w_{2})^{T}\) be the eigenvectors of \(J(E_{1})\) and \(J^{T}(E_{1})\) corresponding to \(\lambda_{21}=0\), respectively. By calculation, we obtain \[\mathbf{V_{1}}=\begin{bmatrix}v_{1}\\ v_{2}\end{bmatrix}=\begin{bmatrix}\dfrac{a}{p-1}\\ 1\end{bmatrix},\qquad\mathbf{W_{1}}=\begin{bmatrix}w_{1}\\ w_{2}\end{bmatrix}=\begin{bmatrix}0\\ 1\end{bmatrix}. \tag{5.1}\] We write \[Q(x,y)=\begin{bmatrix}F(x,y)\\ G(x,y)\end{bmatrix}=\begin{bmatrix}x\left[(1-x)(x-p)\dfrac{1}{1+qy}-ay\right]\\ by\left(1-y-cx\right)\end{bmatrix}.\] Furthermore, \[Q_{c}(E_{1};c_{TR})=\begin{bmatrix}\dfrac{\partial F}{\partial c}\\ \dfrac{\partial G}{\partial c}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix},\] \[DQ_{c}(E_{1};c_{TR})\mathbf{V_{1}}=\left[\begin{array}{cc}0&0\\ -by&-bx\end{array}\right]\Bigg{|}_{(E_{1};c_{TR})}\begin{bmatrix}\dfrac{a}{p-1}\\ 1\end{bmatrix}=\begin{bmatrix}0\\ -b\end{bmatrix},\] \[D^{2}Q(E_{1};c_{TR})(\mathbf{V_{1}},\mathbf{V_{1}})=\begin{bmatrix}\dfrac{\partial^{2}F}{\partial x^{2}}v_{1}^{2}+2\dfrac{\partial^{2}F}{\partial x\partial y}v_{1}v_{2}+\dfrac{\partial^{2}F}{\partial y^{2}}v_{2}^{2}\\ \dfrac{\partial^{2}G}{\partial x^{2}}v_{1}^{2}+2\dfrac{\partial^{2}G}{\partial x\partial y}v_{1}v_{2}+\dfrac{\partial^{2}G}{\partial y^{2}}v_{2}^{2}\end{bmatrix}\Bigg{|}_{(E_{1};c_{TR})}=\begin{bmatrix}-\dfrac{2\left(\left(p-1\right)^{2}q+a\right)a}{\left(p-1\right)^{2}}\\ \dfrac{2b\left(a+p-1\right)}{1-p}\end{bmatrix}.\] Thus, we have \[\mathbf{W_{1}}^{T}Q_{c}(E_{1};c_{TR})=0,\qquad\mathbf{W_{1}}^{T}\left[DQ_{c}(E_{1};c_{TR})\mathbf{V_{1}}\right]=-b\neq 0,\qquad\mathbf{W_{1}}^{T}\left[D^{2}Q(E_{1};c_{TR})(\mathbf{V_{1}},\mathbf{V_{1}})\right]=\frac{2b\left(a+p-1\right)}{1-p}\neq 0.\] According to _Sotomayor's Theorem_ [45], all the transversality conditions for system (2.2) to experience a transcritical bifurcation are satisfied, so system (2.2) undergoes a transcritical bifurcation around \(E_{1}\) at the bifurcation parameter threshold \(c_{TR}=1\).

Figure 4: Red, green, pink, and orange points indicate stable node, unstable node (source), saddle, and saddle-node, respectively. System (2.2) undergoes a transcritical bifurcation around \(E_{1}\).

### Saddle-node bifurcation

Under the condition \(2A_{1}+A_{2}>0\) and \(0<c<1\), we note that when \(\Delta<0\), \(\Delta=0\), and \(\Delta>0\), system (2.2) has 0, 1, and 2 positive equilibria, respectively. Therefore we consider system (2.2) undergoing a saddle-node bifurcation around the positive equilibrium point \(E_{3*}\), and we select the fear effect parameter \(q\) as the bifurcation parameter. By solving \(\Delta=0\), we obtain the bifurcation parameter threshold \(q_{SN}=q_{*}\). With \(\mathbf{V_{2}}\) and \(\mathbf{W_{2}}\) the eigenvectors of \(J(E_{3*})\) and \(J^{T}(E_{3*})\) corresponding to the zero eigenvalue, one can verify that \[\mathbf{W_{2}}^{T}\left[D^{2}Q(E_{3*};q_{SN})(\mathbf{V_{2}},\mathbf{V_{2}})\right]=H\neq 0.\] According to _Sotomayor's Theorem_ [45], all the transversality conditions for system (2.2) to experience a saddle-node bifurcation are satisfied, so system (2.2) undergoes a saddle-node bifurcation around \(E_{3*}\) at the bifurcation parameter threshold \(q_{SN}=q_{*}\).

Figure 5: Red, green, pink, and orange points indicate stable node, unstable node (source), saddle, and saddle-node, respectively. System (2.2) undergoes a saddle-node bifurcation around \(E_{3*}\).

**Remark 2**.: _This section discusses all possible bifurcations of system (2.2). The above analysis demonstrates that the value of the Allee effect parameter \(p\) determines whether a transcritical bifurcation of system (2.2) is possible. Moreover, variation of the fear effect parameter \(q\) causes a saddle-node bifurcation of system (2.2). Thus we can determine that both the Allee effect and the fear effect lead to complex dynamics in the classical Lotka-Volterra competition model._
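Both thresholds are easy to observe numerically. The sketch below (illustrative parameter values) tracks the zero eigenvalue of \(J(E_{1})\) as \(c\) crosses \(c_{TR}=1\), and the sign change of the discriminant \(\Delta(q)\) of (3.1) as the fear parameter crosses \(q_{SN}=q_{*}\).

```python
import numpy as np

# Sketch: numerical signatures of the two bifurcations of system (2.2).
# Parameter values are illustrative choices.
p, a, b, c = 0.2, 0.1, 1.0, 0.8

# (i) transcritical at E1: the eigenvalue b(1 - c) crosses zero at c = 1
for cc in [0.9, 1.0, 1.1]:
    print(f"c = {cc:.1f}: lambda_21 = {b * (1 - cc):+.2f}")

# (ii) saddle-node in q: Delta(q) changes sign at q = q_* (formula of Sec. 4)
q_star = (a**2 * c**2 + 2*a*c*p + 2*a*c + p**2 - 4*a - 2*p + 1) / \
         (4 * a * (c**2 * p - c*p - c + 1))
for q in [0.8 * q_star, q_star, 1.2 * q_star]:
    A1 = a * c**2 * q + 1
    A2 = -(2 * a * c * q + a * c + p + 1)
    A3 = a + a * q + p
    print(f"q/q_* = {q / q_star:.1f}: Delta = {A2**2 - 4 * A1 * A3:+.4f}")
```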
Classical theory will then yield global existence [46]. The usual norms in the spaces \(\mathbb{L}^{p}(\Omega)\), \(\mathbb{L}^{\infty}(\Omega)\), and \(\mathbb{C}\left(\overline{\Omega}\right)\) are respectively denoted by \[\left\|u\right\|_{p}^{p}=\int_{\Omega}\left|u(x)\right|^{p}dx,\quad\left\|u\right\|_{\infty}=\operatorname*{ess\,sup}_{x\in\Omega}\left|u(x)\right|,\quad\left\|u\right\|_{\mathbb{C}\left(\overline{\Omega}\right)}=\max_{x\in\overline{\Omega}}\left|u(x)\right|.\]

**Lemma 6.2**.: _Using the same notations and hypotheses as in Lemma 6.1, suppose moreover that \(f\) has at most polynomial growth and that there exists \(\mathbf{b}\in\mathbb{R}^{m}\) and a lower triangular invertible matrix \(P\) with nonnegative entries such that_ \[\forall r\in[0,+\infty)^{m},\quad Pf(r)\leq\left[1+\sum_{i=1}^{m}r_{i}\right]\mathbf{b}.\] _Then, for \(u_{0}\in L^{\infty}(\Omega,\mathbb{R}^{m}_{+}),\) the system (6.2) has a strong global solution._

Under these assumptions, the following local existence result is well known, see D. Henry [46].

**Theorem 6.3**.: _The system (6.2) admits a unique, classical solution \((u,v)\) on \([0,T_{\max})\times\Omega\). If \(T_{\max}<\infty\) then_ \[\lim_{t\nearrow T_{\max}}\Big{\{}\left\|u(t,.)\right\|_{\infty}+\left\|v(t,.)\right\|_{\infty}\Big{\}}=\infty, \tag{6.3}\] _where \(T_{\max}\) denotes the eventual blow-up time in \(\mathbb{L}^{\infty}(\Omega).\)_

The next result follows from the application of standard theory [49].

**Theorem 6.4**.: _Consider the reaction diffusion system (6.2). For spatially homogeneous initial data \(u_{0}\equiv c,v_{0}\equiv d\), with \(c,d>0\), the dynamics of (6.2) and of its resulting kinetic (ODE) system, obtained by setting \(d_{1}=d_{2}=0\) in (6.2), are equivalent._

Our objective now is to consider the case of a fear function that may be heterogeneous in space. A motivation for this comes from several ecological and sociological settings. For example, it is very common for prey to be highly fearful closer to a predator's lair, but less fearful in a region of refuge [50], or in regions of high density due to group defense [51]. To these ends, it is conceivable that the fear coefficient \(q\) is not a constant, but actually varies in the spatial domain \(\Omega\), so \(q=q(x)\), which could take different forms depending on the application at hand. This is also in line with the landscape of fear (LOF) concept [52].
Thus we consider the following spatially explicit version of (1.6), with heterogeneous fear function \(q(x)\) as well as the Allee effect, resulting in the following reaction diffusion system, \[\left\{\begin{array}{l}u_{t}=d_{1}\Delta u+u\left[(1-u)(u-p)\frac{1}{1+q(x)v}-av\right],\quad x\in\Omega,\\ v_{t}=d_{2}\Delta v+bv\left(1-v-cu\right),\quad x\in\Omega,\\ \frac{\partial u}{\partial\nu}=\frac{\partial v}{\partial\nu}=0,\quad\text{on}\quad\partial\Omega,\\ u(x,0)=u_{0}(x)\equiv c>0,\quad v(x,0)=v_{0}(x)\equiv d>0,\end{array}\right. \tag{6.4}\] where \(\Omega\subset\mathbb{R}^{n}\). Furthermore, we impose the following restrictions on the fear function \(q(x)\), \[\begin{split}(i)&\quad q(x)\in C^{1}(\Omega),\\ (ii)&\quad q(x)\geq 0,\\ (iii)&\quad\text{If }q(x)\equiv 0\text{ on }\Omega_{1}\subset\Omega,\text{ then }|\Omega_{1}|=0.\\ (iv)&\quad\text{If }q(x)\equiv 0\text{ on }\cup_{i=1}^{n}\Omega_{i}\subset\Omega,\text{ then }\Sigma_{i=1}^{n}|\Omega_{i}|=0.\end{split} \tag{6.5}\]

**Remark 3**.: _If \(q(x)\equiv 0\) on \(\Omega_{1}\subset\Omega\), with \(|\Omega_{1}|>\delta>0\), or \(q(x)\equiv 0\) on \(\cup_{i=1}^{n}\Omega_{i}\subset\Omega\), with \(\Sigma_{i=1}^{n}|\Omega_{i}|>\delta>0\), that is, on non-trivial parts of the domain, the analysis is notoriously difficult, as one is now dealing with a degenerate problem. See [53, 54] for results on this problem. This case is not in the scope of the current manuscript._

Since the nonlinear right hand side of (6.2) is continuously differentiable on \(\mathbb{R}^{+}\times\mathbb{R}^{+}\), for any initial data in \(\mathbb{C}\left(\overline{\Omega}\right)\) or \(\mathbb{L}^{p}(\Omega),\;p\in(1,+\infty)\), it is standard to estimate the \(\mathbb{L}^{p}-\)norms of the solutions and thus deduce global existence. Standard theory will apply even in the case of a bona fide fear function \(q(x)\), because, due to our assumptions (6.5) on the form of \(q\), standard comparison arguments will apply [55]. Thus, applying the classical methods above, via Theorem 6.3 and Lemmas 6.1-6.2, we can state the following lemmas:

**Lemma 6.5**.: _Consider the reaction diffusion system (6.4), for \(q(x)\) such that the assumptions (6.5) hold. Then solutions to (6.4) are non-negative, as long as they initiate from positive initial conditions._

**Lemma 6.6**.: _Consider the reaction diffusion system (6.4), for \(q(x)\) such that the assumptions (6.5) hold. The solutions to (6.4) are classical. That is, for \((u_{0},v_{0})\in\mathbb{L}^{\infty}(\Omega)\), \((u,v)\in C^{1}(0,T;C^{2}(\Omega))\), \(\forall T\)._

Our goal in this section is to investigate the dynamics of (6.4). Herein we will use the comparison technique, and compare to the ODE cases of classical competition, or the constant fear function case, where the dynamics are well known.
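To make conditions (6.5) concrete, the short Python sketch below (an illustrative check of our own, not part of the original analysis) inspects the candidate fear function \(q(x)=\sin^{2}(4x)\), which is the choice used in the simulations of Section 7; it vanishes only at isolated points, so its zero set has measure zero as required.

```python
import numpy as np

# Candidate fear function used later in the simulations (Section 7).
def q(x):
    return np.sin(4.0 * x) ** 2

x = np.linspace(0.0, 1.0, 10_001)   # fine grid on Omega = [0, 1]
qx = q(x)

# (i) C^1 smoothness: q is a composition of smooth functions, so this holds
#     analytically; numerically, the difference quotients stay bounded.
dq = np.gradient(qx, x)
assert np.all(np.isfinite(dq))

# (ii) non-negativity on Omega.
assert np.all(qx >= 0.0)

# (iii)/(iv) the zero set {x : q(x) = 0} has measure zero: q vanishes only at
#     the isolated points where sin(4x) = 0, i.e. x = k*pi/4.
zero_fraction = np.mean(np.isclose(qx, 0.0, atol=1e-12))
print(f"fraction of grid points with q = 0: {zero_fraction:.5f}")
```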
**Remark 4**.: _The analysis in this section is primarily focused on the choice of spatially homogeneous (flat) initial data._

Let us define the following auxiliary PDE systems, \[\begin{split}\overline{u}_{t}&=d_{1}\overline{u}_{xx}+\overline{u}\Big{[}(1-\overline{u})(\overline{u}-p)-a\overline{v}\Big{]},\\ \overline{v}_{t}&=d_{2}\overline{v}_{xx}+b\overline{v}\left(1-\overline{v}-c\overline{u}\right),\end{split} \tag{6.6}\] \[\begin{split}\widehat{u}_{t}&=d_{1}\widehat{u}_{xx}+\widehat{u}\Big{[}(1-\widehat{u})(\widehat{u}-p)\frac{1}{1+\widehat{\mathbf{q}}\widehat{v}}-a\widehat{v}\Big{]},\\ \widehat{v}_{t}&=d_{2}\widehat{v}_{xx}+b\widehat{v}\left(1-\widehat{v}-c\widehat{u}\right),\end{split} \tag{6.7}\] \[\begin{split}\widetilde{u}_{t}&=d_{1}\widetilde{u}_{xx}+\widetilde{u}\Big{[}(1-\widetilde{u})(\widetilde{u}-p)\frac{1}{1+\widetilde{\mathbf{q}}\widetilde{v}}-a\widetilde{v}\Big{]},\\ \widetilde{v}_{t}&=d_{2}\widetilde{v}_{xx}+b\widetilde{v}\left(1-\widetilde{v}-c\widetilde{u}\right),\end{split} \tag{6.8}\] \[\begin{split}\tilde{u}_{t}&=d_{1}\tilde{u}_{xx}+\tilde{u}\Big{[}(1-\tilde{u})(\tilde{u}-p)\frac{1}{1+\widetilde{\mathbf{q}}}-a\tilde{v}\Big{]},\\ \tilde{v}_{t}&=d_{2}\tilde{v}_{xx}+b\tilde{v}\left(1-\tilde{v}-c\tilde{u}\right),\end{split} \tag{6.9}\] where \[\widehat{\mathbf{q}}=\min_{x\in\Omega}q(x),\qquad\widetilde{\mathbf{q}}=\max_{x\in\Omega}q(x). \tag{6.10}\] We assume Neumann boundary conditions for all of the reaction diffusion systems (6.6)-(6.9). Also, in each of the systems we prescribe spatially homogeneous (flat) initial conditions \(u(x,0)=u_{0}(x)\equiv c>0,\quad v(x,0)=v_{0}(x)\equiv d>0\).

**Theorem 6.7**.: _Consider the reaction diffusion system (6.4) for the Allee effect with a fear function \(q(x)\), together with the reaction diffusion systems (6.6)-(6.9). Then the following pointwise comparison holds,_ \[\tilde{u}\leq\widetilde{u}\leq u\leq\widehat{u}\leq\overline{u}.\]

Proof.: By the positivity of the solutions to the reaction diffusion systems (6.7)-(6.9), and via comparison of (6.4) to the logistic equation, we obtain the upper bound \(v\leq 1\) for the second species. Hence, we have \[\frac{1}{1+\widetilde{\mathbf{q}}}\leq\frac{1}{1+\widetilde{\mathbf{q}}\widetilde{v}}\leq\frac{1}{1+q(x)v}\leq\frac{1}{1+\widehat{\mathbf{q}}\widehat{v}}\leq 1,\quad x\in\Omega.\] Hence, the result follows from the standard comparison theory [56].

Let us recall the notation: denote the discriminant of (3.1) by \[\Delta=A_{2}^{2}-4A_{1}A_{3}, \tag{6.11}\] where \[A_{1}=ac^{2}q+1,\quad A_{2}=-(2acq+ac+p+1),\quad A_{3}=a+aq+p.\]

**Theorem 6.8**.: _For the reaction diffusion system (6.4) for the Allee effect with a fear function \(q(x)\) that satisfies the parametric restriction_ \[\Big{(}2ac\mathbf{q}+ac+p+1\Big{)}^{2}<4\Big{(}ac^{2}\mathbf{q}+1\Big{)}\Big{(}a+a\mathbf{q}+p\Big{)} \tag{6.12}\] _for \(\mathbf{q}=\widehat{\mathbf{q}},\widetilde{\mathbf{q}}\), the solution \((u,v)\) to (6.4) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\) for any choice of flat initial data._

Proof.: Under the given parametric restriction for \(\mathbf{q}=\widehat{\mathbf{q}},\widetilde{\mathbf{q}}\), it is evident that the discriminant \(\Delta<0\), hence there is no real interior equilibrium. Moreover, from the geometry of the nullclines, we have \(\dfrac{\mathrm{d}u}{\mathrm{d}t}\leq\dfrac{\mathrm{d}v}{\mathrm{d}t}\).
Hence, we have \[(\widehat{u},\widehat{v})\to(0,1)\quad\&\quad(\widetilde{u},\widetilde{v})\to(0,1).\] Moreover, on using Theorem 6.7 we have \[\widetilde{v}\leq v\leq\overline{v},\] which entails \[\lim_{t\to\infty}(\widetilde{u},\widetilde{v})\leq\lim_{t\to\infty}(u,v)\leq\lim_{t\to\infty}(\widehat{u},\widehat{v}),\] subsequently, \[(0,1)\leq\lim_{t\to\infty}(u,v)\leq(0,1).\] Now, using a squeezing argument, in the limit that \(t\to\infty\) we have uniform convergence of solutions of (6.4), i.e., \[(u,v)\to(0,1)\] as \(t\to\infty\).

**Remark 5**.: _The global stability of the competitive exclusion state \((0,1)\) holds true for both the strong and weak Allee effect cases, which entails that Theorem 6.8 also stands true for both types of Allee effects (see Figs. 7, 8)._

We now provide a result that entails Theorem 6.8, but for any initial data.

**Corollary 6.9**.: _For the reaction diffusion system (6.4) with Allee effect and a fear function \(q(x)\) that satisfies the parametric restriction_ \[\Big{(}2ac\mathbf{q}+ac+p+1\Big{)}^{2}<4\Big{(}ac^{2}\mathbf{q}+1\Big{)}\Big{(}a+a\mathbf{q}+p\Big{)}, \tag{6.13}\] _for \(\mathbf{q}=\widehat{\mathbf{q}},\widetilde{\mathbf{q}},\) the solution \((u,v)\) to (6.4) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\) for initial data \((u_{0}(x),v_{0}(x))\in L^{\infty}(\Omega)\)._

Figure 6: A graphical representation of \(\Delta(p,q)\) defined by equation (6.11) is presented through a region plot and a line plot. These plots illustrate the relationship between the parameters \(p\) and \(q\) that measure the Allee and fear effects for the reaction-diffusion system described in equation (6.4). The values of \(a\), \(b\), and \(c\) are set to \(0.1\), \(0.7\), and \(0.3\), respectively. In the region plot, the red region indicates that the competitive exclusion state \((0,1)\) is globally stable for any initial data, while the blue region shows that the dynamics depend on the choice of initial data. The line plot specifically represents the case where the Allee effect is weak (\(p=0\)).

Figure 7: The parameters specified for the simulation of the reaction-diffusion system given in (6.4) on the spatial domain \(\Omega=[0,1]\) for the case of competitive exclusion with fear function \(q(x)=\sin^{2}(4x)\) and strong Allee effect are \(d_{1}=1,d_{2}=1,a=0.5,b=0.5,c=0.5\) and \(p=0.5\). Two different sets of initial data are used: (a) \([u_{0},v_{0}]=[0.1,0.1]\) and (b) \([u_{0},v_{0}]=[2,2]\). It should be noted that these parameters satisfy the parametric constraints specified in Theorem 6.8.

Proof.: The proof follows the proof of Theorem 6.8. We can show that the solution to (6.4), via comparison, is bounded by the solution to (6.6). Via the geometry of the nullclines and the parametric restrictions, the solution to (6.6) is bounded by the solution to the classical Lotka-Volterra model, with the same parametric restrictions, only without the Allee term. Next we can construct a Lyapunov function, \[E(u,v)=\int_{\Omega}\left(|u|^{2}+|v-1|^{2}\right)dx, \tag{6.14}\] for the classical Lotka-Volterra model. Via standard methods [57] we see that, for the parametric restrictions considered, \(\frac{d}{dt}E(u,v)\leq 0\), so \((u,v)\to(0,1)\) as \(t\to\infty\). This is the classical case of competitive exclusion. Now the geometry of the nullclines and the parametric restrictions enable us, via direct comparison, to show \((u,v)\to(0,1)\) as \(t\to\infty\), where \((u,v)\) is the solution to (6.4).
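The hypothesis (6.12)-(6.13) is easy to check numerically. The sketch below is an illustrative aid of our own: the values of \(a\) and \(c\) are borrowed from the Figure 6 caption (\(b\) plays no role in \(\Delta\)), and the scan locates where \(\Delta\) from (6.11) changes sign, which is the boundary between the two regions shown in Figure 6.

```python
import numpy as np

def discriminant(q, a, c, p):
    """Delta(q) = A2^2 - 4*A1*A3 from (6.11)."""
    A1 = a * c**2 * q + 1.0
    A2 = -(2.0 * a * c * q + a * c + p + 1.0)
    A3 = a + a * q + p
    return A2**2 - 4.0 * A1 * A3

# Parameters from the Figure 6 caption; p = 0 is the weak Allee line plot.
a, c, p = 0.1, 0.3, 0.0

# Condition (6.12)-(6.13) asks for Delta < 0 at both q_hat = min q(x)
# and q_tilde = max q(x); scanning q reveals the threshold fear level.
qs = np.linspace(0.0, 10.0, 1001)
signs = np.sign([discriminant(q, a, c, p) for q in qs])
crossings = qs[np.where(np.diff(signs) != 0)[0]]
print("Delta changes sign near q =", crossings)
```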
Figure 8: The parameters specified for the simulation of the reaction-diffusion system given in (6.4) on the spatial domain \(\Omega=[0,1]\) for the case of competitive exclusion with fear function \(q(x)=\sin^{2}(4x)\) and weak Allee effect are \(d_{1}=1,d_{2}=1,a=0.5,b=0.5,c=0.5\) and \(p=0\). Two different sets of initial data are used: (a) \([u_{0},v_{0}]=[0.2,0.6]\) and (b) \([u_{0},v_{0}]=[1,0.1]\). It should be noted that these parameters satisfy the parametric constraints specified in Theorem 6.8.

**Theorem 6.10**.: _For the reaction diffusion system (6.4) for the strong Allee effect with a fear function \(q(x)\) that satisfies the parametric restriction_ \[\Delta>0,\quad 2A_{1}+A_{2}>0,\quad 1\leq c\leq\frac{1}{\mathbf{q}}+1,\quad\text{and}\quad 0<p<\frac{1}{c} \tag{6.15}\] _for \(\mathbf{q}=\widehat{\mathbf{q}},\widetilde{\mathbf{q}}\), there exist sufficiently small initial data \([u_{0}(x),v_{0}(x)]\) such that the solution \((u,v)\) to (6.4) converges uniformly to the spatially homogeneous state \((1,0)\) as \(t\to\infty\), while there exist also sufficiently large initial data \([u_{1}(x),v_{1}(x)]\) for which the solution \((u,v)\) to (6.4) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\)._

Proof.: Consider the reaction diffusion system (6.7). Since \(\widehat{\mathbf{q}}\) satisfies the parametric restriction, from Theorem 3.1(c) and Theorem 4.3, there exists an interior saddle equilibrium \(E_{1*}\) of the kinetic (ODE) system for (6.7). On making use of the stable manifold theorem [45], i.e., \(\exists\;W^{1}_{s}(E_{1*})\in\mathcal{C}^{1}\) separatrix, such that for initial data \((\widehat{u}_{0},\widehat{v}_{0})\) chosen above \(W^{1}_{s}(E_{1*})\) the solution \((\widehat{u},\widehat{v})\to(0,1)\), and for initial data chosen below \(W^{1}_{s}(E_{1*})\), \((\widehat{u},\widehat{v})\to(1,0)\). Moreover, noticing that \(\dfrac{1}{1+\widetilde{\mathbf{q}}v}\leq\dfrac{1}{1+\widehat{\mathbf{q}}v}\), we have that the kinetic (ODE) system for (6.8) still remains in the strong competition case, and via standard theory again, \(\exists\;W_{s}(E_{1**})\in\mathcal{C}^{1}\) separatrix, such that for initial data \((\widetilde{u}_{0},\widetilde{v}_{0})\) chosen above \(W_{s}(E_{1**})\) the solution \((\widetilde{u},\widetilde{v})\to(0,1)\), and for initial data chosen below \(W_{s}(E_{1**})\), \((\widetilde{u},\widetilde{v})\to(1,0)\). Here \(E_{1**}\) is the interior saddle equilibrium of the kinetic (ODE) system for (6.8). Now, since \(\dfrac{1}{1+\widetilde{\mathbf{q}}v}\leq\dfrac{1}{1+\widehat{\mathbf{q}}v}\), the \(u\) component of \(E_{1**}\) is greater than the \(u\) component of \(E_{1*}\). Now, using the \(\mathcal{C}^{1}\) property of the separatrices \(W^{1}_{s}(E_{1*}),W_{s}(E_{1**})\), we have the existence of a wedge \(\mathbb{V}\) emanating from \(E_{1*}\), s.t. within \(\mathbb{V}\) we have \(W^{1}_{s}(E_{1*})\leq W_{s}(E_{1**})\). Note via Theorem 6.7 we have \(\widetilde{u}\leq u\leq\widehat{u}\). Let us consider positive initial data \((u_{0},v_{0})\) chosen small enough, within \(\mathbb{V}\), s.t.
\((u_{0},v_{0})<W^{1}_{s}(E_{1*})\leq W_{s}(E_{1**})\), we will have \[\Big{\{}(1,0)\Big{\}}\leq\Big{\{}(u,v)\Big{\}}\leq\Big{\{}(1,0)\Big{\}}.\] On the other hand, for sufficiently large initial data \((u_{1},v_{1})\), via an analogous construction we will have \[\Big{\{}(0,1)\Big{\}}\leq\Big{\{}(u,v)\Big{\}}\leq\Big{\{}(0,1)\Big{\}}.\] This proves the theorem.

Figure 9: The parameters specified for the simulation of the reaction-diffusion system given in (6.4) on the spatial domain \(\Omega=[0,1]\) for the case of competitive exclusion with the fear function \(q(x)=0.8+\sin^{2}(4x)\) are \(d_{1}=1,d_{2}=1,a=0.5,b=0.5,c=0.5\) and \([u_{0}(x),v_{0}(x)]=[\sin^{2}(x),\cos^{2}(x)]\). (a) Strong Allee \(p=0.5\) and (b) weak Allee \(p=0\). It should be noted that these parameters satisfy the parametric constraints specified in Corollary 6.9.

The above theorem can also be stated for the weak Allee model, and the proof follows the same argument as that of Theorem 6.10.

**Theorem 6.11**.: _For the reaction diffusion system (6.4) for the strong Allee effect with a fear function \(q(x)\) that satisfies the parametric restriction_ \[\Delta>0,\quad 2A_{1}+A_{2}>0,\quad\text{and}\quad c=1 \tag{6.16}\] _for \(\mathbf{q}=\widehat{\mathbf{q}},\widetilde{\mathbf{q}}\), there exist sufficiently large initial data \([u_{0}(x),v_{0}(x)]\) such that the solution \((u,v)\) to (6.4) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\), while there exist also sufficiently small initial data \([u_{1}(x),v_{1}(x)]\) for which the solution \((u,v)\) to (6.4) converges uniformly to the spatially homogeneous state \((1,0)\) as \(t\to\infty\)._

**Remark 6**.: _For the reaction diffusion system (6.4) for the strong Allee effect with a fear function \(q(x)\) such that the parametric restriction_ \[\Delta>0\quad\&\quad c>1 \tag{6.17}\] _holds true for \(\mathbf{q}=\widehat{\mathbf{q}},\widetilde{\mathbf{q}}\), there exist sufficiently small initial data \([u_{0}(x),v_{0}(x)]\) for which the solution \((u,v)\) to (6.4) converges uniformly to the spatially homogeneous state \((1,0)\) as \(t\to\infty\)._

Denote by \(q_{c}\) the critical amount of fear required to transition from the globally stable competitive exclusion state \((0,v^{*})\) to the other dynamical cases, i.e., for any \(c\in(0,1)\) and \(\mathbf{q}\in[0,q_{c}]\), we have \[\Delta(\mathbf{q})>0\quad\&\quad 2A_{1}+A_{2}>0.\]

**Theorem 6.12**.: _For the reaction diffusion system (6.4) for the Allee effect (both strong and weak) with a fear function \(q_{\epsilon}(x)\), where \(0<\epsilon\ll 1\), such that the parametric restriction_ \[\Delta>0,\qquad 2A_{1}+A_{2}>0,\quad\text{and}\quad 0<c<1 \tag{6.18}\] _holds true for \(\widetilde{\mathbf{q}_{\epsilon}}\) and \(q(x)\equiv 0\), there exist initial data \([u_{0}(x),v_{0}(x)]\) such that the solution \((u,v)\) to (6.4) converges uniformly to \((u^{*},v^{*})\) as \(t\to\infty\), while for some choice of initial data \([u_{1}(x),v_{1}(x)]\) the solution \((u,v)\) to (6.4) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\)._

Proof.: Given \(\epsilon\) such that \(0<\epsilon\ll 1\), we can always construct a fear function \(q_{\epsilon}(x)\) such that \(q_{c}-\epsilon\leq\widehat{\mathbf{q}}_{\epsilon}\) and \(\widetilde{\mathbf{q}}_{\epsilon}\leq q_{c}+\epsilon\). Hence, from Theorem 6.7, we have \(\widetilde{u}\leq u_{\epsilon}\leq\overline{u}\). Now, consider the reaction diffusion system (6.6).
Since \(q(x)\equiv 0\) satisfies the parametric restriction, from Theorem 3.1(a) and Theorem 4.3, there exist an interior saddle equilibrium \(E_{1*}\) and an interior stable equilibrium \(E_{2*}\) of the kinetic (ODE) system for (6.6). On making use of the stable manifold theorem, i.e., \(\exists\;W_{s}^{1}(E_{1*})\in\mathcal{C}^{1}\) separatrix, such that for initial data \((\overline{u}_{0},\overline{v}_{0})\) chosen above \(W_{s}^{1}(E_{1*})\) the solution \((\overline{u},\overline{v})\to(0,1)\), and for initial data chosen below \(W_{s}^{1}(E_{1*})\), \((\overline{u},\overline{v})\to(u^{*},v^{*})\) as \(t\to\infty\). Also, for the reaction diffusion system (6.8), since \(q_{\epsilon}\) satisfies the parametric restriction, from Theorem 3.1(a) and Theorem 4.3, there exist an interior saddle equilibrium \(E_{1**}\) and an interior stable equilibrium \(E_{2**}\) of the kinetic (ODE) system.

Figure 11: The parameters specified for the simulation of the reaction-diffusion system given in (6.4) on the spatial domain \(\Omega=[0,1]\) for strong competition with fear function \(q(x)=0.1+\sin^{2}(4x)\) and strong Allee effect are \(d_{1}=1\), \(d_{2}=1\), \(a=0.5\), \(b=0.5\), \(c=1\), and \(p=0.2\). Two different sets of initial data are used: (a) \([u_{0},v_{0}]=[0.2,0.6]\) and (b) \([u_{0},v_{0}]=[1,0.1]\). It should be noted that these parameters satisfy the parametric constraints specified in Theorem 6.10.

Via standard theory again, \(\exists\;W_{s}(E_{1**})\in\mathcal{C}^{1}\) separatrix, such that for initial data \((\widetilde{u}_{0},\widetilde{v}_{0})\) chosen above \(W_{s}(E_{1**})\) the solution \((\widetilde{u},\widetilde{v})\to(0,1)\), and for initial data chosen below \(W_{s}(E_{1**})\), \((\widetilde{u},\widetilde{v})\to(u^{**},v^{**})\) as \(t\to\infty\). Now, since \(\dfrac{1}{1+\widetilde{\mathbf{q}}v}\leq\dfrac{1}{1+\widehat{\mathbf{q}}v}\), the \(u\) component of \(E_{1**}\) is greater than the \(u\) component of \(E_{1*}\). Now, using the \(\mathcal{C}^{1}\) property of the separatrices \(W_{s}^{1}(E_{1*}),W_{s}(E_{1**})\), we have the existence of a wedge \(\mathbb{V}_{1}\) emanating from \(E_{1*}\), s.t. within \(\mathbb{V}_{1}\) we have \(W_{s}^{1}(E_{1*})\leq W_{s}(E_{1**})\). Similarly, since the \(v\) component of \(E_{1*}\) is higher than the \(v\) component of \(E_{1**}\), we have the existence of a wedge \(\mathbb{V}_{2}\) emanating from \(E_{1*}\), s.t. within \(\mathbb{V}_{2}\) we have \(W_{s}^{1}(E_{1*})\geq W_{s}(E_{1**})\). Let us consider positive initial data \((u_{0},v_{0})\) chosen small enough, within \(\mathbb{V}_{1}\), s.t. \((u_{0},v_{0})<W_{s}^{1}(E_{1*})<W_{s}(E_{1**})\); we will have \((\overline{u},\overline{v})\to(u^{*},v^{*})\) and \((\widetilde{u},\widetilde{v})\to(u^{**},v^{**})\). Note the spatially homogeneous limits may be different. Hence, from a squeezing argument, we can take \(\epsilon\to 0\) to yield the uniform convergence of solutions, i.e., \[\lim_{\epsilon\to 0}\lim_{t\to\infty}(u,v)\to(u^{*},v^{*}).\] On the other hand, let us consider sufficiently large positive initial data \((u_{1},v_{1})\), within \(\mathbb{V}_{2}\), s.t. \((u_{1},v_{1})>W_{s}^{1}(E_{1*})\geq W_{s}(E_{1**})\); we will have \((\overline{u},\overline{v})\to(0,1)\) and \((\widetilde{u},\widetilde{v})\to(0,1)\), i.e., \[\Big{\{}(0,1)\Big{\}}\leq\Big{\{}(u,v)\Big{\}}\leq\Big{\{}(0,1)\Big{\}}.\] This proves the theorem.
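Since \(\Delta(\mathbf{q})\) from (6.11) is a polynomial, hence continuous, function of \(\mathbf{q}\), the critical fear level \(q_{c}\) used above can be approximated by bisection on the sign of \(\Delta\). The following sketch is our own illustrative aid, not part of the proof; the parameter values are borrowed from the Figure 14 caption, and the side condition \(2A_{1}+A_{2}>0\) can be checked on \([0,q_{c}]\) in the same way.

```python
def Delta(q, a, c, p):
    """Discriminant Delta(q) = A2^2 - 4*A1*A3 from (6.11)."""
    A1 = a * c**2 * q + 1.0
    A2 = -(2.0 * a * c * q + a * c + p + 1.0)
    A3 = a + a * q + p
    return A2**2 - 4.0 * A1 * A3

def critical_fear(a, c, p, q_hi=100.0, tol=1e-10):
    """Bisection for q_c: the fear level at which Delta changes sign,
    assuming Delta(0) > 0 and Delta(q_hi) < 0."""
    lo, hi = 0.0, q_hi
    assert Delta(lo, a, c, p) > 0 > Delta(hi, a, c, p)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Delta(mid, a, c, p) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Parameters from the Figure 14 caption: a = 0.2, c = 0.9, p = 0.3 (strong Allee).
q_c = critical_fear(0.2, 0.9, 0.3)
print(f"q_c = {q_c:.6f}")
# For q in [0, q_c] interior equilibria exist (Delta > 0, 2*A1 + A2 > 0);
# beyond q_c the competitive exclusion state (0, 1) takes over.
```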
Figure 12: The parameters specified for the simulation of the reaction-diffusion system given in (6.4) on the spatial domain \(\Omega=[0,1]\) for strong competition with fear function \(q(x)=0.1+\sin^{2}(4x)\) and weak Allee effect are \(d_{1}=1\), \(d_{2}=1\), \(a=0.5\), \(b=0.5\), \(c=1\), and \(p=0\). Two different sets of initial data are used: (a) \([u_{0},v_{0}]=[0.2,0.6]\) and (b) \([u_{0},v_{0}]=[1,0.1]\). It should be noted that these parameters satisfy the parametric constraints specified in Theorem 6.11.

The numerical simulations (see Figures 14 and 15) motivate us to formulate a conjecture:

**Conjecture 6.1**.: _For the reaction diffusion system (6.4) for the Allee effect (both strong and weak) with a fear function \(q(x)\) such that the parametric restriction_ \[\Delta>0,\qquad 2A_{1}+A_{2}>0,\quad\text{and}\quad 0<c<1 \tag{6.19}\] _holds true for \(\widehat{\mathbf{q}}\) and \(\widetilde{\mathbf{q}}\), there exist initial data \([u_{0}(x),v_{0}(x)]\) such that the solution \((u,v)\) to (6.4) converges uniformly to \((u^{*},v^{*})\) as \(t\to\infty\), while for some choice of initial data \([u_{1}(x),v_{1}(x)]\) the solution \((u,v)\) to (6.4) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\)._

Let us make some observations about the spatially homogeneous case. This is stated next.

**Theorem 6.13**.: _For the reaction-diffusion system (6.4), for the Allee effect with a fear function \(q(x)\equiv q>0\), where \(q\) is a pure constant, the system does not possess diffusion-driven instability for any range of parameters or diffusion coefficients._

Proof.: The Jacobian matrix at a positive interior equilibrium, denoted as \(E_{i*}\), for the reaction-diffusion system (6.4) can be expressed as follows: \[J(u_{*},v_{*})=\begin{bmatrix}u_{*}\dfrac{\partial f}{\partial u_{*}}&u_{*}\dfrac{\partial f}{\partial v_{*}}\\ \\ v_{*}\dfrac{\partial g}{\partial u_{*}}&v_{*}\dfrac{\partial g}{\partial v_{*}}\end{bmatrix}\triangleq\begin{bmatrix}B_{1}&B_{2}\\ B_{3}&B_{4}\end{bmatrix},\] where \[B_{1}=\frac{u_{*}\left(-2u_{*}+p+1\right)}{qv_{*}+1},\quad B_{2}=u_{*}\left(-\frac{\left(1-u_{*}\right)\left(u_{*}-p\right)q}{\left(qv_{*}+1\right)^{2}}-a\right)<0,\] \[B_{3}=-bcv_{*}<0,\quad\&\quad B_{4}=-bv_{*}<0.\] Clearly, the sign of \(B_{i}\) is negative for \(i\in\{2,3,4\}\). Note that \((u_{*},v_{*})\) is dynamically stable, and \[\det(J(E_{i*}))=\left.\left[u_{*}v_{*}\frac{\partial f}{\partial v_{*}}\frac{\partial g}{\partial v_{*}}\left(\frac{\mathrm{d}v_{*}^{(g)}}{\mathrm{d}u_{*}}-\frac{\mathrm{d}v_{*}^{(f)}}{\mathrm{d}u_{*}}\right)\right]\right|_{(u_{*},v_{*})}=\left.B_{1}B_{4}-B_{2}B_{3}\right|_{(u_{*},v_{*})}>0.\] Since \(B_{2}B_{3}>0\), it follows that \(B_{1}B_{4}=\det(J(E_{i*}))+B_{2}B_{3}>0\); as \(B_{4}<0\), we must have \(B_{1}<0\). Turing instability requires the two diagonal entries \(B_{1}\) and \(B_{4}\) to have opposite signs, which is violated here. Consequently, standard theory [58] implies that diffusion-driven instability cannot exist for any range of parameters or diffusion coefficients.

## 7 Numerical Simulations

The MATLAB R2021b software was utilized to perform PDE simulations for the reaction-diffusion system featuring the Allee effect, with consideration for a spatially heterogeneous fear function \(q(x)\). The pdepe function was used to solve the 1-D initial boundary value problems.
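The authors report using MATLAB's pdepe; an equivalent computation is easy to reproduce with a method-of-lines discretization. The sketch below is an illustrative Python analogue of our own, not the authors' code, with the zero-flux Neumann conditions of (6.4) imposed via ghost-point reflection and the parameters borrowed from the Figure 7 caption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from the Figure 7 caption (strong Allee, competitive exclusion).
d1, d2 = 1.0, 1.0
a, b, c, p = 0.5, 0.5, 0.5, 0.5
n = 200                          # number of grid points on Omega = [0, 1]
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
q = np.sin(4.0 * x) ** 2         # heterogeneous fear function q(x)

def laplacian(w):
    """Second difference with homogeneous Neumann (zero-flux) boundaries."""
    lap = np.empty_like(w)
    lap[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
    lap[0] = 2.0 * (w[1] - w[0]) / dx**2      # ghost-point reflection at x = 0
    lap[-1] = 2.0 * (w[-2] - w[-1]) / dx**2   # ghost-point reflection at x = 1
    return lap

def rhs(t, z):
    u, v = z[:n], z[n:]
    du = d1 * laplacian(u) + u * ((1.0 - u) * (u - p) / (1.0 + q * v) - a * v)
    dv = d2 * laplacian(v) + b * v * (1.0 - v - c * u)
    return np.concatenate([du, dv])

# Flat initial data [u0, v0] = [0.1, 0.1], as in Figure 7(a).
z0 = np.concatenate([0.1 * np.ones(n), 0.1 * np.ones(n)])
sol = solve_ivp(rhs, (0.0, 100.0), z0, method="BDF", rtol=1e-6, atol=1e-8)
u_final, v_final = sol.y[:n, -1], sol.y[n:, -1]
print(u_final.max(), v_final.min())  # expect u -> 0, v -> 1 (exclusion)
```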
The simulation was executed on an 8-core Apple M1 Pro workstation, and lasted between 5 and 7 seconds when the unit interval \([0,1]\) was taken as the spatial domain, partitioned into 1000 sub-intervals. Our theoretical results and conjecture for the spatially explicit setting were validated numerically via a time series analysis over an extended period, using simulations with parameters that adhered to the theorems' parametric constraints. We utilized the standard comparison theory in the spatially explicit setting to determine pointwise restrictions on the fear function \(q(x)\), which resulted in competitive exclusion, strong competition, and multiple-equilibria type dynamics for the reaction-diffusion system with spatially heterogeneous fear function and Allee effect (weak or strong). Theorems 6.8, 6.10, 6.11, and 6.12, and Conjecture 6.1 demonstrated these outcomes, while Figs. 7, 8, 11, 12, 14, and 15 were used to validate the numerical results. Theoretical results were validated numerically using various heterogeneous fear functions, with captions specifying all parameters used for the numerical simulations and the relevant theorems. Based on the model and the comparison to the logistic equation, it is evident that the population of either species cannot exceed one, so all parameters were chosen within the range \([0,1]\).

Figure 14: The parameters specified for the simulation of the reaction-diffusion system given in (6.4) on the spatial domain \(\Omega=[0,1]\) for the case of two positive interior equilibria with fear function \(q(x)=0.1+\sin^{2}(4x)\) and strong Allee effect are \(d_{1}=1,d_{2}=1,a=0.2,b=0.9,c=0.9\) and \(p=0.3\). Two different sets of initial data are used: (a) \([u_{0},v_{0}]=[0.1,0.8]\) and (b) \([u_{0},v_{0}]=[0.6,0.2]\). It should be noted that these parameters satisfy the parametric constraints specified in Conjecture 6.1.

## 8 Discussion and Conclusion

In this paper, we study a competitive system in which the first species is affected by both the Allee and fear effects. First, we determine that system (2.2) always has four boundary equilibria. Setting \(f(x,y)=g(x,y)=0\) yields the quadratic equation (3.1), whose real roots are the horizontal coordinates of the interior equilibria. To be consistent with the biological meaning, we only study the positive equilibria. According to Propositions 2.1 and 2.3, we know that the solution of system (2.2) needs to satisfy \(0<x(t)<1\) and \(0<y(t)<1\). Thus we obtain the conditions to be satisfied by the parameters of system (2.2) in the presence of all positive equilibria.

Figure 15: The parameters specified for the simulation of the reaction-diffusion system given in (6.4) on the spatial domain \(\Omega=[0,1]\) for the case of two positive interior equilibria with fear function \(q(x)=0.1+\sin^{2}(4x)\) and weak Allee effect are \(d_{1}=1,d_{2}=1,a=0.2,b=0.9,c=0.9\) and \(p=0\). Two different sets of initial data are used: (a) \([u_{0},v_{0}]=[0.1,0.8]\) and (b) \([u_{0},v_{0}]=[0.6,0.2]\). It should be noted that these parameters satisfy the parametric constraints specified in Conjecture 6.1.

Next, we analyze the stability of the equilibria. Substituting the boundary equilibria into the Jacobian matrix of system (2.2) for verification, it is found that the boundary equilibria \(E_{1}\) and \(E_{2}\) may be degenerate equilibria under certain conditions. We use Theorem 7.1 in Chapter 2 in [44] to determine and obtain the stability of \(E_{1}\) and \(E_{2}\) under different conditions.
Therefore, it can be concluded that a change in the Allee effect parameter \(p\) also leads to a change in the extinction dynamics of system (2.2). For the positive equilibria \(E_{1*}\) and \(E_{2*}\), we transform the Jacobian matrix of system (2.2) into the form of (6.0.1). By comparing the magnitudes of the tangents of the two isoclines at their intersection, we can determine that \(E_{1*}\) is always a saddle and \(E_{2*}\) is a stable node. By changing the value of the fear effect parameter \(q\), \(E_{1*}\) and \(E_{2*}\) recombine into \(E_{3*}\) when \(q=q_{*}\). By calculating \(J(E_{3*})\), it is found that \(E_{3*}\) is a degenerate equilibrium point. Similarly, we translate \(E_{3*}\) to the origin and perform a Taylor expansion on it, using Theorem 7.1 in Chapter 2 in [44] to find that \(E_{3*}\) is a saddle-node.

In studying the existence of equilibria, we find that the positive equilibrium point \(E_{2*}\) coincides with the boundary equilibrium point \(E_{1}\) when \(c=1\). We consider that system (2.2) undergoes a transcritical bifurcation at \(E_{1}\). We choose \(c_{TR}\) as the bifurcation parameter and use _Sotomayor's Theorem_ to prove the transversality conditions for system (2.2) to undergo a transcritical bifurcation. Similarly, we also find that the positive equilibria \(E_{1*}\) and \(E_{2*}\) merge into \(E_{3*}\) when \(q=q_{*}\). Choosing \(q_{SN}\) as the bifurcation parameter and using _Sotomayor's Theorem_, we prove that system (2.2) undergoes a saddle-node bifurcation around \(E_{3*}\). In summary, when \(0<q<q_{*}\), \(2A_{1}+A_{2}>0\), and \(0<c<1\), there is a stable positive equilibrium point \(E_{2*}\), i.e., the two species can maintain a coexistence relationship under these conditions. This article thus offers some guidance for the conservation of species diversity.

It is worthwhile to comment to what degree the addition of the Allee effect changes the dynamical features of the model presented in [40], that is, when there is only the fear effect present. We discuss this in terms of the classical scenarios of competitive exclusion, weak competition, and strong competition; see Appendix B. In a weak competition type scenario, classically there is one globally attracting interior equilibrium. The effect of fear on the species \(u\) does not dynamically change this situation for small values of the fear parameter, in that there remains one interior equilibrium which is globally attracting. However, intermediate values of fear could yield two interior equilibria, and large values of fear can enable a competitive exclusion type scenario where \((0,v^{*})\) becomes globally attracting. However, with an added weak Allee effect, two interior equilibria will always occur for small values of fear (see Fig. 16), where by Theorem 4.3 one is stable and one is a saddle. This creates a bi-stability situation whereby attraction to an interior equilibrium is possible for certain initial conditions, while certain other initial conditions are attracted to \((0,v^{*})\); this is seen by comparing (b) to (d) in Fig. 16. Such monotonicity breaking behavior has been recently observed in variations of the classical competition model [41]. Note that if fear values are chosen large, one can again have a competitive exclusion type scenario where \((0,v^{*})\) becomes globally attracting. Similar dynamics are seen with a strong Allee effect in place.
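The initial-condition dependence described above can be observed directly by integrating the kinetic (ODE) system. The following sketch is an illustrative experiment of our own; the parameters mirror the Figure 14 caption, with the heterogeneous fear function replaced by a representative constant \(q=0.6\) lying in the range of \(q(x)=0.1+\sin^{2}(4x)\).

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, p, q = 0.2, 0.9, 0.9, 0.3, 0.6   # strong Allee; q frozen at a constant

def kinetics(t, z):
    x, y = z
    dx = x * ((1.0 - x) * (x - p) / (1.0 + q * y) - a * y)
    dy = b * y * (1.0 - y - c * x)
    return [dx, dy]

# Two initial conditions on opposite sides of the stable manifold of the
# interior saddle are expected to reach different attractors (cf. Fig. 14).
for x0, y0 in [(0.1, 0.8), (0.6, 0.2)]:
    sol = solve_ivp(kinetics, (0.0, 500.0), [x0, y0], rtol=1e-9, atol=1e-12)
    print(f"start ({x0}, {y0}) -> end ({sol.y[0, -1]:.4f}, {sol.y[1, -1]:.4f})")
```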
Note that for a large enough Allee threshold or fear effect, a collision of the two interior equilibria is possible via a saddle-node bifurcation, leading to no interior equilibria. In this setting we have a competitive exclusion type scenario with \((0,v^{*})\) being globally attracting. Thus the combination of the Allee threshold and the fear parameter leads to qualitatively different dynamics than with only a fear effect, in particular when the fear effect is _small_. In the case of a competitive exclusion type scenario with \((u^{*},0)\) being globally attracting, a large enough fear effect in \(u\) can cause the occurrence of one interior equilibrium, which is a saddle, resulting in a bi-stability type situation. No effect is seen in the event of small fear. However, with an added weak Allee effect, one interior equilibrium will always occur, for any value of fear, small or large; see Fig. 17. Note that with an added strong Allee effect, for any level of fear there always exists an Allee threshold s.t. competitive exclusion can be changed to a strong competition type scenario; that is, \((0,v^{*})\) becomes attracting for certain initial data while \((u^{*},0)\) is attracting for certain other initial data, see Fig. 17.

These dynamics have interesting applications for bio-control. If we envision a situation where an invasive/pest population \(u\) is attempted to be controlled by an introduced predator population \(v\), where the invader is not allowing the introduced species to establish (so competitive exclusion of \(v\) is seen), then knowledge that there exists a weak Allee effect in \(u\) does not warrant investment in \(v\)'s that could incite fear; rather, control should occur by direct consumptive effects. The reason being, any level of fear will not be sufficient to eradicate \(u\), as seen in Fig. 17 (d), (e). However, if a strong Allee effect is present in \(u\), managers could think of how that threshold could be manipulated or increased; this could "almost" reverse competitive exclusion and eradicate \(u\) [5] for most initial conditions, although \((u^{*},0)\) would still be locally attracting. Alternatively, if the invader and the introduced species are coexisting, then irrespective of the type of Allee effect, investment into inciting large enough fear in \(u\) could be fruitful and lead to its eradication; see Fig. 17 (g). Thus a key theoretical direction for bio-control applications is to investigate ecological mechanisms in competitive systems such that Theorem 4.3 _changes_, and one can have an interior equilibrium that changes in stability, perhaps as the equilibrium moves from the positive to the negative quadrant. This could create geometries whereby a globally attracting situation would change to competitive exclusion of one species, or a bi-stability (strong competition) scenario would change to a globally attracting situation.

In summary, the Allee effect, weak or strong, opposes the fear effect in the regime of small fear, in that it qualitatively changes the dynamics from a purely fearful situation. However, the Allee effect reinforces the fear effect in the regime of large fear. Thus we propose that quantification of the fear levels incited by introduced predators/competitors is a crucial step in designing effective bio-control strategies, if there are Allee effects present.
## 9 Appendix A

\[e_{10}=\frac{2\left(a\left(p+1\right)c+p^{2}-2a-2p+1\right)c\left(\left(a+2p\right)c-p-1\right)a}{\left(a^{2}c^{2}-p^{2}+2p-1\right)\left(a\,c^{2}+cp+c-2\right)}\]
\[e_{01}=\frac{2\left(a\left(p+1\right)c+p^{2}-2a-2p+1\right)\left(\left(a+2p\right)c-p-1\right)a}{\left(a^{2}c^{2}-p^{2}+2p-1\right)\left(a\,c^{2}+cp+c-2\right)}\]
\[e_{20}=\frac{2\left(a\left(p+1\right)c^{2}+\left(p^{2}-3a-4p+1\right)c+p+1\right)a}{a^{2}c^{2}-\left(p-1\right)^{2}}\]
\[e_{11}=-\frac{\left(a\left(p+1\right)c+p^{2}-2a-2p+1\right)}{\left(ac+p-1\right)^{2}\left(ac-p+1\right)^{2}\left(-1+c\right)\left(cp-1\right)}e_{110}\]
\[e_{110}=\left(a^{3}c^{4}+3\left(\frac{4p}{3}+a\right)\left(p+1\right)a\,c^{3}+e_{1100}+\left(-3p^{3}+3p^{2}+\left(4a+3\right)p+4a-3\right)c+2\left(p-1\right)^{2}\right)a\]
\[e_{1100}=\left(4p^{3}+\left(-a-8\right)p^{2}+\left(-14a+4\right)p-6a^{2}-a\right)c^{2}\]
\[e_{02}=-\frac{\left(\left(a+2p\right)c-p-1\right)\left(a^{2}c^{2}+2a\left(p+1\right)c+p^{2}-4a-2p+1\right)^{2}a}{2\left(ac+p-1\right)^{2}\left(ac-p+1\right)^{2}\left(-1+c\right)\left(cp-1\right)}\]
\[e_{30}=-\frac{2a\left(-2+a\,c^{2}+\left(p+1\right)c\right)}{a^{2}c^{2}-\left(p-1\right)^{2}}\]
\[e_{21}=-\frac{\left(a\left(p+1\right)c^{2}+\left(p^{2}-3a-4p+1\right)c+p+1\right)\left(a^{2}c^{2}+2a\left(p+1\right)c+p^{2}-4a-2p+1\right)a\left(-2+a\,c^{2}+\left(p+1\right)c\right)}{\left(ac+p-1\right)^{2}\left(ac-p+1\right)^{2}\left(-1+c\right)\left(cp-1\right)}\]
\[e_{12}=\frac{\left(a^{2}c^{4}p-2a\left(p+1\right)\left(p+a\right)c^{3}+\left(-3p^{3}+6p^{2}+\left(8a-3\right)p+3a^{2}\right)c^{2}-2\left(p+1\right)\left(-p^{2}+a+2p-1\right)c-\left(p-1\right)^{2}\right)e_{120}}{2\left(ac+p-1\right)^{3}\left(ac-p+1\right)^{3}\left(-1+c\right)^{2}\left(cp-1\right)^{2}}\]
\[e_{120}=-\left(a^{2}c^{2}+2a\left(p+1\right)c+p^{2}-4a-2p+1\right)^{2}a\left(-2+a\,c^{2}+\left(p+1\right)c\right)\]
\[e_{03}=\frac{\left(\left(a+2p\right)c-p-1\right)\left(a^{2}c^{2}+2a\left(p+1\right)c+p^{2}-4a-2p+1\right)^{3}a\left(-2+a\,c^{2}+\left(p+1\right)c\right)}{4\left(ac+p-1\right)^{3}\left(ac-p+1\right)^{3}\left(-1+c\right)^{2}\left(cp-1\right)^{2}}\]
\[f_{10}=\frac{2b\left(-1+c\right)\left(cp-1\right)c}{a\,c^{2}+cp+c-2}\]
\[f_{01}=\frac{2\left(cp-1\right)\left(-1+c\right)b}{a\,c^{2}+cp+c-2}\]

## 10 Appendix B

Consider the two-species Lotka-Volterra competition model under the influence of Allee and fear effects: \[\left\{\begin{array}{l}\frac{\mathrm{d}x}{\mathrm{d}t}=x\left[(e-x)\frac{p(x)}{q(y)}-ay\right]=xf(x,y)\equiv F(x,y),\\ \frac{\mathrm{d}y}{\mathrm{d}t}=by\left(f-y-cx\right)=yg(x,y)\equiv G(x,y).\end{array}\right. \tag{10.1}\] By setting \(p(x)\equiv 1\) and \(q(y)\equiv 1\), we can derive the classic Lotka-Volterra ODE competition model, which involves two competing species, denoted as \(x\) and \(y\): \[\left\{\begin{array}{l}\frac{\mathrm{d}x}{\mathrm{d}t}=ex-x^{2}-axy,\\ \frac{\mathrm{d}y}{\mathrm{d}t}=bfy-by^{2}-bcxy.\end{array}\right. \tag{10.2}\] The intrinsic (per capita) growth rates are represented by \(e\) and \(bf\), while the intraspecific competition rates are represented by \(1\) and \(b\), and the interspecific competition rates are represented by \(a\) and \(bc\). All parameters considered are positive. The dynamics of this system are well studied [59]. We briefly recap the dynamics of the classical two-species Lotka-Volterra ODE competition model:

* \(E_{0}=(0,0)\) is always unstable.
* \(E_{x}=(e,0)\) is globally asymptotically stable if \(\frac{e}{bf}>\max\left\{\frac{1}{bc},\frac{a}{b}\right\}\). Herein \(x\) is said to competitively exclude \(y\).
* \(E_{y}=(0,f)\) is globally asymptotically stable if \(\frac{e}{bf}<\min\left\{\frac{a}{b},\frac{1}{bc}\right\}\). Herein \(y\) is said to competitively exclude \(x\).
* \(E^{*}=\left(\frac{e-fa}{1-ac},\frac{f-ce}{1-ac}\right)\) exists when \(b-abc\neq 0\). The positivity of the equilibrium holds if \(bc<\frac{bf}{e}<\frac{b}{a}\), and it is globally asymptotically stable if \(b(1-ac)>0\). This is said to be the case of weak competition.
* If \(b(1-ac)<0\), then \(E^{*}=\left(\frac{e-fa}{1-ac},\frac{f-ce}{1-ac}\right)\) is unstable as a saddle. In this setting, one has initial condition dependent attraction to either \(E_{x}=(e,0)\) or \(E_{y}=(0,f)\). This is the case of strong competition.

Similarly, by setting \(p(x)\equiv 1\) and \(q(y)=1+qy\), we obtain the Lotka-Volterra competition model for two competing species in which the first species \(x\) is afraid of the second species \(y\) [40]. On the other hand, if we set \(q(y)\equiv 1\) and \(p(x)=x-p\), we obtain a two-species Lotka-Volterra ODE competition model that is subject to the Allee effect.
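The classification recapped above translates directly into a small decision procedure. The following Python sketch is our own illustrative summary of these textbook criteria (it is not taken from [59]); the final call is a hypothetical example.

```python
def classify_lv(e, f, a, b, c):
    """Classify the classical Lotka-Volterra competition model (10.2)
    according to the criteria recapped in Appendix B."""
    r = e / (b * f)
    if r > max(1.0 / (b * c), a / b):
        return "E_x = (e, 0) globally stable: x excludes y"
    if r < min(a / b, 1.0 / (b * c)):
        return "E_y = (0, f) globally stable: y excludes x"
    if b * (1.0 - a * c) == 0:
        return "degenerate: b(1 - ac) = 0, no isolated interior equilibrium"
    x_star = (e - f * a) / (1.0 - a * c)
    y_star = (f - c * e) / (1.0 - a * c)
    if x_star > 0 and y_star > 0:
        if b * (1.0 - a * c) > 0:
            return f"weak competition: E* = ({x_star:.3f}, {y_star:.3f}) globally stable"
        return f"strong competition: E* = ({x_star:.3f}, {y_star:.3f}) is a saddle"
    return "no positive interior equilibrium"

# Hypothetical example: symmetric, mildly competing species.
print(classify_lv(e=1.0, f=1.0, a=0.5, b=0.5, c=0.5))  # weak competition
```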